“How Come We Never Thought of This?”
Cybernet has emerged as ground zero for the commercialization of gesture interface technology. I’ve stared at a computer screen for countless hours, but on this morning inside the company’s offices, things somehow look different. On the screen is a typical assortment of folders and program icons. When I look at the Internet Explorer icon in the upper left-hand corner, however, something strange happens. The cursor moves toward where I’m looking. No mouse. No keyboard. My hands are resting at my sides. It’s like a Ouija board.
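The core idea is simple: each frame, nudge the cursor toward the estimated gaze point rather than snapping it there, so the natural jitter of eye movement doesn't make the cursor shake. A minimal sketch of that easing step, assuming a hypothetical per-frame update function (the smoothing factor and names here are illustrative, not Navigaze's actual code):

```python
# Hypothetical gaze-driven cursor update: each frame, move the cursor a
# fixed fraction of the way toward the gaze point (exponential smoothing).
# The alpha value and all names are illustrative assumptions.

def step_cursor(cursor, gaze, alpha=0.25):
    """Return a new (x, y) cursor position eased toward the gaze point.

    alpha controls responsiveness: higher values snap faster but pass
    more eye-tracker noise through to the cursor.
    """
    cx, cy = cursor
    gx, gy = gaze
    return (cx + alpha * (gx - cx), cy + alpha * (gy - cy))
```

Called once per camera frame, this converges on wherever the eyes settle while filtering out small, rapid movements.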
I’m using Navigaze, a new interface based entirely on eye movement. Instead of double-clicking, for example, you double-blink; with Navigaze, Christopher Reeve could surf the Web. Cybernet will roll out Navigaze this spring, along with an improved version of a gaming technology called Use Your Head, a system (first introduced in 2000) that lets you input directional instructions by bobbing your noggin. A camera tracks a player’s head motion, and the on-screen image changes accordingly: lean left, and your field of vision turns left; lean right, and the view shifts the other way.
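The lean-to-look mapping described above amounts to reading the tracked head's horizontal position and binning it into a view direction, with a dead zone around center so small fidgets don't swing the camera. A rough sketch, with thresholds and names that are illustrative assumptions rather than Cybernet's implementation:

```python
# Hypothetical Use-Your-Head-style mapping: convert the x-coordinate of
# a tracked head (in camera pixels) into a view direction. The dead_zone
# fraction and all names are illustrative assumptions, not Cybernet's code.

def head_to_view_direction(head_x, frame_width, dead_zone=0.15):
    """Return 'left', 'right', or 'center' for a head at head_x pixels.

    dead_zone is the fraction of the frame around the midline where
    movement is ignored, so the view doesn't jitter when the player
    sits still.
    """
    center = frame_width / 2
    offset = (head_x - center) / frame_width  # normalized to [-0.5, 0.5]
    if offset < -dead_zone:
        return "left"    # lean left -> field of vision turns left
    if offset > dead_zone:
        return "right"   # lean right -> view shifts the other way
    return "center"
```

Note that "left" here is left in the camera image; a real system would mirror the axis so the on-screen view matches the player's own sense of leaning.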
Cybernet made its name in the late 1980s in force feedback, the haptic technology now found in video games as well as in the automotive and medical industries. Cohen sees gesture recognition as another field ready to bloom. “Gesture recognition is in the stage that force feedback was in ten years ago,” he says.
One of Cybernet’s earliest forays into gesture recognition came in 1998, when the U.S. Army contracted with the company to create a gesture-based computerized training system: a trainee could command a troop of simulated soldiers by making a variety of hand movements. NASA commissioned the company to create a gesture-based information kiosk for the public, but that project didn’t get far. “Students kept putting their gum on the kiosk and messing it up,” Cohen says.
So far, the closest the company has come to finding the killer app for gesture interfaces is a military system that enables the manipulation of images on command-and-control maps. After reading a press release about the work, a television station expressed interest in adapting the technology for its meteorologist. “It was perfect!” Cohen recalls. “How come we never thought of this?”
The TV weather application was perfect for one primary reason: its surrounding environment didn’t have to be engineered. EyeToy, by contrast, works only if you stand in a certain place relative to the camera; if someone blocks the camera’s view, everything goes haywire. Because a TV meteorologist stands in front of a consistent, unobstructed background, there would be no such disruptions to contend with.