
Virtual Keyboards and Beyond

The clouds have parted. The rain has ceased. As I finish my round of GestureStorm theatrics, I decide to shoo away the clouds and let Detroit return to peace and calm.

Over lunch at a nearby Italian restaurant, Cybernet’s Cohen suggests that the mission of gesture recognition is not necessarily to supplant the old keyboard and mouse but, rather, to supplement them. “I won’t say gesture recognition is the be-all and end-all,” he says.

Indeed, one intriguing application illustrates how gesture technology could dovetail with conventional interfaces. A device from San Jose, CA-based Canesta, due out later this year, brings gesture recognition to personal digital assistants. The device projects an image of a keyboard onto a flat surface, such as a desk, through a tiny lens inside the PDA. An infrared light beam directed at the zone just above the projected keyboard senses precisely where the user's fingers are at any instant: the device monitors the time it takes for a pulse of infrared light to leave the emitter, bounce off the moving fingertips, and return to a sensor in the PDA. A pulse's round-trip travel time corresponds to a specific distance, yielding a 3-D map of the fingertips' positions over the keys, so whatever the user types on the virtual keyboard is captured digitally inside the PDA.
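To make the principle concrete, here is a rough Python sketch of the time-of-flight arithmetic described above: the distance to a fingertip is the pulse's round-trip time multiplied by the speed of light, then halved. The function name and example timing are illustrative; this is not Canesta's actual code.

```python
# Illustrative sketch of time-of-flight ranging; not Canesta's actual code.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Turn a pulse's round-trip travel time into a one-way distance.

    The infrared pulse travels out to the fingertip and back, so the
    fingertip sits at half the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A fingertip about 30 centimeters from the sensor returns the pulse
# in roughly two nanoseconds.
print(distance_from_round_trip(2e-9))  # ~0.2998 meters
```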

The Canesta device operates at more than 50 frames per second, so it can keep up with even the speediest typist. Because Canesta's technology uses infrared light to measure the distance to the user's fingers, it could potentially alleviate one of the problems facing Sony and Cybernet: how to perceive gestures against a bright or busy background. With the current configuration of the EyeToy, for example, I'd seriously mess up my daughter's game of Wishi Washi if I wandered into the camera's field of view while she was playing. If Canesta's infrared light were trained on her, and her alone, the game wouldn't register my interruption. Canesta considers the $11 billion video game industry a future target area and says it has talked with a number of major players in the electronic-entertainment business. Later this year, a company in Jerusalem, Israel, called VKB will introduce a competing virtual keyboard that employs technology similar to Canesta's.
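A back-of-the-envelope check bears out the frame-rate claim (the arithmetic below is mine, not Canesta's): even a 200-word-per-minute typist, at the conventional five characters per word, produces about 17 keystrokes per second, so a 50-frame-per-second sensor sees each keystroke about three times.

```python
# My own back-of-the-envelope check, not a figure from Canesta.
words_per_minute = 200        # near the top end for human typists
chars_per_word = 5            # standard typing-speed convention
keystrokes_per_second = words_per_minute * chars_per_word / 60
frames_per_keystroke = 50 / keystrokes_per_second

print(round(keystrokes_per_second, 1))  # 16.7
print(round(frames_per_keystroke, 1))   # 3.0
```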

Beyond keyboards, weather forecasting, and games, gesture recognition technology could transform the way people interact with computers in a variety of settings. Universities have been working on the technology for years. Researchers at the Georgia Institute of Technology, for example, have explored how gesture recognition may help reduce automobile accidents. A group led by Thad Starner has created what it calls a “gesture panel” in place of a standard dashboard control. The driver adjusts the car’s temperature or sound system volume by maneuvering her hand over a designated area, without having to take her eyes off the road.
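A toy sketch suggests how such a panel might translate a hand reading into a control value; the sensing range and the linear mapping here are my assumptions, not details from Starner's group.

```python
# Hypothetical gesture-panel mapping; the sensing zone and linear
# scale are invented for illustration, not Georgia Tech's design.

def volume_from_hand_height(height_cm: float,
                            zone_min_cm: float = 5.0,
                            zone_max_cm: float = 25.0) -> int:
    """Map the hand's height above the panel to a volume from 0 to 100."""
    height_cm = max(zone_min_cm, min(zone_max_cm, height_cm))  # clamp to zone
    fraction = (height_cm - zone_min_cm) / (zone_max_cm - zone_min_cm)
    return round(fraction * 100)

print(volume_from_hand_height(15.0))  # hand at mid-zone -> volume 50
```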

Researchers at MIT's Media Laboratory have studied ways in which gestures could enhance various entertainment devices. A "StoryMat," for example, could recognize and react to movements of particular toys on a child's play mat. A "conversational humanoid" senses and responds to a person's motions, as reported by a wearable electromagnetic tracking device. Other projects examine the emotional messages that gestures and posture convey. Research has shown that it's possible to program machines to discern the interest, or lack thereof, that children display when interacting with educational software, says Rosalind W. Picard, director of the lab's affective-computing research group. A program that incorporated such inadvertent user input could respond accordingly, perhaps by switching activities when the user slumped in apparent boredom.
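In rough terms, the behavior Picard describes might look like the sketch below; the boredom score, threshold, and activity names are invented stand-ins for whatever the lab's classifiers actually report.

```python
# Illustrative only: the score, threshold, and activity names are
# invented, not the Media Lab's actual affective-computing code.

BOREDOM_THRESHOLD = 0.7  # hypothetical cutoff on a 0-to-1 boredom score

def next_activity(current: str, boredom_score: float,
                  alternatives: list[str]) -> str:
    """Stay with the current activity unless the user seems bored."""
    if boredom_score >= BOREDOM_THRESHOLD and alternatives:
        return alternatives[0]  # switch to something fresher
    return current

print(next_activity("spelling drill", 0.85, ["math game", "story time"]))
# -> "math game"
```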

Not surprisingly, some effort has also gone toward endowing Microsoft products with gesture interfaces. During the 1990s, researchers at the University of Cambridge in England developed an experimental system called Jester that employed gesture recognition for surfing through Windows; it never made it out of the lab. Another truly killer application would be a gesture interface for PowerPoint, the ubiquitous presentation software. At Cybernet, Cohen is working on such an interface himself. It could require the presenter to slip on a glove that would be recognized by the computer's eye. One can only imagine the fashion possibilities.
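The plumbing for such an interface could be as simple as a lookup from recognized gestures to slide commands, as in this hypothetical sketch; the gesture names and commands are mine, not Cybernet's.

```python
# Hypothetical gesture-to-command table; gesture names and commands
# are invented for illustration, not Cybernet's interface.

GESTURE_COMMANDS = {
    "swipe_left": "next_slide",
    "swipe_right": "previous_slide",
    "palm_forward": "blank_screen",
}

def handle_gesture(gesture: str) -> str:
    """Return the presentation command for a recognized gesture."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture("swipe_left"))  # -> "next_slide"
```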

For now, however, there's nothing quite as efficient and responsive as the keyboard I'm typing on at the moment. It works in any light. It doesn't get confused if my kid darts into the room. And with the help of a mouse, it lets me call up my files quicker than I can blink.

“Whenever you want to introduce a new user interface,” analyst Laszlo says, “simplicity and intuitiveness are key. When the mouse was introduced, the learning curve wasn’t steep.”

And that gives companies like Cybernet some hope. Because there’s nothing more intuitive than a wave of the hand.
