
MIT Technology Review



Today, the way to interact with a mobile phone is by tapping its keypad or screen with your fingers. But researchers are exploring ways to use mobile devices that would be far less limited.

Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, and his research student, Sean Gustafson, are developing a prototype interface for mobile phones that requires no touch screen, keyboard, or any other physical input device. A small video camera and microprocessor attached to a person's clothing capture and analyze hand gestures, sending an outline of each gesture to a computer display.

The idea is that a person could use an "imaginary interface" to augment a phone conversation by tracing shapes with their fingers in the air. Baudisch and Gustafson have built a prototype device in which the camera is about the size of a large brooch, but they predict that within a few years, components will have shrunk enough to allow for a much smaller system.

The idea of interacting with computers through hand gestures is nothing new. Sony already sells EyeToy, a video camera and software that capture gestures for its PlayStation game consoles; Microsoft has developed a more sophisticated gesture-sensing system, called Project Natal, for the Xbox 360 game console. And a gesture-based research project called SixthSense, developed by Pattie Maes, a professor at MIT, and her student Pranav Mistry, uses a wearable camera to record a person's gestures and a small projector to create an ad-hoc display on any surface.

Baudisch and Gustafson say their system is simpler than SixthSense, requiring fewer components, which should make it cheaper. A person "opens up" the interface by making an "L" shape with her left or right hand. This creates a two-dimensional spatial surface, a boundary for the forthcoming finger traces. Baudisch says that a person could use this space to clarify spatial situations, such as how to get from one place to another. "Users start drawing in midair," he says. "There is no setup effort here, no need to whip out a mobile device or stylus." The researchers also found that users were able to go back to an imaginary sketch to extend or annotate it, thanks to their visual memory.
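The article doesn't describe the tracking math, but the core idea of anchoring a coordinate frame to the "L"-shaped hand pose can be sketched as follows. All names and pixel coordinates here are illustrative assumptions, not the researchers' actual implementation: the corner of the "L" becomes the origin, the thumb and index finger define the axes, and each tracked fingertip position is then expressed in that hand-relative frame so the drawing stays anchored even as the hand moves in the camera image.

```python
import math

def imaginary_frame(origin, x_tip, y_tip):
    """Build a 2D frame from an 'L'-shaped hand pose (illustrative sketch).

    origin: corner of the 'L' (where thumb meets index), in camera pixels
    x_tip:  tip of the finger defining the x-axis
    y_tip:  tip of the finger defining the y-axis
    Returns unit axis vectors (ux, uy).
    """
    def unit(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)
    return unit(origin, x_tip), unit(origin, y_tip)

def to_frame(point, origin, ux, uy):
    """Express a tracked fingertip in the imaginary surface's coordinates."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    # Project the offset onto the hand-defined axes.
    return (dx * ux[0] + dy * ux[1], dx * uy[0] + dy * uy[1])

# Example: 'L' corner at pixel (100, 100), one finger along +x, one along +y.
ux, uy = imaginary_frame((100, 100), (200, 100), (100, 200))
print(to_frame((150, 130), (100, 100), ux, uy))  # → (50.0, 30.0)
```

Because traces are stored in the hand-relative frame rather than raw camera pixels, a sketch can in principle be revisited later by re-forming the "L," which matches the researchers' observation that users could return to and extend an earlier drawing.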

A paper detailing the setup and user studies will be presented at the 2010 Symposium on User Interface Software and Technology in New York in October.

Andy Wilson, a senior researcher at Microsoft who led the development of Surface, an experimental touch-screen table, says the work could be a sign of things to come. "I think it's quite interesting in the sense that it really is the ultimate in thinking about when devices shrink down to nothing, when you don't even have a display," he says.

Wilson notes that the interface draws on the fact that people naturally use their hands to explain spatial ideas. “That’s a quite powerful concept, and it hasn’t been explored,” he says. “I think they’re onto something.”


Credit: Hasso Plattner Institute


