MIT Technology Review

An Invisible Touch for Mobile Devices

A simple gesture-sensing interface could add new meaning to mobile-phone conversations.

Today, interacting with a mobile phone means tapping its keypad or screen with your fingers. But researchers are exploring ways to use mobile devices that would be far less limited.

Imagine this: A person (top) draws a curved line with his finger, and the gesture is captured by a wearable camera (bottom). The line is transferred to a mobile device, which sends it to a recipient’s screen for display.

Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, and his research student, Sean Gustafson, are developing a prototype interface for mobile phones that requires no touch screen, keyboard, or any other physical input device. A small video recorder and microprocessor attached to a person’s clothing capture and analyze hand gestures, sending an outline of each gesture to a computer display.
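The article doesn’t describe the researchers’ implementation, but the pipeline it sketches (camera frames come in, a vision step locates the fingertip in each frame, and the growing outline is streamed to a display) might look something like the following Python outline. Here detect_fingertip() and the display object are hypothetical stand-ins, not part of the actual system.

```python
# Minimal sketch of a gesture-capture loop: track a fingertip across
# camera frames, accumulate the trace, and stream the outline to a display.
# detect_fingertip() and the display object are hypothetical stand-ins.

from dataclasses import dataclass, field
from typing import Optional, Tuple, List

@dataclass
class Trace:
    points: List[Tuple[float, float]] = field(default_factory=list)

    def add(self, point: Tuple[float, float]) -> None:
        self.points.append(point)

def detect_fingertip(frame) -> Optional[Tuple[float, float]]:
    """Hypothetical vision step: return the fingertip's (x, y) position
    in frame coordinates, or None if no fingertip is visible."""
    ...

def capture_gesture(frames, display) -> Trace:
    """Accumulate fingertip positions into a trace, forwarding the
    outline to the display as it grows."""
    trace = Trace()
    for frame in frames:
        tip = detect_fingertip(frame)
        if tip is None:
            continue                    # finger left the camera's view
        trace.add(tip)
        display.show(trace.points)      # stream the outline so far
    return trace
```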


The idea is that a person could use an “imaginary interface” to augment a phone conversation by tracing shapes in the air with a finger. Baudisch and Gustafson have built a prototype in which the camera is about the size of a large brooch, but they predict that within a few years components will have shrunk enough to allow a much smaller system.


The idea of interacting with computers through hand gestures is nothing new. Sony already sells EyeToy, a video camera and software that capture gestures for its PlayStation game consoles; Microsoft has developed a more sophisticated gesture-sensing system, called Project Natal, for the Xbox 360 game console. And a gesture-based research project called SixthSense, developed by Pattie Maes, a professor at MIT, and her student Pranav Mistry, uses a wearable camera to record a person’s gestures and a small projector to create an ad hoc display on any surface.

Baudisch and Gustafson say their system is simpler than SixthSense, requiring fewer components, which should make it cheaper. A person “opens up” the interface by making an “L” shape with her left or right hand. This creates a two-dimensional drawing plane that bounds the forthcoming finger traces. Baudisch says that a person could use this space to clarify spatial situations, such as how to get from one place to another. “Users start drawing in midair,” he says. “There is no setup effort here, no need to whip out a mobile device or stylus.” The researchers also found that users were even able to go back to an imaginary sketch to extend or annotate it, thanks to their visual memory.
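The article doesn’t give the paper’s details, but the “L” gesture suggests a simple coordinate mapping: the thumb and index finger define the axes of the imaginary drawing plane, and each fingertip sample is re-expressed relative to that plane, so the sketch stays anchored to the hand even if the hand drifts. A minimal sketch of that idea, with illustrative point values:

```python
# Sketch of the coordinate mapping an "L"-shaped hand implies: the thumb
# and index finger define the x and y axes of an imaginary drawing plane,
# and fingertip samples are re-expressed relative to that plane. The point
# values below are illustrative assumptions, not measured data.

import numpy as np

def hand_frame(origin, thumb_tip, index_tip):
    """Build a 2-D basis from the 'L' hand: origin at the crook of the
    hand, axes running along the thumb and the index finger."""
    x_axis = np.asarray(thumb_tip, float) - np.asarray(origin, float)
    y_axis = np.asarray(index_tip, float) - np.asarray(origin, float)
    return np.asarray(origin, float), np.column_stack([x_axis, y_axis])

def to_hand_coords(point, origin, basis):
    """Express a camera-space fingertip point in hand-relative
    coordinates, so the sketch stays put if the hand moves."""
    return np.linalg.solve(basis, np.asarray(point, float) - origin)

# Example: the drawing finger sits halfway along the thumb axis.
origin, basis = hand_frame(origin=(100, 200),
                           thumb_tip=(180, 200),   # thumb axis: 80 px right
                           index_tip=(100, 120))   # index axis: 80 px up
print(to_hand_coords((140, 200), origin, basis))   # prints roughly [0.5, 0]
```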

A paper detailing the setup and user studies will be presented at the 2010 Symposium on User Interface Software and Technology (UIST) in New York in October.

Andy Wilson, a senior researcher at Microsoft who led the development of Surface, an experimental touch-screen table, says the work could be a sign of things to come. “I think it’s quite interesting in the sense that it really is the ultimate in thinking about when devices shrink down to nothing, when you don’t even have a display,” he says.

Wilson notes that the interface draws on the fact that people naturally use their hands to explain spatial ideas. “That’s a quite powerful concept, and it hasn’t been explored,” he says. “I think they’re onto something.”
