An Invisible Touch for Mobile Devices
Today, people interact with a mobile phone by tapping its keypad or screen with their fingers. But researchers are exploring far less constrained ways to use mobile devices.

Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, and his research student, Sean Gustafson, are developing a prototype interface for mobile phones that requires no touch screen, keyboard, or any other physical input device. A small video camera and microprocessor attached to a person's clothing capture and analyze hand gestures, sending an outline of each gesture to a computer display.
The idea is that a person could use an "imaginary interface" to augment a phone conversation by tracing shapes with their fingers in the air. Baudisch and Gustafson have built a prototype device in which the camera is about the size of a large brooch, but they predict that within a few years, components will have shrunk enough to allow a much smaller system.
The idea of interacting with computers through hand gestures is nothing new. Sony already sells EyeToy, a video camera and software that capture gestures for its PlayStation game consoles; Microsoft has developed a more sophisticated gesture-sensing system, called Project Natal, for the Xbox 360 games console. And a gesture-based research project called SixthSense, developed by Pattie Maes, a professor at MIT, and her student Pranav Mistry uses a wearable camera to record a person’s gestures and a small projector to create an ad-hoc display on any surface.
Baudisch and Gustafson say their system is simpler than SixthSense, requiring fewer components, which should make it cheaper. A person "opens up" the interface by making an "L" shape with her left or right hand. This creates a two-dimensional spatial surface, a boundary for the forthcoming finger traces. Baudisch says that a person could use this space to clarify spatial situations, such as how to get from one place to another. "Users start drawing in midair," he says. "There is no setup effort here, no need to whip out a mobile device or stylus." The researchers also found that users were even able to go back to an imaginary sketch to extend or annotate it, thanks to their visual memory.
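Conceptually, the "L" gesture can be thought of as anchoring a coordinate system: the thumb and index finger define two axes, and later fingertip positions are expressed relative to that frame. The sketch below is a hypothetical illustration of this idea, not the researchers' code; it assumes a tracker already supplies 2D positions for the wrist, thumb tip, and index fingertip.

```python
# Hypothetical sketch: map a fingertip point into the coordinate frame
# defined by an "L"-shaped non-dominant hand. Assumes a tracker supplies
# 2D (x, y) positions for the wrist, thumb tip, and index fingertip.

def make_frame(origin, thumb_tip, index_tip):
    """Build a 2D frame: x-axis along the thumb, y-axis along the index finger."""
    x_axis = (thumb_tip[0] - origin[0], thumb_tip[1] - origin[1])
    y_axis = (index_tip[0] - origin[0], index_tip[1] - origin[1])
    return origin, x_axis, y_axis

def to_frame(point, frame):
    """Express a point in frame coordinates by solving p - o = u*x_axis + v*y_axis."""
    origin, (ax, ay), (bx, by) = frame
    px, py = point[0] - origin[0], point[1] - origin[1]
    det = ax * by - ay * bx  # nonzero as long as thumb and index are not collinear
    u = (px * by - py * bx) / det
    v = (ax * py - ay * px) / det
    return u, v

# With the thumb along +x and the index finger along +y, a point midway
# along each axis maps to (0.5, 0.5) in the hand's frame.
frame = make_frame((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(to_frame((0.5, 0.5), frame))  # → (0.5, 0.5)
```

Expressing traces relative to the hand rather than the camera would also make the sketch stable when the user's hand drifts, which fits the observation that users could return to and extend an earlier drawing.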
A paper detailing the setup and user studies will be presented at the 2010 symposium on User Interface Software and Technology in New York in October.
Andy Wilson, a senior researcher at Microsoft who led the development of Surface, an experimental touch-screen table, says the work could be a sign of things to come. "I think it's quite interesting in the sense that it really is the ultimate in thinking about when devices shrink down to nothing: when you don't even have a display," he says.
Wilson notes that the interface draws on the fact that people naturally use their hands to explain spatial ideas. “That’s a quite powerful concept, and it hasn’t been explored,” he says. “I think they’re onto something.”