The Invisible iPhone

A new interface lets you keep your phone in your pocket and use apps or answer calls by tapping your hand.

Over time, using your smart-phone touch screen becomes second nature, to the point where you can do some tasks without looking. Researchers in Germany are now working on a system that would let you perform such actions without even holding the phone: you’d tap your palm, and an “imaginary phone” system would interpret the movements and relay the request to the actual device.

Point and click: The “imaginary phone” determines which iPhone app a person wants to use by matching his or her finger position to the position of the app on the screen.

The concept relies on a depth-sensitive camera to pick up the tapping and sliding interactions on a palm, software to analyze the video, and a wireless radio to send the instructions back to the iPhone. Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany, says the imaginary phone prototype “serves as a shortcut that frees users from the necessity to retrieve the actual physical device.”
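
The article doesn’t describe the wire format, but conceptually the tracker only needs to relay a small touch event over the radio. Here is a minimal sketch in Python; the PalmTouchEvent fields and the JSON encoding are assumptions for illustration, not the researchers’ actual protocol:

```python
# Hypothetical shape of the message the palm tracker might relay to
# the phone; field names and encoding are illustrative guesses.

import json
from dataclasses import dataclass, asdict

@dataclass
class PalmTouchEvent:
    kind: str        # "tap" or "slide"
    x: float         # fingertip position on the palm, normalized 0..1
    y: float
    timestamp: float # seconds since the tracker started

def encode_for_radio(event: PalmTouchEvent) -> bytes:
    """Serialize a touch event for the wireless link to the phone."""
    return json.dumps(asdict(event)).encode("utf-8")

# Example: a tap near the top-left corner of the palm.
print(encode_for_radio(PalmTouchEvent("tap", 0.12, 0.08, 0.0)))
```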

Baudisch and his team envision someone doing the dishes when his smart phone rings. Instead of having to dry his hands and fumble for the device, he simply slides a finger across his palm, and the imaginary phone answers the call remotely.

The imaginary phone project, whose team includes Hasso Plattner Institute students Sean Gustafson and Christian Holz, is reminiscent of SixthSense, a gesture-based interface developed by Pattie Maes and Pranav Mistry of MIT, but it differs in two significant ways. First, there are no new gestures to learn: the imaginary phone simply transfers the iPhone screen onto the hand. Second, unlike SixthSense, which uses a projector to provide an interface on any surface, it offers no visual feedback. That lack of feedback limits the imaginary phone, but it isn’t intended to replace the device entirely, just to make certain interactions more convenient.

Last year, Baudisch and Gustafson developed an interface in which a wearable camera captures gestures that a person makes in the air and translates them to drawings on a screen.

For the current project, the researchers used a depth camera similar to the one in Microsoft’s Kinect for Xbox, but bulkier and positioned on a tripod. (Ultimately, a smaller, wearable depth camera could be used.) The camera “subtracts” the background and tracks the finger position on the palm, and it works well in various lighting conditions, including direct sunlight. Software interprets the finger positions and movements and correlates them with the positions of icons on the person’s iPhone; a Wi-Fi radio then transmits the resulting instructions to the phone.
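
To make the two steps concrete, here is a minimal sketch of depth-based background subtraction and the mapping from a palm position to an icon slot. The 4×5 icon grid, the depth threshold, and all function names are assumptions for illustration, not the researchers’ code:

```python
import numpy as np

GRID_COLS, GRID_ROWS = 4, 5   # assumed 4x5 home-screen icon grid

def subtract_background(depth_frame, background, threshold_mm=30):
    """Keep only pixels that sit closer to the camera than the stored
    empty-scene background -- i.e., the hand in front of it."""
    return (background - depth_frame) > threshold_mm

def fingertip_to_icon(finger_xy, palm_box):
    """Map a fingertip position (pixels) to a home-screen icon slot by
    normalizing it against the palm's bounding box."""
    x, y = finger_xy
    left, top, width, height = palm_box
    u = min(max((x - left) / width, 0.0), 0.999)   # 0..1 across the palm
    v = min(max((y - top) / height, 0.0), 0.999)   # 0..1 down the palm
    return int(v * GRID_ROWS) * GRID_COLS + int(u * GRID_COLS)

# Example: a tap two-thirds of the way across and halfway down the
# palm lands on the third column, third row of the imagined screen
# (slot 10).
print(fingertip_to_icon((180.0, 130.0), (60.0, 50.0, 180.0, 160.0)))
```

On the phone side, only the resulting slot index would need to cross the Wi-Fi link to launch the matching app.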

In a study submitted to the User Interface Software and Technology conference in October, the researchers found that participants could recall the positions of about two-thirds of their iPhone apps on a blank phone, and about as accurately on their palms. Positions of the most frequently used apps were recalled with up to 80 percent accuracy.

Finger mouse: A depth camera picks up finger position and subtracts the background images to correctly interpret interactions.

“It’s a little bit like learning to touch type on a keyboard, but without any formal system or the benefit of the feel of the keys,” says Daniel Vogel, a postdoctoral fellow at the University of Waterloo who wasn’t involved in the research. He notes that “it’s possible that voice control could serve the same purpose, but the imaginary approach would work in noisy locations and is much more subtle than announcing, ‘iPhone, open my e-mail.’”
