
Google Explores "Eyes-Free" Phones

An adaptive interface with tactile and audio feedback could make it easier to ignore a small screen.

The screens on many mobile phones can leave a user feeling distinctly vision impaired, especially if her attention is divided between tapping virtual buttons and walking or driving. Fortunately, engineers at Google are experimenting with interfaces for Android-powered mobile phones that require no visual attention at all. At Google I/O, the company’s annual developer conference held in San Francisco last week, T.V. Raman, a research scientist at Google, demonstrated an adaptive, circular interface for phones that provides audio and tactile feedback.

Circular motion: The eyes-free interface for Android phones is based on a radial menu of numbers and letters.

“We are building a user interface that goes over and beyond the screen,” says Raman. Eyes-free interfaces are often built for blind users, but Raman, who is himself blind, insists that they have much broader implications. “This is not just about the blind user,” he says. “This is about how to use these devices if you’re not in a position to look at the machine.”

Eyes-free interfaces aren’t new. In fact, in 1994, Bill Buxton, now a researcher at Microsoft, explored the idea of marking menus: round menus designed to be easier to use than a pull-down list when the user can’t look at the screen. In recent years, Patrick Baudisch, another Microsoft researcher who is also a professor at the Hasso Plattner Institute in Germany, has applied the approach to MP3-player menus that also provide audio feedback.

Some mobile phones already support vibrational feedback, but for the most part, gadget interfaces require intensive visual attention. According to Google’s Raman, Android could be one of the first phone platforms to enable a broad range of eyes-free interfaces. The Android platform supports vibrational and audio feedback, and at the conference, Raman and his colleague Charles Chen demonstrated that an eyes-free alternative can be added to almost any Android application with just a few lines of code.
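
For a rough sense of what that looks like, here is a minimal sketch in Java (not the researchers’ code) of the two feedback channels the demo relied on, using Android’s standard Vibrator and TextToSpeech APIs; the class and callback names are illustrative.

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.os.Vibrator;
import android.speech.tts.TextToSpeech;

/**
 * Minimal sketch (not the researchers' code) of the two feedback channels the
 * demo relied on: a short vibration pulse as the finger crosses an item, and a
 * spoken confirmation when the finger lifts.
 */
public class EyesFreeFeedbackActivity extends Activity
        implements TextToSpeech.OnInitListener {

    private TextToSpeech tts;
    private Vibrator vibrator;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
        tts = new TextToSpeech(this, this);   // the speech engine initializes asynchronously
    }

    @Override
    public void onInit(int status) {
        // Once the engine reports success here, speak() calls will be audible.
    }

    /** Called as the finger sweeps over a menu item. */
    void onItemCrossed() {
        vibrator.vibrate(40);                 // a 40 ms haptic tick
    }

    /** Called when the finger lifts and a selection is made. */
    void onItemSelected(String label) {
        tts.speak(label, TextToSpeech.QUEUE_FLUSH, null);
    }

    @Override
    protected void onDestroy() {
        tts.shutdown();
        super.onDestroy();
    }
}

An application would call onItemCrossed() as the finger sweeps over each menu item and onItemSelected() when the finger is raised, mirroring the interaction described below.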

The researchers showed off their interface as a way to dial numbers and search through contacts on a phone. One problem with most graphical user interfaces, says Raman, is that the buttons are in a fixed location, which is inconvenient if you can’t feel them. To address this problem, his interface appears as soon as a finger touches the screen, so that it is centered on this initial touch.

When the interface is configured as a numeric keypad, the first touch lands directly on the number “5.” Swiping from there to the upper right produces a “3,” and to the lower left a “7.” As the finger passes over each number, the phone vibrates, and when the finger is lifted, indicating that a selection has been made, a computerized voice repeats the number.
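
One way to implement that mapping, sketched here purely as an illustration, is to measure the direction of the swipe from the initial touch point and snap it to one of eight 45-degree wedges arranged like a phone keypad around “5.” The dead-zone size and the omission of “0,” “*,” and “#” are assumptions, not details from the demo.

/**
 * Sketch of the relative dialing gesture described above: the initial touch
 * counts as "5", and a swipe toward one of the eight surrounding directions
 * picks the matching key of a standard phone keypad (upper right is "3",
 * lower left is "7", and so on). The dead-zone value is assumed.
 */
public final class RelativeDialPad {

    // Keys by direction, starting at "right" and going counterclockwise.
    private static final char[] KEYS = {'6', '3', '2', '1', '4', '7', '8', '9'};

    private static final double DEAD_ZONE_PX = 30.0;   // small moves still mean "5"

    /** Maps the vector from touch-down to finger-lift onto a digit. */
    public static char digitFor(float downX, float downY, float upX, float upY) {
        double dx = upX - downX;
        double dy = upY - downY;
        if (Math.hypot(dx, dy) < DEAD_ZONE_PX) {
            return '5';                                     // no real movement: center key
        }
        double angle = Math.toDegrees(Math.atan2(-dy, dx)); // screen y grows downward
        if (angle < 0) {
            angle += 360;
        }
        int sector = (int) Math.round(angle / 45.0) % 8;    // snap to a 45-degree wedge
        return KEYS[sector];
    }

    private RelativeDialPad() {}
}

A gesture handler would feed digitFor() the touch-down and finger-lift coordinates it receives from the platform’s touch events.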

To navigate through the phone’s address book, a user touches the screen to produce a circular set of eight letters. (See a video of the interface in action here.) Swiping to the upper left, where the “A” is located, opens a new circular menu of eight more letters: “B,” “C,” “D,” and so on. With this approach, says Raman, a user needs to move his finger at most three times to reach any letter.
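
For illustration only, the sketch below organizes such a two-level lookup in code; the grouping of the alphabet into rings is an assumption made for the example, not the layout Raman and Chen demonstrated.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch of a two-level radial letter menu: the first ring shows
 * a few "anchor" letters, and stroking toward one opens a second ring holding
 * the letters that follow it in the alphabet. The grouping here is assumed for
 * the example, not the layout demonstrated at Google I/O.
 */
public final class RadialLetterMenu {

    // Anchor letter -> letters on its second ring (assumed grouping).
    private static final Map<Character, String> SECOND_RING = new LinkedHashMap<>();
    static {
        SECOND_RING.put('A', "BCDEFGHI");
        SECOND_RING.put('J', "KLMNOPQR");
        SECOND_RING.put('S', "TUVWXYZ");   // the last group holds only seven letters
    }

    /** Letters to lay out around the first ring. */
    public static Set<Character> firstRing() {
        return SECOND_RING.keySet();
    }

    /** The second stroke lands in sector `index` of the chosen anchor's ring. */
    public static char pick(char anchor, int index) {
        String ring = SECOND_RING.get(anchor);
        return ring.charAt(Math.min(index, ring.length() - 1));
    }

    private RadialLetterMenu() {}
}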

Android also supports text-to-speech capabilities so that developers can design apps to verbalize the text that appears on a screen, but this doesn’t help users input information.

Microsoft’s Baudisch says it would be exciting to see these sorts of interfaces find their way out of research labs. “It’s wonderful that [the Google researchers] are doing it, and they implemented it nicely,” he says. “Marking menus are great, and it’s time that somebody puts this into the products that it belongs in.”

Raman acknowledges that it’s still early days for eyes-free interfaces and that there is much to learn about what consumers will find useful. One possible way to improve eyes-free interactions would be to have the phone predict a user’s intent, he says. For instance, a person might regularly check the arrival times for a bus after work each day. Given that, the phone could respond to a certain gesture, such as tracing the letter “B” after 4:15 on weekdays, by telling the user when the next bus is due.
