Google Explores "Eyes-Free" Phones
The screens on many mobile phones can leave a user feeling effectively vision impaired, especially when her attention is divided between tapping virtual buttons and walking or driving. Fortunately, engineers at Google are experimenting with interfaces for Android-powered mobile phones that require no visual attention at all. At Google I/O, the company’s annual developer conference held in San Francisco last week, T.V. Raman, a research scientist at Google, demonstrated an adaptive, circular interface for phones that provides audio and tactile feedback.

“We are building a user interface that goes over and beyond the screen,” says Raman. Eyes-free interfaces are often designed for blind users, but Raman, who is himself blind, stresses that they have much broader implications. “This is not just about the blind user,” he says. “This is about how to use these devices if you’re not in a position to look at the machine.”
Eyes-free interfaces aren’t new. In 1994, Bill Buxton, a researcher at Microsoft, explored the idea of marking menus: round menus meant to be easier to use than pull-down lists when the user can’t look at the screen. In recent years, Patrick Baudisch, another Microsoft researcher, who is also a professor at the Hasso Plattner Institute, in Germany, has applied the approach to MP3 player menus that also provide audio feedback.
Some mobile phones already support vibrational feedback, but for the most part, gadget interfaces require intensive visual attention. According to Google’s Raman, Android could be one of the first phone platforms to enable a broad range of eyes-free interfaces. The Android platform supports vibrational and audio feedback, and at the conference, Raman and his colleague Charles Chen demonstrated that an eyes-free alternative can be added to almost any Android application with just a few lines of code.
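Raman and Chen’s demo code isn’t reproduced in this article, but the vibration half of that claim really does come down to one standard Android call. The sketch below is a hedged illustration, not the researchers’ implementation: the class and widget names (HapticPadActivity, the bare View used as a touch pad, the 40-millisecond pulse) are assumptions made for the example.

    import android.app.Activity;
    import android.content.Context;
    import android.os.Bundle;
    import android.os.Vibrator;
    import android.view.MotionEvent;
    import android.view.View;

    // Hypothetical activity: buzz briefly every time a finger lands on the screen.
    public class HapticPadActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            final Vibrator vibrator =
                    (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);

            View pad = new View(this);   // stand-in for any existing widget
            setContentView(pad);
            pad.setOnTouchListener(new View.OnTouchListener() {
                @Override
                public boolean onTouch(View v, MotionEvent event) {
                    if (event.getAction() == MotionEvent.ACTION_DOWN) {
                        vibrator.vibrate(40);   // short tactile pulse on touch-down
                    }
                    return true;
                }
            });
        }
    }

A real app would also declare the VIBRATE permission in its manifest; the point here is only how little code the tactile half of the feedback requires.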
The researchers showed off their interface as a way to dial numbers and search through contacts on a phone. One problem with most graphical user interfaces, says Raman, is that the buttons sit in fixed locations, which makes them hard to find by touch alone. To address this problem, his interface appears wherever a finger first touches the screen, centered on that initial touch.
Configured as a numeric keypad, the interface treats that first touch as the number “5.” Swiping to the upper right produces a “3,” and to the lower left a “7.” As the finger passes over each number, the phone vibrates, and when the finger is lifted, indicating that a selection has been made, a computerized voice repeats the number.
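The article gives only two of the digit positions (“3” to the upper right, “7” to the lower left), but they are consistent with a standard 3-by-3 keypad laid out around the initial touch. The plain-Java sketch below is an assumption about how such a stroke could be quantized into one of eight directions; the layout, dead zone, and method names are mine, not Google’s.

    // Illustrative only: map a stroke from the initial touch to a digit on a
    // phone-keypad layout centered on "5". Screen coordinates have y growing downward.
    public class RelativeDialpad {

        // Digits read clockwise starting from "straight up" on a 3x3 keypad.
        private static final int[] DIGITS_CLOCKWISE_FROM_UP = {2, 3, 6, 9, 8, 7, 4, 1};

        /** Strokes shorter than deadZone count as no movement and return 5. */
        public static int digitFor(float dx, float dy, float deadZone) {
            if (Math.hypot(dx, dy) < deadZone) {
                return 5;                                        // finger barely moved
            }
            double angle = Math.toDegrees(Math.atan2(dx, -dy));  // 0 = up, 90 = right
            if (angle < 0) angle += 360;
            int sector = (int) Math.round(angle / 45.0) % 8;     // quantize to 8 directions
            return DIGITS_CLOCKWISE_FROM_UP[sector];
        }

        public static void main(String[] args) {
            System.out.println(digitFor(60, -60, 20));  // stroke to the upper right -> 3
            System.out.println(digitFor(-60, 60, 20));  // stroke to the lower left  -> 7
            System.out.println(digitFor(5, 3, 20));     // no real movement          -> 5
        }
    }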
To navigate through the phone’s address book, a user touches the screen to produce a circular menu of eight letters. Swiping to the upper left, where the “A” is located, opens a new circular menu of eight more letters: “B,” “C,” “D,” and so on. With this approach, says Raman, a user needs at most three strokes to reach any letter.
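The article doesn’t spell out how the 26 letters are divided between the two rings, so the sketch below invents a grouping purely to show the two-stroke lookup; the group boundaries are an assumption, and the eight-way sector numbers are the same kind of quantized directions as in the dialing sketch above.

    import java.util.Arrays;
    import java.util.List;

    // Illustrative two-level radial letter menu. The grouping below is made up for
    // this example; the article says only that each ring holds up to eight letters.
    public class RadialLetterMenu {

        private static final List<String> GROUPS = Arrays.asList(
                "ABCD", "EFGH", "IJKL", "MNOP", "QRST", "UVWX", "YZ", "");

        /** firstSector and secondSector are stroke directions 0-7.
         *  Returns the chosen letter, or null if the slot is empty. */
        public static Character letterFor(int firstSector, int secondSector) {
            String group = GROUPS.get(firstSector);
            if (secondSector >= group.length()) return null;
            return group.charAt(secondSector);
        }

        public static void main(String[] args) {
            // First stroke toward the group that starts with "A", second to its third slot.
            System.out.println(letterFor(0, 2));   // prints C
        }
    }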
Android also supports text-to-speech, so developers can design apps that read on-screen text aloud, but that doesn’t help users enter information.
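For completeness, here is what the speech half might look like using Android’s TextToSpeech class; again, a hedged sketch with invented names (SpokenLabelActivity, the placeholder label text), not code from the demo.

    import android.app.Activity;
    import android.os.Bundle;
    import android.speech.tts.TextToSpeech;
    import android.widget.TextView;

    // Hypothetical activity that speaks the on-screen text once the engine is ready.
    public class SpokenLabelActivity extends Activity implements TextToSpeech.OnInitListener {
        private TextToSpeech tts;
        private TextView label;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            label = new TextView(this);
            label.setText("Eyes-free demo");
            setContentView(label);
            tts = new TextToSpeech(this, this);   // engine initializes asynchronously
        }

        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                // Read the view's text; QUEUE_FLUSH drops anything already queued.
                tts.speak(label.getText().toString(), TextToSpeech.QUEUE_FLUSH, null);
            }
        }

        @Override
        protected void onDestroy() {
            tts.shutdown();   // release the speech engine
            super.onDestroy();
        }
    }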
Microsoft’s Baudisch says it would be exciting to see these sorts of interfaces find their way out of research labs. “It’s wonderful that [the Google researchers] are doing it, and they implemented it nicely,” he says. “Marking menus are great, and it’s time that somebody puts this into the products that it belongs in.”
Raman acknowledges that it’s still early days for eyes-free interfaces and that there is much to learn about what consumers will find useful. One possible way to improve eyes-free interactions would be to have the phone predict a user’s intent, he says. For instance, a person might regularly check the arrival times for a bus after work each day. Given that, the phone could respond to a certain gesture, such as tracing the letter “B” after 4:15 on weekdays, by telling the user when the next bus is due.
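That bus scenario is speculation on Raman’s part, but the rule behind it is easy to picture. The toy sketch below checks only the time-and-gesture condition he describes; the gesture recognizer and the transit lookup are stubbed out, and the after-4:15 weekday cutoff is taken from his example.

    import java.time.DayOfWeek;
    import java.time.LocalDateTime;
    import java.time.LocalTime;

    // Purely illustrative context rule: a traced "B" on a weekday afternoon
    // would trigger a spoken bus-arrival announcement.
    public class ContextShortcut {

        public static boolean shouldAnnounceNextBus(char gesture, LocalDateTime now) {
            boolean weekday = now.getDayOfWeek() != DayOfWeek.SATURDAY
                    && now.getDayOfWeek() != DayOfWeek.SUNDAY;
            boolean afterWork = now.toLocalTime().isAfter(LocalTime.of(16, 15));
            return gesture == 'B' && weekday && afterWork;
        }

        public static void main(String[] args) {
            System.out.println(shouldAnnounceNextBus('B', LocalDateTime.now()));
        }
    }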