Augmented Reality Interface Exploits Human Nervous System
The most important function of the brain is figuring out what to ignore: research suggests that we can process only about one percent of the visual information we take in at any given moment. That’s one reason why, as augmented reality (AR) inches ever closer to prime time, researchers at the University of Tokyo tackled an issue that could be distracting and even dangerous: clutter in the narrow, high-resolution portion of our visual field, which is literally the center of our attention.

Their solution is as straightforward as it is ingenious: objects that demand attention are displayed in the user’s peripheral vision as simple icons, large and bold enough to be recognized even at the limited acuity of peripheral vision. If the user wants more information, for example to read an email represented by one of those icons, simply concentrating on the icon brings up a higher-resolution version of it, with as much attached information as necessary.
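A minimal sketch of that decision logic, in Python; the gaze coordinates, the annotation methods, and the 10-degree threshold are all illustrative assumptions, not details from the paper:

```python
import math

ICON_THRESHOLD_DEG = 10.0  # assumed cutoff; beyond this, show only an icon

def eccentricity_deg(gaze, target):
    """Angular distance (in degrees) between the gaze direction and a
    target, both given as (azimuth, elevation) pairs."""
    return math.hypot(target[0] - gaze[0], target[1] - gaze[1])

def render(annotation, gaze):
    """Show a coarse icon in the periphery, full detail under the fovea."""
    if eccentricity_deg(gaze, annotation.position) > ICON_THRESHOLD_DEG:
        annotation.draw_icon()    # big, simple glyph: legible in the periphery
    else:
        annotation.draw_detail()  # high-resolution read-out, e.g. the email body
```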
Here’s a diagram that shows the acuity of various portions of the human visual field:
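The steep falloff the diagram illustrates is often captured with a simple linear model from vision science: the smallest resolvable detail grows roughly in proportion to eccentricity. A sketch of that approximation (the E2 constant is a typical textbook value, not a figure from this paper):

```python
def relative_acuity(eccentricity_deg, e2=2.0):
    """Acuity relative to the fovea, using the common linear model in
    which the minimum angle of resolution scales as (1 + E/E2).
    E2 of about 2 degrees is a typical textbook value."""
    return 1.0 / (1.0 + eccentricity_deg / e2)

# Ten degrees into the periphery, acuity is roughly a sixth of foveal
# acuity: enough to spot a bold icon, far too little to read text.
print(round(relative_acuity(10.0), 3))  # 0.167
```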

To pull off this trick, the researchers’ AR system needed to go beyond typical setups, which simply overlay information on the user’s visual field, as with a pair of glasses. By adding an eye tracker, their interface allows interaction to be driven entirely by gaze direction.
The result is an AR interface that keeps information in peripheral vision, in a simplified form the brain can process there. It’s a bit like an application-switching interface: a single source of information (or none at all) sits front and center, while other running applications are represented by icons that come to the fore only when the user’s gaze calls them up.
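Extending the earlier sketch to several annotations makes the application-switcher analogy concrete: at most one annotation is expanded at a time, and everything else stays collapsed as a peripheral icon (again with invented names):

```python
import math

def eccentricity_deg(gaze, target):  # same helper as the earlier sketch
    return math.hypot(target[0] - gaze[0], target[1] - gaze[1])

def update_annotations(gaze, annotations, fovea_deg=10.0):
    """Expand whichever annotation the user is looking at, if any;
    render all the others as simplified peripheral icons."""
    for a in annotations:
        if eccentricity_deg(gaze, a.position) <= fovea_deg:
            a.draw_detail()  # "front and center", like the active window
        else:
            a.draw_icon()    # backgrounded, like a dock icon
```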

In this example, the AR system recognizes an object that has come into view. It could be anything, a building or a face, but in this case it’s a microchip. An icon appears in the user’s peripheral vision (a), indicating that more information is available about the object. The icon is large and simple enough that the user can recognize it without making it the center of their visual field (and attention). As soon as the user directs their attention to the icon, however, as in (b) and (c), a more detailed read-out becomes available.
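The article describes the detail view appearing as soon as attention lands on the icon; practical gaze interfaces usually add a short dwell threshold so a passing glance doesn’t trigger an expansion (the classic “Midas touch” problem). A sketch of that variant, with an invented 400 ms threshold:

```python
import math
import time

DWELL_SECONDS = 0.4  # illustrative threshold, not a value from the paper

def eccentricity_deg(gaze, target):  # same helper as the earlier sketches
    return math.hypot(target[0] - gaze[0], target[1] - gaze[1])

class DwellExpander:
    """Walk an icon through the (a) -> (b)/(c) steps: visible in the
    periphery, then expanded once the gaze has rested on it briefly."""

    def __init__(self):
        self.target = None  # annotation currently under the gaze
        self.since = 0.0    # when the gaze arrived there

    def update(self, gaze, annotations, fovea_deg=10.0):
        looked_at = next((a for a in annotations
                          if eccentricity_deg(gaze, a.position) <= fovea_deg),
                         None)
        if looked_at is not self.target:  # gaze moved: restart the timer
            self.target, self.since = looked_at, time.monotonic()
        for a in annotations:
            if a is self.target and time.monotonic() - self.since >= DWELL_SECONDS:
                a.draw_detail()  # (b)/(c): sustained gaze, show the read-out
            else:
                a.draw_icon()    # (a): remain a simple peripheral icon
```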
A system like this allows hands-free interaction with an AR system, which is exactly the sort required in a mobile context.
“Peripheral vision annotation: noninterference information presentation method for mobile augmented reality,” Proceedings of the 2nd Augmented Human International Conference