Augmented Reality Interface Exploits Human Nervous System
The most important function of the brain is figuring out what to ignore: research suggests that we can process only about one percent of the visual information we take in at any given moment. That’s one reason why, as augmented reality (AR) inches ever closer to prime time, researchers at the University of Tokyo tackled an issue that could be distracting and even dangerous: clutter in the narrow, high-resolution portion of our visual field – literally, the center of our attention.

Their solution is as straightforward as it is ingenious: display objects that demand attention as simple icons in the user’s peripheral vision, where they can be recognized despite the periphery’s limited visual acuity. If the user wants more information – to read an email represented by an icon, for example – simply concentrating on the icon brings up a higher-resolution instance of it, with as much attached information as necessary.
[Diagram: the acuity of various portions of the human visual field]

To pull off this trick, the researchers’ AR system needed to go beyond typical setups, which simply overlay information on the user’s visual field, as in a pair of glasses. By adding an eye-tracking system, their interface allows interaction to be driven entirely by gaze direction.
The result is an AR interface that keeps information in peripheral vision, in a simplified form the brain can process there. It’s a bit like existing application-switching interfaces: a single source of information (or none at all) can be front and center, while other running applications are represented by icons that come to the fore only when the user turns their attention to them.
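Here’s a minimal sketch of how that level-of-detail switch might work, assuming a simple two-dimensional gaze model. The function names, the 2-degree foveal radius, and the annotation format are illustrative assumptions, not details from the paper:

```python
import math

# Assumed value: the high-acuity fovea covers only roughly the central
# 2 degrees of the visual field; everything outside it gets a simple icon.
FOVEA_RADIUS_DEG = 2.0

def eccentricity(gaze, target):
    """Angular distance (in degrees) between the gaze direction and an
    annotation, both given as (x, y) angles relative to straight ahead."""
    return math.hypot(target[0] - gaze[0], target[1] - gaze[1])

def render_annotation(gaze, annotation):
    """Pick a level of detail based on where the annotation falls relative
    to gaze: a detailed read-out under fixation, a simple icon otherwise."""
    if eccentricity(gaze, annotation["position"]) <= FOVEA_RADIUS_DEG:
        return "[detail] " + annotation["body"]
    return "[icon] " + annotation["icon"]

# An email annotation sitting 15 degrees off-axis stays a simple icon;
# once gaze moves onto it, the full message is rendered.
email = {"position": (15.0, 0.0), "icon": "envelope", "body": "New mail"}
print(render_annotation((0.0, 0.0), email))   # -> [icon] envelope
print(render_annotation((15.0, 0.0), email))  # -> [detail] New mail
```

The essential design choice is that the periphery only ever sees shapes coarse enough to register at low acuity, so the high-resolution center of the visual field is never contested by unrequested detail.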

In this example, the AR system recognizes an object that has come into view. It could be anything – a building, a face – but in this case it’s a microchip. An icon appears in the user’s peripheral vision (a), indicating that more information is available about the object. The icon is large and simple enough that the user can recognize it without making it the center of their visual field (and attention). As soon as the user directs their attention to the icon, however, as in (b) and (c), a more detailed read-out becomes available.
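One plausible way to drive that (a)-to-(c) progression – a sketch of the general technique, not the authors’ implementation – is a dwell-time check, so that a passing glance doesn’t expand every icon it crosses. The class name and the half-second threshold below are assumptions:

```python
DWELL_THRESHOLD_S = 0.5  # assumed value; the article doesn't specify one

class GazeExpander:
    """Expands a peripheral icon into its detailed read-out once gaze
    has rested on it for a minimum dwell time."""

    def __init__(self):
        self.fixation_start = None

    def update(self, gazing_at_icon: bool, now: float) -> str:
        """Call once per frame with the eye tracker's output."""
        if not gazing_at_icon:
            self.fixation_start = None   # gaze moved away: collapse to icon
            return "icon"
        if self.fixation_start is None:
            self.fixation_start = now    # gaze has just landed on the icon
        if now - self.fixation_start >= DWELL_THRESHOLD_S:
            return "detail"              # sustained attention: show read-out
        return "icon"

# Frames at 10 Hz; gaze lands on the icon at t = 0.2 s and stays there,
# so the detailed read-out appears at t = 0.7 s.
expander = GazeExpander()
for t in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]:
    print(f"{t:.1f}s: {expander.update(gazing_at_icon=(t >= 0.2), now=t)}")
```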
A system like this allows hands-free interaction with an AR system, which is exactly the sort required in a mobile context.
“Peripheral vision annotation: noninterference information presentation method for mobile augmented reality,” Proceedings of the 2nd Augmented Human International Conference