
Combining AR with eye-tracking allows a hands-free interface

The most important function of the brain is figuring out what to ignore: research suggests that we can process only about one percent of the visual information we take in at any given moment. That's one reason why, as augmented reality (AR) inches ever closer to prime time, researchers at the University of Tokyo tackled an issue that could be distracting and even dangerous: clutter in the narrow, high-resolution portion of our visual field – literally, the center of our attention.

Their solution is as straightforward as it is ingenious: objects that demand attention are displayed in the user's peripheral vision as simple icons, coarse enough to be recognized despite peripheral vision's limited acuity. If the user wants more information – say, to read an email represented by one of those icons – simply concentrating on the icon brings up a higher-resolution version of it, with as much attached information as necessary.
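A minimal sketch of that two-representation scheme, in Python (the Annotation class, the threshold value, and the eccentricity input are all illustrative assumptions, not details from the paper): each object carries both a coarse icon and a detailed read-out, and its eccentricity – the angular distance between the object and the center of gaze – selects which one is shown.

```python
from dataclasses import dataclass

# Assumed boundary between "being looked at" and "in the periphery".
FOVEAL_THRESHOLD_DEG = 10.0

@dataclass
class Annotation:
    icon: str    # coarse representation, legible in peripheral vision
    detail: str  # full read-out, shown only when the user looks at it

def representation(ann: Annotation, eccentricity_deg: float) -> str:
    """Return the icon while the object sits in peripheral vision,
    and the detailed read-out once gaze lands on it."""
    if eccentricity_deg <= FOVEAL_THRESHOLD_DEG:
        return ann.detail
    return ann.icon

email = Annotation(icon="[envelope]", detail="From: Alice - Re: 3pm meeting ...")
print(representation(email, eccentricity_deg=35.0))  # -> [envelope]
print(representation(email, eccentricity_deg=2.0))   # -> the full message
```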

Here’s a diagram that shows the acuity of various portions of the human visual field:

[Diagram: relative visual acuity across the human visual field, highest at the center of gaze and falling off sharply toward the periphery]
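The shape of that curve is well documented in vision science: acuity peaks at the fovea and falls off roughly hyperbolically with eccentricity. Here is a rough sketch of that relationship; the half-acuity constant is a ballpark assumption of the kind used in foveated-rendering models, not a number from this article.

```python
# Assumed eccentricity (in degrees) at which acuity drops to half its foveal value.
E2_DEG = 2.3

def relative_acuity(eccentricity_deg: float) -> float:
    """Visual acuity relative to the fovea (1.0 at the center of gaze)."""
    return E2_DEG / (E2_DEG + eccentricity_deg)

for e in (0, 2.3, 10, 30, 60):
    print(f"{e:5.1f} deg -> {relative_acuity(e):.2f}")
```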

To pull off this trick, the researchers' AR system needed to go beyond typical setups, which simply display information in the user's visual field, as in a pair of glasses. By adding an eye tracker, their interface allows interaction to be driven entirely by gaze direction.
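In concrete terms, the eye tracker's contribution is a gaze direction, and the eccentricity of any displayed object is just the angle between that direction and the direction to the object. A small sketch, assuming both are 3-D vectors in the same head-fixed frame (the function name and frame choice are illustrative, not from the paper):

```python
import math

def eccentricity_deg(gaze, obj):
    """Angle in degrees between the gaze direction and the direction to an
    object, both given as 3-D vectors in the same head-fixed frame."""
    dot = sum(g * o for g, o in zip(gaze, obj))
    norm = math.sqrt(sum(g * g for g in gaze)) * math.sqrt(sum(o * o for o in obj))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Looking straight ahead; an icon sits up and to the right.
print(eccentricity_deg((0.0, 0.0, 1.0), (0.3, 0.2, 1.0)))  # ~19.8 degrees
```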

The result is an AR interface that keeps information in peripheral vision, in a form simplified enough for the brain to process there. It's a bit like existing application-switching interfaces: a single source of information (or none at all) can be front and center, while other running applications are represented by icons that come to the fore only when the user turns to them.

In this example, the AR system recognizes an object that has come into view. It could be anything – a building, a face – but in this case it's a microchip. An icon appears in the user's peripheral vision (a), indicating that more information about the object is available. The icon is large and simple enough that the user can recognize it without making it the center of their visual field (and attention). As soon as the user directs their attention to the icon, however, as in (b) and (c), a more detailed read-out becomes available.
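One plausible way to implement the (a)-to-(b)/(c) transition is as a small dwell-timer state machine: the icon expands only after the gaze has rested on it briefly, and collapses only once the gaze has clearly moved away, so ordinary eye jitter doesn't make the read-out flicker. All thresholds below are illustrative assumptions, not parameters from the paper.

```python
ENTER_DEG, EXIT_DEG = 10.0, 15.0  # hysteresis band around the icon
DWELL_S = 0.3                     # gaze must rest this long before expanding

class GazeAnnotation:
    def __init__(self):
        self.expanded = False
        self.dwell = 0.0

    def update(self, eccentricity_deg: float, dt: float) -> str:
        if not self.expanded:
            if eccentricity_deg <= ENTER_DEG:
                self.dwell += dt
                if self.dwell >= DWELL_S:
                    self.expanded = True   # (b)/(c): show the detailed read-out
            else:
                self.dwell = 0.0           # gaze wandered off; reset the timer
        elif eccentricity_deg > EXIT_DEG:
            self.expanded = False          # back to (a): collapse to the icon
            self.dwell = 0.0
        return "detail" if self.expanded else "icon"

ann = GazeAnnotation()
for ecc in (40, 8, 8, 8, 8, 40):  # gaze moves onto the icon, then away
    print(ann.update(ecc, dt=0.1))  # icon, icon, icon, detail, detail, icon
```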

A system like this allows hands-free interaction with AR, which is exactly the sort of interface a mobile context requires.

“Peripheral vision annotation: noninterference information presentation method for mobile augmented reality,” Proceedings of the 2nd Augmented Human International Conference

