
AR Goggles Restore Depth Perception To People Blind in One Eye

Software written for augmented reality glasses creates and projects images for the healthy eye, giving a wearer the feeling of depth.
January 18, 2013

Being able to see with both eyes comes with a perk: the ability to judge distance in 3D. Say, between a plate of food on the table and the saltshaker, or the space between the front of your car and the bumper of the vehicle ahead of you.

People who’ve lost sight in one eye can still see with the other, but they lack binocular depth perception.

Some of them could benefit from a pair of augmented reality glasses being built at the University of Yamanashi in Japan that artificially introduces a feeling of depth in a person’s healthy eye.

The group, led by Xiaoyang Mao, started out with a pair of commercially available 3D glasses, the daintily named Wrap 920AR, manufactured by Vuzix Corporation. (Vuzix is also building another AR headset called the M100 that at first sight looks like quite the competitor to Google Glass.)

The Wrap 920AR looks like a pair of regular tinted glasses, but with small cameras poking out of each lens. The lenses are transparent and the device, Vuzix explains on its website, both captures and projects images, giving the wearer front-row seats to a 2D or 3D AR show transmitted from a computer.

The group at Yamanashi has created software that makes use of the twin cameras. When a person puts the glasses on, each camera scopes out the scene that the corresponding eye would see. The images are funneled into software on a computer, which combines the perspectives of both cameras and creates a “defocus” effect: some objects stay in focus while others are blurred, producing a feeling of depth. That version of the scene is then projected to the wearer’s single healthy eye.
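For readers curious how a defocus effect like this might be computed, here is a minimal sketch, not the Yamanashi group’s published code: it estimates per-pixel depth from the two camera views using OpenCV’s block-matching stereo, then blends in a blurred copy of the frame wherever a pixel sits far from an assumed focal plane. The function name, the blur settings, and the focus_disparity parameter are all illustrative assumptions.

```python
# A minimal sketch of a disparity-driven "defocus" effect, assuming
# OpenCV and NumPy. This is NOT the Yamanashi group's actual code;
# the focal-plane value and blur settings are illustrative.
import cv2
import numpy as np

def defocus_for_one_eye(left_img, right_img, focus_disparity=32.0):
    """Render the left camera's frame with synthetic depth-of-field."""
    left_gray = cv2.cvtColor(left_img, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_img, cv2.COLOR_BGR2GRAY)

    # Disparity between the two views stands in for depth: nearby
    # objects shift more between the cameras than distant ones.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Blur strength grows with a pixel's distance from the chosen
    # focal plane, mimicking a camera's depth of field.
    distance_from_focus = np.abs(disparity - focus_disparity)
    alpha = distance_from_focus / max(float(distance_from_focus.max()), 1e-6)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]

    # Per-pixel blend: in-focus regions keep the sharp frame,
    # out-of-focus regions take the blurred copy.
    blurred = cv2.GaussianBlur(left_img, (21, 21), 0)
    return (alpha * blurred + (1.0 - alpha) * left_img).astype(np.uint8)
```

In the real system, something along these lines would have to run on every frame in real time, with the result shown only to the healthy eye.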

Eight volunteers, each with two healthy eyes, tested the setup. They had one task: to pick up a cylindrical peg and place it in a groove in front of them. All but one of the volunteers did this more quickly when a composite image was projected into one lens.

The system isn’t quite ready to be taken for a spin around town yet. It’s still bulky, the creators write, and needs a computer by its side to create and project images in real time. But the creators note that such computing power is likely to reach mobile devices soon, and when it does, they’ll be ready.
