A More Realistic Augmented Reality

It’s not a consumer product (yet), but a startup’s AR headset could give HoloLens a run for its money.
April 7, 2017
Avegant is developing an augmented-reality headset that it says uses light-field technology and can be easily manufactured.

There are still just a few companies showing off augmented-reality headsets that do a good job of blending digital imagery with the real world, among them Microsoft with its HoloLens and Meta with its Meta 2.

One more is now joining the fray. A startup called Avegant, which already sells a funny-looking $499 personal-theater headset called the Glyph, has built a prototype headset with a transparent display that it says uses light-field technology to let you view virtual objects as naturally as you do real ones. A light field is the pattern created when rays of light bounce off something. Re-creating this effect is one key to making sharp-looking augmented-reality images that you can comfortably focus on when they sit at different depths in the same scene—like, say, a toy car an arm’s length away and a house off in the distance.

If the idea of light fields in an augmented-reality headset sounds familiar, it may be because the secretive and well-funded startup Magic Leap has been working on such technology for several years now. Back in late 2014, it showed me its then-enormous prototypes, which weren’t yet in a working headset; the company has since opened up a little more about the headset it’s working on, but it hasn’t yet said when it will release a product.

At Avegant’s office in Belmont, California, however, cofounder and chief technical officer Edward Tang recently showed me a headset that is still definitely in the demo stage but doesn’t look too far from a finished product. It was wired to a computer on the floor—though Tang says Avegant has gotten it running on mobile devices, too—and placed on my head in a room resembling a living room with a real couch, some chairs, and a coffee table.

With the headset on, I watched a slow-moving sea turtle paddle past, saw a school of tiny blue fish swim around furniture legs, looked down the center of an asteroid belt curving around a model solar system, and inspected the eyelashes and hair of a life-size woman wearing a weird, green lizard-like suit—all inside the living room.

The images looked crisp up close and at a distance. I had no problem shifting my gaze from a digital image close to me to another one farther back, or vice versa, even with one eye closed. As in real life, the object I focused on was sharp but grew fuzzy as I moved my focus to something else, whether it was a digital object or a real one at a different depth.

For Gordon Wetzstein, an assistant professor at Stanford who heads the Stanford Computational Imaging Lab, such consistency between digital and physical content is very important for augmented reality, since it makes the whole experience easier on the eyes.

Also, he says, “it just looks more realistic.”

But I couldn’t poke at or manipulate anything I saw through Avegant’s headset, as I could in some other augmented-reality experiences I’ve tried. And like HoloLens, Avegant’s headset still has a pretty small field of view, which means you’re looking at this world of mixed real and virtual objects through a rectangular window. That makes it hard to see much at one time.

Avegant won’t explain exactly how the technology behind the headset works. Tang says that apart from the addition of a light-field optical element, it’s similar to the Glyph—which projects light from a three-color LED through a tiny chip filled with itty-bitty mirrors and then onto your retina, where an image is formed.

The startup also won’t say exactly what it plans to do with the headset, though Tang says it is “pretty close to being ready to start manufacturing.”
