There are still just a few companies showing off augmented-reality headsets that do a good job blending digital imagery with the real world, among them Microsoft, with its HoloLens, and Meta, with its Meta 2.
One more is now joining the fray. A startup called Avegant, which already sells a funny-looking personal-theater headset called the Glyph for $499, has built a prototype of a headset with a transparent display that it says uses light-field technology to let you view virtual objects as naturally as you do real ones. A light field is the pattern created when rays of light bounce off an object. Re-creating this effect is one key to making sharp-looking augmented-reality images that you can comfortably focus on when they sit at different depths in the same scene—like, say, a toy car an arm’s length away and a house off in the distance.
If the idea of light fields in an augmented-reality headset sounds familiar, it may be because the secretive and well-funded startup Magic Leap has been working on such technology for several years now. Back in late 2014, it showed me its then-enormous prototypes, which weren’t yet in a working headset; the company has since opened up a little more about the headset it’s working on, but it hasn’t yet said when it will release a product.
At Avegant’s office in Belmont, California, however, cofounder and chief technical officer Edward Tang recently showed me a headset that is still definitely in the demo stage but doesn’t look too far from a finished product. It was wired to a computer on the floor—though Tang says Avegant has gotten it running on mobile devices, too—and placed on my head in a room resembling a living room with a real couch, some chairs, and a coffee table.
With the headset on, I watched a slow-moving sea turtle paddle past, saw a school of tiny blue fish swim around furniture legs, looked down the center of an asteroid belt curving around a model solar system, and inspected the eyelashes and hair of a life-size woman wearing a weird, green lizard-like suit—all inside the living room.
The images looked crisp up close and at a distance. I had no problem shifting my gaze from a digital image close to me to another one farther back, or vice versa, even with one eye closed. As in real life, the object I focused on was sharp but grew fuzzy as I moved my focus to something else, whether it was a digital object or a real one at a different depth.
For Gordon Wetzstein, an assistant professor at Stanford who heads the Stanford Computational Imaging Lab, such consistency between digital and physical content is very important for augmented reality, since it makes the whole experience easier on the eyes.
Also, he says, “it just looks more realistic.”
But I couldn’t poke at or manipulate anything I saw through Avegant’s headset, as I could in some other augmented-reality experiences I’ve tried. And like HoloLens, Avegant’s headset still has a pretty small field of view, which means you’re looking at this world of mixed real and virtual objects through a rectangular window. That makes it hard to see much at one time.
Avegant won’t explain exactly how the technology behind the headset works. Tang says that apart from the addition of a light-field optical element, it’s similar to the Glyph—which projects light from a three-color LED through a tiny chip filled with itty-bitty mirrors and then onto your retina, where an image is formed.
The startup also won’t say exactly what it plans to do with the headset, though Tang says it is “pretty close to being ready to start manufacturing.”