
Reality Check: Comparing HoloLens and Magic Leap

After trying demos of Magic Leap and HoloLens, it’s clear that commercializing augmented reality technology will be difficult.
March 20, 2015

I’ve seen two competing visions for a future in which virtual objects are merged seamlessly with the real world. Both were impressive in part, but they also made me wonder whether augmented reality will become a successful commercial reality anytime soon.

A mockup shows HoloLens being used to provide remote help with home repairs.

I’m the only person I know of to have tried both Microsoft’s HoloLens and the system being developed by a secretive startup called Magic Leap.

I got a peek at what Magic Leap is building back in December (see “10 Breakthrough Technologies 2015: Magic Leap”). In that demonstration, 3-D monsters and robots looked amazingly detailed and crisp, fitting in well with the surrounding world, though they were visible only with lenses attached to bulky hardware sitting on a cart, and no release date has yet been revealed.

I had my chance to see HoloLens during a recent visit to Microsoft’s headquarters in Redmond, Washington. HoloLens is a holographic system that the company plans to pack into a visor about the size of a pair of bulbous ski goggles. In January, Microsoft said HoloLens would be available “in the Windows 10 time frame,” and the company said this week that the new operating system will be released this summer.

I experienced three HoloLens demos. The first, HoloStudio, showed the possibilities for 3-D modeling and manipulation. The second let me explore the surface of Mars with a virtually present NASA scientist. And the third gave a sense of how I might use HoloLens in combination with a Skype video chat to get help with a real-world problem (in this case, installing a light switch).

Unlike some stereoscopic virtual-reality 3-D technologies I’ve tried, such as Oculus Rift, HoloLens did not make me feel nauseated, which bodes well. Microsoft would not say how the display works, but a bit of explanation in a Wired piece suggests it may be doing something similar to Magic Leap, which uses a tiny projector to shine light into your eyes so that it blends convincingly with the light from the real world around you.

The final form Microsoft’s HoloLens will take.

But I was not blown away by what I saw in Redmond. The holograms looked great in a couple of instances, such as when I peered at the underside of a rock on a reconstruction of the surface of Mars, created with data from the Curiosity rover. More often, though, images appeared distractingly transparent and not nearly as crisp as the creatures Magic Leap showed me some months before. What’s more, the viewing area in front of my face was relatively narrow, and because the headset wasn’t closed off to the world around me, I kept my natural peripheral vision of the unenhanced room; as a result, the 3-D imagery seen through HoloLens was often interrupted by glimpses of the real world at the edges of my view. This was okay when I was looking at smaller or farther-away 3-D images, like an underwater scene I was shown during my first demo, or while moving around to inspect images close up from different angles. The illusion got screwed up, though, when I looked at something larger than my field of view.

Microsoft is also still working on packing everything into the HoloLens form it has promised. Unlike the untethered headset the company demonstrated in January, the device I tried was unwieldy and unfinished: see-through lenses attached to a heavy mass of electronics and plastic straps, tethered both to a softly whirring rectangular box (Microsoft’s holographic processing unit) that I wore around my neck and to a nearby computer. I was instructed to touch only a plastic strap that fit over the top of my head; demo minders placed the headset on me and took it off at the end of each experience.

Even this level of limited mobility was more than I got at Magic Leap, but it’s clear the HoloLens team has a big task in getting the technology to fit into its smaller, consumer-ready design.

For instance, during the Mars demo, the room around me was blanketed with realistic images of the surface of the planet, and a detailed-looking rover sat in front of me, slightly to my right. But I could see the scene only one rectangle at a time; if my eyes strayed beyond the rectangle in front of me, I saw bits of the room, but no hologram.

The issues extended to the opacity of the images, too. The demos were all held in rooms that had no windows, but lighting was kept at a normal level of brightness, and the rooms were decorated with furniture, knick-knacks, and other items on and near the walls—not unlike your average living room, and the kind of environment in which you’d be likely to use a HoloLens if you bought one for yourself. Yet I could often see bits of the room peeking through the images themselves in a way that interrupted, rather than worked with, the illusion.

The most impressive part of the HoloLens demos was the use of sensors to track where I was looking and gesturing, as well as what I was saying. My gaze effectively functioned as a mouse, accurately highlighting whatever I was looking at. An up-and-down motion with my index finger, dubbed an “air tap” by the HoloLens crew, served as the mouse click to do things like paint a fish or place a flag in a certain spot on Mars. (I screwed this up a number of times, mostly because I wasn’t holding my finger up high enough.) Simple voice commands like “copy” and “rotate” worked well, too.

HoloLens is also really good at having virtual objects follow the user around. As I chatted with a Microsoft employee over Skype, the simple diagram he drew about how to connect a light switch hovered in the air near the electrical box on the wall, while his video-chat window remained in my field of view, even as I moved about. This fits neatly with the idea that augmented reality could help employees in the field make repairs to things like air conditioners (see “Augmented Reality Gets to Work”).

A key difference from Magic Leap was that I was able to walk around some 3-D objects, such as an X-Wing fighter sitting in front of me; it looked fairly solid up close, though not intricately detailed. I was also able to modify 3-D objects, which was pretty cool. Using my gaze, gestures, and voice commands, I enlarged, copied, colored, and rotated a fish that was part of the ocean scene, for instance. And I could move objects from one spot to another, like a cartoonish pony I seated on a couch between the two HoloLens team members who were in the room with me for a mixed-reality photo.

Despite the issues yet to be overcome, Microsoft is making progress toward getting the technology behind HoloLens into a device you can actually wear. That’s not the same, though, as making an augmented reality device that is so useful and slickly packaged that millions of consumers will want to buy it. To do that, HoloLens, Magic Leap, and any other competitors must do much more.

It’s impossible to compare HoloLens and Magic Leap at this stage and declare a winner, at least on the basis of what I’ve seen in two very different demonstrations. What I can say is that my experiences illustrate the enormous challenge of creating a truly engaging augmented reality experience in a practical, consumer-ready device.

It’s clearly incredibly hard to make this kind of technology work convincingly on a headset. Once you’ve figured out how to produce good-looking virtual images, you still have to cram all of the necessary computer hardware into a wearable device, make sure the imagery holds up as the wearer walks around, and find a way to power it all. That raises big questions about how good augmented reality can really get, and how useful it will be in the near future. If it doesn’t wow you, both in form and function, why would you buy it?
