After Trying the Desktop of the Future, I’m Sticking with the Past

Augmented reality may eventually help you work. But a few days with the Meta 2 headset suggest it has a way to go.
October 31, 2017
Leonard Greco

For the past few days I’ve augmented my reality at work, adding virtual displays to my office so that, while wearing a special headset, I can do things like type e-mails and read news and tweets without taking up real estate on my small laptop. I’ve brought virtual objects to my desk, too, like a little pile of logs burning in a charming, heat-free fire.

I did all this with the Meta 2 headset, a $1,495 device from Meta, a Silicon Valley startup that is one of a handful of companies trying to bring augmented reality to the mass market (its founder, Meron Gribetz, was named one of MIT Technology Review’s 35 Innovators Under 35 in 2016). The Meta 2, which is intended for developers, needs to be connected to a beefy computer in order to work, but it’s about half the price of Microsoft’s HoloLens device (also still aimed just at developers), has a larger field of view, and also produces very good-looking 3-D images in real environments.

It sounds pretty cool to add digital elements to the real world, right? It’s something I’ve been writing about, testing, and looking forward to for years now. Sadly, I wish I could tell you that the desktop of the future is just around the corner, but truthfully it’s at least several blocks away, if not even farther out.

Since I’m curious about how augmented reality could be used for a regular computer-heavy work day, I concentrated on Meta’s Workspace demo app. I imagined using hand gestures to open lots of Web browser windows and placing them all around me, letting YouTube videos play in the background, pulling up a giant Twitter feed, and writing e-mails, all in AR. I figured I’d place all these things around me, interacting with them as needed while continuing to work on the laptop, too.

That’s not exactly how it turned out.

The biggest problem I had was with the app freezing. Many, many times, after a few minutes of use, an object that I was interacting with in Workspace would suddenly stick in front of my face, moving around with me no matter how I turned my head. I tried recalibrating the headset, restarting the app, changing the lighting, and even moving to a new room (in fact, I tried several different work spaces and desks). With assistance from Meta’s tech support, I got a newer version of the SDK; that helped somewhat, but the freezing remained a problem.

When it did work, images looked good. Workspace uses a bookcase-like visualization for its application launcher, with 3-D app icons placed neatly on shelves that you pull out to open. It’s very cool to open a browser window, start watching a video, pause it, and then turn your head to concentrate on something else; after all, you can always turn back to it later on.

Meta has a lot of good ideas about how we should interact with virtual elements. Hand gestures aren’t hard to figure out—to grab, say, a virtual Batman figure and move it, you hold your open palm in front of it, facing Batman, and when you see a closed circle appear on the back of your hand you make a fist, move the object where you want, and then open your fist. The same kind of gesture works for opening applications. And you can enlarge an object by going through these grabbing steps with both hands and then stretching your fists apart.

At the risk of sounding like a wimp, however, I found these kinds of interactions and others—like poking to try to select a link, for instance—quickly tiring (something I’ve also noticed with the “air tap” gesture Microsoft uses with its AR headset, HoloLens). For the most part, it was easier to just use a wireless mouse and keyboard to click things.

I also found that the headset, which weighs a bit over a pound, was too heavy for me to wear for more than 25 minutes or so at a time without getting a headache. If you’re interested in using an AR headset for specific, limited tasks, like viewing a 3-D model to get a more realistic sense of it than you can on a flat screen, this wouldn’t be a problem; if you want to make it an integrated part of your work day, though, the weight may make it impossible.

Another pain point crept up on me: whenever I wanted to look not at an AR object but at a colleague who walked in, or at my (now seemingly old-fashioned) computer monitor, I had to shift my gaze below the headset’s clear visor to get an unobstructed view. This made it hard to switch tasks, and I’m not sure how Meta can fix the problem without redesigning the visor’s shape.

When the Meta 2 didn’t work as I wanted it to, I reminded myself that I was using demo software and a not-yet-consumer-ready headset. Still, I got irritated, because I wanted it to work.

I’m tired of waiting for a seamless AR experience that’s not limited to a smartphone; I want to believe the vision Gribetz shared with me about a year and a half ago, when he said he imagines that within five years (so, by roughly mid-2021) AR headsets will be simplified to a “nearly invisible” strip of glass over the eyes.

At this point, I’m skeptical about the timeline, but I think it’s possible in the not-too-distant future. I hope the software being created to make these experiences amazing will get there, too.
