The Step Needed to Make Virtual Reality More Real

If virtual reality is going to be truly immersive, holding a game controller could be distracting. Companies will instead try to let you control the action with your eyes, head, or fingers.
February 1, 2016

It’s one thing to play the arcade game Whac-A-Mole by swinging around an oversized mallet; it’s far easier to whack those moles virtually, controlling the mallet with just your gaze.

That’s what I was doing on a recent rainy morning in the Milpitas, California, office of a startup called Eyefluence while wearing an Oculus virtual-reality headset. Eyefluence is building eye-tracking technology that it believes will be good enough to let you do anything in virtual reality, from hitting subterranean mammals to navigating different menus and apps, all by simply looking around.

Eyefluence hopes it has an answer to a big question. The graphics and sound in virtual reality have gotten great, as people will see in March and April when two highly anticipated headsets—Oculus’s Rift and HTC’s Vive—are released to consumers. But for all the progress that virtual reality is making, we still haven’t figured out how best to control and play with the things we will see on those screens.

“People are now really starting to see that interaction in VR is far from a solved problem,” says Evan Suma, a research assistant professor at the University of Southern California’s Institute for Creative Technologies. “This is something the VR research community has been looking at for a number of years, going back decades.”

Oculus and HTC have some solutions. Oculus plans to ship its headset with a wireless Xbox One controller, and to release a more immersive set of button-bedecked, hand-held trackable controllers called Oculus Touch later in the year. HTC’s headset will come with a pair of wand-shaped controllers. But such controllers are not always ideal because they don’t match the ways you use your body when, say, exploring the dark depths of a cave (no buttons to press there, in my experience). They could make virtual-reality exploration feel less immersive. They could also tire you out, especially if you’re waving your arms wildly while holding them.

That’s why companies like Eyefluence are working on other ways to interact with virtual reality. “You’re always looking at something. And you have the display right in front of your eyes. Why wouldn’t you want your eyes to be able to control that?” says David Stiehr, an Eyefluence cofounder.

Eyefluence grew out of technology originally developed by a company called Eye-Com, which CEO Jim Marggraff purchased in 2013. Eyefluence has since developed a flexible circuit that holds a camera, illumination sources, and other tiny bits of hardware, designed to fit around a small metal chassis that the company hopes to see embedded in future virtual-reality headsets.

After a quick tutorial with a headset retrofitted with Eyefluence’s technology, I had no trouble selecting different demo apps from the home screen, playing the company’s version of Whac-A-Mole with my eyes, and panning around a virtual space with 40 different displays that I could zoom in and out of to watch videos and such. It felt natural; I was, after all, just doing what I normally do with my eyes, and it worked surprisingly well. Marggraff won’t say when he expects the technology to be added to headsets.

A startup called Gest is trying to take advantage of another body part: your fingers. The San Francisco company is building a motion-sensor-filled device that wraps around your palm, with rings that slide around four of your fingers (see “Get to Grips with Virtual Objects Using This Stripped-Down Glove”). The company plans to roll out its gadget in November and will give developers tools to make Gest work with virtual reality and other applications.

Gest cofounder and CEO Mike Pfister sees it being useful not just for playing games in virtual reality but also for getting work done. A designer might want to use Gest to work on a computer-generated model, for instance, or you might want to type on a virtual keyboard simply by moving your fingers around.

While hand- and eye-tracking devices from Gest and Eyefluence could be a long way off, virtual reality can already be manipulated without wands or video-game controllers. Basic head-tracking technology, which uses sensors to monitor the position of your head and translates that into actions, will be built into headsets like the Rift and Vive. This kind of interaction is even possible with the sensor-laden smartphones that you can use with Samsung’s Gear VR mobile headset and Google Cardboard.

This technology will be used by a number of companies making virtual-reality games and experiences, including a San Francisco startup called Metta. It’s employing head-tracking as the main way you traverse its service for sharing short, homemade virtual-reality videos on a Gear VR or Google Cardboard. On the Gear device, for example, videos are arranged by point of origin on a giant 360-degree map that you navigate by moving your head slightly; to select a video or collection of videos, you simply keep your head steady on a specific spot. For now, the only thing you need to press is the “back” button on the Gear VR, though the company is considering ways to eliminate that step, too.
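
For readers curious about the mechanics, here is a minimal sketch in Python of how such dwell-to-select logic might work: point your head at a target and hold it there to select. This is an illustration under stated assumptions only; the names, thresholds, and the yaw/pitch sensor interface are invented for the example, not taken from Metta or Oculus.

import math
import time

DWELL_SECONDS = 1.5   # hypothetical: how long the head must hold on one target
CONE_DEGREES = 3.0    # hypothetical: how steady "steady" has to be

def head_forward(yaw, pitch):
    """Convert head yaw/pitch (radians) from the motion sensors into a unit forward ray."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def angle_deg(a, b):
    """Angle in degrees between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

class DwellSelector:
    """Fires a selection once the gaze ray stays near one target long enough."""

    def __init__(self, targets):
        self.targets = targets        # {name: unit direction vector toward the tile}
        self.candidate = None
        self.since = None

    def update(self, gaze):
        # Which target, if any, is the head currently pointing at?
        name = min(self.targets, key=lambda n: angle_deg(gaze, self.targets[n]))
        if angle_deg(gaze, self.targets[name]) > CONE_DEGREES:
            name = None               # looking at empty space

        if name != self.candidate:    # gaze moved: restart the dwell timer
            self.candidate, self.since = name, time.monotonic()
            return None
        if name and time.monotonic() - self.since >= DWELL_SECONDS:
            self.since = time.monotonic()   # reset to avoid immediate re-firing
            return name               # selection event, e.g. play this video
        return None

Each frame, the headset’s sensors would supply yaw and pitch; the caller converts them with head_forward and passes the ray to update, which returns a target name once per completed dwell. A real implementation would also smooth the sensor data and show visual feedback, such as a filling ring, while the timer runs.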

Metta cofounder Jacob Trefethen says the idea is to cut down on interruptions that remind the viewer that the virtual world is, in fact, not real. “We’re very much trying,” he says, “to kill all of those moments where you have some disbelief.”
