
Oculus’s Hand Controls Are Not Always So Handy in Virtual Reality

Hand controls coming next year from Facebook-owned Oculus are great at making you feel more immersed in virtual reality, but getting comfortable using them will take practice.
September 25, 2015

In the real world, you use your hands constantly to manipulate all kinds of objects and communicate with others. Facebook-owned Oculus hopes a pair of half-moon, button-studded hand controls that will go with its forthcoming consumer headset, the Rift, will let you do likewise in virtual reality.

Oculus Touch hand controls will be one way to interact with Oculus’s forthcoming virtual-reality headset.

Unveiled in June, the Oculus Touch hand controls make it possible to do things like grasp virtual blocks, push buttons, and shoot a slingshot while using the Rift. The Rift is slated to be released in the first quarter of next year; the controls are set to arrive in the second quarter. As with the Rift, pricing and exact availability for Touch have yet to be announced.

At an Oculus developer conference in Los Angeles this week, I set out to see how well Oculus Touch handles a range of applications: whether it could really serve as an intuitive, simple way to move or throw a digital stapler, play with another person in virtual reality, or make art.

If you’re used to playing with video-game controllers, Oculus Touch will look pretty familiar: for each hand there’s a controller with a joystick and some buttons on top, more buttons on its sides, and, less conventionally, a half-moon that surrounds several of your fingers. You can use the different parts of the controllers to grab, shoot, and select things; as with video games, the capabilities vary from one demo to the next.

Each controller looks like a black half-moon, with a button-studded center stalk that the user grips.

I’m not a gamer, so I started things off slowly with a cartoony office-simulation demo in which I had to complete a few tasks I’d normally be able to accomplish with my eyes closed, like plugging in and turning on a computer. In virtual reality, though, they were tricky. Oculus Touch gave me virtual hands that tracked exceptionally well within my virtual cubicle, but picking up the computer’s power cord and answering a ringing desk phone were harder than I expected. I also botched making a cup of coffee: I could put the cup under the spout, but it took me a while to work out how to press the button that starts the pour (Oculus Touch can apparently track a poke of your index finger, though it doesn’t appear to track the other fingers). Once I’d figured that out, I spilled my java all over the floor while trying to drink it and had to start the whole thing over.

Playing a demo of Bullet Train, a shooting game set on a virtual subway platform that let me teleport around, was even harder. I didn’t realize I couldn’t hold a large gun with both hands, which seemed weird, and I kept forgetting which button picked up a gun and which fired it (as a result, I tended to press or release both, which didn’t always work well). It probably didn’t help that I was being shot at from multiple directions the whole time.

These demos, along with another in which I used the controllers to sculpt a virtual clay-like substance with an array of handheld tools, showed just how tricky it is to create simple ways to interact with virtual reality. Though the tracking of my hands seemed to work well, I didn’t have the fine control I usually have in the real world, and I often had to stop and think before picking up an object or using it.

Oculus Touch really shone during a demo called Toybox in which I interacted with another person—an actor appearing as a disembodied head and hands—who hung out with me at a table covered in toys. We talked and gesticulated a bit, stacked blocks, knocked them over, tried (and mostly failed) to play ping-pong, and shot paintballs from slingshots toward moving targets. Here the objects seemed big enough and the interaction casual enough that I could relax and play, experimenting with picking things up and throwing them around.

The other demos were exciting, but tinged with stress. This one, though, was pure fun, and I finally felt I was starting to get the hang of it.
