In the real world, you use your hands constantly to manipulate all kinds of objects and communicate with others. Facebook-owned Oculus hopes a pair of half-moon, button-studded hand controls that will go with its forthcoming consumer headset, the Rift, will let you do likewise in virtual reality.
Unveiled in June, the Oculus Touch hand controls make it possible to do things like grasp virtual blocks, push buttons, and shoot a slingshot while using the Rift. The Rift is slated to be released in the first quarter of next year; the controls are set to arrive in the second quarter. As with the Rift, pricing and exact availability for Touch have yet to be announced.
At an Oculus developer conference in Los Angeles this week, I set out to figure out how well Oculus Touch works with a range of applications—whether it could really work as an intuitive, simple way to move or throw a digital stapler, play with another person in virtual reality, or make art.
If you’re used to playing with video-game controllers, Oculus Touch will look pretty familiar: for each hand there’s a controller with a joystick and some buttons on top, more buttons on its sides, and, less conventionally, a half-moon that surrounds several of your fingers. You can use the different parts of the controllers to grab, shoot, and select things; as with video games, the capabilities vary from one demo to the next.
I’m not a gamer, so I tried to start things off slowly with a cartoony office simulation demo in which I had to complete a few tasks that I’d normally be able to accomplish with my eyes closed, like plugging in and turning on a computer. In this case, though, they were tricky; Oculus Touch gave me virtual hands that tracked exceptionally well within my virtual cubicle, but picking up the computer’s power cord and answering a ringing desk phone were harder than I expected. I also screwed up making a cup of coffee, as I could put the cup under the coffee spout but wasn’t sure how to press a button to make the coffee come out (apparently, Oculus Touch can track you poking with your index finger, though it doesn’t appear to track other fingers). Eventually, I figured it out, but then I spilled my java all over the floor while trying to drink it and had to do the whole thing again.
Playing a demo of Bullet Train, a shooting game that takes place on a virtual subway platform and includes the ability to teleport yourself around, was even harder. I didn’t realize I couldn’t hold a large gun with both hands, which seemed weird, and I kept forgetting which button would pick up a gun and which would fire it (as a result, I tended to press or let go of both, which didn’t always work that well). It probably didn’t help that I was being shot at from multiple directions the whole time.
These demos and another in which I used the controllers to sculpt a virtual clay-like substance with an array of handheld tools showed just how tricky it is to create simple ways to interact with virtual reality. Though the tracking of my hands seemed to work well, I didn’t have the fine control I usually have in the real world, and I often had to stop and think before picking up an object or using it.
Oculus Touch really shone during a demo called Toybox in which I interacted with another person—an actor appearing as a disembodied head and hands—who hung out with me at a table covered in toys. We talked and gesticulated a bit, stacked blocks, knocked them over, tried (and mostly failed) to play ping-pong, and shot paintballs from slingshots toward moving targets. Here the objects seemed big enough and the interaction casual enough that I could relax and play, experimenting with picking things up and throwing them around.
The other demos were exciting, but tinged with stress. This one, though, was pure fun, and I finally felt I was starting to get the hang of it.