
How to Avoid Real Objects While in a Virtual World

Occipital wants to interrupt virtual-reality scenes with what’s coming at you in real life to prevent surprises and spills.
June 12, 2015

How will you walk around virtual worlds without smacking into actual walls?

Startup Occipital interrupts virtual reality with real-world imagery when things get within a few feet.

While many companies are working to bring virtual-reality headsets to store shelves, a 3-D sensing startup called Occipital is trying to figure out how to keep you from getting hurt, or at least surprised, by the real world while you're lost in a virtual one. Right now you can't move around much while wearing a virtual-reality headset; to demo an Oculus headset this spring, for instance, you stood in a small room, barely moving your feet.

Occipital thinks the answer lies in bringing some reality back into virtual reality, and it’s working on software that lets you see nearby objects—a person, perhaps, or a door—layered atop virtual worlds so you can avoid them, if necessary.

“You want to be able to immerse yourself, but you also want to be able to know if something is approaching you,” said Occipital cofounder and CEO Jeff Powers. “Otherwise it’s very disconcerting, because you may run into a wall.”

The company already sells a $379 sensor similar to Microsoft’s Kinect that straps to an iPad and scans rooms and objects in 3-D. It can be used with the company’s software developer kit to build and interact with virtual- and augmented-reality apps, and includes the ability to track your position (whether you’re walking forward, backward, or crouching) without mapping the world around you in advance.

Occipital’s sensor works by projecting a laser pattern onto your immediate environment. Its infrared camera picks up that pattern and uses it to measure the distance to objects in the scene so that software can rebuild those objects in three dimensions. To add bits of the real world to a virtual one, Occipital pairs video captured by the regular camera on the iPad (or, in a demonstration I saw using a different kind of sensor mount, an iPhone) with the sensor’s depth measurements; when an object in the real world, say a trash can, comes within a preset distance, the software essentially cuts out the image of the can and inserts it atop the 3-D virtual scene.
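In essence, this is depth-thresholded compositing: pixels whose measured distance falls below a cutoff are copied from the camera feed onto the rendered frame. A minimal sketch of the idea in NumPy follows; it is not Occipital's actual code, and the function name, the assumption that the camera and depth images are already aligned, and the demo values are all illustrative.

```python
import numpy as np

def composite_passthrough(virtual_rgb, camera_rgb, depth_m, threshold_m=1.0):
    """Overlay real-world pixels closer than threshold_m onto the virtual frame.

    virtual_rgb, camera_rgb: (H, W, 3) uint8 arrays, assumed pixel-aligned.
    depth_m: (H, W) float array of per-pixel distances in meters
             (0 or NaN where the sensor returned no reading).
    """
    valid = np.isfinite(depth_m) & (depth_m > 0)   # ignore missing readings
    near = valid & (depth_m < threshold_m)         # pixels to "cut out"
    out = virtual_rgb.copy()
    out[near] = camera_rgb[near]                   # paste real imagery on top
    return out

# Tiny demo: a 2x2 frame where only the top-left pixel is within range.
virtual = np.zeros((2, 2, 3), dtype=np.uint8)      # all-black virtual scene
camera = np.full((2, 2, 3), 255, dtype=np.uint8)   # all-white camera feed
depth = np.array([[0.5, 3.0], [np.nan, 2.0]])      # meters
frame = composite_passthrough(virtual, camera, depth, threshold_m=1.0)
```

Raising or lowering `threshold_m` corresponds to the adjustable cut-in distance the article describes.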

Occipital’s 3-D sensor, shown mounted to an iPhone attached to a headset.

Occipital plans to make the reality-adding feature available to developers within the next few weeks, Powers said, and will also let them know how to obtain a 3-D printed mount for the iPhone.

I got a look at what Occipital is trying to do during the Augmented World Expo in Santa Clara, California, this week. Powers placed a Homido headset onto my head (basically a generic version of Samsung's Gear VR, which requires a smartphone to display virtual-reality apps), into which he had slid an iPhone 6 connected to an Occipital 3-D sensor and fitted with a wide-angle lens atop its normal rear camera. The view within Occipital's virtual world was odd: I was standing in a vast, mostly empty gray room with yellowy dust particles slowly falling all around me. After I walked forward a few feet, though, the gray expanse was interrupted by a pixelated-looking black railing that appeared in front of me. Unlike the room, the railing was real, marking the edge of the floor we were standing on in the Santa Clara Convention Center.

A moment later, Powers walked in front of me, and when he got within three or four feet, I could see him from head to torso, roughly, through the goggles: pixelated and rendered in black and white, but looking more like an actual human than an animated one. It was strange, but compelling; I completed much of the rest of our interview with the headset over my eyes.

Though Powers, the railing, and everything else from the real world that invaded my virtual space appeared in black and white, Powers said color is possible, and the distance at which physical objects start to appear is adjustable. He illustrated the second point by pulling out a small Bluetooth remote control that, at the push of a button, revealed real objects farther away.

Yet while pixelated-looking people may be fine if you're just trying to avoid smacking into things while playing virtual-reality games, Occipital will have to vastly improve the resolution of the reality it pipes in if the feature is to support richer interactions between the two worlds. And users may want to ignore an object they know is in front of them, so that a railing doesn't cut through the entire game.

Powers understands there’s still a long way to go, though. “This is a starting point,” he said.  
