Facebook’s Live-Action Camera Systems Let You Take Steps in Virtual Places

New VR cameras will be great for live events and virtual tourism. Oh, and probably porn, too.
April 19, 2017

Virtual reality has an image problem.

While plenty of cameras out there capture spherical footage of real-life scenes that you can then look at in VR (see “10 Breakthrough Technologies 2017: The 360-Degree Selfie”), most of them don’t also capture depth data. This means that if you’re wearing a high-end virtual-reality headset like the Oculus Rift and looking at a spherical video of, say, the Eiffel Tower (rather than a computer-rendered version of it), the image will move with you when you crouch, jump, or step from side to side. This is annoying at best, and nausea-inducing at worst.

Facebook, which has been one of the biggest proponents of virtual reality since purchasing headset maker Oculus in 2014, is aiming to fix this with two new spherical camera systems unveiled on Wednesday. Both shoot live-action footage that lets you move around in about a meter and a half of space in virtual reality. The company plans to get them into production later this year.

Called X24 and X6 (the numbers refer to the quantity of individual cameras in each model), the camera systems could make virtual experiences like watching concerts, visiting famous landmarks, or exploring museums much more engaging, whether you do so on your own or with another person. You’ll still need a headset that can track your position in space and the rotation of your head, though, to really take advantage of the footage.

“What we are trying to do with VR in general is bring people up the immersion curve,” said Facebook chief technology officer Mike Schroepfer. “The end vision is [to get you] as close as you can to feeling like you’re actually there.”

A handful of companies have already built high-end cameras for capturing live-action virtual reality, such as Nokia’s Ozo camera, Google’s Jump, and Lytro’s Immerge (see “Lytro Is Building a Camera to Capture Live-Action Virtual Reality”). But only a couple (Lytro’s being one of them) purport to film footage and record depth information as Facebook says its X24 and X6 do, and no company has yet popularized such a device.

While the X24 and X6 are meant for professionals, Schroepfer said the technology will eventually lead to consumer products as well.

The new camera systems, which were introduced at Facebook’s annual F8 developer conference, come a year after the social network rolled out its UFO-like Surround 360, which had 17 cameras and was meant to capture crisp, spherical 3-D images. Facebook didn’t sell the Surround 360, but it made the technology available via GitHub to anyone who wanted to make one with off-the-shelf parts.

The Surround 360 didn’t include the kind of 3-D information that the X24 and X6 will record, though. With their lens positions and accompanying software, the new systems can reconstruct what the world should look like as you move around, Schroepfer said.

In demos of raw footage shot with the X24 that I viewed through an Oculus Rift headset last week, I saw a spherical scene of a lush rainforest exhibit from the vantage point of a catwalk near the top of the exhibit, with butterflies flitting by, as well as a tunnel inside an aquarium with fish swimming around. Because the footage included depth information and the headset can track movement with six degrees of freedom, I could move around in the scenes, checking out the trees in the rainforest, the tourists on benches in the tunnel, and fish wandering above and around us.

Much of the footage looked crisp, and it was very cool to be able to move around freely in a lifelike scene. Yet the technology still needs work, or at least some editing to clean up the footage: the foliage in the rainforest looked streaky, for example, and I noticed some shimmering elsewhere, too.

Brian Cabral, Facebook’s director of engineering and leader of the team that made the devices, said that the physical arrangement of the cameras makes it possible to capture each pixel in a given scene from many different angles, and then math is used to estimate its depth.

“Once you know where it is in the scene, you can move around,” he said, adding that the shimmering will be cleaned up as the systems go into production.
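Facebook hasn't published the details of its depth-estimation pipeline, but the basic idea Cabral describes — seeing the same point from multiple camera positions and using geometry to recover its distance — can be sketched with the classic stereo-triangulation formula. In this illustrative example (all numbers and names are hypothetical, not Facebook's), depth follows from how far a point's image shifts between two cameras a known distance apart:

```python
# Illustrative sketch, NOT Facebook's actual pipeline: recovering a
# pixel's depth from two viewpoints via stereo triangulation.
# Given a camera focal length f (in pixels) and a baseline b (meters)
# between two cameras, a point seen at horizontal pixel positions
# x_left and x_right has depth z = f * b / (x_left - x_right).

def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Triangulate a point's depth (in meters) from its disparity."""
    disparity = x_left - x_right  # pixels; larger disparity = closer point
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# A point imaged 20 px apart by cameras 0.1 m apart, with f = 1000 px,
# sits at 1000 * 0.1 / 20 = 0.5 meters... no: 100 / 20 = 5 meters away.
depth = depth_from_disparity(520.0, 500.0, focal_px=1000.0, baseline_m=0.1)
print(depth)  # 5.0
```

With 24 lenses rather than two, a system like the X24 can triangulate each pixel from many camera pairs at once, which is what lets the software re-render the scene from viewpoints the cameras never occupied.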

Cabral said Facebook will license the technology to several as-yet-unnamed partners so they can make cameras.

“The idea is to have multiple models of growing the ecosystem,” he said.
