
Lytro Is Building a Camera to Capture Live-Action Virtual Reality

Lytro, maker of a shifting-focus camera, claims its upcoming virtual-reality camera will let you move around in live-action videos.
November 5, 2015

Startup Lytro, which has struggled to popularize cameras that let you refocus photos after taking them, says it’s building a new camera for shooting live-action 3-D videos that can be viewed in virtual reality.

Lytro’s Immerge camera for filming live-action virtual reality, shown here in a rendering, will be about the size of a beach ball and sit atop a tripod.

Called Immerge, the camera looks like a black, beach-ball-sized sphere packed with hundreds of high-resolution 4K cameras arrayed in concentric circles. At least, that’s according to renderings the company showed me; it has yet to show off a complete prototype, let alone actual footage shot with the camera.

The camera is slated for release sometime between January and March, and will be aimed at professional filmmakers, with a price tag to match. Lytro CEO Jason Rosenthal says a system including the camera and a custom server and software will cost “in the hundreds of thousands of dollars.” The gear will also be available to rent for somewhere in the “low thousands of dollars per day,” he says.

The company claims that, beyond producing high-quality images, the camera will solve a big problem in the nascent field of virtual reality: the currently impossible task of moving around while watching a live-action film on a VR headset. You won’t be able to move far, though; just within about a cubic meter, a volume comparable to that of the Immerge camera itself.

Since the idea of making content for virtual reality is still so new, live-action videos—footage of a football game, for instance, as opposed to computer-generated content like a video game—tend to be shot with a bunch of cameras mounted in a spherical arrangement in order to capture 360 degrees of images in all directions. Then the footage from the cameras is stitched together into a visual sphere you can watch with a headset to feel like you’re immersed in a new environment. Google has developed such a camera, and started showcasing 3-D virtual reality footage made with it on YouTube today (see “Google Aims to Make VR Hardware Irrelevant Before It Even Gets Going”).

A rendering of Lytro’s upcoming camera for filming live-action virtual reality, which will have five concentric rings filled with high-resolution cameras.

However, live-action virtual-reality videos aren’t that immersive once you realize that, unlike computer-generated 3-D content, they can’t track your changing position (as measured by some headsets) and adjust what you see accordingly. Stand up while watching a virtual-reality concert, and the virtual world will rise with you.

Lytro thinks Immerge videos can let viewers move up, down, left, right, forward, and backward by capturing as many light rays as possible in the world around the camera, an approach that harks back to the light-field technology behind Lytro’s still cameras. Captured at a high enough frame rate, the mass of resulting data can be used to build an accurate 3-D model of the real world, which a virtual-reality headset can then simulate.

“We have so many camera viewpoints that anywhere you look in the sphere we have an actual physical view that matches kind of exactly where you’re looking,” Rosenthal says.

In renderings he showed me, the camera sat atop a tripod like a spherical black sculpture, encircled by five parallel rings that house the cameras.

Rosenthal says films made with Immerge should be viewable on any of the major headsets slated for release starting next year, like the Facebook-owned Oculus Rift. They will also be watchable with devices that don’t have positional tracking, like Google Cardboard.

So far, he says, Lytro has built several prototype components of the camera, and about a month ago it began creating test video captures. The company plans to start shooting video in late November, and several virtual-reality filmmaking companies will try the camera out in December.
