
Why You Want a Light Field Camera

Lytro’s technology is never out of focus and allows 3D from a single lens
June 22, 2011

News of a camera that promises to put an end to unfocused photos forever broke overnight, as Silicon Valley startup Lytro announced a consumer camera based on research at Stanford University.

The company is building a “plenoptic” or “light field” camera, which features an array of small lenses between the conventional lens and the sensor. That lets the camera collect more light, from a wider range of directions, than a conventional camera can. Researchers have been tinkering with the idea for years and have shown that the rich information captured this way enables features today’s cameras lack. Read on for the three main benefits.
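The data such a camera records is often described as a four-dimensional “light field”: two coordinates for where a ray lands on the sensor and two for the direction it arrived from. Lytro has not published its file format, but a toy Python sketch (all array sizes below are invented for illustration) shows how that structure relates to an ordinary photo:

    import numpy as np

    # Hypothetical 4D light field: (u, v) index the direction a ray came
    # from (one sample per position under each micro-lens), (s, t) index
    # spatial position. The resolutions here are made up for the example.
    U, V = 9, 9          # angular samples per micro-lens
    S, T = 300, 300      # spatial resolution
    lightfield = np.zeros((U, V, S, T), dtype=np.float32)

    # A conventional photo corresponds to summing over all directions...
    conventional = lightfield.sum(axis=(0, 1))

    # ...while a single "sub-aperture" view keeps only one direction,
    # as if looking through one small part of the main lens.
    center_view = lightfield[U // 2, V // 2]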

Shoot first, focus later

Lytro’s camera will record the light information it collects in a special file format that lets a photographer choose, afterward on their computer, which depth to bring into focus. Click on different parts of the photo above to try it.
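Lytro has not said how its software performs this trick, but a standard approach in the research literature is “shift and add”: each directional view is shifted in proportion to its angular offset and the results are averaged, and the amount of shift selects which depth ends up sharp. A minimal sketch, assuming the hypothetical 4D array layout from the example above:

    import numpy as np

    def refocus(lightfield, alpha):
        # Shift-and-add refocusing sketch (illustrative, not Lytro's code).
        # alpha selects the focal plane: 0 keeps the original one, while
        # positive or negative values move it nearer or farther.
        U, V, S, T = lightfield.shape
        out = np.zeros((S, T), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each directional view in proportion to its offset
                # from the center of the lens, then accumulate.
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

Rendering the same file with several values of alpha produces the click-to-refocus effect: nothing new is captured, the stored rays are simply re-summed a different way.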

Cleaner images

Lytro’s founder, Ren Ng, modified a professional camera into a plenoptic one while at Stanford. A technical report on the research details that prototype and explains that, because more light is captured, images can be clearer under the same conditions. The left and middle images below were taken with an unmodified version of the camera, with a small aperture (little light let in; large depth of field) and a large aperture (lots of light let in; small depth of field) respectively. The image on the right was taken with the plenoptic version and a large aperture. Lytro claims its camera will be able to take photos in very low light, such as in nightclubs.
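The benefit of a wide aperture in dim light is simple arithmetic: the light collected scales with the aperture’s area, and the grainy shot noise in a photo shrinks as the square root of the light collected. The f-numbers below are illustrative, not Lytro’s specifications:

    # Light gathered scales with aperture area, roughly 1 / f_number**2.
    # Going from f/8 to f/2 admits (8/2)**2 = 16x more light, so shot
    # noise improves by sqrt(16) = 4x. A plenoptic camera can shoot wide
    # open like this and still pick the focal plane afterward in software.
    f_narrow, f_wide = 8.0, 2.0
    light_ratio = (f_narrow / f_wide) ** 2   # 16.0
    snr_gain = light_ratio ** 0.5            # 4.0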

3D from a single lens

Each of the micro-lenses in a plenoptic camera’s array views the scene from a slightly different angle, and those views can be compared to deduce the distance to the objects in front of the camera.

Last year Tom Bishop and Paolo Favaro at Heriot-Watt University in Edinburgh, UK, published a method to extract the depth of every pixel in an image from a plenoptic camera. The top image below shows a photo taken by a prototype camera they built, and the lower image shows the depth map extracted from it; areas closer to the camera are shaded darker. A depth map can be combined with the color information captured by the camera to represent the image in full.
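Bishop and Favaro’s method is more sophisticated than this, but the core idea, finding how far a scene point shifts between neighboring views (its disparity) and converting that shift into a distance, can be sketched with simple block matching. The baseline and focal-length values below are placeholders, not taken from their camera:

    import numpy as np

    def depth_from_two_views(left, right, baseline, focal_px,
                             max_disp=8, patch=5):
        # Toy depth map from two sub-aperture views via block matching.
        # baseline and focal_px are made-up calibration numbers; a real
        # plenoptic pipeline estimates depth far more carefully and uses
        # every view, not just two.
        h, w = left.shape
        half = patch // 2
        depth = np.zeros((h, w))
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                errs = [np.sum((ref - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]) ** 2)
                        for d in range(max_disp + 1)]
                d = int(np.argmin(errs))
                # Standard stereo relation: depth = baseline * focal / disparity.
                depth[y, x] = baseline * focal_px / d if d else 0.0
        return depth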

For a casual photographer to make use of such a feature, they would need a way to display and view 3D images, such as the Nintendo 3DS portable gaming console.
