Shoot Now, Focus Later

A startup’s new camera lets you refocus photos and capture 3-D images.
October 19, 2011

Point and shoot: Lytro’s new camera uses software to refocus images later.

Credit: Lytro.

Looking over a haul of digital photos can involve as much regret over fudged shots as reminiscing over golden moments. A camera from Silicon Valley startup Lytro promises to change that by allowing a user to focus a photo after it has been taken. The camera also has a novel “lightfield” sensor that enables photos to be viewed in 3-D. It is available to order today and will start shipping next year.

The camera has a novel design reminiscent of a telescope. It features only two buttons: one to turn the device on or off, and one to take a photo. Only after a photo is taken does the user need to worry about focusing the resulting image.

The photos are dynamic and interactive. When viewing them on the camera, a user taps on the device’s touch screen to choose the object or area that should be in focus. Everything closer or farther away is artfully blurred. A photo can also be set to show everything in sharp focus. The same experience is possible when viewing an image on a PC, with Lytro’s software, or online, with tools for sharing images via Facebook or embedding them in a Web page. (See a gallery of interactive images taken with a Lytro camera.)

Refocusable photos are more than just a convenience for bad photographers; they also allow more playful and creative photography, says Ren Ng, who founded Lytro to commercialize research he began at Stanford University. “Refocusing the image becomes a new way to tell the story,” he says. “It injects a drama into the viewing moment, like when you discover a face that was out of focus in the background.”

It seems ambitious for a startup to take on the camera industry, but Ng says Lytro is more than a camera maker. “It’s not just a consumer electronics company—it’s a Web 2.0 company as well,” he says, referring to the Facebook sharing tools and other online features. Ng says he expects word of mouth to drive interest in Lytro when people encounter and “like” the photos it produces.

The light sensor is what makes Lytro’s product different from any other consumer camera. In a conventional camera, the sensor’s pixels come in three versions that record red, green, and blue light to build a full-color image. On Lytro’s “lightfield” sensor, pixels are more discriminating. As well as being specialized to red, green, or blue, each detects only light coming from a particular angle.

Knowing the angles at which different rays of light travel allows the camera’s software to simulate the photo that would be produced by a virtual camera focused in a particular way. When a person interacts with a Lytro photo, the software tweaks the settings of that virtual camera to produce a new, refocused image.
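To make the idea concrete, here is a minimal Python sketch of the “shift-and-add” refocusing technique described in the computational photography literature. It illustrates the principle only; it is not Lytro’s actual software, and the light-field dimensions and the alpha parameter are invented for the example.

import numpy as np

def refocus(lightfield, alpha):
    # lightfield has shape (U, V, S, T): (u, v) index a ray's angle,
    # (s, t) its position on the sensor. Shifting each angular view in
    # proportion to its angle, then averaging, simulates a virtual camera
    # focused at the depth implied by alpha (alpha = 0 keeps the
    # original focal plane).
    U, V, S, T = lightfield.shape
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            image += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return image / (U * V)

# Toy usage: an 8x8-angle, 64x64-pixel light field of random values.
lf = np.random.rand(8, 8, 64, 64)
photo_near = refocus(lf, alpha=1.0)   # focus on nearer objects
photo_far = refocus(lf, alpha=-1.0)   # focus on farther objects

Changing alpha corresponds to the “tweak” made when a viewer taps a new spot in the photo: no new exposure is needed, only a new weighting of rays already captured.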

Lytro’s sensor is made by bonding a carefully etched sheet of glass on top of a conventional digital-camera sensor. The glass is patterned with tiny lenses, ensuring that specific pixels can receive light only from the specified angles. That gives Lytro’s software the information it needs to refocus photos.

Another consequence of this design is that the camera records depth, which makes it possible to reproduce 3-D images. “We’re not going to be emphasizing it from the start, but these pictures are inherently 3-D,” says Ng, who showed Technology Review images from a Lytro camera on a laptop with a 3-D-capable screen.

Lytro’s approach to camera design and photography emerges from a relatively young area of research known as computational photography. Researchers in that field use various computing and mathematical techniques to achieve novel feats of photography and videography, including taking cell-phone photos in very low light or even taking pictures around corners.

Ramesh Raskar, who heads the computational photography research group at MIT’s Media Lab, says that Lytro is the first company to try to commercialize computational photography. “The camera industry looks at what we do as very new and experimental,” he says. “If Lytro are even partially successful, they will make people realize that computational photography can be practical.” Raskar says that Lytro’s basic design approach is sound and that he believes users of conventional cameras will be interested in the ability to focus after the fact.

However, Raskar adds that Lytro’s sensor design makes its output lower-resolution than an equivalent sensor configured conventionally, because each pixel is restricted to receiving light from certain angles only. Raskar’s own research group has an alternative design that places a sheet perforated with small holes slightly in front of a camera’s sensor. That arrangement doesn’t specialize pixels to particular directions of light, as Lytro’s sensor does, but it does attenuate incoming light in a known way, so that the paths of different light rays can be mathematically worked out from what the sensor records. An image can then be refocused just as with Lytro’s design.
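As a toy illustration of that mask principle, not the MIT group’s actual reconstruction method: if the perforated sheet mixes incoming rays into sensor readings according to a known attenuation matrix, recovering the rays is a linear-algebra problem. All sizes and values below are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n_rays = n_pixels = 16
rays = rng.random(n_rays)             # unknown light-ray intensities
M = rng.random((n_pixels, n_rays))    # known mixing imposed by the mask

measured = M @ rays                   # what the sensor records
recovered = np.linalg.solve(M, measured)  # invert the known mixing
assert np.allclose(recovered, rays)

The real system is far larger and the mask’s pattern is carefully chosen, but the idea sketched here is the same: because the attenuation is known, the light that reached the sensor can be unmixed after the fact.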

Most importantly, the MIT lab’s approach cuts the resolution of photos less, and in proportion to the depth range a person chooses to keep in focus, says Raskar. By contrast, Lytro’s resolution penalty is fixed and likely cuts a sensor’s output by a factor of at least ten in each dimension, he says. Raskar says there is strong interest in commercializing his group’s design, although he is far from ready to launch a product that competes with Lytro’s.
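Raskar’s arithmetic, worked through with hypothetical numbers: if each microlens in a Lytro-style sensor covers a 10-by-10 block of pixels, one per light direction, the refocusable image has one-tenth the raw sensor’s resolution in each dimension, a hundredfold cut in total pixel count.

sensor_w, sensor_h = 3600, 2400   # hypothetical raw sensor, in pixels
angular_samples = 10              # assumed directions per axis
out_w = sensor_w // angular_samples
out_h = sensor_h // angular_samples
print(out_w, out_h)               # 360 x 240: a 100x cut in pixel count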

Ng says today’s camera sensors are so high-resolution that any resolution penalty should not be a problem. He argues that marketing efforts by camera manufacturers have led consumers to believe they need more megapixels than they do. “Most photos that are shared are a tiny fraction of a camera’s ability,” he says. Ng wouldn’t say what the output resolution of Lytro images is, preferring to say that his sensor captures 11 million light rays of data (or 11 “megarays”). The largest images the company shows online are 800 pixels square, while a standard six-by-four-inch print requires a digital photo of 1,800 by 1,200 pixels.
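Those closing numbers work out as follows; the 300-pixels-per-inch figure is the standard print resolution they imply.

print(6 * 300, 4 * 300)   # 1800 1200: pixels for a six-by-four-inch print
print(1800 * 1200 / 1e6)  # 2.16 megapixels needed for that print
print(800 * 800 / 1e6)    # 0.64 megapixels in Lytro's largest web images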
