The Journey from Lab to Market
Advances in computational photography are just beginning to find their way into mainstream cameras.
Much of the data processing involved in computational photography is still too slow and cumbersome for the average photographer, says Kari Pulli, a research fellow at Nokia Research Center in Palo Alto, CA, who is collaborating with MIT researchers. Right now, for instance, deblurring a photo after it has been shot takes time and effort: the image must first be uploaded to a program running on a computer, because the technology has yet to be built into cameras in a user-friendly fashion.
But camera technology is steadily improving. “The current cell phones have the computational power of your laptop five or seven years ago, and image quality has increased a lot,” says Pulli. And even though cameras can’t yet do things like change the lighting of a scene on the fly, some concepts drawn from computational photography are slowly finding their way into the market.
Adobe’s Photoshop, for example, now has tools that let a user combine a series of photographs of a scene taken at various exposures. The effect is an eerie layering of the contrast information from all the exposures, yielding darker blacks and brighter whites.
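The idea behind combining exposures can be illustrated in a few lines of code. The sketch below is not Photoshop's algorithm; it is a minimal weighted merge, assuming 8-bit images and a linear sensor response, where each pixel is trusted most when it is neither under- nor over-exposed:

```python
import numpy as np

def merge_exposures(exposures, exposure_times):
    """Merge differently exposed shots of the same scene into one
    high-dynamic-range estimate of scene radiance.

    exposures: list of arrays with 8-bit pixel values (0..255)
    exposure_times: matching list of exposure times in seconds
    """
    acc = np.zeros_like(exposures[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(exposures, exposure_times):
        img = img.astype(np.float64)
        # Hat-shaped weight: mid-range pixels count fully; pixels near
        # 0 (crushed blacks) or 255 (blown highlights) count for little.
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0
        acc += w * (img / t)  # dividing by exposure time estimates radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```

Because the dark frame contributes reliable highlight detail and the bright frame contributes reliable shadow detail, the merged result preserves both ends of the tonal range, which is what produces the "darker blacks and brighter whites" described above.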
Meanwhile, camera manufacturers are taking advantage of computational techniques to correct distortions or chromatic aberrations caused by imperfect lenses. “Lens makers try to come up with lens designs that fight problems as best they can,” says Frédo Durand, an associate professor of electrical engineering. But now, he says, those problems can be fixed after a picture is taken. “That means there’s not as much pressure on the manufacturers to get that right,” Durand says. It also means that consumers can expect to pay less for a camera that takes high-quality images.
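The kind of after-the-fact fix Durand describes can be sketched with the standard radial distortion model (the function name, coefficients, and iteration count below are illustrative assumptions, not any manufacturer's actual pipeline). A lens's barrel or pincushion distortion is commonly modeled as scaling each point by a polynomial in its distance from the image center; software inverts that model numerically:

```python
import numpy as np

def undistort_points(points, k1, k2):
    """Correct simple radial lens distortion in software.

    points: (N, 2) array of distorted coordinates, normalized so the
            image center is at the origin
    k1, k2: radial distortion coefficients for the lens (illustrative
            values; real ones come from calibrating a specific lens)

    Uses the radial part of the Brown-Conrady model,
        x_distorted = x_undistorted * (1 + k1*r^2 + k2*r^4),
    inverted by fixed-point iteration.
    """
    pts = np.asarray(points, dtype=np.float64)
    undistorted = pts.copy()
    for _ in range(10):  # iterate toward the undistorted coordinates
        r2 = np.sum(undistorted ** 2, axis=1, keepdims=True)
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2
        undistorted = pts / factor
    return undistorted
```

Chromatic aberration can be handled the same way, by applying slightly different correction coefficients to each color channel, which is why these flaws no longer have to be engineered out of the glass itself.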