Advances in computational photography are just beginning to find their way into mainstream cameras
Much of the data processing involved in computational photography is still too slow and cumbersome for the average photographer, says Kari Pulli, a research fellow at Nokia Research Center in Palo Alto, CA, who is collaborating with MIT researchers. Right now, for instance, de-blurring a photo after it has been shot takes time and effort: the image must first be uploaded to a program running on a computer, because the technology has yet to be incorporated into cameras in a user-friendly fashion.
But camera technology is steadily improving. “The current cell phones have the computational power of your laptop five or seven years ago, and image quality has increased a lot,” says Pulli. And even though cameras can’t yet do things like change the lighting of a scene on the fly, some concepts drawn from computational photography are slowly finding their way into the market.
Adobe’s Photoshop, for example, now has tools that let a user combine a series of photographs taken of the same scene at various exposures. The result is a single image that layers the contrast information from all the exposures, yielding darker blacks, brighter whites, and detail that no single shot could capture on its own.
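The idea behind merging exposures can be sketched in a few lines. The following is a simplified illustration, not Photoshop's actual algorithm: each frame is assumed to respond linearly to light, so dividing pixel values by exposure time gives an estimate of scene brightness, and mid-tone pixels (neither blown out nor crushed to black) are weighted most heavily when the frames are averaged.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed frames of the same scene into one
    high-dynamic-range brightness estimate (simplified linear model)."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    acc = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Weight mid-tone pixels most: values near 0 or 255 are likely
        # under- or over-exposed and carry little reliable information.
        w = 1.0 - np.abs(im / 255.0 - 0.5) * 2.0
        acc += w * (im / t)   # pixel value / exposure time ~ scene brightness
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```

A real camera pipeline would also undo the sensor's nonlinear response curve and align the frames before merging, but the weighting-and-averaging core is the same.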
Meanwhile, camera manufacturers are taking advantage of computational techniques to correct distortions or chromatic aberrations caused by imperfect lenses. “Lens makers try to come up with lens designs that fight problems as best they can,” says Frédo Durand, an associate professor of electrical engineering. But now, he says, those problems can be fixed after a picture is taken. “That means there’s not as much pressure on the manufacturers to get that right,” Durand says. It also means that consumers can expect to pay less for a camera that takes high-quality images.
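The kind of lens correction Durand describes can be illustrated with a toy version of a standard radial-distortion model (a hypothetical sketch, not any manufacturer's actual firmware): for each pixel of the corrected output, the code computes where the lens would have bent that ray in the captured frame and samples the pixel from there.

```python
import numpy as np

def undistort(image, k1, k2=0.0):
    """Correct radial (barrel or pincushion) distortion in software.
    k1 and k2 are distortion coefficients; zero means a perfect lens."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    # Coordinates normalized to [-1, 1] about the image center.
    xn = (xs - cx) / cx
    yn = (ys - cy) / cy
    r2 = xn * xn + yn * yn
    # Radial scaling: points far from the center are displaced most.
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    # Sample the captured (distorted) frame at the displaced position.
    sx = np.clip(np.round(xn * factor * cx + cx), 0, w - 1).astype(int)
    sy = np.clip(np.round(yn * factor * cy + cy), 0, h - 1).astype(int)
    return image[sy, sx]
```

Because the correction is just arithmetic on pixel coordinates, it can run after capture on whatever processor is available, which is exactly why lens makers face less pressure to fix these flaws in glass.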