One of the annoyances of cell-phone and compact cameras is that they lack the SLR’s focusing control. With an SLR camera, the lens can be moved to change what’s in focus. By adjusting the aperture, a photographer can get a shot in which a foreground subject is in clear focus, while the background is purposely blurred to deëmphasize distracting elements. SLRs are expensive, however, and they’re difficult for amateurs to use. Computational-photography researchers are trying to develop a simple, fixed-lens cell-phone camera that makes it easy for anyone to achieve such effects. They also hope to give photographers the ability to choose which objects they want in focus after a picture is taken.
Cameras are designed to focus on objects within a given range. When a camera is focused on a particular object, the lens concentrates the light reflecting off that object onto the sensor array. The light reflecting off objects that are not in focus still reaches the sensors, but it’s unconcentrated, resulting in a blurred image. “If a camera is not perfectly focused,” Durand says, “then the lens will project points from the scene onto the sensor as disks rather than points.”
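The disk effect Durand describes can be sketched numerically: defocus acts like convolving the sharp scene with a disk-shaped point-spread function, so a single point of light spreads into a uniform disk. This is a minimal illustrative model, not the researchers' code; the kernel size and image dimensions here are arbitrary.

```python
import numpy as np

def disk_kernel(radius, size):
    """Normalized disk point-spread function: a point of light
    spreads into a uniform disk of the given radius."""
    y, x = np.mgrid[:size, :size] - size // 2
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def defocus(image, radius):
    """Simulate defocus as circular convolution with a disk PSF (FFT-based)."""
    k = disk_kernel(radius, image.shape[0])
    K = np.fft.fft2(np.fft.ifftshift(k))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

# A single bright point in a perfectly focused scene...
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
# ...lands on the sensor as a disk whose radius grows with defocus.
blurred = defocus(scene, radius=5)
```

The disk radius grows with how far the object sits from the plane of focus, which is exactly why a blurred image, on its own, confounds depth with scene content.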
If the distances between the camera and objects in an image are known, then an algorithm can be applied to the image data to sharpen the out-of-focus parts of a picture, converting the blurred disks of light into focused points. Conventional cameras, however, can’t determine this depth information on their own.
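A rough sketch of that sharpening step, assuming the blur at a given depth is a known disk and using a regularized (Wiener-style) inverse filter in the frequency domain. This is an illustrative stand-in, not the researchers' actual algorithm; the regularization constant `eps` is a made-up parameter that keeps noise from being amplified at frequencies the blur nearly erased.

```python
import numpy as np

def disk_kernel(radius, size):
    """Normalized disk point-spread function for a given defocus radius."""
    y, x = np.mgrid[:size, :size] - size // 2
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def deblur(blurred, radius, eps=1e-3):
    """Undo a known disk blur with a regularized inverse filter."""
    k = disk_kernel(radius, blurred.shape[0])
    K = np.fft.fft2(np.fft.ifftshift(k))
    W = np.conj(K) / (np.abs(K)**2 + eps)   # Wiener-style inverse
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Blur a test scene with a known radius, then invert that blur.
scene = np.zeros((64, 64))
scene[20:44, 20:44] = 1.0
K = np.fft.fft2(np.fft.ifftshift(disk_kernel(4, 64)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * K))
restored = deblur(blurred, radius=4)
```

The catch, as the paragraph notes, is the `radius=4` argument: the inversion only works if the camera somehow knows the blur (and hence the depth) at each point, which a conventional camera cannot measure.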
To extract depth information from a photograph, Durand, Freeman, and other colleagues modified an existing lens with a mask inserted into the aperture. Essentially, the mask is a piece of cardboard that blocks part of the light to subtly change the look of the out-of-focus parts of the picture. Durand explains that the undifferentiated blur caused by an ordinary out-of-focus lens doesn’t provide enough clues to reconstruct a clear image. But their mask changes this uniform blur into what he calls a “weird but structured mess.” Streaks and other unusual features of the blurry image help the researchers recover depth information: thanks to the way the mask blocks light in the camera aperture, an object 10 feet from the camera will be blurred differently from an object five inches away. Because they know the shape of the mask, the researchers have been able to mathematically define the blur associated with each depth, enabling them to devise an algorithm that can undo it (see photographs of conventional and coded apertures and “Extracting Depth Information,” p. M15).
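The matching idea can be sketched in one dimension: because the coded mask stamps a distinctive, depth-dependent pattern onto the blur, an algorithm can compare an observed blur pattern against the pattern predicted for each candidate depth and pick the best match. The mask pattern, the depth model, and the matcher below are all hypothetical simplifications for illustration, not the researchers' mask or method.

```python
import numpy as np

# A hypothetical 1-D coded mask: open (1) and blocked (0) segments of the aperture.
MASK = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1], float)

def coded_psf(depth, size=64):
    """Blur pattern for a point at a given depth: the mask pattern,
    stretched wider the farther the point is from the focal plane."""
    width = 2 * depth + 1                        # blur width grows with defocus
    idx = np.linspace(0, len(MASK) - 1, width)
    k = np.interp(idx, np.arange(len(MASK)), MASK)
    psf = np.zeros(size)
    start = (size - width) // 2
    psf[start:start + width] = k
    return psf / psf.sum()

def estimate_depth(observed_psf, candidates=range(1, 12)):
    """Pick the depth whose predicted coded blur best matches the observation."""
    errors = {d: np.sum((coded_psf(d) - observed_psf) ** 2) for d in candidates}
    return min(errors, key=errors.get)

# A point at depth 7 produces a depth-7 blur pattern, which the
# matcher identifies by testing every candidate depth.
observed = coded_psf(7)
```

With a plain circular aperture, the candidate patterns are just disks of different sizes and are hard to tell apart; the structure the mask imposes is what makes the candidates distinguishable.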
Another strategy for improving focus, especially in a simple cell-phone camera, is similar to Durand’s technique for addressing motion blur. An SLR’s large aperture size gives it a shallow depth of field (the range of distances from the camera where objects appear sharply in focus), which makes it possible to focus on a specific subject and allow the background, the foreground, or even both to recede, explains Raskar. But pictures taken with ordinary cell-phone cameras, which have very small apertures, appear “flat” because everything looks as though it’s the same distance from the camera. At the first IEEE International Conference on Computational Photography, held in San Francisco in April, postdoc Ankit Mohan presented a paper he wrote with Raskar and others describing a technique for simulating a lens with a larger aperture size. They demonstrated how a fixed-lens camera can be designed so that both its lens and its sensor move slightly during exposure. By varying the velocity and range of the movement, they are able to, in effect, change the focal length and aperture size to control which part of the photo is in focus; the rest is purposely blurred (see “Focus Control for Fixed-Lens Cameras,” p. M15). Such technology could give a cheap cell-phone camera the focusing control of an SLR.
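The effect of moving the optics during exposure can be sketched with a toy model: as the sensor translates, objects at different depths shift across it by different amounts (parallax), so integrating over the exposure keeps one depth aligned and sharp while smearing the others. The parallax values and the layered-scene model below are illustrative assumptions, not the optics of Mohan and Raskar's actual design.

```python
import numpy as np

def expose_with_motion(layers, shifts, track=0.0):
    """Integrate a 1-D scene while the sensor translates during the exposure.

    layers: list of (signal, parallax) pairs; a layer's image moves by
            parallax * shift pixels as the sensor moves (nearer = larger).
    track:  the parallax the motion is matched to -- that depth stays sharp.
    """
    size = len(layers[0][0])
    frame = np.zeros(size)
    for s in shifts:
        for signal, parallax in layers:
            frame += np.roll(signal, int(round((parallax - track) * s)))
    return frame / len(shifts)

near = np.zeros(128); near[60:68] = 1.0        # the subject
far = np.zeros(128); far[20:28] = 1.0          # the background
shifts = np.arange(-6, 7)                       # sensor positions during exposure

# Match the motion to the subject's parallax: the subject stays sharp
# while the background smears into a blur.
img = expose_with_motion([(near, 1.0), (far, 0.2)], shifts, track=1.0)
```

Varying the range of `shifts` plays the role of varying the movement's velocity and range in the real system: a longer sweep smears the untracked depths more, mimicking a larger aperture and a shallower depth of field.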