Camera-phone owners can use new software to reprogram their devices, capturing images that would previously have been impossible to get.
Stanford University researchers have developed software for the Nokia N900 phone that gives developers and users greater control over the phone's camera components than ever before. The software makes a variety of new apps possible: developers have already built apps that capture both the light and dark parts of a scene, stitch panoramic photos together automatically, and produce extremely sharp photos even in low light.
“My hope is that this will shift the camera industry,” says Stanford’s Marc Levoy, who leads the group that released the software this week at the SIGGRAPH computer graphics conference in Los Angeles.
Digital photography is normally constrained by the software built into the camera by its manufacturer. A field known as "computational photography" expands the possibilities of digital photography by using software to give the user more control over a camera's components. Before the release of the new Stanford software, that kind of control meant tethering the camera to a laptop. "That doesn't make it easy to try out our ideas in realistic settings," says Levoy.
Levoy and colleagues have also developed the Frankencamera, an experimental, portable computational camera designed to handle like a conventional one. Smart phones, with their powerful processors and increasingly capable imaging hardware, offer yet another way to expand the reach of this new approach to photography.
“If other people in the mobile space start to experiment with these ideas, and users find that useful or cool, we will see similar apps in the biggest mobile app stores,” says Levoy. “That will put pressure on the camera industry to open up to allow similar innovation using their platforms.”
The images captured using computational photography can be stunning. For example, a camera can rapidly shoot a series of images while varying its focus, before combining them to make a single image in which objects at any distance appear sharp.
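The focal-stack idea described above can be sketched in a few lines. This toy version is an illustration, not the actual Frankencamera code: it uses a discrete Laplacian as a stand-in sharpness measure and, for each pixel, keeps the value from whichever frame in the stack is locally sharpest.

```python
def laplacian(img, x, y):
    # Discrete Laplacian magnitude: a simple local-sharpness measure.
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def focus_stack(images):
    # Merge a stack of same-size grayscale frames (2-D lists of floats)
    # shot at different focus settings. Each interior pixel is taken from
    # the frame with the largest local Laplacian response there.
    h, w = len(images[0]), len(images[0][0])
    out = [row[:] for row in images[0]]  # borders fall back to frame 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(images, key=lambda im: laplacian(im, x, y))
            out[y][x] = best[y][x]
    return out
```

A real pipeline would also align the frames and smooth the per-pixel selection, but the core per-pixel "pick the sharpest source" step is as above.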
The software released for the N900 consists of one version of the Frankencamera software platform and a handful of apps built for it. One app lets the camera shoot three images of a scene with different exposures to capture both light and dark parts, resulting in a “high dynamic range” (HDR) image. Another guides a user to capture a series of overlapping images across a scene, and varies the exposure in adjacent photos so that a composite image can be stitched together in HDR. A third app, called Lucky Imaging, ensures sharp results in low light by constantly shooting images but only storing those judged by the software to be sharp enough.
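The Lucky Imaging approach can be sketched as a simple filter over a burst of frames. The sharpness metric here (mean squared horizontal gradient) is an illustrative stand-in, not the measure the Stanford app actually uses:

```python
def sharpness(img):
    # Mean squared horizontal gradient of a grayscale frame
    # (2-D list of floats): blurry frames score low, sharp ones high.
    sq = [(row[x + 1] - row[x]) ** 2
          for row in img for x in range(len(row) - 1)]
    return sum(sq) / len(sq)

def lucky_imaging(frames, threshold):
    # Keep only the frames the metric judges sharp enough,
    # discarding the rest of the burst.
    return [f for f in frames if sharpness(f) >= threshold]
```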
The Frankencamera hardware is built from scratch using off-the-shelf components. The developers have made the details of both the hardware and software publicly available for free.
“We’ve already seen a few ideas implemented on it,” says Levoy of the Frankencamera software platform. “The one that impressed me the most was where one of our students connected two flashguns to it.” A shot of playing cards being flung into the air was captured by having one flashgun strobe during a long exposure and the other fire brightly at the end of it. “That impressed me because he could do that in a weekend,” says Levoy.
The model used to create that image, the Frankencamera F2, is being redesigned by the Stanford team to feature an improved sensor that’s similar to the ones in professional-grade digital single-lens reflex (SLR) cameras. The F3 should be finished by the end of the year, says Levoy. A $1 million National Science Foundation grant will ensure that the camera is distributed to U.S. researchers to encourage further research, and it will also be made available for anyone to buy.
Outside of the research community, Levoy says, computational photography is most likely to make a big impact on mobile platforms. Indeed, some mobile app developers have already begun using concepts from the field, says Paul Worthington, an analyst at the San Mateo, CA, firm Future Image, which specializes in digital imaging.
An iPhone app called Pro HDR already makes it possible to take HDR images on the iPhone, although it combines only two images. Another app, You Gotta See This, uses the gyroscope in the new iPhone 4 to create a panorama as a person pans the phone across a scene.
“Camera manufacturers used to ignore phones because they could say the image quality was too poor to worry about,” Worthington says. Now that’s changed. “Today it is smart cameras versus dumb cameras,” he says.
Levoy hopes phone vendors will adopt some of the ideas in the Frankencamera platform to enable even more powerful camera apps. But fully embracing computational photography will require more than just software tweaks to cell phones. “To really enable computational photography on smart phones, there may need to be a virtuous cycle where vendors go to their hardware suppliers and ask for more flexible components,” Levoy says.