App Turns iPhone into a Smarter Camera

Developers use clever tricks to overcome the shortcomings of smart-phone cameras.
January 31, 2011

The cameras in most mobile phones are an afterthought. This has left an opening for programmers to step in and develop software to make the images produced by smart phones much better.

Bring the blur: A new iPhone 4 app called SynthCam simulates a blurring effect for an image’s background that’s usually produced only with larger, more expensive cameras.

One roadblock to this effort has been the cameras themselves—their very design imposes limits on what a photographer can reasonably capture. Now Stanford professor Marc Levoy has created an app that changes what the iPhone’s camera is capable of.

Called SynthCam, Levoy’s software lets the iPhone 4 take pictures that look like they were taken with a larger, more expensive camera.

Most standalone cameras have an adjustable aperture—the opening through which light travels into the camera—that can be used to produce various photographic effects. A large aperture creates a shallow depth of field, so an object of interest remains crisp while the rest of the scene is blurred. The iPhone has a small, fixed aperture, meaning all parts of an image are equally in focus. SynthCam overcomes this limitation by capturing multiple frames of the same scene and combining them into a single image.

Levoy’s app is based on his research in computational photography, which uses software to enable digital cameras to capture new types of photographs, such as those that exploit careful timing of the flash and shutter, and to help improve images taken with less sophisticated cameras. Computational photography can also be applied to smart phones, especially since the devices have lots of processing power, and developer tools provide access to a phone’s hardware.

“This combination of camera and computational platforms opens up so many things that you can do,” says Kari Pulli, Nokia Fellow at the company’s research center in Palo Alto, California. Pulli was not involved in developing the app.

Other apps that use principles from computational photography are already available through Apple’s App Store. Some, like HDR Camera and TrueHDR, create photos with a wider range of luminance and color by capturing different exposures in succession and then combining them into a single image. Apps like 360 Panorama and AutoStitch Panorama let a person take panoramic photos by automatically stitching together multiple images from a moving camera. There are also many apps, such as Hipstamatic, Instagram, and 100 Cameras in 1, that let people apply filters to their pictures, making them look as if they were taken with a different type of camera.

When a person uses SynthCam, she selects a still point of interest, like a statue, and taps its location on the phone’s screen. Then she moves the phone in a small circle around the fixed point for about 10 seconds. The app tracks the point of interest, searching for it in all frames. Realigning the frames so the tracked point coincides, then combining them, produces the composite image—the item of interest in sharp focus and the background blurred.
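The align-and-combine step described above can be sketched roughly as follows. This is an illustrative assumption, not SynthCam’s actual code: a real implementation tracks features and warps each frame, whereas this sketch uses a simple whole-frame shift (`np.roll`) to move each frame’s tracked point onto a common reference location before averaging.

```python
import numpy as np

def synthesize_composite(frames, tracked_points):
    """Align frames on a tracked point, then average them.

    frames: list of H x W grayscale arrays (float).
    tracked_points: (row, col) location of the point of interest
    in each frame, as reported by the tracker.
    """
    ref_r, ref_c = tracked_points[0]
    aligned = []
    for frame, (r, c) in zip(frames, tracked_points):
        # Shift each frame so its tracked point lands on the
        # reference location (a crude stand-in for a real warp).
        shifted = np.roll(frame, (ref_r - r, ref_c - c), axis=(0, 1))
        aligned.append(shifted)
    # Averaging keeps the tracked point sharp; content at other
    # depths, seen from slightly different camera positions, no
    # longer lines up between frames and is blurred away.
    return np.mean(aligned, axis=0)
```

Because the phone moves in a small circle, each frame views the scene from a slightly different position; after alignment on the chosen point, only that point (and objects at its depth) stays registered across frames, which is what produces the synthetic shallow depth of field.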

SynthCam is in its first version, and the reviews on iTunes have been mixed. Some users complain that the user interface still needs work, while others have found it difficult to reproduce the shallow depth-of-field effect.

In addition to simulating a shallow depth of field, SynthCam collects more light, producing better pictures in low-light conditions. It also removes moving objects from the background, since the composite is captured over 10 seconds or more.
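The low-light benefit follows from simple averaging: summing many short exposures of the same scene reduces sensor noise by roughly the square root of the number of frames. The toy simulation below illustrates this with synthetic noisy frames; the noise level and frame count are assumptions for demonstration, not measurements from the app.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat mid-gray "scene" plus per-frame sensor noise.
true_scene = np.full((64, 64), 0.5)
frames = [true_scene + rng.normal(0.0, 0.1, true_scene.shape)
          for _ in range(300)]  # ~10 s of video at 30 fps

composite = np.mean(frames, axis=0)

# Averaging N frames cuts noise by roughly sqrt(N) (~17x here).
single_err = np.abs(frames[0] - true_scene).mean()
composite_err = np.abs(composite - true_scene).mean()
```

The same averaging also explains why moving objects vanish: anything that occupies a given pixel in only a few of the 300 frames contributes too little to the mean to remain visible.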

Levoy says he developed the app, which costs 99 cents from the App Store, to let people see what’s possible on phone cameras. He expects an explosion of apps that use computational photography techniques over the next few years. “I’m not going to get rich over this,” he says. “But if it encourages other people to do the same, then that’s a good thing.”
