On a summer day in 1826, at his country estate about 340 kilometers southeast of Paris, Joseph Nicéphore Niépce set up his camera obscura and projected the image of his courtyard onto a pewter plate coated with a light-sensitive material. For eight hours, the lens focused light from the sun, chemically fixing the areas where the light struck the plate to capture the view of a pigeon house, a pear tree, a barn roof, and an extended wing of his house. For this achievement, Niépce is credited with creating the world’s first photograph.

Pewter and other solid plates gave way to flexible rolls of film in 1889; color film followed in the mid-1930s. In the mid-1990s, the first mass-market color digital cameras were introduced, capturing images with light sensors on a chip. These advances have led to cheaper, smaller, more portable cameras that can produce vivid images. But at the most fundamental level, cameras haven’t been altered significantly, says Ramesh Raskar, associate professor and leader of the Camera Culture group at the MIT Media Lab. “The physical device itself has barely changed over the last 100 years,” he says. “You have a similar lens, a similar box that mimics the human eye. Other than the fact that it’s cheaper, faster, and more convenient, photography hasn’t changed that much.”

Raskar, however, is hoping that he and others at MIT and around the world can spark a revolution in photography. Researchers in a field called computational photography are rethinking digital cameras to take better advantage of the computers built into them. They envision a day when anyone can use a camera with a small, cheap lens to take the type of stunning pictures that today are achievable only by professional photographers using high-end equipment and software such as Adobe Photoshop. In fact, they think such cameras could exceed today’s most sophisticated technologies, overcoming what have seemed like fundamental limits.

Computational photography encompasses new designs for optical components and camera hardware as well as new algorithms for image analysis. The goal, says Raskar, is to build cameras that can record what the eye sees, not just what the lens and sensor are capable of capturing. “If you’re on a roller coaster, you can never get a good picture,” he says. “If you’re at a great dinner, you can never take pictures that make the food look appetizing.” But with computational techniques, cameras could eliminate blur from a snapshot taken on a bumpy amusement-park ride. Such cameras could also capture the subtle shapes and shadows of food and people’s smiles in the low light of a candlelit dinner, without a long exposure time, which invariably produces blurry pictures, or the use of a disruptive flash.

Moreover, computational photography could make it easy for amateur photographers to create pictures that today require specialized and time-consuming post-processing techniques. Even cell-phone cameras, which have inexpensive fixed lenses, could give amateurs the same kind of control over focusing that professionals have with a high-end single-lens reflex (SLR) camera.

All cameras operate in the same basic way: light enters through a focusing lens and passes through an aperture. In a traditional camera, the light hits photoreactive chemicals on film or plates. In a digital camera, the light passes through color-separating filters and lands on an array of photosensors, each of which represents a pixel. When light hits a photosensor, it produces an electrical current whose strength reflects the intensity of the light. The current is converted to digital 1s and 0s, which the camera’s processor (a computer chip) then converts into the image that shows up on the camera’s preview screen and is stored on a flash memory card or an internal hard drive.
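To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of the steps described above: analog photosensor readings are quantized into digital levels, and the color-filtered mosaic of pixels is then interpolated into a full-color image. It is not any real camera's firmware; the function names, the 12-bit depth, and the RGGB filter layout are illustrative assumptions.

```python
# Illustrative sketch of a digital camera's sensor-to-image pipeline.
# Assumptions (not from the article): 12-bit ADC, RGGB Bayer color filter,
# naive bilinear interpolation to fill in the missing color samples.
import numpy as np
from scipy.signal import convolve2d

def quantize(sensor_voltages, bits=12):
    """Analog-to-digital conversion: map readings in [0, 1] to integer levels."""
    levels = 2 ** bits - 1
    return np.clip(np.round(sensor_voltages * levels), 0, levels).astype(np.uint16)

def demosaic_rggb(raw, bits=12):
    """Naive bilinear demosaic of an RGGB Bayer mosaic into an RGB image in [0, 1]."""
    h, w = raw.shape
    raw = raw.astype(np.float64)
    # Which sensor sites sit behind which color filter.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.empty((h, w, 3))
    kernel = np.ones((3, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        # Average the known samples of this color in each 3x3 neighborhood.
        num = convolve2d(known, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., c] = num / np.maximum(den, 1.0)
    return rgb / (2 ** bits - 1)

# Example: a synthetic 4x4 sensor exposure.
exposure = np.random.default_rng(0).random((4, 4))
image = demosaic_rggb(quantize(exposure))
print(image.shape)  # (4, 4, 3)
```

Real camera processors perform far more sophisticated versions of these steps (along with white balance, noise reduction, and compression), but the flow is the same: photosensor currents become digital numbers, and the processor turns those numbers into the picture on the preview screen.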
