Photo Chop Shop

Digital forensics can detect misleading cut-and-paste jobs and match a photograph to an individual camera’s “fingerprint.”
December 6, 2005

In 2003, the Los Angeles Times ran a picture by staff photographer Brian Walski of a British soldier in Basra, Iraq, motioning to a man carrying a child. When an astute journalist at the Hartford Courant, one of many newspapers that reprinted the photo, noticed that it seemed to contain repeated images of the same person in the background, the veracity of the picture came into question. Walski admitted that he had used Adobe’s Photoshop software to combine two separate photographs for the final image, and was promptly fired.

The Walski episode not only led to a widespread discussion of ethics in photojournalism, but also demonstrated how easily a skilled user can employ programs like Photoshop to fool average viewers – and sometimes even experts – into taking a faked image for the truth. Because almost all digital photos, including those used as evidence in court, are vulnerable to this kind of tampering, computer scientists and others are busy advancing the state of the art in digital forensics.

“The problem [of photo altering] had been in my head for a couple of years,” says Nasir Memon, a computer scientist at Polytechnic University in Brooklyn who specializes in image processing. He began to see more and more articles in newspapers about digital photo tampering, Memon explains, and decided about two years ago to use his experience in enhancing photos to detect digital alterations.

There’s already one way to prevent tampering – or at least expose it – but the process is expensive and not widely available. Cameras equipped with “digital watermarking” technology can append an extra stream of identifying data to the image file. If the photo is changed at all, the digital watermark is corrupted. Canon, for one, has been selling cameras with this technology and the supplementary software for reading watermarks since 2002. They’re being purchased mainly by professionals, such as crime scene investigators, who need to prove that the photos they take are unaltered.
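Canon's actual verification scheme is proprietary, but the core idea — a code computed from the image data at capture that any later change invalidates — can be sketched with a keyed hash. The function names and the use of HMAC here are illustrative assumptions, not a description of Canon's system:

```python
import hmac
import hashlib

def sign_image(image_bytes: bytes, camera_key: bytes) -> str:
    """Compute an authentication tag over the raw image data.
    A real in-camera system would embed this tag in the file's metadata."""
    return hmac.new(camera_key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, camera_key: bytes, tag: str) -> bool:
    """Changing even one byte of the image produces a different tag,
    so verification fails for any altered file."""
    return hmac.compare_digest(sign_image(image_bytes, camera_key), tag)
```

A single flipped byte — say, one retouched pixel — yields a completely different tag, which is why the watermark is "corrupted" by any edit.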

However, not all photographs used in court are taken by experts using watermarking technology. That’s where digital forensics comes in, says Memon, who is organizing a February 2006 symposium on the field in San Jose, CA. The technology can uncover well-hidden alterations in photos taken with regular digital cameras, match an image to the camera that captured it, and determine whether two images were taken with the same camera.

One example is software Memon developed to characterize a brand of camera, such as a Sony or Canon, by its digital “signature.” “These are not the things that will help you nail down a criminal,” Memon explains, “but clues that form a piece of a puzzle that can solve a crime.”

Memon’s program relies on the fact that digital cameras record image information in discrete squares of color, or pixels. Each pixel consists of a sensor for red, blue, or green light. “You don’t have all three [sensors] at any point,” Memon explains, so cameras use “interpolation” algorithms to adjust the color of an individual pixel based on readings from the surrounding pixels. These algorithms vary from company to company, and they “leave telltale artifacts” on pictures, Memon says. In this way an image from one camera can be distinguished from one taken with another.
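Memon's software detects the statistical traces these algorithms leave behind; the details are beyond a short example, but the interpolation step itself is easy to illustrate. The sketch below fills in missing green values on an "RGGB" Bayer mosaic by simple averaging — one of the most basic interpolation schemes, and an assumption here, since each manufacturer uses its own variant. The point is that every interpolated pixel is a fixed linear combination of its neighbors, exactly the kind of correlation a forensic tool can hunt for:

```python
import numpy as np

def interpolate_green(raw: np.ndarray) -> np.ndarray:
    """Fill in the missing green samples of an RGGB Bayer mosaic by
    averaging the four horizontal/vertical green neighbors.
    In an RGGB layout, green sensors sit where (row + col) is odd;
    red and blue sites, where (row + col) is even, lack a green reading."""
    h, w = raw.shape
    green = raw.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:  # red or blue site: green must be estimated
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green
```

Because each estimated pixel is exactly the mean of its four neighbors, an analyst who finds that relationship holding across an image has evidence of this particular interpolation scheme — a different camera, using a different algorithm, would leave a different pattern.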

So far, Memon and his students in Brooklyn have catalogued the color estimation styles of 10 different manufacturers. Memon notes, however, that there is a difference between each company’s high-end and mid-range models. The technique is about 90 percent accurate, Memon says, but as the number of digital cameras on the market grows, it becomes more difficult to match a picture to a camera brand.

Memon’s technique is useful when investigators are hunting down a camera and a photographer. But in some instances, the camera is already part of the evidence. In these cases, a technique developed by Jessica Fridrich at the State University of New York in Binghamton can help to prove that an individual picture came from a specific camera.

To accomplish this bit of sleuthing, Fridrich exploits the fact that every camera produces tiny imperfections, or “noise,” within an image. “If you zoom in on a portion of a picture that’s supposed to be a uniform blue sky, you’ll see those pixels are not monotonous blue,” she explains. “You’ll start to see irregularities.”

Fridrich’s software extracts these irregularities from a large number of pictures captured by the same camera. (Since investigators have access to the camera, they can take as many pictures as needed.) Because each individual camera has a characteristic way of producing noise, the irregularities can be averaged to create a unique signature, and individual photos can be checked against this signature. The technique is accurate 99.99 percent of the time, according to Fridrich. “We have discovered the equivalent to matching a bullet to the barrel and gun,” she says.  
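Fridrich's published method uses a sophisticated wavelet denoiser; as a rough illustration only, the sketch below stands in a 3×3 box blur for the denoising step and uses normalized correlation for the match. All function names are hypothetical:

```python
import numpy as np

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Crude noise estimate: the image minus a 3x3 box-blurred copy.
    (A real forensic tool would use a much stronger denoising filter.)"""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return img - blur

def camera_signature(images) -> np.ndarray:
    """Average the residuals of many photos from the same camera:
    scene content averages away, the sensor's fixed pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def match_score(img: np.ndarray, signature: np.ndarray) -> float:
    """Normalized correlation between a photo's residual and a signature;
    near zero for a different camera, clearly positive for the same one."""
    r = noise_residual(img).ravel()
    s = signature.ravel()
    r = r - r.mean()
    s = s - s.mean()
    return float(r @ s / (np.linalg.norm(r) * np.linalg.norm(s) + 1e-12))
```

Averaging is what makes this work: random scene detail differs from photo to photo and cancels out, while the sensor's fixed imperfections reinforce one another — which is why investigators with the camera in hand shoot as many reference pictures as they need.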

Fridrich’s technique works even after a picture has been compressed to a smaller file size, to be sent in an e-mail, for instance. In contrast, digital forensic techniques like Memon’s fail if a file has been shrunk. “The beauty of [noise correlation] is that it is robust to distortion,” says Fridrich.

Recently, Fridrich has been extending her noise analysis technique to determine if certain regions of an image have been altered. If the noise is not uniform across the picture, a segment has been tampered with, she says.

“Every day, somewhere in the world, you have someone questioning the veracity of an image,” Memon says. Both Memon’s and Fridrich’s tools are useful in different settings – and together, they should help make it harder for photo forgers to dupe the unwitting public.
