
Identifying Manipulated Images

New tools that analyze the lighting in images help spot tampering.
March 17, 2008

Photo-editing software gets more sophisticated all the time, allowing users to alter pictures in ways both fun and fraudulent. Last month, for example, a photo of Tibetan antelope roaming alongside a high-speed train was revealed to be a fake, according to the Wall Street Journal, after having been published by China’s state-run news agency. Researchers are working on a variety of digital forensics tools, including those that analyze the lighting in an image, in hopes of making it easier to catch such manipulations.

True or false? Johnson's tool spots whether an image has been manipulated by modeling the lighting in the image based on an analysis of visible surfaces. To analyze an image, a user marks the surfaces to consider with contour lines (drawn in white in the accompanying image); the system then checks for inconsistencies in the way those surfaces are lit.

Tools that analyze lighting are particularly useful because “lighting is hard to fake” without leaving a trace, says Micah Kimo Johnson, a researcher in the brain- and cognitive-sciences department at MIT, whose work includes designing tools for digital forensics. As a result, even frauds that look good to the naked eye are likely to contain inconsistencies that can be picked up by software.

Many fraudulent images are created by combining parts of two or more photographs into a single image. Such composites can sometimes be spotted by variations in the lighting conditions within the image. An observant person might notice such variations, Johnson says; however, “people are pretty insensitive to lighting.” Software tools are useful, he says, because they can help quantify lighting irregularities (giving solid information during evaluations of images submitted as evidence in court, for example) and because they can analyze more complicated lighting conditions than the human eye can. Johnson notes that many indoor environments contain dozens of light sources, including lightbulbs and windows, and each one adds to the complexity of the overall lighting in the image.

Johnson’s tool, which requires an expert user, works by modeling the lighting in the image based on clues garnered from various surfaces within the image. (It works best for images that contain surfaces of a fairly uniform color.) The user indicates the surface he wants to consider, and the program returns a set of coefficients to a complex equation that represents the surrounding lighting environment as a whole. That set of numbers can then be compared with results from other surfaces in the image. If the results fall outside a certain variance, the user can flag the image as possibly manipulated.
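The comparison step Johnson describes can be illustrated with a short sketch. This is not Johnson and Farid's code; the coefficient values, the normalization, and the threshold below are assumptions chosen purely for illustration. The idea is that each marked surface yields a vector of lighting coefficients, and a surface whose vector sits far from the consensus of the others gets flagged.

```python
# Minimal sketch of comparing per-surface lighting coefficients.
# Coefficient values and the threshold are illustrative assumptions,
# not output from any real forensic tool.
import numpy as np

def flag_inconsistent_surfaces(coeffs_by_surface, threshold=0.4):
    """coeffs_by_surface: dict mapping surface name -> 1-D array of
    lighting coefficients estimated from that surface.
    Returns the surfaces whose lighting differs from the consensus."""
    names = list(coeffs_by_surface)
    # Normalize each coefficient vector so overall brightness differences
    # between surfaces do not dominate the comparison.
    vecs = np.array([coeffs_by_surface[n] / np.linalg.norm(coeffs_by_surface[n])
                     for n in names])
    consensus = vecs.mean(axis=0)
    consensus /= np.linalg.norm(consensus)
    flagged = []
    for name, v in zip(names, vecs):
        # Distance from the consensus lighting model; a large value suggests
        # the surface was lit differently than the rest of the scene.
        if np.linalg.norm(v - consensus) > threshold:
            flagged.append(name)
    return flagged

# Hypothetical coefficient vectors for three surfaces in one photograph.
surfaces = {
    "face_left":     np.array([0.90,  0.30, 0.05, 0.10]),
    "face_right":    np.array([0.88,  0.28, 0.07, 0.12]),
    "pasted_object": np.array([0.60, -0.40, 0.30, 0.05]),
}
print(flag_inconsistent_surfaces(surfaces))  # ['pasted_object']
```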

Hany Farid, a professor of computer science at Dartmouth College, who collaborated with Johnson in designing the tool and is a leader in the field of digital forensics, says that “for tampering, there’s no silver bullet.” Different manipulations will be spotted by different tools, he points out. As a result, Farid says, there’s a need for a variety of tools that can help experts detect manipulated images and can give a solid rationale for why those images have been flagged.

Neal Krawetz, who owns a computer consulting firm called Hacker Factor, presented his own image-analysis tools last month at the Black Hat 2008 conference in Washington, DC. Among his tools was one that looks for the light direction in an image. The tool focuses on an individual pixel and finds the lightest of the surrounding pixels. It assumes that light is coming from that direction, and it processes the image according to that assumption, color-coding it based on light sources. While the results are noisy, Krawetz says, they can be used to spot disparities in lighting. He says that his tool, which has not been peer-reviewed, is meant as an aid for average people who want to consider whether an image has been manipulated–for example, people curious about content that they find online.
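The per-pixel idea the article attributes to Krawetz can be sketched in a few lines. This is not his implementation; the neighbor labeling, the toy gradient image, and the output format are assumptions made for illustration. For each interior pixel of a grayscale image, the sketch finds the brightest of the eight neighbors and records the direction toward it, producing the kind of noisy direction map that could then be color-coded and inspected for regions that disagree.

```python
# Rough sketch of a "brightest neighbor" light-direction map.
# Not Krawetz's code; assumes a grayscale image as a 2-D numpy array.
import numpy as np

# Offsets to the eight neighbors, each labeled with an index 0-7 that a
# viewer could map to eight hues when color-coding the result.
NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
                    ( 0, -1),          ( 0, 1),
                    ( 1, -1), ( 1, 0), ( 1, 1)]

def light_direction_map(gray):
    """Return an array of neighbor indices (0-7), one per interior pixel,
    pointing toward the brightest adjacent pixel."""
    h, w = gray.shape
    directions = np.zeros((h - 2, w - 2), dtype=np.uint8)
    best = np.full((h - 2, w - 2), -np.inf)
    for idx, (dy, dx) in enumerate(NEIGHBOR_OFFSETS):
        # Brightness of the neighbor in direction (dy, dx) for every
        # interior pixel at once.
        shifted = gray[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        brighter = shifted > best
        directions[brighter] = idx
        best[brighter] = shifted[brighter]
    return directions

# Toy example: an image that brightens toward the right, so every direction
# estimate should land on a right-hand neighbor (index 2, 4, or 7).
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(np.bincount(light_direction_map(img).ravel(), minlength=8))
```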

Cynthia Baron, associate director of digital media programs at Northeastern University and author of a book on digital forensics, is familiar with both Krawetz’s and Farid’s work. She says that digital forensics is a new enough field of research that even the best tools are still some distance away from being helpful to a general user. In the meantime, she says, “it helps to be on the alert.” Baron notes that, while sophisticated users could make fraudulent images that would evade detection by the available tools, many manipulations aren’t very sophisticated. “It’s amazing to me, some of the things that make their way onto the Web and that people believe are real,” she says. “Many of the things that software can point out, you can see with the naked eye, but you don’t notice it.”

Johnson says that he sees a need for tools that a news agency, for example, could use to quickly perform a dozen basic checks on an image to look for fraud. While it might not catch all tampering, he says, such a tool would be an important step, and it could work “like an initial spam filter.” As part of developing that type of tool, he says, work needs to be done on creating better interfaces for existing tools that would make them accessible to a general audience.
