Photo-editing software gets more sophisticated all the time, allowing users to alter pictures in ways both fun and fraudulent. Last month, for example, a photo of Tibetan antelope roaming alongside a high-speed train, published by China’s state-run news agency, was revealed to be a fake, according to the Wall Street Journal. Researchers are working on a variety of digital forensics tools, including those that analyze the lighting in an image, in hopes of making it easier to catch such manipulations.
Tools that analyze lighting are particularly useful because “lighting is hard to fake” without leaving a trace, says Micah Kimo Johnson, a researcher in the brain- and cognitive-sciences department at MIT, whose work includes designing tools for digital forensics. As a result, even frauds that look good to the naked eye are likely to contain inconsistencies that can be picked up by software.
Many fraudulent images are created by combining parts of two or more photographs into a single image. When the parts are combined, the combination can sometimes be spotted by variations in the lighting conditions within the image. An observant person might notice such variations, Johnson says; however, “people are pretty insensitive to lighting.” Software tools are useful, he says, because they can help quantify lighting irregularities (giving solid information when images are submitted as evidence in court, for example) and because they can analyze more complicated lighting conditions than the human eye can. Johnson notes that in many indoor environments, there are dozens of light sources, including lightbulbs and windows. Each light source contributes to the complexity of the overall lighting in the image.
Johnson’s tool, which requires an expert user, works by modeling the lighting in the image based on clues garnered from various surfaces within the image. (It works best for images that contain surfaces of a fairly uniform color.) The user indicates the surface he wants to consider, and the program returns a set of coefficients to a complex equation that represents the surrounding lighting environment as a whole. That set of numbers can then be compared with results from other surfaces in the image. If the results fall outside a certain variance, the user can flag the image as possibly manipulated.
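The article does not give the tool's actual equations, but the workflow it describes can be sketched roughly: estimate a coefficient vector for the lighting environment from each user-selected surface, then flag the image when two surfaces' vectors disagree by more than some tolerance. The function names and the threshold below are illustrative assumptions, not Johnson's implementation.

```python
import numpy as np

def lighting_mismatch(coeffs_a, coeffs_b):
    """Euclidean distance between two lighting-coefficient vectors,
    normalised by their average magnitude (hypothetical metric)."""
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    scale = (np.linalg.norm(a) + np.linalg.norm(b)) / 2 or 1.0
    return np.linalg.norm(a - b) / scale

def flag_if_inconsistent(surface_coeffs, threshold=0.25):
    """Compare every pair of surfaces; return the pairs whose lighting
    models disagree by more than the (assumed) threshold."""
    flagged = []
    for i in range(len(surface_coeffs)):
        for j in range(i + 1, len(surface_coeffs)):
            d = lighting_mismatch(surface_coeffs[i], surface_coeffs[j])
            if d > threshold:
                flagged.append((i, j, d))
    return flagged
```

A composite made from two photographs would, in this sketch, show up as a surface pair whose coefficient vectors sit far apart, while surfaces lit by the same environment would cluster within the tolerance.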
Hany Farid, a professor of computer science at Dartmouth College, who collaborated with Johnson in designing the tool and is a leader in the field of digital forensics, says that “for tampering, there’s no silver bullet.” Different manipulations will be spotted by different tools, he points out. As a result, Farid says, there’s a need for a variety of tools that can help experts detect manipulated images and can give a solid rationale for why those images have been flagged.
Neal Krawetz, who owns a computer consulting firm called Hacker Factor, presented his own image-analysis tools last month at the Black Hat 2008 conference in Washington, DC. Among his tools was one that looks for the light direction in an image. The tool focuses on an individual pixel and finds the lightest of the surrounding pixels. It assumes that light is coming from that direction, and it processes the image according to that assumption, color-coding it based on light sources. While the results are noisy, Krawetz says, they can be used to spot disparities in lighting. He says that his tool, which has not been peer-reviewed, is meant as an aid for average people who want to consider whether an image has been manipulated, such as people curious about content that they find online.
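The per-pixel step described above can be sketched in a few lines of NumPy: for each interior pixel of a grayscale image, record which of its eight neighbors is brightest, giving a crude direction estimate that can then be color-coded and inspected for disparities. This is a minimal reconstruction from the article's description, not Krawetz's actual code.

```python
import numpy as np

# The 8 neighbour offsets (dy, dx); each index is one candidate direction.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),           (0, 1),
           (1, -1),  (1, 0),  (1, 1)]

def light_direction_map(gray):
    """For each interior pixel, return the index (0-7) of the brightest
    of its 8 neighbours -- a crude per-pixel light-direction estimate."""
    h, w = gray.shape
    # Stack the 8 shifted views of the image so that position [i, y, x]
    # holds the neighbour of interior pixel (y+1, x+1) at OFFSETS[i].
    stacks = np.stack([gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                       for dy, dx in OFFSETS])
    return np.argmax(stacks, axis=0)  # shape (h-2, w-2)
```

As the article notes, a map like this is noisy in practice; the useful signal is regions of the image whose dominant direction disagrees with the rest, which is what the color-coded output makes visible.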
Cynthia Baron, associate director of digital media programs at Northeastern University and author of a book on digital forensics, is familiar with both Krawetz’s and Farid’s work. She says that digital forensics is a new enough field of research that even the best tools are still some distance away from being helpful to a general user. In the meantime, she says, “it helps to be on the alert.” Baron notes that, while sophisticated users could make fraudulent images that would evade detection by the available tools, many manipulations aren’t very sophisticated. “It’s amazing to me, some of the things that make their way onto the Web and that people believe are real,” she says. “Many of the things that software can point out, you can see with the naked eye, but you don’t notice it.”
Johnson says that he sees a need for tools that a news agency, for example, could use to quickly perform a dozen basic checks on an image to look for fraud. While it might not catch all tampering, he says, such a tool would be an important step, and it could work “like an initial spam filter.” As part of developing that type of tool, he says, work needs to be done on creating better interfaces for existing tools that would make them accessible to a general audience.
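The "initial spam filter" idea can be sketched as a harness that runs a battery of independent checks and flags an image when any score crosses a threshold. The check names, score scale, and threshold below are all hypothetical; the article describes only the general shape of such a tool.

```python
def screen_image(image, checks, threshold=0.5):
    """Run every check on the image and flag it if any suspicion score
    reaches the threshold -- a coarse first pass, not a verdict.

    `checks` maps a check name to a function returning a score in [0, 1].
    Returns (all scores, names of checks that fired).
    """
    scores = {name: fn(image) for name, fn in checks.items()}
    flagged = [name for name, score in scores.items() if score >= threshold]
    return scores, flagged
```

In a real newsroom tool the entries in `checks` would be detectors like the lighting-consistency analyses described above; here they are just placeholders, and a flagged image would go to an expert for closer review rather than being rejected outright.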