MIT News: 77 Mass Ave

A clever shield against photo fakery

AI makes it easy to tamper with images online, but an MIT-built system subtly alters them to foil the manipulation.

In this example, someone seeks to modify an image found online, writing a text prompt to change the casual clothing to suits and then using a diffusion model to generate a realistic matching image. By “immunizing” the original with alterations invisible to the human eye, the PhotoGuard system makes the result of this manipulation look like an unnatural blur of gray. (Courtesy of the researchers)

Remember that selfie you posted last week? There’s currently nothing stopping someone from taking it and editing it with AI—and it might be impossible to prove that the resulting image is fake. 

The good news is that a new tool created by researchers at MIT could prevent this.

The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been “immunized” by PhotoGuard, the result will look unrealistic or warped. 

Right now, “anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us,” says Hadi Salman, a PhD student at MIT who contributed to the research. PhotoGuard is “an attempt to solve the problem of our images being manipulated maliciously by these models,” says Salman. The tool could, for example, help prevent women’s selfies from being made into nonconsensual deepfake pornography.

The MIT team used two different techniques to stop images from being edited with Stable Diffusion. In the first, PhotoGuard adds imperceptible perturbations to the image so that the AI model’s encoder interprets it as something else entirely, such as a block of pure gray. In the second, it disrupts the diffusion process itself, tuning the hidden perturbations so that any edit the model produces collapses into that same gray block. For now, the technique works reliably only on Stable Diffusion, an open-source image-generation model.
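The first technique can be illustrated with a minimal sketch: nudge the image, within a tiny per-pixel budget, so that an encoder maps it close to the embedding of a flat gray image. This is a toy illustration only; the linear "encoder," the budget, and the step size below are all assumptions for demonstration (the real system attacks a diffusion model's actual image encoder):

```python
import numpy as np

# Toy stand-in for a diffusion model's image encoder -- the real
# PhotoGuard attack targets the encoder of a model like Stable Diffusion.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))            # hypothetical linear "encoder"
encode = lambda x: W @ x

image = rng.uniform(0, 1, 16)               # flattened source image
target = encode(np.full(16, 0.5))           # embedding of a flat gray image

eps, step = 0.05, 0.01                      # invisible budget, step size
x = image.copy()
for _ in range(200):
    grad = 2 * W.T @ (encode(x) - target)   # gradient of ||enc(x) - target||^2
    x = x - step * np.sign(grad)            # signed gradient descent step
    x = np.clip(x, image - eps, image + eps)  # stay within the budget
    x = np.clip(x, 0, 1)                    # keep valid pixel values

print(np.max(np.abs(x - image)))            # perturbation stays under eps
print(np.linalg.norm(encode(image) - target),
      np.linalg.norm(encode(x) - target))   # embedding moved toward "gray"
```

The immunized image `x` differs from the original by at most `eps` per pixel, yet the encoder now sees something much closer to gray; an editing model working from that embedding produces the warped, gray results described above.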

In theory, people could apply this protective shield to their images before uploading them online, says Aleksander Madry, SM ’09, PhD ’11, a professor of electrical engineering and computer science who contributed to the research. But a more effective approach, he adds, would be for tech companies to add it automatically to images that people upload to their platforms. Even so, it’s an arms race: new AI models that might be able to override any new protections are always coming out.

