Artificial intelligence

Google has released a tool to spot faked and doctored images

February 5, 2020
An image of an American flag being analyzed by Assembler. (Image: Jigsaw)

Jigsaw, a technology incubator at Google, has released an experimental platform called Assembler to help journalists and front-line fact-checkers quickly verify images.

How it works: Assembler combines several detection methods from academic research for spotting common kinds of manipulation, including changes to image brightness and copy-paste edits that cover up part of an image with copied pixels while preserving its visual texture. It also includes a detector that spots deepfakes of the kind produced by StyleGAN, an algorithm that can generate realistic imaginary faces. These detectors feed into a master model that tells users how likely it is that an image has been manipulated, roughly along the lines of the sketch below.
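
Assembler's actual models are not public, so the following is only a minimal sketch of the general shape described above: several specialized detectors whose scores are fused into one likelihood. The Detector class, the assess function, and the weighted-average combiner are illustrative assumptions, not Assembler's real API.

```python
# A minimal sketch of a detector ensemble, NOT Assembler's actual code:
# all names and the weighted-average combiner are assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detector:
    """One specialized check, e.g. brightness analysis or copy-move detection."""
    name: str
    score: Callable[[bytes], float]  # maps raw image bytes to a probability in [0, 1]

def assess(image: bytes, detectors: List[Detector], weights: List[float]) -> float:
    """Fuse per-detector scores into one overall manipulation likelihood.

    A real "master model" would be a trained classifier over the detector
    outputs; a simple weighted average stands in for it here.
    """
    scores = [d.score(image) for d in detectors]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

The appeal of this design is that a learned combiner can outperform any single detector, since different forgeries leave different traces; one check may miss what another catches.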

Why it matters: Fake images are among the hardest forms of disinformation to verify, especially as manipulation by artificial intelligence becomes more common. The window of opportunity for journalists and fact-checkers to react is also shrinking fast, because disinformation spreads at speed and scale.

Not a panacea: Assembler is a good step in fighting manipulated media—but it doesn’t cover many other existing manipulation techniques, including those used for video, which the team will need to add and update as the ecosystem keeps evolving. It also still exists as a separate platform from the channels where doctored images are usually distributed. Experts have recommended that tech giants like Facebook and Google incorporate these types of detection features directly into their platforms. That way such checks can be performed in close to real time as photos and videos are uploaded and shared.

There are other approaches to consider, too. Some startups are pursuing capture-time verification technology, for example, which records a photo's pixel data at the moment it is taken so that any later alteration can be detected, though this approach comes with its own challenges (see the sketch below).
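
As a rough illustration of the capture-time idea, the toy sketch below authenticates a photo's pixels when it is taken and checks them again later. The function names and the shared-key HMAC scheme are assumptions made for illustration; real products rely on hardware-backed signing and secure metadata rather than a toy like this.

```python
# A toy illustration of capture-time verification, not any vendor's real
# product: production systems use hardware-backed signatures, not a shared key.
import hashlib
import hmac

def fingerprint_at_capture(pixels: bytes, key: bytes) -> bytes:
    """Authenticate the photo's pixel data the moment it is taken."""
    digest = hashlib.sha256(pixels).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_later(pixels: bytes, key: bytes, tag: bytes) -> bool:
    """Any post-capture edit changes the digest, so the tag no longer matches."""
    return hmac.compare_digest(fingerprint_at_capture(pixels, key), tag)
```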

Beyond technology: Ultimately, technical fixes won’t be enough. One of the trickiest aspects of digital fakery isn’t the fake images themselves. Rather, it’s the idea that they exist, which can easily be invoked to doubt the veracity of real media. This is the type of challenge that will require social and policy solutions as well.
