Artificial intelligence

Facebook, Google, Twitter aren’t prepared for presidential deepfakes

August 6, 2019
Deepfakes of Donald Trump and Elizabeth Warren. CVPR

None of the big three internet foghorns—Facebook, Google, or Twitter—seems to have a clear plan for dealing with AI-generated fake videos, or “deepfakes,” ahead of next year’s presidential election, according to the chairman of the House Intelligence Committee. 

Status update: Adam Schiff, a Democrat from California, said Friday that the three companies “have begun thinking seriously about the challenges posed by machine-manipulated media, or deepfakes, but that there is much more work to be done if they are to be prepared for the disruptive effect of this technology in the next election.” 

Don’t panic: There are, in fact, some emerging techniques for spotting videos that have been fabricated using AI.

Face-off: Deepfake videos use recent advances in machine learning to automatically swap faces in a video or perform other reality-blurring tricks. Simple deepfake tools can be downloaded from the web, and you can find many surreal examples of the results across the internet.

The worry: Image manipulation has been around for a long time, but AI is making sophisticated fakery more accessible. During an election, a deepfake could be used to influence voters at the last moment. In May, a video of Nancy Pelosi that had been doctored to make it appear as if she were slurring her speech circulated rapidly on social media.

Cat and mouse: At the moment, there are a few ways to spot deepfakes. Irregular blinking is one telltale sign a video has been messed with, for example. But detection is something of an arms race, because an AI algorithm can usually be trained to address a given flaw. 

Gotcha: This June, a new paper from several digital forensics experts outlined a more robust approach. It relies on training a detection algorithm to recognize the face and head movements of a particular person, thereby revealing when that person’s face has been pasted onto the head and body of someone else. The approach only works when the system has been trained to recognize a specific individual, but it could at least keep presidential candidates safe from attack.
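The core idea can be sketched in a few lines. This is a toy illustration, not the paper’s actual method: real systems extract head-pose and facial-action features with a face-tracking library, whereas here the per-clip motion features are synthetic random vectors, and the person-specific model is reduced to a simple Gaussian profile that scores how far a new clip’s motion deviates from the person’s norm.

```python
# Toy sketch of person-specific deepfake detection: model one person's
# characteristic head/face motion, then flag clips whose motion statistics
# deviate from that profile. Feature extraction is stubbed with synthetic
# vectors; a real system would track facial landmarks and head pose.
import numpy as np

def fit_motion_profile(feature_vectors):
    """Fit a simple Gaussian profile (per-feature mean/std) over clips."""
    X = np.asarray(feature_vectors, dtype=float)
    return X.mean(axis=0), X.std(axis=0) + 1e-8  # avoid division by zero

def anomaly_score(profile, features):
    """Mean absolute z-score of one clip's features against the profile."""
    mean, std = profile
    return float(np.mean(np.abs((np.asarray(features) - mean) / std)))

rng = np.random.default_rng(0)
# Synthetic "motion features" for 200 genuine clips of one speaker.
genuine = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
profile = fit_motion_profile(genuine)

real_clip = rng.normal(0.0, 1.0, size=16)  # matches the speaker's profile
fake_clip = rng.normal(3.0, 1.0, size=16)  # motion unlike the speaker's

print(anomaly_score(profile, real_clip) < anomaly_score(profile, fake_clip))
```

The cat-and-mouse point applies here too: a profile like this only holds up as long as forgers cannot mimic the target’s motion statistics, which is why the approach must be trained per person rather than as a generic detector.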

Keeping quiet? Google actually provided some funding for this new research. So maybe these companies are keeping their cards close to their chest when it comes to deepfake detection. If you want to stay one step ahead of the fakers, that would certainly be a smart move.

