None of the big three internet foghorns—Facebook, Google, or Twitter—seems to have a clear plan for dealing with AI-generated fake videos, or “deepfakes,” ahead of next year’s presidential election, according to the chairman of the House Intelligence Committee.
Status update: Adam Schiff, a Democrat from California, said Friday that the three companies “have begun thinking seriously about the challenges posed by machine-manipulated media, or deepfakes, but that there is much more work to be done if they are to be prepared for the disruptive effect of this technology in the next election.”
Don’t panic: There are, in fact, some emerging techniques for spotting videos that have been fabricated using AI.
Face-off: Deepfake videos use recent advances in machine learning to automatically swap faces in a video or perform other reality-blurring tricks. Simple deepfake tools can be downloaded from the web, and you can find many surreal examples of the results across the internet.
The worry: Image manipulation has been around for a long time, but AI is making sophisticated fakery more accessible. During an election, a deepfake could be used to influence voters at the last moment. In May, a video of Nancy Pelosi that had been doctored to make it appear as if she were slurring her speech circulated rapidly on social media.
Cat and mouse: At the moment, there are a few ways to spot deepfakes. Irregular blinking is one telltale sign a video has been messed with, for example. But detection is something of an arms race, because an AI algorithm can usually be trained to address a given flaw.
Gotcha: This June, a new paper from several digital forensics experts outlined a more robust approach. It relies on training a detection algorithm to recognize the facial expressions and head movements of a particular person, thereby revealing when that person’s face has been pasted onto the head and body of someone else. The approach works only when the system has been trained on a given individual, but it could at least keep presidential candidates safe from attack.
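The core idea behind that per-person approach can be sketched in a few lines: learn the typical range of someone’s movement features from genuine footage, then flag clips whose features fall far outside it. This is only an illustrative toy, not the paper’s actual method; the feature vectors and the z-score threshold are assumptions made for the example.

```python
# Toy sketch of per-person anomaly detection: build a statistical
# profile of a person's movement features from genuine clips, then
# score new clips by how far they deviate. Features and threshold
# are illustrative placeholders, not the paper's real pipeline.
from statistics import mean, stdev

def fit_profile(genuine_clips):
    """genuine_clips: list of per-clip feature vectors (e.g. head-pose stats)."""
    dims = list(zip(*genuine_clips))
    return [(mean(d), stdev(d)) for d in dims]

def anomaly_score(profile, clip):
    """Mean absolute z-score of a clip's features against the profile."""
    return mean(abs(x - m) / s for (m, s), x in zip(profile, clip))

def looks_fake(profile, clip, threshold=3.0):
    """Flag clips whose features sit far outside the person's normal range."""
    return anomaly_score(profile, clip) > threshold
```

In practice the published work trains a classifier on much richer facial-action and head-movement signals, but the shape of the idea is the same: the detector is specific to one person, which is why it must be trained per candidate.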
Keeping quiet? Google actually provided some funding for this new research. So maybe these companies are keeping their cards close to their chest when it comes to deepfake detection. If you want to stay one step ahead of the fakers, that would certainly be a smart move.