Google has released a giant database of deepfakes to help fight deepfakes

It includes 3,000 AI-generated videos made with various publicly available algorithms.
The context: Over the past year, generative algorithms have become so good at synthesizing media that what they produce could soon become indistinguishable from reality. Experts are now racing to find better methods for detecting these so-called deepfakes, especially with the 2020 US presidential election approaching.
Deepfake drop: On Tuesday, Google released an open-source database containing 3,000 manipulated videos as part of its effort to accelerate the development of deepfake detection tools. The company worked with 28 actors to record videos of them speaking, making common expressions, and doing mundane tasks, then used publicly available deepfake algorithms to alter their faces.
State of the art: Earlier this month, Facebook announced that it would release a similar database near the end of the year. In January, an academic team led by a researcher from the Technical University of Munich created one called FaceForensics++ by performing four common face manipulation methods on nearly 1,000 YouTube videos. With each of these data sets, the idea is the same: to create a large corpus of examples that can help train and test automated detection tools.
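To make that idea concrete, here is a minimal sketch of how such a corpus is typically used to train a detector: frames are extracted from the real and manipulated videos, labeled accordingly, and fed to a binary classifier. The frames/ directory layout, the frame-extraction step, and the choice of a small ResNet are illustrative assumptions on our part, not details of Google's release.

```python
# Sketch: train a real-vs-fake frame classifier on a deepfake corpus.
# Assumes frames have already been extracted into a hypothetical layout:
#   frames/real/...  and  frames/fake/...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder assigns labels alphabetically: fake -> 0, real -> 1.
train_set = datasets.ImageFolder("frames", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A small ResNet-18 with a two-class head (an arbitrary choice for the sketch).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

In practice, detection systems are evaluated on manipulations their training set never contained, which is exactly why large, varied corpora like Google's matter.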
Cat-and-mouse game: But once a detection method is developed to exploit a flaw in a particular generation algorithm, the algorithm can easily be updated to correct for it. As a result, some experts are now working on detection methods that would hold up even if synthetic images became flawless. Others argue that reining in deepfakes won’t be accomplished through technical means alone: it will also require social, political, and legal solutions to change the incentives that encourage their creation.
To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free.