
Facebook is making its own AI deepfakes to head off a disinformation disaster

The CTO of Facebook says videos forged using AI will be used maliciously on its platforms before long.
September 5, 2019
Side-by-side images of original video and deepfake. (Facebook)

Facebook fears that AI-generated “deepfake” videos could be the next big source of viral misinformation—spreading among its users with potentially catastrophic consequences for the next US presidential election.

Its solution? Making lots of deepfakes of its own, to help researchers build and refine detection tools.

Facebook has directed its team of AI researchers to produce a number of highly realistic fake videos featuring actors doing and saying routine things. These clips will serve as a data set for testing and benchmarking deepfake detection tools. The Facebook deepfakes will be released at a major AI conference at the end of the year.

The rise of deepfakes has been driven by recent advances in machine learning. It has long been possible for movie studios to manipulate images and video with software, and algorithms capable of capturing and re-creating a person’s likeness have now been turned into point-and-click tools for pasting one person’s face onto another.

Methods for spotting forged media exist, but they often involve painstaking expert analysis. Tools for catching deepfakes automatically are only just emerging.

Facebook’s CTO, Mike Schroepfer, says deepfakes are advancing rapidly, so devising much better ways to flag or block potential fakes is vital.

“We have not seen this as a huge problem on our platforms yet, but my assumption is if you increase access—make it cheaper, easier, faster to build these things—it clearly increases the risk that people will use this in some malicious fashion,” Schroepfer, who is spearheading the initiative, said last night. “I don’t want to be in a situation where this is a massive problem and we haven’t been investing massive amounts in R&D.”

Comparing the effort to the fight against spam email, Schroepfer said Facebook may not be able to catch the most sophisticated fakes. “We’ll catch the obvious ones,” he said. But he added that Facebook has not yet committed to any particular detection method, because the forgeries are improving so quickly.

The social network will dedicate $10 million to funding detection technology through grants and challenge prizes. Together with Microsoft, the Partnership on AI, and academics from institutions including MIT, UC Berkeley, and Oxford University, the company is launching the Deepfake Detection Challenge, which will offer unspecified cash rewards for the best detection methods.

Making a deepfake typically requires two video clips. Algorithms learn the appearance of each face in order to paste one onto the other while maintaining each smile, blink, and nod. Different AI techniques can also be used to re-create a specific person’s voice. The term “deepfake” is taken from a Reddit user who released such a tool in 2017. It refers to deep learning, the AI technique employed.
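
To make the face-swapping idea concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design commonly used in deepfake tools. All names, network sizes, and data below are illustrative assumptions, not the pipeline of any particular app:

```python
# Minimal sketch of the classic deepfake architecture: one shared encoder
# learns face structure, while a separate decoder per person learns to
# reconstruct that person's appearance. (Illustrative, not any real tool.)
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
loss_fn = nn.L1Loss()

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent space (random tensors stand in for real frames).
faces_a = torch.rand(8, 3, 64, 64)
loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)

# The swap at inference time: encode person A's expression, decode it
# with person B's decoder, producing B's face with A's smiles and blinks.
fake_b = decoder_b(encoder(faces_a))
```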

One of the big worries is that deepfakes could be used to spread highly contagious misinformation during next year’s US election, perhaps even swaying the outcome. Several US senators have sounded the alarm about the threat, and Ben Sasse (R–Nebraska) has introduced a bill that would make it illegal to create or distribute deepfakes with malicious intent. A recent report on election misinformation from NYU identifies deepfakes as one of several key challenges for the 2020 election.

In fact, manipulated videos are already spreading on social platforms. Earlier this year, a clip that appeared to show Nancy Pelosi slurring her speech (made simply by slowing the footage down) rapidly spread across Facebook. The company refused to remove that post or a deepfake of Mark Zuckerberg, instead having fact-checking organizations flag the clips as false.

It makes sense for Facebook to try to get out ahead of the issue, especially after the fallout from the last presidential election. As details of political misinformation campaigns emerged, Facebook faced intense criticism for allowing such propaganda to spread.

Promoting the deepfake challenge might have unintended consequences, though. Henry Ajder, an analyst at Deeptrace, a Dutch company that’s working on tools for spotting forged clips, notes that the narrative around deepfakes can offer a way for politicians to dodge accountability by claiming that real information has been forged (see “Fake America great again”). “The mere idea of deepfakes is already creating a lot of problems,” Ajder says. “It’s a virus in the political sphere that’s infected the minds of politicians and citizens.”

Moreover, despite the alarm, Ajder, who tracks deepfakes in the wild, doubts that the technology will be weaponized for political ends for some time. He believes it will more immediately become a potent tool of cyber-stalking and bullying.

A few methods for detecting deepfakes already exist. Simple techniques involve analyzing the data in a video file or looking for telltale mouth movements and blinking, which are more difficult for an algorithm to capture and re-create.
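
One such cue can be sketched in a few lines. Assuming per-frame eye landmarks (six points per eye, as produced by common facial-landmark detectors) are already available, the eye-aspect-ratio heuristic below counts blinks; early deepfakes often blinked abnormally rarely. The thresholds and demo data are illustrative assumptions:

```python
# A sketch of blink-rate screening: compute an eye-aspect ratio per frame
# and count blinks. Assumes eye landmarks come from an external detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Ratio of eye height to width from six landmark points (shape (6, 2));
    the value drops toward zero when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_series, fps=30.0, closed_thresh=0.2):
    """Count open-to-closed transitions in a per-frame EAR series."""
    closed = [ear < closed_thresh for ear in ear_series]
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Toy demo: a synthetic 10-second clip in which the eye closes twice.
# Humans blink roughly 15-20 times a minute; a clip far below that is
# worth a closer look (an illustrative heuristic, not a production rule).
ears = [0.3] * 150 + [0.1] * 5 + [0.3] * 140 + [0.1] * 5
print(f"{blinks_per_minute(ears):.1f} blinks per minute")
```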

A method developed recently by a group of leading experts involves training a deep-learning algorithm to recognize the specific way a person’s head moves, since this is not something that face-swapping algorithms typically learn to reproduce.

This approach came about through another effort to develop detection tools, which is being funded by the Defense Advanced Research Projects Agency (DARPA).
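
The researchers’ code isn’t published here, but the modeling step can be sketched: fit a one-class model to head-pose features extracted from verified footage of a person, then score suspect clips against that learned motion signature. The feature extractor, dimensions, and data below are all illustrative assumptions:

```python
# Sketch of head-movement screening with a one-class model.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-ins for per-frame head-pose features (e.g., yaw/pitch/roll and
# their frame-to-frame deltas) from an external head-pose estimator.
real_pose_features = rng.normal(size=(500, 6))          # verified footage
suspect_features = rng.normal(loc=1.5, size=(200, 6))   # clip under review

# Fit a one-class model to the person's genuine motion signature.
model = OneClassSVM(nu=0.05, gamma="scale").fit(real_pose_features)

# Fraction of frames the model rejects; a high rate suggests the head
# motion doesn't match the person's learned mannerisms.
reject_rate = np.mean(model.predict(suspect_features) == -1)
print(f"rejected {reject_rate:.0%} of suspect frames")
```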

Many experts have been surprised and alarmed by the speed with which AI forgeries are progressing. Just this week, a Chinese app called Zao sparked debate by generating deepfake videos from a single still image. Hao Li, a visual effects artist and an associate professor at the University of Southern California, has warned that it may be possible to mass-produce undetectable deepfakes before long (see “The world’s top deepfake artist is wrestling with the monster he created”).
