Policy

Three threats posed by deepfakes that technology won’t solve

As deepfakes get better, companies are rushing to develop technology to detect them. But little of the potential harm they pose will be addressed without social and legal solutions.
October 2, 2019
Screencapture from a Putin deepfake video (MIT Technology Review)

Picture this: A perfectly accurate deepfake detector finally exists. It instantly adds a big red DEEPFAKE label to every video that has been manipulated with AI, no matter how seamlessly realistic the video might look.

That might sound like just what we need to fight deepfakes, which people worry could bring the end of truth and the death of democracy. “Perfectly real” manipulated videos could be here in as little as six months, which suggests the 2020 US presidential election campaign could become a battlefield of fake video: Donald Trump admitting to corrupt deals with Russia, Elizabeth Warren advocating a total ban on guns, Kamala Harris disparaging white people. 

Technologists have responded with more technology. The US government has funded a project on “media forensics.” Facebook and Microsoft recently announced a deepfake detection challenge, and Google released a giant database of deepfakes to help train detection tools. But while the technique of creating deepfakes is new, much of the actual harm they represent—disinformation and harassment—is not, according to Britt Paris, an information scholar at Rutgers University and coauthor of a recent Data & Society report on deepfakes. Even a perfectly accurate deepfake detector can’t address those harms. Here’s why.

1) Problem: Deepfake detectors can’t tell us what should—and shouldn’t—be taken down

Remember that slowed-down video of Nancy Pelosi? That wasn’t a deepfake. It still spread untruths, and Facebook decided not to take it down. A deepfake detector wouldn’t have helped with that judgment call. “The more you get toward automated use, the more likely you are to have inaccuracies or censorship,” says Kate Klonick, a professor at St. John’s University and an expert on platform governance. “Defining satire, defining fake news, defining fiction—these are all huge philosophical questions.”

Idea: Better moderation

Society has to work through these problems. Until then, one fix could be to give more power to those who are capable of making judgment calls: human content moderators. To that end, moderators could be paid more, trained better, and valued as an important part of maintaining a safe internet, says Sarah T. Roberts, an information scholar at UCLA. Specialized teams of vetted moderators could judge the context of a video, fact-check it, and decide whether it should remain on a platform. They might not have the perfect answer for that Pelosi video, but they would still have a sense of the social and political impact of different deepfakes and their targets. They could tell that parody deepfakes of Nicolas Cage are fine and that nonconsensual fake porn is not.

Someone else who might be qualified to pass judgment on a deepfake? The victim of one. Companies should make it easier to report deepfake harassment, says Danielle Citron, a cyberlaw expert at Boston University. All users should be educated on their rights, and the steps for reporting should be obvious and accessible, not buried in a privacy policy.

2) Problem: Deepfake-busting technology might not help the people who need protection most

History has shown that new technology is used against marginalized groups—like women, people of color, LGBT people, and activists—before it becomes a mainstream threat, says Paris. In the 1990s, for example, there were already crude Photoshopped images of women’s heads on the bodies of adult film actresses. The people in power didn’t care enough to do anything about it. “If we had paid attention to the problem of sexual exploitation of women without their consent then, we would have been in a much better position to deal with it socially, legally, and culturally now,” says Mary Anne Franks, a legal scholar at the University of Miami.

History repeats itself. Researchers say the biggest risk of deepfakes is not that they swing an election but that they’re used to bully private citizens. 

Idea: Don't build anything without consulting those most affected

Talk to the people who are most vulnerable to deepfakes, says Sam Gregory, program director at Witness, a nonprofit that studies synthetic media. Even if the goal is to create a deepfake detector, there are plenty of social questions involved. Will it be available for people in other countries? Will it be trained to spot political fakes or gender and sexual violence? “Once the infrastructure is set, it’s really marginalized people and populations who are excluded, because they don’t have the agency to change that infrastructure,” Gregory says.

3) Problem: Deepfake detection is too late to help victims

With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

Idea: New laws

Texas recently passed a bill to ban deepfakes. A California deepfake bill passed both houses and now awaits Gov. Gavin Newsom's signature. Meanwhile, Representative Yvette Clarke of New York recently introduced federal legislation called the DEEPFAKES Accountability Act. It would force social-media companies to build better detection tools into their platforms and make it possible to punish or jail people who post malicious deepfakes.

For their part, Franks and Citron are working on a federal bill to criminalize malicious deepfakes, which they call “digital forgeries.” To their thinking, a digital forgery is something that a reasonable person would think is real. It must also be likely to cause harm to a particular person or to public order—for example, if a video falsely showed a Muslim person committing a crime.

For any such law to work in the US, it would ideally be at the federal level, not the state level. Under Section 230 of the Communications Decency Act, platform companies aren’t legally responsible for hosting harmful third-party content unless it violates federal criminal law. If posting harmful videos became a federal crime, it would not only deter people from posting them; platforms like Facebook would have to work harder to keep them off. “These companies would not be able to raise Section 230 in defense,” says Franks. “They would have any number of defenses, but that particular avenue to them would be blocked.”
