Tech policy

Twitter says it may warn users about deepfakes—but won’t remove them

November 12, 2019

The news: Twitter has drafted a deepfake policy that would warn users about synthetic or manipulated media, but not remove it. Specifically, it says it would place a notice next to tweets that contain deepfakes, warn people before they share or like tweets that include deepfakes, or add a link to a news story or Twitter Moment explaining that it isn’t real. Twitter has said it may remove deepfakes that could threaten someone’s physical safety or lead to serious harm. People have until November 27 to give Twitter feedback on the proposals.

The context: It’s become relatively easy to make convincing doctored videos thanks to advances in artificial intelligence. That’s led to a huge panic over the potential for deepfakes to subvert democracy, as they can be used to make politicians seem to say or do whatever the creator wants.

A real threat?: The most notorious political deepfakes so far either have not actually been deepfakes (see the Nancy Pelosi video released in May) or have been created by people warning about deepfakes rather than by bad actors. For example, two new deepfakes released in the UK today show the prime minister, Boris Johnson, and the leader of the opposition, Jeremy Corbyn, endorsing each other ahead of the election on December 12. But they were created by a social enterprise trying to raise awareness of the issue.

The real problem: There is no denying that deepfakes pose a significant new threat. But so far, they’re mostly a threat to women, particularly famous actors and musicians. A recent report found that 96% of deepfakes are porn, virtually always created without the consent of the person depicted. These would already break Twitter’s existing rules and be removed.

An issue for the whole industry: That said, it is refreshing to see a social-media company wrangling with its content moderation responsibilities so openly. The varying responses to the Pelosi video (YouTube removed it, Facebook flagged it as false, and Twitter let it stand) show what a complex, thorny problem manipulated videos can pose. And unfortunately, we can’t expect deepfake detection technology to fix it, either. We’ll need social and legal solutions, too.


