Three problems with Facebook’s plan to kill hate speech using AI

Mark Zuckerberg thinks AI will largely automate the process of censorship, but that assumes profound progress in natural-language understanding.
April 12, 2018

Mark Zuckerberg told the US Congress this week that Facebook will increasingly rely on artificial intelligence to catch hate speech spread on the platform. “I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate,” said the Facebook CEO, who was called to testify after the scandal around Cambridge Analytica’s misappropriation of personal data belonging to millions of users.

Facebook already employs 15,000 human moderators to screen and remove offensive content, and it plans to hire another 5,000 by the end of this year, Zuckerberg said. But right now, those moderators can only react to posts Facebook users have flagged. Using AI to identify potentially offensive material proactively would make it faster and easier to remove. But that won't be easy, for three reasons.
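The workflow described above can be pictured as a triage pipeline: a model scores each post, and the score decides whether the post is acted on immediately, queued for a human moderator, or left alone. The sketch below is purely illustrative — the thresholds, the `triage` function, and the toy scorer are assumptions for demonstration, not Facebook's actual system.

```python
# Illustrative triage sketch (not Facebook's real pipeline): route posts by
# the confidence score of a hypothetical policy-violation classifier.

def triage(posts, score_fn, remove_threshold=0.95, review_threshold=0.60):
    """Split posts into auto-remove, human-review, and leave-up buckets."""
    removed, review, kept = [], [], []
    for post in posts:
        score = score_fn(post)          # estimated probability of a violation
        if score >= remove_threshold:
            removed.append(post)        # high confidence: act immediately
        elif score >= review_threshold:
            review.append(post)         # uncertain: queue for a human moderator
        else:
            kept.append(post)           # low confidence: leave the post up
    return removed, review, kept

# Toy stand-in scorer, for demonstration only.
def toy_score(post):
    if "BANNED_PHRASE" in post:
        return 0.99
    return 0.7 if "borderline" in post else 0.1

removed, review, kept = triage(
    ["hello world", "this is borderline", "BANNED_PHRASE here"], toy_score)
```

The interesting design question is where the thresholds sit: a high auto-remove bar limits wrongful takedowns, while the middle band determines how much work still lands on those 20,000 human moderators.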

1. Words are easy, but meaning is hard

Language remains a huge AI challenge. It’s easy enough for a computer to catch key words or phrases, or to classify the sentiment of text, but understanding the meaning of a post would require far deeper knowledge of the world. What makes language a powerful and complex way to communicate is that it relies on common-sense knowledge, and that we use a mental model of other people to pack a lot of information into a few words (see “AI’s language problem”).
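The gap between matching words and grasping meaning is easy to demonstrate. The snippet below is a deliberately naive sketch — the blocklist and the example sentences are invented for illustration — showing how a keyword filter both over- and under-flags because it never models intent.

```python
# Minimal sketch of why keyword matching is easy but meaning is hard.
# The blocklist and examples are illustrative, not a real moderation policy.

OFFENSIVE_KEYWORDS = {"idiots"}  # hypothetical single-word blocklist

def keyword_flag(text):
    """Flag a post if it contains any blocklisted word (punctuation stripped)."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & OFFENSIVE_KEYWORDS)

direct = keyword_flag("You are all idiots!")                         # the literal insult
quoted = keyword_flag("Calling people idiots is never acceptable.")  # condemns the insult
veiled = keyword_flag("People like you should never be heard from again.")  # no keyword
```

The filter catches `direct`, but it also flags `quoted` — a sentence *condemning* the insult — and misses `veiled`, which carries hostility without any blocklisted word. Telling those three apart requires exactly the common-sense and speaker modeling the paragraph above describes.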

“Fake news, especially, is going to be very hard,” says Ernest Davis, a professor at NYU who specializes in the challenge of common-sense reasoning with computers. “If you look at what Snopes does, they look at a wide variety of things. And fake news is often made of half-truths.”

2. It’s an arms race

Even if progress is made in natural-language understanding, the purveyors of hate and misinformation could well adopt some of the same tools in order to evade detection.

So warns Sean Gourley, the CEO of Primer, a company that uses AI to generate reports for US intelligence agencies and counts In-Q-Tel, the US intelligence community's investment fund, among its backers. Speaking at an MIT Technology Review event recently, Gourley said that AI would also inevitably be used to mass-produce targeted, optimized fake news stories in the not-too-distant future.
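The arms-race dynamic can be seen even at the level of the naive keyword filter: the cheapest evasion is character substitution, and the defender's counter-move is normalization — after which the attacker moves on to homoglyphs, spacing tricks, or coded language. The filter, the blocklist, and the substitution table below are all invented for illustration.

```python
# Sketch of the evasion arms race: an exact-match filter is trivially
# defeated by character substitutions, forcing the defender to normalize.

BLOCKLIST = {"idiots"}  # hypothetical single-word blocklist

def naive_filter(text):
    """Flag text containing an exact blocklisted word."""
    return any(word in BLOCKLIST for word in text.lower().split())

# Defender's counter-move: undo common "leetspeak" substitutions first.
LEET = str.maketrans({"1": "i", "0": "o", "3": "e", "@": "a", "$": "s"})

def normalized_filter(text):
    return naive_filter(text.translate(LEET))

plain = "you are all idiots"
evasion = "you are all 1d10ts"  # same insult, trivially disguised
```

`naive_filter` catches `plain` but not `evasion`; `normalized_filter` catches both — until the next round of obfuscation, which is the point of the section above.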

3. Video will make things worse

We may in fact be seeing the beginnings of a far more insidious era of fake news. Researchers have demonstrated convincing-looking synthetic videos and audio created by machine learning, including tricks like having politicians appear to make speeches that never happened. The trickery has already raised the troubling prospect of fake revenge porn.

Understanding video is something AI researchers are just starting to tackle. Fakes made with machine learning could also prove especially difficult for an AI to catch, because of how they are produced: two neural networks compete, one generating fake imagery and the other trying to spot it (see “10 Breakthrough Technologies: Dueling neural networks”). Since that process explicitly optimizes for fooling a detector network, building a system that could reliably catch the resulting fakes would be difficult.
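The dueling-networks idea can be sketched in one dimension without any ML library. In this toy — all numbers and the simple gradient updates are illustrative assumptions, nothing like a real image-generating network — a generator learns an offset `mu` for its fakes, while a logistic discriminator tries to separate real samples (centered at 4.0) from generated ones. Each side climbs its own objective, and the generator ends up producing samples near the real distribution.

```python
# Toy 1-D "dueling networks" sketch in plain Python. The generator shifts
# noise by a learned offset mu; the discriminator D(x) = sigmoid(w*x + b)
# tries to tell real samples (mean 4.0) from fakes. Illustrative only.
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-30.0, min(30.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

REAL_MEAN = 4.0
mu = 0.0               # generator parameter: where its fakes are centered
w, b = 0.0, 0.0        # discriminator parameters
lr_d, lr_g = 0.05, 0.01

for step in range(5000):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = mu + random.gauss(0.0, 1.0)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * ((1.0 - d_real) * real - d_fake * fake)
    b += lr_d * ((1.0 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) — i.e., it is trained
    # purely to make the discriminator call its output real.
    fake2 = mu + random.gauss(0.0, 1.0)
    mu += lr_g * (1.0 - sigmoid(w * fake2 + b)) * w

print(round(mu, 2))  # mu has drifted from 0 toward the real mean
```

Note what the generator's update uses as its training signal: the detector's own verdict. That is why the paragraph above argues a detector built the same way is fighting a process designed to defeat it.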

