Artificial intelligence

Three problems with Facebook’s plan to kill hate speech using AI

Mark Zuckerberg thinks AI will largely automate the process of censorship, but that assumes profound progress will be made.
April 12, 2018

Mark Zuckerberg told the US Congress this week that Facebook will increasingly rely on artificial intelligence to catch hate speech spread on the platform. “I am optimistic that over a five-to-10-year period we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate,” said the Facebook CEO, who was called to testify after the scandal around Cambridge Analytica’s misappropriation of personal data belonging to millions of users.

Facebook already employs 15,000 human moderators to screen and remove offensive content, and it plans to hire another 5,000 by the end of this year, Zuckerberg said. Right now, though, those moderators can only react to posts that Facebook users have flagged. Using AI to surface potentially offensive material before anyone reports it would make removal faster and more thorough. But building that AI won't be easy, for three reasons.
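For a sense of what that shift means in practice, here is a minimal, hypothetical sketch of AI-assisted triage in Python. The `toxicity_score` heuristic and its term list are invented stand-ins for this illustration, not anything Facebook has described; a real system would use a learned classifier.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str

def toxicity_score(post: Post) -> float:
    """Stand-in for a learned classifier; returns a score in [0, 1]."""
    flagged_terms = {"vermin", "exterminate", "subhuman"}  # hypothetical list
    words = set(post.text.lower().split())
    return min(1.0, 0.5 * len(words & flagged_terms))

def triage(posts, review_threshold=0.3):
    """Queue posts for human review, worst first, without waiting for user flags."""
    scored = sorted(((toxicity_score(p), p) for p in posts),
                    key=lambda sp: sp[0], reverse=True)
    return [p for s, p in scored if s >= review_threshold]

queue = triage([Post(1, "lovely weather today"),
                Post(2, "they are vermin")])
print([p.post_id for p in queue])  # -> [2]
```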

1. Words are easy, but meaning is hard

Language remains a huge challenge for AI. It’s easy enough for a computer to catch keywords or phrases, or to classify the overall sentiment of a piece of text, but understanding what a post actually means requires far deeper knowledge of the world. What makes language such a powerful and complex way to communicate is that it leans on common-sense knowledge, and on a mental model of other people, to pack a lot of information into a few words (see “AI’s language problem”).
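A toy example makes that gap visible. The blocklist filter below is hypothetical, written only for this illustration; because it sees surface text alone, it flags a harmless chess taunt and waves through a coded insult:

```python
# A toy blocklist filter (hypothetical). It sees only surface text, so it
# cannot tell a friendly chess taunt from genuinely hostile speech.
BLOCKLIST = {"destroy", "crush"}

def keyword_flag(text: str) -> bool:
    return any(word in BLOCKLIST for word in text.lower().split())

print(keyword_flag("I will destroy you at chess tonight"))  # True: false positive
print(keyword_flag("people like that don't belong here"))   # False: false negative
# Fixing either mistake takes common sense and social context, which is
# exactly the part the classifier lacks.
```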

“Fake news, especially, is going to be very hard,” says Ernest Davis, a professor at NYU who specializes in the challenge of common-sense reasoning with computers. “If you look at what Snopes does, they look at a wide variety of things. And fake news is often made of half-truths.”

2. It’s an arms race

Even if progress is made in natural-language understanding, the purveyors of hate and misinformation could well adopt some of the same tools in order to evade detection.

So warns Sean Gourley, the CEO of Primer, a company that uses AI to generate reports for US intelligence agencies and is backed by the investment fund In-Q-Tel. Speaking at an MIT Technology Review event recently, Gourley said that AI would inevitably be used to mass-produce targeted, optimized fake news stories in the not-too-distant future.
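As a deliberately simple sketch of the evasion half of that arms race, consider how a trivial character substitution defeats a naive filter. Both the filter and the perturbation below are hypothetical toys; real attackers and defenders are far more sophisticated on both sides:

```python
# A naive substring filter versus a trivial homoglyph attack. Both sides
# here are illustrative toys; the point is the asymmetry, not the code.
BLOCKLIST = {"vermin"}

def naive_filter(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def obfuscate(text: str) -> str:
    # Swap the Latin "e" for the visually identical Cyrillic "е" (U+0435).
    return text.replace("e", "\u0435")

msg = "they are vermin"
print(naive_filter(msg))             # True: caught
print(naive_filter(obfuscate(msg)))  # False: same text to a human reader
```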

3. Video will make things worse

We may in fact be seeing the beginnings of a far more insidious era of fake news. Researchers have demonstrated convincing-looking synthetic videos and audio created by machine learning, including tricks like having politicians appear to make speeches that never happened. The trickery has already raised the troubling prospect of fake revenge porn.

Understanding video is something AI researchers are only starting to tackle, and fakes made this way could prove especially difficult for an AI to catch. They are created using two neural networks that compete, one generating fake imagery and the other trying to spot it (see “10 Breakthrough Technologies: Dueling neural networks”). Because training continues until the generator reliably fools its detector, the finished fakes are, by construction, the ones such a detector finds hardest to flag.
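To make that argument concrete, here is a minimal sketch of the dueling-networks idea (a generative adversarial network) on toy one-dimensional data, assuming PyTorch is available. Every name and architecture choice here is invented for illustration. The point is the training objective itself: the generator is optimized until the discriminator can no longer tell real from fake, which is why the trained pair doesn't hand you a free fake detector.

```python
# Toy GAN: G learns to mimic samples from N(3, 0.5); D learns to tell
# real samples from G's output. Purely illustrative, not a deepfake model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    return 3 + 0.5 * torch.randn(n, 1)  # the "real" data distribution

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(noise)) toward 1, i.e., fool the detector.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# As G improves, D's scores on fakes drift toward 0.5: the detector is
# exactly the thing the generator was trained to defeat.
print(D(G(torch.randn(5, 8))).detach().squeeze())
```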
