Facebook’s AI is still largely baffled by covid misinformation

May 12, 2020
Photo: Eric Risberg/AP

The news: In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically when the system has high confidence that it is hate speech, but most of it is still checked by a human reviewer first.

Behind the scenes: The improvement is largely driven by two updates to Facebook’s AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post. These models build on advances in AI research over the past two years that allow neural networks to be trained on raw text without human supervision, removing the bottleneck of manual data curation.
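
The self-supervised pretraining behind those advances can be illustrated in miniature with a masked language model, which learns to predict hidden words from raw text alone, with no human labels. A minimal sketch, assuming the Hugging Face transformers library; the roberta-base checkpoint and the prompt are illustrative stand-ins, not Facebook’s production system:

```python
# Minimal sketch of self-supervised language modeling: the model was
# pretrained only by guessing masked-out words in raw text, yet it picks
# up enough about meaning to make plausible completions. The checkpoint
# and prompt are illustrative, not Facebook's production system.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
for candidate in fill("The medical claims in this post are completely <mask>."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```

Classifiers for hate speech are then typically built by fine-tuning such a pretrained model on a comparatively small set of labeled examples.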

The second update is that Facebook’s systems can now analyze content that combines images and text, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, so Facebook has also released a new data set of hateful memes and launched a competition to crowdsource better algorithms for detecting them.
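
To make the mixed-media problem concrete, here is a minimal “late fusion” sketch in PyTorch, one of the standard baselines for meme classification: image and text embeddings are projected into a shared space, concatenated, and scored by a small classifier head. The dimensions and stub encoders are placeholders, not Facebook’s architecture:

```python
# Illustrative late-fusion classifier (not Facebook's system): combine an
# image embedding and a text embedding, then classify the pair together.
# Real systems would use pretrained vision and language backbones where
# these linear projections stand in.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # stand-in for an image encoder
        self.txt_proj = nn.Linear(txt_dim, hidden)  # stand-in for a text encoder
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, 2))  # hateful vs. benign

    def forward(self, img_feats, txt_feats):
        fused = torch.cat([self.img_proj(img_feats), self.txt_proj(txt_feats)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))  # a batch of 4 memes
print(logits.shape)  # torch.Size([4, 2])
```

The hard cases are exactly the ones where neither signal is hateful on its own: an innocuous photo paired with innocuous text can still be hateful in combination, which is why fusing the modalities this simply is often not enough.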

Covid lies: Despite these updates, however, AI hasn’t played as big a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus’s origin and fake cures. Facebook has instead relied primarily on human reviewers at over 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over to search for identical or similar items and automatically add warning labels or take them down. The team hasn’t yet been able to train a machine-learning model to find new instances of disinformation on its own. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call.
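
That fan-out step, matching new uploads against content a human has already flagged, amounts to a nearest-neighbor search over embeddings. A toy sketch; the embeddings, bank, and threshold are all placeholders, since Facebook’s production similarity system is not public in this form:

```python
# Toy similarity matcher: once a fact-checker flags a post, its embedding
# joins a bank, and new uploads are compared against that bank. The random
# vectors here stand in for the output of a real image/text encoder.
import numpy as np

rng = np.random.default_rng(0)
flagged_bank = rng.random((1000, 128))  # embeddings of human-flagged posts

def cosine_sims(bank, v):
    return (bank @ v) / (np.linalg.norm(bank, axis=1) * np.linalg.norm(v))

def matches_known_misinfo(embedding, threshold=0.9):
    # Label or remove only uploads near-identical to flagged content.
    return cosine_sims(flagged_bank, embedding).max() >= threshold

print(matches_known_misinfo(rng.random(128)))
```

Note what this setup cannot do: a genuinely new piece of misinformation has no neighbor in the bank, which is why a human has to flag it first.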

Why it matters: The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.
