Artificial intelligence

An AI for generating fake news could also help detect it

Sometimes it takes a bot to know one.
March 12, 2019
Hendrik Strobelt and Sebastian Gehrmann

Last month OpenAI rather dramatically withheld the release of its newest language model, GPT-2, because it feared it could be used to automate the mass production of misinformation. The decision also accelerated the AI community’s ongoing discussion about how to detect this kind of fake news. In a new experiment, researchers at the MIT-IBM Watson AI Lab and HarvardNLP considered whether the same language models that can write such convincing prose can also spot other model-generated passages.

The hypothesis is simple: language models produce sentences by predicting the next word in a sequence of text. So if a model can easily predict most of the words in a given passage, that passage was likely written by one of its own kind.

The researchers tested their idea by building an interactive tool based on the publicly accessible, downgraded version of OpenAI’s GPT-2. When you feed the tool a passage of text, it highlights each word in green, yellow, or red, in order of decreasing predictability; it highlights a word in purple if it wouldn’t have predicted the word at all. In theory, the higher the fraction of red and purple words, the higher the chance the passage was written by a human; the greater the share of green and yellow words, the more likely it was written by a language model.
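The coloring scheme described above can be sketched in a few lines of code. In this illustrative sketch, each word is assigned a rank (how high the word sat in the model's list of predicted next words), ranks are bucketed into colors, and the share of "predictable" words serves as a rough machine-likeness score. The rank thresholds here are assumptions for illustration, not the tool's exact values, and a real implementation would obtain the ranks from a language model such as GPT-2.

```python
# Sketch of the detection idea: bucket each word by how highly a language
# model ranked it among its predicted next words. Thresholds are
# illustrative, not the published tool's exact values.

def bucket(rank):
    """Map a next-word rank to a highlight color."""
    if rank <= 10:
        return "green"    # model found the word highly predictable
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"       # model would not have predicted this word

def machine_likeness(ranks):
    """Fraction of words falling in the predictable (green/yellow) buckets."""
    colors = [bucket(r) for r in ranks]
    predictable = sum(c in ("green", "yellow") for c in colors)
    return predictable / len(colors)

# Toy ranks: a model-like passage (mostly low ranks) vs. a human-like one.
model_text_ranks = [1, 3, 2, 8, 1, 40, 5, 2]
human_text_ranks = [1, 250, 4000, 90, 7, 12000, 600, 3]

print(machine_likeness(model_text_ranks))  # high score: likely model-written
print(machine_likeness(human_text_ranks))  # lower score: more human-like
```

A real detector would compute each rank by running the passage through the model one token at a time and seeing where the actual next token fell in the model's predicted distribution.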

A reading comprehension passage from a US standardized test, written by a human. (Hendrik Strobelt and Sebastian Gehrmann)

A passage written by OpenAI's downgraded GPT-2. (Hendrik Strobelt and Sebastian Gehrmann)

Indeed, the researchers found that passages written by the downgraded and full versions of GPT-2 came out almost completely green and yellow, while scientific abstracts written by humans and text from reading comprehension passages in US standardized tests had lots of red and purple.

But not so fast. Janelle Shane, a researcher who runs the popular blog Letting Neural Networks Be Weird and who was not involved in the original research, put the tool to a more rigorous test. Rather than just feed it text generated by GPT-2, she fed it passages written by other language models as well, including one trained on Amazon reviews and another trained on Dungeons and Dragons biographies. She found that the tool failed to predict a large chunk of the words in each of these passages, and thus assumed they were human-written. This reveals an important limitation: a language model may be good at detecting its own output, but not necessarily the output of others.

This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.

