MIT Technology Review

Humans may be more likely to believe disinformation generated by AI

The way AI models structure text may have something to do with it, according to the study authors.

Disinformation generated by AI may be more convincing than disinformation written by humans, a new study suggests. 

The research found that people were 3% less likely to spot false tweets generated by AI than those written by humans.

That credibility gap, while small, is concerning given that the problem of AI-generated disinformation seems poised to grow significantly, says Giovanni Spitale, the researcher at the University of Zurich who led the study, which appeared in Science Advances today. 

“The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares,” he says. He believes that if the team repeated the study with the latest large language model from OpenAI, GPT-4, the difference would be even bigger, given how much more powerful GPT-4 is. 

To test our susceptibility to different types of text, the researchers chose common disinformation topics, including climate change and covid. Then they asked OpenAI’s large language model GPT-3 to generate 10 true tweets and 10 false ones, and collected a random sample of both true and false tweets from Twitter. 
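For a sense of what that generation step involves, here is a minimal sketch of how a tweet-length completion might be requested from a GPT-3-era model through OpenAI's Python SDK (the pre-1.0 interface). The prompt, model name, and parameters are illustrative assumptions, not the study's actual protocol.

```python
# Illustrative sketch only: requesting a short, tweet-length text from a
# GPT-3-family completion model via the legacy OpenAI Python SDK (<1.0).
# The prompt, model, and settings are assumptions for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt="Write a tweet (under 280 characters) about climate change.",
    max_tokens=80,
    temperature=0.7,
)

tweet = response["choices"][0]["text"].strip()
print(tweet)
```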

Next, they recruited 697 people to complete an online quiz judging whether tweets were generated by AI or collected from Twitter, and whether they were accurate or contained disinformation. They found that participants were 3% less likely to believe human-written false tweets than AI-written ones. 

The researchers are unsure why people may be more likely to believe tweets written by AI. But the way in which GPT-3 orders information could have something to do with it, according to Spitale. 

“GPT-3’s text tends to be a bit more structured when compared to organic [human-written] text,” he says. “But it’s also condensed, so it’s easier to process.”

The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to generate false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns. The weapons to fight the problem—AI text-detection tools—are still in the early stages of development, and many are not entirely accurate. 

OpenAI is aware that its AI tools could be weaponized to produce large-scale disinformation campaigns. Although such use violates its policies, the company released a report in January warning that it’s “all but impossible to ensure that large language models are never used to generate disinformation.” OpenAI did not immediately respond to a request for comment.

However, the company has also urged caution when it comes to overestimating the impact of disinformation campaigns. Further research is needed to determine the populations at greatest risk from AI-generated inauthentic content, as well as the relationship between AI model size and the overall performance or persuasiveness of its output, the authors of OpenAI’s report say. 

It’s too early to panic, says Jon Roozenbeek, a postdoctoral researcher who studies misinformation in the department of psychology at the University of Cambridge and was not involved in the study.

Although distributing disinformation online may be easier and cheaper with AI than with human-staffed troll farms, moderation on tech platforms and automated detection systems are still obstacles to its spread, he says. 

“Just because AI makes it easier to write a tweet that might be slightly more persuasive than whatever some poor sap in some factory in St. Petersburg came up with, it doesn’t necessarily mean that all of a sudden everyone is ripe to be manipulated,” he adds.
