The fact-checking trap
After the 2016 US presidential election, Facebook began putting warning tags on news stories fact-checkers judged to be false. But a new study coauthored by Sloan professor David Rand finds there’s a catch: the labels make readers more willing to believe and share other false stories that carry no warning.
“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” says Rand. Fortunately, that problem can be addressed by also labeling stories found to be true.
In the study, 6,739 US residents were given a variety of true and false headlines and asked if they’d share each story on social media. Those in the control group had no stories labeled; others saw a “FALSE” label on some false stories; a third group saw warnings on some false stories and “TRUE” labels on some true ones.
Participants considered sharing just 16.1% of the false stories labeled as such, compared with 29.8% in the control group. But they were also willing to share 36.2% of the unlabeled false stories, up from that 29.8% baseline. Those who saw both warning and verification labels shared only 13.7% of the headlines labeled false, and just 26.9% of the unlabeled false ones. These findings held regardless of whether the discredited items were “concordant” with participants’ stated politics.
Rand advises labeling both true and false stories. Then, he says, “if you see a story without a label, you know it simply hasn’t been checked.”