After the 2016 US presidential election, Facebook began putting warning tags on news stories fact-checkers judged to be false. But a new study coauthored by Sloan professor David Rand finds there’s a catch: warning labels on some stories make readers more willing to believe and share unlabeled stories that are also false.
“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” says Rand. Fortunately, that problem can be addressed by also labeling stories found to be true.
In the study, 6,739 US residents were given a variety of true and false headlines and asked if they’d share each story on social media. Those in the control group had no stories labeled; others saw a “FALSE” label on some false stories; a third group saw warnings on some false stories and “TRUE” labels on some true ones.
Participants considered sharing just 16.1% of labeled false stories, compared with 29.8% in the control group. But they were also willing to share 36.2% of the unlabeled false stories, up from that 29.8% baseline. Those who saw both warning and verification labels shared only 13.7% of the headlines labeled false, and just 26.9% of the unlabeled false ones. These findings held regardless of whether the discredited items were “concordant” with participants’ stated politics.
Rand advises labeling both true and false stories. Then, he says, “if you see a story without a label, you know it simply hasn’t been checked.”