“Everyone is entitled to his own opinion, but not his own facts.” The late New York senator Daniel Patrick Moynihan’s ethical maxim got a workout in 2016, and it may face its stiffest test yet now that Facebook will use fact-checking organizations to help debunk articles shared on its network.
Facebook’s move comes in response to the outcry over the persistence of hoax stories during the recent election campaign. But not all such stories will be vetted now. Although fact-checkers typically respond to political claims of all kinds, rating them true or false or somewhere in between, Facebook’s new partnership is focused strictly on identifying the “worst of the worst” bogus content being churned out in the U.S. and abroad to generate clicks and ad dollars. One such fake story said Republicans are planning to cut Social Security “As Much As 50% Immediately”; another claimed that President Obama planned to “Immediately Pay Blacks ‘Reparations.’”
“We’re going to be looking at very clear-cut falsehoods,” says Aaron Sharockman, executive director of PolitiFact, one of seven organizations on board with the new feature, the others being ABC News, the Associated Press, FactCheck.org, Snopes, Climate Feedback, and the Washington Post. “If there are things that might be misleading, or spinning the truth, we’re probably going to avoid them as part of this partnership.”
This means that large numbers of misleading and manipulative articles will remain untouched on Facebook as long as they have some basis in reality. “There are other forums where fact-checkers can make their cases on the gray areas and half-truths,” says Alexios Mantzarlis, director of the International Fact-Checking Network at Poynter, whose “Code of Principles” is being used to guide Facebook’s stable of trusted third parties. (None of the organizations involved is being compensated in connection with the new feature.)
The structure of Facebook has made it uniquely fertile ground for propagating misinformation. While a person might be savvy enough to identify a spoofed news site that comes up in a Google search and discount the information accordingly, Facebook’s walled garden has meant that, until now, fake news sites have appeared no differently in users’ feeds than legitimate ones. Add to that the fact that information on Facebook is shared among trusted friends and family members, and it’s no surprise that a post-election Ipsos poll conducted for BuzzFeed News found that people more likely to look to Facebook for their news were also more likely to rate bogus headlines as accurate.
Facebook is clearly hesitant to act as an information gatekeeper, especially given the controversy that erupted over accusations that human editors contracted by the company suppressed conservative news in the site’s trending news section (a dubious criticism that has itself been fact-checked). Articles deemed fake will be downgraded in the social network’s feeds, but not erased entirely. Facebook users attempting to share these articles will be prompted by a notice stating: “Before you share this story, you might want to know that independent fact-checkers disputed its accuracy.”
How users will respond to these new notices is an open question. Marking an article as “disputed” as opposed to “false” could play into the partisan narrative of there being two sides to every claim. In addition, a notice that fact-checkers from mainstream media organizations like the Washington Post and the Associated Press have flagged a post may well be a point in an article’s favor in the eyes of many. According to research conducted for the American Press Institute last year, Republicans tend not to view fact-checkers as favorably as Democrats.
Facebook’s cautious approach might be the only practical one, however, given the backlash it’s already received over its first tentative steps. “It’s gone from ‘Oh my god, Facebook isn’t doing anything’ to ‘Oh my god, Facebook is censoring everything,’” says Mantzarlis.
Besides, swimming against the tide is nothing new for fact-checkers, says Lucas Graves, a professor at the University of Wisconsin who published Deciding What’s True: The Rise of Political Fact-Checking in American Journalism in September. “A fact-check never yields the immediate and decisive impact that we might hope for in an ideal world,” Graves says. “We always imagine that you can expose a claim as being false, and people will stop believing it and politicians will stop repeating it, but it doesn’t work that way.”
Matt Mahoney, a freelance fact-checker, has been a staff researcher at MIT Technology Review and the Boston Globe.