Yesterday, Facebook revealed its plan for fighting disinformation ahead of the 2020 US election. It includes spending $2 million on a media literacy project, making it easier to research political ads, and using more prominent fact-checking labels. Each step is commendable, but it all seems hypocritical coming from a company that refuses to do anything about political ads that contain false information.
The message seems to be that Facebook is very concerned with preventing falsehoods—but only when they are spread by regular users and not by the people who might be elected to positions of real power. At the same time, CEO Mark Zuckerberg was right when he said during a speech last week that “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.”
But there’s a middle ground between Facebook deciding what everyone is allowed to see and letting politicians lie as they wish. Facebook should revisit its policy of not touching political content and instead put one of those new, prominent labels on top of political ads that contain false information (like the Trump campaign ad that lied about Joe Biden, or the fake Facebook ad that Elizabeth Warren bought to goad Zuckerberg). That way, the company can keep the ads up without letting falsehoods spread unnoticed, which is especially important because political ads are often microtargeted at communities that might be most likely to believe them.
To be clear, Facebook’s third-party fact-checking program has not been a panacea for the problem of disinformation. An enormous amount of content is posted every day, far too much for everything to be fact-checked. There are people who won’t trust the fact-checkers, and so a label is meaningless to them.
Facebook’s own execution leaves much to be desired as well. In July, the fact-checking platform Full Fact, one of Facebook’s partners, released a report criticizing the company for not sharing enough data and not responding quickly enough to content flagged as false. But to the extent that fact-checking is valuable (and the Full Fact report concluded that it was), political ads should be among the most carefully fact-checked, not the least.
Zuckerberg argues that the company avoids fact-checking politicians “because we think people should be able to see for themselves what politicians are saying.” But most people are not going to bother to fact-check a political ad or seek out journalism elsewhere debunking it. As a result, Facebook’s hands-off policy is not actually neutral. It favors, and helps support, candidates who have no qualms about lying and spreading conspiracy theories. The worst players win.
Having a specific fact-checking team dedicated to political ads could address many of these issues. Facebook already knows which ads are paid for by political campaigns, so the task is bounded; it’s not an endless content stream. Fact-checking ads wouldn’t make Facebook a censor. Nor would it “prevent a politician’s speech from reaching its audience,” as Facebook’s vice president of global affairs, Nick Clegg, fears. It would ensure that the people who come across an ad are able to “see for themselves” what politicians are saying—and also see for themselves which politicians are comfortable with bald-faced lies.