
Why social media can’t keep moderating content in the shadows

Online platforms aren’t transparent about their decisions—which leaves them open to claims of censorship and masks the true costs of misinformation.
November 6, 2020

Back in 2016, I could count on one hand the kinds of interventions that technology companies were willing to use to rid their platforms of misinformation, hate speech, and harassment. Over the years, crude mechanisms like blocking content and banning accounts have morphed into a more complex set of tools, including quarantining topics, removing posts from search, barring recommendations, and down-ranking posts in priority. 

And yet, even with more options at their disposal, misinformation remains a serious problem. There was a great deal of coverage about misinformation on Election Day. My colleague Emily Dreyfuss found, for example, that when Twitter tried to deal with content using the hashtag #BidenCrimeFamily—with tactics including "de-indexing," or blocking search results—users including Donald Trump adapted by using variants of the same tag. But we still don't know much about how Twitter decided to do those things in the first place, or how it weighs and learns from the ways users react to moderation.

As social media companies suspended accounts and labeled and deleted posts, many researchers, civil society organizations, and journalists scrambled to understand their decisions. The lack of transparency about those decisions and processes means that—for many—the election results end up with an asterisk this year, just as they did in 2016.

What actions did these companies take? How do their moderation teams work? What is the process for making decisions? Over the last few years, platform companies put together large task forces dedicated to removing election misinformation and labeling early declarations of victory. Sarah Roberts, a professor at UCLA, has written about the invisible labor of platform content moderators as a shadow industry: a labyrinth of contractors and complex rules about which the public knows little. Why don't we know more?

In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served the role of information gatekeeper, it was saddled with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on those same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderated election-related content.  

Bearing the costs

In October, I joined a panel of experts on misinformation, conspiracy, and infodemics for the House Permanent Select Committee on Intelligence. I was flanked by Cindy Otis, an ex-CIA analyst; Nina Jankowicz, a disinformation fellow at the Wilson Center; and Melanie Smith, head of analysis at Graphika. 

As I prepared my testimony, Facebook was struggling to cope with QAnon, a militarized social movement being monitored by their dangerous-organizations department and condemned by the House in a bipartisan bill. My team has been investigating QAnon for years. This conspiracy theory has become a favored topic among misinformation researchers because of all the ways it has remained extensible, adaptable, and resilient in the face of platform companies' efforts to quarantine and remove it. 

QAnon has also become an issue for Congress, because it's no longer about people participating in a strange online game: it has touched down like a tornado in the lives of politicians, who are now the targets of harassment campaigns that cross over from the fever dreams of conspiracists to violence. Moreover, it's happened quickly and in new ways. Conspiracy theories usually take years to spread through society, propelled by key political, media, and religious figures. Social media has sped up this process through ever-growing forms of content delivery. QAnon followers don't just comment on breaking news; they bend it to their bidding.

I focused my testimony on the many unnamed harms caused by the inability of social media companies to prevent misinformation from saturating their services. Journalists, public health and medical professionals, civil society leaders, and city administrators, like law enforcement and election officials, are bearing the cost of misinformation at scale and the burden of addressing its effects. Many people tiptoe around political issues when chatting with friends and family, but as misinformation about protests began to mobilize white vigilantes and medical misinformation led people to downplay the pandemic, different professional sectors took on important new roles as advocates for truth.

Take public health and medical professionals, who have had to develop resources for mitigating medical misinformation about covid-19. Doctors are attempting to become online influencers in order to correct bogus advice and false claims of miracle cures—taking time away from delivering care or developing treatments. Many newsrooms, meanwhile, adapted to the normalization of misinformation on social media by developing a “misinformation beat”—debunking conspiracy theories or fake news claims that might affect their readers. But those resources would be much better spent on sustaining journalism rather than essentially acting as third-party content moderators. 

Civil society organizations, too, have been forced to spend resources on monitoring misinformation and protecting their base from targeted campaigns. Racialized disinformation is a seasoned tactic of domestic and foreign influence operations: campaigns either impersonate communities of color or use racism to boost polarization on wedge issues. Brandi Collins-Dexter testified about these issues at a congressional hearing in June, highlighting how tech companies hide behind calls to protect free speech at all costs without doing enough to protect Black communities targeted daily on social media with medical misinformation, hate speech, incitement, and harassment. 

Election officials, law enforcement personnel, and first responders are at a serious disadvantage attempting to do their jobs while rumors and conspiracy theories spread online. Right now, law enforcement is preparing for violence at polling places. 

A pathway to improve

When misinformation spreads from the digital to the physical world, it can redirect public resources and threaten people’s safety. This is why social media companies must take the issue as seriously as they take their desire to profit. 

But they need a pathway to improve. Section 230 of the Communications Decency Act empowers social media companies to improve content moderation, but politicians have threatened to remove these protections so they can continue with their own propaganda campaigns. Throughout the October hearing, the specter loomed of a new agency that could independently audit civil rights violations, examine issues of data privacy, and assess the market externalities of this industry on other sectors.

As I argued during the hearing, the enormous reach of social media across the globe means it is important that regulation not begin with dismantling Section 230 until a new policy is in place. 

Until then, we need more transparency. Misinformation is not solely about the facts; it’s about who gets to say what the facts are. Fair content moderation decisions are key to public accountability. 

Rather than hold on to technostalgia for a time when it wasn’t this bad, sometimes it is worth asking what it would take to uninvent social media, so that we can chart a course for the web we want—a web that promotes democracy, knowledge, care, and equity. Otherwise, every unexplained decision by tech companies about access to information potentially becomes fodder for conspiracists and, even worse, the foundation for overreaching governmental policy.
