Policy

How the truth was murdered

Pandemic, protest, and a precarious election have created an overwhelming flood of disinformation. It didn’t have to be this way.

October 7, 2020
Illustration by Najeebah Al-Ghadban

Hundreds of thousands of Americans are dead in a pandemic, and one of the infected is the president of the United States. But not even personally contracting covid-19 has stopped him from minimizing the illness in Twitter messages to his supporters. 

Meanwhile, suburban moms steeped in online health propaganda are printing out Facebook memes and showing up maskless to stores, camera in hand and hell-bent on forcing low-paid retail workers to let them shop anyway. Armed right-wing militias are patrolling western towns, embracing online rumors of “antifa” invasions. And then there’s QAnon, the online conspiracy theory that claims Trump is waging a secret war against a ring of satanist pedophiles.

QAnon drew new energy from the uncertainty and panic caused by the pandemic, growing into an “omniconspiracy theory”: a roaring river fed by dozens of streams of conspiratorial thinking. Researchers have documented how QAnon is amplifying health misinformation about covid-19, and infiltrating other online campaigns by masking outlandish beliefs in a more mainstream-friendly package. “Q,” the anonymous account treated as a prophet by QAnon’s believers, recently instructed followers to “camouflage” themselves online and “drop all references re: ‘Q’ ‘Qanon’ etc. to avoid ban/termination.” Now wellness communities, mothers’ groups, churches, and human rights organizations are trying to deal with the spread of this dangerous conspiracy theory in their midst. 

When Pew Research polled Americans on QAnon in early 2020, just 23% of adults knew a little or a lot about it. When Pew surveyed people again in early September, that number had doubled—and the way they felt about the movement was split down party lines, Pew said: “41% of Republicans who have heard something about it say QAnon is somewhat or very good for the country.” Meanwhile, 77% of Democrats thought it was “very bad.”

Major platforms like Facebook and Twitter have started to take aggressive action against QAnon accounts and disinformation networks. Facebook banned QAnon groups altogether on Tuesday, aiming directly at one of the conspiracy theory’s more powerful distribution networks. But those networks were able to thrive, relatively undisturbed, on social media for years. The QAnon crackdown feels too late, as if the platforms were trying to stop a river from flooding by tossing out water in buckets. 

Many Americans, especially white Americans, have experienced the rise of online hate and disinformation as if they’re on a high bridge over that flooding river, staring only at the horizon. As the water rises, it sweeps away everyone who couldn’t reach such a safe and sturdy perch. Now that bridge isn’t high enough, and even the people on it can feel the deadly currents.

I think a lot of people believe that this rising tide of disinformation and hate did not exist until it was lapping at their ankles. Before that, the water just wasn’t there—or if it was, perhaps it was a trickle or a stream. 

But if you want to know just how the problem got so big and so bad, you have to understand how many people tried to tell us about it. 

Rising waters

“Everybody’s like, ‘I didn’t see this coming,’” Shireen Mitchell says. Back in the early 2010s, Mitchell, an entrepreneur and analyst, was one of many Black researchers documenting coordinated Twitter campaigns of harassment and disinformation against Black feminists. “We saw it coming. We were tracking it,” she says.

I called Mitchell in early September, about a week after Twitter took down a handful of accounts pretending to represent Black Democrats turned Trump supporters. 

Impersonating Black people on Twitter is a tactic with a long history. Shafiqah Hudson and I’Nasah Crockett, two Black feminist activists, noticed in 2014 that Twitter accounts pushing purportedly Black feminist hashtags like #EndFathersDay and #whitewomencantberaped had something strange about them. Everything about those accounts—the word choice, the bios, the usernames—felt like a racist right-wing troll’s idea of a Black feminist. And that’s exactly what they were. As noted in a long feature in Slate about their work, Crockett and Hudson uncovered hundreds of fake accounts at the time and documented how the campaign worked.  

Like Mitchell, Hudson, and Crockett, some of the earliest and best experts in how online harassment works have been people who were targeted by it. But many of those same experts have found their research second-guessed, both by the social-media platforms where mob abuse thrives and by a new crop of influential, often white voices in academia and journalism that have made a living by translating online meme culture for a larger audience. 

“Trans people as a whole have accumulated a wearying amount of experience in dealing with this thing,” says Katherine Cross, a PhD student at the University of Washington who specializes in the study of online abuse, and who is herself a trans woman of color. “Our knowledge that we produce is ignored for many of the same reasons. We are not seen as reliable actors. We’re seen as too invested, as not a worthy enough interest group—on and on and on. And that too has been memory-holed, I think.”

Many of the journalists, like me, who have large platforms to cover internet culture are white. Since Trump’s 2016 election, a number of us have become go-to voices for those seeking to find out how his online supporters operate, what they believe, and how they go viral. But many of us unwittingly helped build the mechanisms that have been used to spread abuse. 

Irony-dependent meme culture has flourished over the last 10 years, with the racism and sexism often explained away by white reporters as simple viral humor. But the path those jokes took into the mainstream, originating on message boards like 4chan before being laundered for the public sphere by journalists, is the same route now used to spread QAnon, health misinformation, and targeted abuse. The way reporters covered memes helped teach white supremacists exactly how much they could get away with.

Whitney Phillips, an assistant professor at Syracuse University who studies online misinformation, published a report in 2018 documenting how journalists covering misinformation simultaneously perform a vital service and risk exacerbating harmful phenomena. It’s something Phillips, who is white, has been reckoning with personally. “I don’t know if there’s a specific moment that keeps me up at night,” she told me, “but there’s a specific reaction that does. And I would say that’s laughter.” Laughter by others, and laughter of her own.

Mitchell and I talked for nearly two hours in September, and she told me how she felt, sometimes, seeing mini-generations of new white voices cycling in and out of her area of expertise. Fielding interview request after interview request, she is often asked to reframe her own experiences for a “lay audience”—that is, for white people. Meanwhile, expert accounts from the communities most harmed by online abuse are treated at best as secondary in importance, and often omitted altogether.

One example: Gamergate, the 2014 online abuse campaign targeting women and journalists in the gaming industry. It began with a man’s vicious online rant about his ex-girlfriend, a white woman, and broke through to become a major cultural and news story. The moment made the public at large take online harassment more seriously, but at the same time it demonstrated how abuse campaigns keep working, over and over.

Even then, Cross says, the people who were best able to talk about why these campaigns took hold and what might stop them—that is, the people under attack—were not taken seriously as experts. She was one of them, both writing about Gamergate and being targeted by it. Media attention to online abuse gathered pace after Gamergate, Mitchell told me, for a simple reason: “When you finally paid attention, you paid attention when a white woman was being targeted, but not when a Black woman was being targeted.”

And as some companies began trying to do something about abuse, those involved in such efforts often found themselves becoming the targets of exactly the same kind of harassment.

When Ellen Pao took over as CEO of Reddit in 2014, she oversaw the site’s first real attempt to confront the misogyny, racism, and abuse that had found a home there. In 2015, Reddit introduced an anti-harassment policy and then banned five notorious subreddits for violating it. Redditors who were angry at those bans then attacked Pao, launching petitions calling for her resignation. She ended up stepping down later that year and is now a campaigner for diversity in the technology industry.

Pao and I spoke in June 2020, just after Reddit banned r/The_Donald, a once-popular pro-Trump subreddit. For years it had served as an organizing space to amplify conspiracy-fueled, extremist messages, and for years Pao had urged Reddit’s leadership to ban it. By the time they finally did, many of its subscribers had already moved off the site and on to other platforms, like Gab, that were less likely to crack down on them.

“It’s always been easier not to do anything,” Pao told me. “It takes no resources. It takes no money. You can just keep doing nothing.”

A constant deluge

It’s not as if the warnings of Pao, Cross, and others have only just penetrated mainstream consciousness, though. The floodwaters come back again and again.

The Friday before Donald Trump was elected in 2016, another conspiracy theory—one that would, in about a year’s time, help create QAnon—trended on Twitter. #SpiritCooking was easy to debunk. Its central claims were that Hillary Clinton’s campaign chair, John Podesta, was an occultist, and that a dinner hosted by a prominent performance artist was actually a secret satanic ritual. The source of the theory was an invitation to the dinner in Podesta’s stolen email archives, which had been released publicly by WikiLeaks that October. 

I wrote about misinformation during the 2016 elections, and watched as #SpiritCooking evolved into Pizzagate, a conspiracy theory about secret pedophile rings centered on pizza shops in Washington, DC. Reddit banned a Pizzagate forum in late November that year for “doxxing” people (i.e., putting their personal information online). On December 4, 2016, exactly one month after #SpiritCooking exploded, a North Carolina man walked into a DC restaurant targeted by Pizzagate believers, lifted up his AR-15 rifle, and opened fire. 

These first few months after the 2016 election marked another point in time—much like today—when the flood of disinformation was enough to get more people than usual to notice. Shocked by Trump’s election, many worried that foreign interference and fake news spread on social media had swayed voters. Facebook CEO Mark Zuckerberg initially dismissed this as “a pretty crazy idea,” but ensuing scrutiny of social-media platforms by the media, governments, and the public revealed that they could indeed radicalize and harm people, especially those already vulnerable. 

And the damage continued to grow. YouTube’s recommendation system, designed to get people to watch as many videos as possible, led viewers down algorithmically generated tunnels of misinformation and hate. On Twitter, Trump repeatedly used his huge platform to amplify supporters who promoted racist and conspiratorial ideologies. Facebook introduced video livestreaming in 2016, and by 2017 it was being overwhelmed by live videos of graphic violence. In 2019, even before covid-19, vaccine misinformation thrived on the platform as measles outbreaks spread across the US.

The tech companies responded with a running list of fixes: hiring enormous numbers of moderators; developing automated systems for detecting and removing some kinds of extreme content or misinformation; updating their rules, algorithms, and policies to ban or diminish the reach of some forms of harmful content. 

But so far the toxic tide has outpaced their ability—or their willingness—to beat it back. Their business models depend on maximizing the amount of time users spend on their platforms. Moreover, as a number of studies have shown, misinformation originates disproportionately from right-wing sources, which opens the tech platforms to accusations of political bias if they try to suppress it. In some cases, NBC News reported in August, Facebook deliberately avoided taking disciplinary action against popular right-wing pages posting otherwise rule-breaking misinformation. 

Many experts believed that the next large-scale test of these companies’ capacity to handle an onslaught of coordinated disinformation, hate, and extremism would be the November 2020 election. But the covid pandemic came first—a fertile breeding ground for claims of fake cures, conspiracy theories about the virus’s origin, and propaganda that went against common-sense public health guidelines.

If the platforms’ performance during the pandemic is any guide, they are going to be largely powerless to prevent the spread of fake news about ballot fraud, violence on the streets, and vote counts come Election Day.

The storm and the flood

I’m not proposing to tell you the magical policy that will fix this, or to judge what the platforms would have to do to absolve themselves of this responsibility. Instead, I’m here to point out, as others have before, that people had a choice to intervene much sooner, but didn’t. Facebook and Twitter didn’t create racist extremists, conspiracy theories, or mob harassment, but they chose to run their platforms in a way that allowed extremists to find an audience, and they ignored voices telling them about the harms their business models were encouraging.

Sometimes these calls came from within their own companies and social circles. 

When Ariel Waldman, a science communicator, went public with her story of Twitter abuse, she hoped she’d be the last person to be the target of harassment on the site. It was May 2008.

By this point she’d already tried privately for a year to get her abusers removed from the platform, but she remained somewhat optimistic when she decided to publish a blog post detailing her experiences.  

After all, she knew some of the people who had founded Twitter just a couple of years earlier. 

“I used to hang out at their office, and they were acquaintances. I went to their Halloween parties,” Waldman told me this summer. There were models for success at the time, too: Flickr, the photo-sharing website, had been extremely responsive to requests to take down abusive content targeting her. 

So she wrote about the threats and abuse hurled at her, and detailed her emails back and forth with the company’s founders. But Twitter never adequately dealt with her abuse. Twelve years later, Waldman has seen the same pattern repeat itself year after year. 

“Choosing to have people whose main objective is to constantly spew hate speech and harm other people on a platform—that’s a decision. No one has forced them to make that decision,” she says. 

“They alone make it. And I feel that they increasingly act as if—you know, that it’s more complicated than that. But I don’t really think it is.” 


I don’t know what to tell you about how to stop the flood. And even if I did, it wouldn’t undo the considerable damage from the rising waters. There has been permanent harm to the people whose voices were turned into footnotes as they tried to warn the rest of us.

Today, Mitchell notes, the same groups that engaged in mob campaigns of abuse and harm have reframed themselves as the victims whenever there are calls for major social-media platforms to silence them. “If they have had the right to run amok for all that time, then you take that away from them—then they feel like they’re the ones who are oppressed,” she says. “While no one pays attention to the people who are actually oppressed.” 

One path toward making things better could involve giving companies a stronger incentive to act. That might include reforming Section 230, the law that shields social-media companies from legal liability for user-posted content.

Mary Anne Franks, a professor at the University of Miami who has worked on online harassment, believes that a meaningful reform of the law would do two things: limit the reach of those protections to speech rather than conduct, and remove immunity from companies that knowingly benefit from the viral spread of hate or misinformation. 

Pao notes that companies might also take these issues more seriously if their leadership looked more like the people being harassed. “You’ve got to get people with diverse backgrounds in at high levels to make the hard decisions,” she says, adding that that’s what they did at Reddit: “We just brought in a bunch of people from different racial and ethnic backgrounds, mostly women, who understood the problems and could see why we needed to change. But right now these companies have boards full of white men who don’t push back on problems and focus on the wrong metrics.”

Phillips, of Syracuse, is more skeptical. You Are Here, a book she published with her writing partner Ryan Milner earlier this year, frames online abuse and disinformation as a global ecological disaster—one that, like climate change, is rooted deeply in human behavior, has a long historical context, and is now all-encompassing, poisoning the air.

She says that asking technology companies to solve a problem they helped create cannot work. 

“The fact of the matter is that technology, our networks, the way information spreads, is what helped facilitate the hell. Those same things are not what’s going to bring us out of it. The idea that there’s going to be some scalable solution is just a pipe dream,” Phillips says. “This is a human problem. It is facilitated and exacerbated exponentially by technology. But in the end of it, this is about people and belief.”

Cross concurs, and offers a tenuous hope that awareness is finally shifting. 

“It’s impossible for people to deny that this has, like sand, gotten into everything, including the places you didn’t know you had,” she says. 

“Maybe it will cause an awakening. I don’t know how optimistic I am, but I feel like at least the seeds are there. The ingredients are there for that sort of thing. And maybe it can happen. I have my doubts.” 
