Opinion

Of course you could have seen this coming

The people who invaded the Capitol have spent years showing us who they are online.

January 8, 2021

Maybe you saw this coming nearly a decade ago, when #YourSlipIsShowing laid bare how racist Twitter users were impersonating Black women on the internet. Maybe, for you, it was during Gamergate, the online abuse campaign targeting women in the video game industry. Or maybe it was the mass shooting in Christchurch, New Zealand, when a gunman steeped in the culture of 8chan livestreamed himself murdering dozens of people.

Maybe it was when you, or your friend, or your community, became the target of an extremist online mob, and you saw online anger become real-world danger and harm. 

Or maybe what happened on Wednesday, when a rabble of internet-fueled Trump supporters invaded the Capitol, came as a surprise.

For weeks they had been planning their action in plain sight on the internet—but they have been showing you who they are for years. The level of shock you feel right now about the power and danger of online extremism depends on whether you were paying attention. 

The consequences of inaction

The mob that tried to block Congress from confirming Joe Biden’s presidential victory showed how the stupidity and danger of the far-right internet could come into the real world again, but this time it struck at the center of the US government. Neo-Nazi streamers weren’t just inside the Capitol; they were putting on a show for audiences of tens of thousands of people who egged them on in the chats. The mob was having fun doing memes in the halls of American democracy as a woman—a Trump supporter whose social media history shows her devotion to QAnon—was killed trying to break into congressional offices.

The past year, especially since the pandemic began, has been one giant demonstration of the consequences of inaction: the consequences of ignoring the many, many people who have been begging social media companies to take seriously the meme-making extremists and conspiracy theorists that have thrived on their platforms.

Facebook and Twitter acted to slow the rise of QAnon over the summer, but only after the pro-Trump conspiracy theory was able to grow relatively unrestricted there for three years. Account bans and algorithm tweaks have long been too little, too late to deal with racists, extremists, and conspiracy theorists, and they have rarely addressed the fact that these powerful systems were working exactly as intended.    

For a story in October, I spoke with a small handful of the people who could have told you this was coming. Researchers, technologists, and activists told me that the major social media companies have, for their entire history, chosen to do nothing, or to act only after their platforms cause abuse and harm.

Ariel Waldman tried to get Twitter to meaningfully address abuse there in 2008. Researchers like Shafiqah Hudson, I’Nasah Crockett, and Shireen Mitchell have spent years tracking exactly how harassment works and finds an audience on these platforms. Whitney Phillips talked about how she’s haunted by laughter—not just from other people, but also her own—going back to the earliest days of her research into online culture and trolling, when overwhelmingly white researchers and personalities treated the extremists among them as edgy curiosities.


Ellen Pao, who served briefly as CEO of Reddit in 2014 and 2015 and stepped down after introducing the platform’s first anti-harassment policy, was astonished that Reddit had banned r/The_Donald only in June 2020, after years of evidence that the popular pro-Trump message board served as an organizing space for extremists and a channel for mob abuse. Of course, by the time it was banned, many of its users had already migrated away from Reddit to TheDonald.win, an independent forum created by the same people who ran the previous version. Its pages were filled with dozens of calls for violence ahead of Wednesday’s rally turned attempted coup.

Banning Trump doesn’t solve the problem

Facebook, Twitter, and YouTube didn’t create conspiracy thinking or extremist ideologies, of course. Nor did they invent the idea of dangerous personality cults. But these platforms have—by design—handed those groups the mechanisms to reach much larger audiences much faster, and to recruit and radicalize new converts, even at the expense of the people and communities those ideologies target for abuse. And crucially, even when it was clear what was happening, they chose the minimal amount of change—or decided not to intervene at all. 

In the wake of the attempted coup at the Capitol building, people are again looking at the major social media companies to see how they respond. The focus is on Trump’s personal accounts, which he used to encourage supporters to descend on DC and then to praise them when they did. Will he be banned from Twitter? There are compelling arguments for why he should be. 

But as heavy and consequential as that would be, it’s also, in other ways … not. Abuse, harassment, conspiracy thinking, and racism will still be able to benefit from social media companies that remain interested in acting only when it’s too late, even without Trump retweeting them and egging them on. 

Facebook has banned Trump indefinitely, and it has also increased moderation of groups, where a lot of conspiracy-fueled activity lives. These changes are good, but again, not new: people have told Facebook about this for years; Facebook employees have told Facebook about this for years. Groups were instrumental in organizing Stop the Steal protests in the days after the election, and before that in anti-mask protests, and before that in spreading fake news, and before that as a central space for anti-vaccine misinformation. None of this is new. 

There are only so many ways to say that more people should have listened. If you’re paying attention now, maybe you’ll finally start hearing what they say. 
