Tech policy

Nearly half of Twitter accounts pushing to reopen America may be bots

There has been a huge upswell of Twitter bot activity since the start of the coronavirus pandemic, amplifying medical disinformation and the push to reopen America.
May 21, 2020
Protestors ask Massachusetts Governor Baker to reopen.
Maddie Meyer / Getty

Kathleen M. Carley and her team at Carnegie Mellon University’s Center for Informed Democracy & Social Cybersecurity have been tracking bots and influence campaigns for a long time. Across US and foreign elections, natural disasters, and other politicized events, the level of bot involvement is normally between 10 and 20%, she says.

But in a new study, the researchers have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19. Many of those accounts were created in February and have since been spreading and amplifying misinformation, including false medical advice, conspiracy theories about the origin of the virus, and pushes to end stay-at-home orders and reopen America.

They follow well-worn patterns of coordinated influence campaigns, and their strategy is already working: since the beginning of the crisis, the researchers have observed greater polarization in Twitter discourse on the topic.

A number of factors could account for this surge. The global nature of the pandemic means a larger swath of actors are motivated to capitalize on the crisis as a way to meet their political agendas. Disinformation is also now more coordinated in general, with more firms available for hire to create such influence campaigns.

But it’s not just the volume of accounts that worries Carley, the center’s director. Their patterns of behavior have grown more sophisticated, too. Bots are now often more deeply networked with other accounts, making it easier for them to disseminate their messages widely. They also deploy more strategies to target at-risk groups like immigrants and minorities, and help real accounts engaged in hate speech form online groups.

To perform their most recent analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).

The system looks for 16 different maneuvers that disinformation accounts can perform, including “bridging” between two groups (connecting two online communities), “backing” an individual (following the account to increase the person’s level of perceived influence), and “nuking” a group (actions that lead to an online community being dismantled).

Through the analysis, they identified more than 100 types of inaccurate covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters. The influence of each account was calculated to reflect the number of followers it reached as well as the number of followers its followers reached.
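The influence measure described above can be thought of as a two-hop reach over the follower graph. As a rough illustration only (the researchers' exact metric is not published here, and the graph, function, and account names below are hypothetical), a minimal sketch might look like this:

```python
def two_hop_reach(followers, account):
    """Count distinct accounts within two follower hops of `account`.

    `followers` maps an account to the accounts that follow it, so the
    score reflects the followers an account reaches plus the followers
    its followers reach -- the shape of the metric the article describes.
    """
    direct = set(followers.get(account, ()))
    reached = set(direct)
    for f in direct:
        reached.update(followers.get(f, ()))
    reached.discard(account)  # don't count the account itself
    return len(reached)

# Toy follower graph (hypothetical): account -> list of its followers.
graph = {
    "bot_a": ["u1", "u2"],
    "u1": ["u3", "u4"],
    "u2": ["u4"],
}

print(two_hop_reach(graph, "bot_a"))  # reaches u1, u2, u3, u4 -> 4
```

Ranking all retweeters by a score like this, then checking how many of the top 50 or top 1,000 are bot accounts, yields figures comparable to the 82% and 62% reported above.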

The researchers have begun to analyze Facebook, Reddit, and YouTube to understand how disinformation spreads between platforms. The work is still in the early stages, but it’s already revealed some unexpected patterns. For one, the researchers have found that many disinformation stories come from regular websites or blogs before being picked up on different social platforms and amplified. Different types of stories also have different provenance patterns. Those claiming that the virus is a bioweapon, for example, mostly come from so-called “black news” sites, fake news pages designed to spread disinformation that are often run outside the US. In contrast, the “reopen America” rhetoric mostly comes from blogs and Facebook pages run in the US.

The researchers also found that users of different platforms will respond to such content in very different ways. On Reddit, for example, moderators are more likely to debunk and ban disinformation. When a coordinated campaign around reopening America popped up on Facebook, Reddit users began discussing the phenomenon and counteracting the messaging. “They were saying, ‘Don’t believe any of that stuff. You can’t trust Facebook,’” says Carley.

Unfortunately, there are no easy solutions to this problem. Banning or removing accounts won’t work, as more can be spun up for every one that is deleted. Banning accounts that spread inaccurate facts also won’t solve anything. “A lot of disinformation is done through innuendo or done through illogical statements, and those are hard to discover,” she says.

Carley says researchers, corporations, and the government need to coordinate better to come up with effective policies and practices for tamping this down. “I think we need some kind of general oversight group,” she says. “Because no one group can do it alone.”

