
Nearly half of Twitter accounts pushing to reopen America may be bots

There has been a huge upswell of Twitter bot activity since the start of the coronavirus pandemic, amplifying medical disinformation and the push to reopen America.
May 21, 2020
Protesters ask Massachusetts Governor Baker to reopen. Maddie Meyer / Getty

Kathleen M. Carley and her team at Carnegie Mellon University’s Center for Informed Democracy & Social Cybersecurity have been tracking bots and influence campaigns for a long time. Across US and foreign elections, natural disasters, and other politicized events, the level of bot involvement is normally between 10 and 20%, she says.

But in a new study, the researchers have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19. Many of those accounts were created in February and have since been spreading and amplifying misinformation, including false medical advice, conspiracy theories about the origin of the virus, and pushes to end stay-at-home orders and reopen America.

They follow well-worn patterns of coordinated influence campaigns, and their strategy is already working: since the beginning of the crisis, the researchers have observed a greater polarization in Twitter discourse around the topic.

A number of factors could account for this surge. The global nature of the pandemic means a larger swath of actors is motivated to capitalize on the crisis to advance their political agendas. Disinformation is also more coordinated in general, with more firms available for hire to run such influence campaigns.

But it’s not just the volume of accounts that worries Carley, the center’s director. Their patterns of behavior have grown more sophisticated, too. Bots are now often more deeply networked with other accounts, making it easier for them to disseminate their messages widely. They also use more strategies to target at-risk groups like immigrants and minorities, and they help real accounts engaged in hate speech form online groups.

To perform their most recent analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).
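To get a feel for what account-level bot scoring looks like, here is a deliberately toy sketch in Python. Everything in it (the features, the labels, and the accounts) is invented for illustration; it is not the CMU team's model, only the general shape of the technique.

```python
# Illustrative sketch only: not the CMU team's actual pipeline.
# Shows the kind of account-level features a bot classifier might use,
# with a toy logistic-regression model from scikit-learn.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Account:
    tweets_per_day: float   # high-volume posting is a common bot signal
    account_age_days: int   # many suspect accounts were created recently
    followers: int
    following: int
    retweet_ratio: float    # fraction of activity that is pure retweets

def features(a: Account) -> list[float]:
    follow_ratio = a.following / max(a.followers, 1)
    return [a.tweets_per_day, a.account_age_days, follow_ratio, a.retweet_ratio]

# Toy labeled data, fabricated for illustration: 1 = bot-like, 0 = human-like.
train = [
    (Account(180, 30, 40, 2000, 0.95), 1),
    (Account(150, 60, 25, 1500, 0.90), 1),
    (Account(4, 2000, 300, 280, 0.20), 0),
    (Account(9, 1500, 800, 400, 0.35), 0),
]
X = [features(a) for a, _ in train]
y = [label for _, label in train]

clf = LogisticRegression().fit(X, y)
suspect = Account(tweets_per_day=120, account_age_days=45,
                  followers=60, following=1800, retweet_ratio=0.88)
print("bot probability:", clf.predict_proba([features(suspect)])[0][1])
```

A production system would of course train on far richer behavioral and network features, but the basic recipe of extracting per-account signals and scoring them with a classifier is the same.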

The system looks for 16 different maneuvers that disinformation accounts can perform, including “bridging” between two groups (connecting two online communities), “backing” an individual (following the account to increase the person’s level of perceived influence), and “nuking” a group (actions that lead to an online community being dismantled).
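As a rough illustration of how one of these maneuvers might be surfaced in a follower graph, the sketch below flags a node that connects two otherwise separate communities. The graph and node names are hypothetical, and betweenness centrality is a stand-in heuristic here, not the study's actual "bridging" detector.

```python
# Hedged sketch: one way to surface a "bridging" account, i.e. a node
# whose position joins two otherwise separate communities.
import networkx as nx

G = nx.Graph()
# Two tight clusters joined by a single account, "bridge".
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),  # community A
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),  # community B
    ("bridge", "a1"), ("bridge", "b1"),        # the connector
])

# Betweenness centrality scores how often a node sits on shortest paths
# between other nodes; a pure connector scores far above its neighbors.
centrality = nx.betweenness_centrality(G)
suspect = max(centrality, key=centrality.get)
print(suspect, round(centrality[suspect], 2))  # "bridge" tops the ranking
```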

Through the analysis, they identified more than 100 types of inaccurate covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters. The influence of each account was calculated to reflect the number of followers it reached as well as the number of followers its followers reached.
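That influence metric amounts to a two-hop reach count. Below is a minimal sketch of the idea, assuming a simple in-memory follower map; the account names and the two_hop_reach helper are hypothetical, not the study's exact formula.

```python
# Toy follower map: account -> set of accounts that follow it.
followers = {
    "acct_a": {"acct_b", "acct_c"},
    "acct_b": {"acct_d", "acct_e"},
    "acct_c": {"acct_e"},
    "acct_d": set(),
    "acct_e": set(),
}

def two_hop_reach(account: str) -> int:
    """Unique accounts reached directly or through one intermediary."""
    direct = followers.get(account, set())
    second = (set().union(*(followers.get(f, set()) for f in direct))
              if direct else set())
    # Count distinct accounts within two hops, excluding the account itself.
    return len((direct | second) - {account})

for acct in followers:
    print(acct, two_hop_reach(acct))
# acct_a reaches b and c directly, and d and e through them: score 4
```

At Twitter scale this would run over a graph store rather than a dictionary, but the counting logic is the same.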

The researchers have begun to analyze Facebook, Reddit, and YouTube to understand how disinformation spreads between platforms. The work is still in the early stages, but it’s already revealed some unexpected patterns. For one, the researchers have found that many disinformation stories come from regular websites or blogs before being picked up on different social platforms and amplified. Different types of stories also have different provenance patterns. Those claiming that the virus is a bioweapon, for example, mostly come from so-called “black news” sites, fake news pages designed to spread disinformation that are often run outside the US. In contrast, the “reopen America” rhetoric mostly comes from blogs and Facebook pages run in the US.

The researchers also found that users of different platforms respond to such content in very different ways. On Reddit, for example, moderators are more likely to debunk and ban disinformation. When a coordinated campaign around reopening America popped up on Facebook, Reddit users began discussing the phenomenon and counteracting the messaging. “They were saying, ‘Don’t believe any of that stuff. You can’t trust Facebook,’” says Carley.

Unfortunately, there are no easy solutions to this problem. Banning or removing accounts won’t work: more can be spun up for every one that is deleted. Nor will banning accounts that spread inaccurate information solve anything. “A lot of disinformation is done through innuendo or done through illogical statements, and those are hard to discover,” she says.

Carley says researchers, corporations, and the government need to coordinate better to come up with effective policies and practices for tamping this down. “I think we need some kind of general oversight group,” she says. “Because no one group can do it alone.”
