Bogus Grass-Roots Politics on Twitter

Data-mining techniques reveal fake Twitter accounts that give the impression of a vast political movement.
November 2, 2010

Researchers have found evidence that political campaigns and special-interest groups are using scores of fake Twitter accounts to create the impression of broad grass-roots political expression. A team at Indiana University used data-mining and network-analysis techniques to detect the activity.

[Image: "How true?" A network graph showing the connections between 6,278 accounts that used the hashtag #gop in September and October 2010.]

“We think this technique must be common,” says Filippo Menczer, an associate professor at Indiana University and one of the principal investigators on the project. “Wherever there are lots of eyes looking at screens, spammers will be there; so why not with politics?”

The research effort is dubbed the Truthy project, a reference to comedian Stephen Colbert’s coinage of the word “truthiness,” or a belief held to be true regardless of facts or logic. The goal was to uncover organized propaganda or smear campaigns masquerading as a spontaneous outpouring of opinion on Twitter—a tactic known as fake grass roots, or “Astroturf.”

The researchers relied largely on network-analysis techniques, in which connections between different members of a network are mapped out. Long used in mathematics and the sciences, network analysis is increasingly being used to study the Internet and social networks. The team received tips from Twitter users about suspicious messages and accounts, and then conducted network analysis to understand how these accounts were linked. They also tracked “memes”—keywords or Web links—that suddenly saw a big spike in usage. If the memes came from many otherwise unconnected accounts, they were likely to be legitimate. But if they came from relatively small, tightly connected networks of accounts, they were more likely to be Astroturf.
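The connectivity test described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the heuristic, not the Truthy project's actual pipeline: the account names, links, and the density threshold are all invented for the example.

```python
import itertools

def edge_density(links, accounts):
    """Fraction of possible account pairs that are actually connected.
    `links` is a set of frozenset pairs; `accounts` is a list of names."""
    n = len(accounts)
    possible = n * (n - 1) / 2
    actual = sum(1 for pair in links if pair <= set(accounts))
    return actual / possible if possible else 0.0

def looks_like_astroturf(links, accounts, threshold=0.3):
    # A meme pushed by a small, tightly knit cluster shows high density;
    # an organic meme arrives from mostly unconnected accounts.
    return edge_density(links, accounts) > threshold

# Five sock-puppet accounts that all follow/retweet one another:
tight_accounts = ["a1", "a2", "a3", "a4", "a5"]
tight_links = {frozenset(p) for p in itertools.combinations(tight_accounts, 2)}

# Five ordinary accounts with a single mutual connection:
sparse_accounts = ["u1", "u2", "u3", "u4", "u5"]
sparse_links = {frozenset(("u1", "u2"))}

print(looks_like_astroturf(tight_links, tight_accounts))    # True
print(looks_like_astroturf(sparse_links, sparse_accounts))  # False
```

In practice the researchers worked with much larger graphs and richer signals (timing, message similarity, retweet chains), but the core intuition is this density contrast.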

Menczer says the research group uncovered a number of accounts sending out duplicate messages and also retweeting messages from the same few accounts in a closely connected network. For instance, two since-closed accounts, @PeaceKaren_25 and @HopeMarie_25, sent out 20,000 similar tweets, most of them linking to or promoting House Minority Leader John Boehner's website, gopleader.gov.

In another case, 10 different accounts were used to send out thousands of posts, many of them duplicates slightly altered to avoid detection as spam. All of the tweets linked back to posts on a conservative website called Freedomist.com.

“If you hear the same message from many different sources that you think are independent who are saying the same thing, you’re much more likely to believe it,” says Bruno Gonçalves, a research associate on the project. Repeated messages can also show up as “trending” topics on Twitter, and can even influence Google’s search results. Gonçalves says the researchers are now working to automate the process of identifying suspicious content solely by studying network topology.

The inspiration for the project was a paper published by Panagiotis Takis Metaxas and Eni Mustafaraj of Wellesley College in July 2010. They studied the January 2010 special election for a Massachusetts Senate seat between Democrat Martha Coakley and Republican Scott Brown, and found that many Twitter accounts repeated the same negative tweets, apparently in a successful attempt to influence Google's Realtime search results for either candidate's name.

In one case, a network of nine Twitter accounts, all created within 13 minutes of one another, sent out 929 messages in about two hours as replies to real account holders, in the hope that those users would retweet the messages. The fake accounts were probably controlled by a script that randomly picked a Twitter user to reply to, along with a message and a Web link to include. Although Twitter shut the accounts down soon after, the messages still reached 61,732 users.
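A script of the kind described would need little more than a random pairing of targets, messages, and links. The sketch below is purely illustrative: the user names, messages, and URLs are invented, and nothing is actually posted anywhere.

```python
import random

# Invented data for illustration; a real campaign would draw targets
# from live Twitter activity and rotate message wording to evade
# duplicate-content spam filters.
targets = ["@alice", "@bob", "@carol"]
messages = ["Did you see this?", "Everyone is talking about this:"]
links = ["http://example.com/a", "http://example.com/b"]

def craft_reply():
    # Pair a random target with a random message and link, hoping the
    # target retweets it to their own followers.
    return f"{random.choice(targets)} {random.choice(messages)} {random.choice(links)}"

print(craft_reply())
```

The simplicity is the point Menczer makes below: the marginal cost of each additional message is effectively zero.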

Bernardo Huberman, who studies social computing at HP Labs in Palo Alto, California, isn’t sure such dirty tricks will accomplish much. In a study that successfully used Twitter activity to predict the popularity of movies, he found that legitimate movie studio Twitter campaigns were largely ineffective compared with honest mass opinion. To truly influence opinion, you have to reach millions of people, not just a few thousand, he says. “Yes, indeed, people are doing this. So what’s new?” he says.

But Menczer thinks Twitter Astroturfing could motivate like-minded readers to get out and vote, discourage political opponents from voting, or influence swing voters. “The cost is almost zero,” he points out. “For the cost of one ad on TV, you could pay 10 people to spend all their time doing this.”
