Researchers have found evidence that political campaigns and special-interest groups are using scores of fake Twitter accounts to create the impression of broad grass-roots political expression. A team at Indiana University used data-mining and network-analysis techniques to detect the activity.
“We think this technique must be common,” says Filippo Menczer, an associate professor at Indiana University and one of the principal investigators on the project. “Wherever there are lots of eyes looking at screens, spammers will be there; so why not with politics?”
The research effort is dubbed the Truthy project, a reference to comedian Stephen Colbert’s coinage of the word “truthiness,” or a belief held to be true regardless of facts or logic. The goal was to uncover organized propaganda or smear campaigns masquerading as a spontaneous outpouring of opinion on Twitter—a tactic known as fake grass roots, or “Astroturf.”
The researchers relied largely on network-analysis techniques, in which connections between different members of a network are mapped out. Long used in mathematics and the sciences, network analysis is increasingly being used to study the Internet and social networks. The team received tips from Twitter users about suspicious messages and accounts, and then conducted network analysis to understand how these accounts were linked. They also tracked “memes”—keywords or Web links—that suddenly saw a big spike in usage. If the memes came from many otherwise unconnected accounts, they were likely to be legitimate. But if they came from relatively small, tightly connected networks of accounts, they were more likely to be Astroturf.
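The cluster-versus-dispersed heuristic described above can be sketched in a few lines of code. This is a toy illustration, not the Truthy project's actual algorithm: it assumes a follower graph represented as a dictionary and uses a simple edge-density threshold as a stand-in for the team's network-analysis techniques.

```python
# Toy illustration of the idea (not the Truthy project's actual method):
# flag a meme as suspicious when the accounts pushing it form a densely
# interlinked cluster, rather than many otherwise unconnected accounts.

def edge_density(accounts, follows):
    """Fraction of possible directed follow edges present among `accounts`.

    `follows` maps each account name to the set of accounts it follows.
    """
    accounts = list(accounts)
    n = len(accounts)
    if n < 2:
        return 0.0
    edges = sum(1 for a in accounts for b in accounts
                if a != b and b in follows.get(a, set()))
    return edges / (n * (n - 1))

def looks_like_astroturf(accounts, follows, threshold=0.5):
    # A meme spread by a small, tightly connected group is suspect;
    # one spread by mutually unconnected accounts is likely organic.
    return edge_density(accounts, follows) >= threshold

# Tight cluster: five accounts that all follow one another.
clique = {f"bot{i}": {f"bot{j}" for j in range(5) if j != i}
          for i in range(5)}
print(looks_like_astroturf(clique.keys(), clique))    # True

# Dispersed accounts with no follow links between them.
organic = {f"user{i}": set() for i in range(5)}
print(looks_like_astroturf(organic.keys(), organic))  # False
```

In practice the researchers also weighed retweet and mention patterns, not just follow links, but the density intuition is the same.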
Menczer says the research group uncovered a number of accounts sending out duplicate messages and also retweeting messages from the same few accounts in a closely connected network. For instance, two since-closed accounts, called @PeaceKaren_25 and @HopeMarie_25, sent out 20,000 similar tweets, most of them linking to, or promoting, the House minority leader John Boehner’s website, gopleader.gov.
In another case, 10 different accounts were used to send out thousands of posts, many of them duplicates slightly altered to avoid detection as spam. All of the tweets linked back to posts on a conservative website called Freedomist.com.
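Tweets that are "duplicates slightly altered to avoid detection" can still be caught with a fuzzy-matching check. The sketch below is a hypothetical illustration, not the researchers' code: it compares the word sets of two tweets with Jaccard similarity, so inserting or swapping a word or two leaves the score high.

```python
# Hypothetical sketch (not the researchers' code): detect near-duplicate
# tweets by Jaccard similarity of their word sets, which tolerates the
# small edits spammers make to evade exact-duplicate filters.

def jaccard(a, b):
    """Similarity of two strings as overlap of their lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def near_duplicate(a, b, threshold=0.8):
    return jaccard(a, b) >= threshold

t1 = "Read the truth about the race at example.com #politics"
t2 = "Read the real truth about the race at example.com #politics"
t3 = "Completely unrelated message about the weather today"

print(near_duplicate(t1, t2))  # True  (one inserted word)
print(near_duplicate(t1, t3))  # False
```

The URLs and tweet texts here are invented; the point is only that a similarity threshold, rather than exact matching, catches lightly edited copies.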
“If you hear the same message from many different sources that you think are independent who are saying the same thing, you’re much more likely to believe it,” says Bruno Gonçalves, a research associate on the project. Repeated messages can also show up as “trending” topics on Twitter, and can even influence Google’s search results. Gonçalves says the researchers are now working to automate the process of identifying suspicious content solely by studying network topology.
The inspiration for the project was a paper published by Panagiotis Takis Metaxas and Eni Mustafaraj of Wellesley College in July 2010. They studied the January 2010 special election for a Massachusetts Senate seat between Democrat Martha Coakley and Republican Scott Brown, and found that many Twitter accounts repeated the same negative tweets, apparently in a successful attempt to influence Google’s Realtime search results for either candidate’s name.
In one case, a network of nine Twitter accounts, all created within 13 minutes of one another, sent out 929 messages in about two hours as replies to real account holders in the hopes that these users would retweet the messages. The fake accounts were probably controlled by a script that randomly picked a Twitter user to reply to and a message and a Web link to include. Although Twitter shut the accounts down soon after, the messages still reached 61,732 users.
Bernardo Huberman, who studies social computing at HP Labs in Palo Alto, California, isn’t sure such dirty tricks will accomplish much. In a study that successfully used Twitter activity to predict the popularity of movies, he found that legitimate movie studio Twitter campaigns were largely ineffective compared with honest mass opinion. To truly influence opinion, you have to reach millions of people, not just a few thousand, he says. “Yes, indeed, people are doing this. So what’s new?” he says.
But Menczer thinks Twitter Astroturfing could motivate like-minded readers to get out and vote, discourage political opponents from voting, or influence swing voters. “The cost is almost zero,” he points out. “For the cost of one ad on TV, you could pay 10 people to spend all their time doing this.”