
Twitter Bots Create Surprising New Social Connections

Researchers show how simple programs posing as real people can shape interactions on Twitter.
January 23, 2012

You might have encountered a “Twitter bot” before: an automated program that perhaps retweeted something you wrote because it had particular keywords. Or maybe you received a message from an unfamiliar, seemingly human-controlled account, only to click on an accompanying link and realize you’d been fooled by a spambot.

Now a group of freelance Web researchers has created more sophisticated Twitter bots, dubbed “socialbots,” that can not only fool people into thinking they are real people, but also serve as virtual social connectors, speeding up the natural rate of human-to-human communication.

The work has its origins in meetings of the Web Ecology Project, an independent research group focused on studying the structure and dynamics of social media phenomena. The group began by questioning the claims of so-called social media consultants who say they can grow their clients’ Twitter networks, and even increase online interaction between a brand and Twitter users.

“A lot of people you can hire now say they are really good at community engagement,” says Tim Hwang, one of the authors of a research paper describing the socialbot experiments. Hwang and his colleagues wondered, “Can we measure those claims?”

The Web Ecology Project set up an experiment in which teams of researchers competed to gain the most Twitter @replies. Since there was no rule against automating the process, a few teams quickly realized they could compete better by using bots.

Hwang and two other researchers created their own organization, called the Pacific Social Architecting Corporation, to keep studying and developing socialbots. And they set up another experiment to further study bot-human interaction, and to measure socialbots’ ability to go one step further and catalyze new human-to-human connections.

In further experiments, the group tracked 2,700 Twitter users, divided into randomly assigned “target groups” of 300, over 54 days. The first 33 days served as a control period, during which no socialbots were deployed. Then, during the 21-day experimental period, nine bots were activated, one for each target group.

Each bot was programmed to perform simple actions like retweeting messages, and “introducing” one human user to another by replying to one and mentioning another in the same message.
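The "introduction" behavior described above can be sketched in a few lines. This is a hypothetical simulation of the tweet-composition logic only, not the researchers' actual code, and it does not call any real Twitter API; all names and templates are invented for illustration.

```python
import random

def compose_introduction(user_a, user_b):
    """Compose an 'introduction' tweet: reply to one user while
    mentioning another in the same message, nudging the two
    humans toward a new connection."""
    templates = [
        "@{a} you might enjoy what @{b} has been tweeting about lately",
        "@{a} have you seen @{b}'s timeline? Seems up your alley",
    ]
    return random.choice(templates).format(a=user_a, b=user_b)

def pick_pair(target_group):
    """Pick two distinct users from the bot's target group to introduce."""
    return random.sample(target_group, 2)

# Illustrative target group (invented usernames)
group = ["alice", "bob", "carol", "dave"]
a, b = pick_pair(group)
tweet = compose_introduction(a, b)
```

Because the message begins with an @reply to one user and mentions the other, both accounts see it, which is what makes the bot a connector rather than just a broadcaster.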

On average, each bot attracted 62 new followers and received 33 incoming tweets (mentions and retweets). But Hwang and his colleagues also found that the human-to-human activity changed within the target groups when the socialbots were introduced. They noted a 43 percent increase in follows, compared to the control period averaged over all the groups. However, one group exhibited a 355 percent increase in this connection rate. Further work will explore why this may have happened.
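Because the control period (33 days) and the experimental period (21 days) differ in length, comparing them requires normalizing to a per-day follow rate. The sketch below shows that calculation; the counts are illustrative values chosen to produce a 43 percent increase, not the study's raw data.

```python
def pct_increase_per_day(control_follows, control_days, exp_follows, exp_days):
    """Percent change in the per-day follow rate between two
    observation periods of unequal length."""
    control_rate = control_follows / control_days
    exp_rate = exp_follows / exp_days
    return 100.0 * (exp_rate - control_rate) / control_rate

# Illustrative counts only: 100 new follows over the 33-day control
# period vs. 91 new follows over the 21-day experimental period
increase = pct_increase_per_day(100, 33, 91, 21)
```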

Credit: Max Nanis and Ian Pearce

The results of the experiment, visualized using network graph software, are seen above.

The image shows changes that happened over the course of several days after socialbots were introduced to a target group. The blue dots, or nodes, represent human users, and the green ones bots. The darkness and size of a node corresponds to the number of followers an account has; bigger, darker blue nodes stand for more followers. Lines represent follow relationships, although not necessarily reciprocal ones. A dark blue line indicates a follow relationship that involves at least one user that has many followers in the graph. A green line is a follow between a bot and a human.

The spatial positions of the nodes were determined by a force-based layout algorithm, which clusters accounts according to the number of mutual friends or followers they have.
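Force-based layouts treat the graph as a physical system: every pair of nodes repels, while connected nodes attract like springs, so densely interconnected accounts settle into clusters. The article does not name the exact algorithm used, so the following is a generic Fruchterman–Reingold-style sketch, not the researchers' implementation.

```python
import math
import random

def force_layout(nodes, edges, iterations=200, k=1.0, step=0.05):
    """Minimal force-directed layout: all node pairs repel, and
    connected pairs attract, so linked nodes settle near each other."""
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-6
                f = k * k / d
                disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
                disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d
        # Spring-like attraction along each edge
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-6
            f = d * d / k
            disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
        # Move each node a small step along its net force
        for n in nodes:
            pos[n][0] += step * disp[n][0]
            pos[n][1] += step * disp[n][1]
    return pos

# Toy graph: a small connected chain plus one isolated account
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c")]
pos = force_layout(nodes, edges)
```

In a real social graph, accounts sharing many followers have many attractive forces pulling them together, which is what produces the visible clusters in the visualization.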
