
Future elections may be swayed by intelligent, weaponized chatbots

The AI advances that brought you Alexa are teaching propaganda how to talk.

The battle against propaganda bots is an arms race for our democracy, and it’s one we may be about to lose. Bots—simple computer scripts—were originally designed to automate repetitive tasks like organizing content or conducting network maintenance, sparing humans hours of tedium. Companies and media outlets also use bots to operate social-media accounts, instantly alerting users to breaking news or promoting newly published material.

But they can also be used to operate large numbers of fake accounts, which makes them ideal for manipulating people. Our research at the Computational Propaganda Project studies the myriad ways in which political bots employing big data and automation have been used to spread disinformation and distort online discourse.

Bots have proved to be one of the best ways to broadcast extremist viewpoints on social media, but also to amplify such views from other, genuine accounts by liking, sharing, retweeting, hearting, and following, just as a human would. In doing so, they game the platforms’ ranking algorithms, which reward all that engagement by giving the posts more visibility.
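To see why that engagement matters, consider a toy feed that ranks posts by a weighted sum of likes, retweets, and replies. The weights and numbers below are invented purely for illustration:

```python
# Toy model of an engagement-ranked feed: coordinated bot likes and retweets
# inflate a post's score and push it above organic content.
# The weights and counts are invented for illustration only.
posts = [
    {"id": "organic", "likes": 40, "retweets": 5, "replies": 12},
    {"id": "amplified", "likes": 40 + 500, "retweets": 5 + 300, "replies": 12},
]

def score(post, w_like=1.0, w_retweet=3.0, w_reply=2.0):
    return w_like * post["likes"] + w_retweet * post["retweets"] + w_reply * post["replies"]

# The artificially boosted post ranks first, so the algorithm shows it to more people.
for post in sorted(posts, key=score, reverse=True):
    print(post["id"], round(score(post), 1))
```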

This will seem tame compared with what’s on the way.

Strength in numbers

In the wake of Russia’s interference in the 2016 US election came a wave of discussion about how to shield politics from propaganda. Twitter has taken down suspicious accounts, including bots, in the tens of millions this year, while regulators have proposed bot bans and transparency measures, and called for better cooperation with internet platforms.

So it may appear as if we’re gaining the upper hand. And that’s partly true—the bots’ tactics have lost their novelty and never had finesse. Their strength used to lie in numbers. Propagandists would mobilize armies of them to flood the internet with posts and replies in an attempt to overwhelm genuine democratic discourse. As we’ve created technical countermeasures that are better at detecting bot-like behavior, it’s become easier to shut them down. People, too, have become more alert and effective at spotting them. The average bot does little to conceal its robotic character, and a quick look at its patterns of tweeting, or even its profile picture, can give it away.
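Those giveaway patterns are simple enough that even a crude heuristic catches many of them. The sketch below is illustrative only; the thresholds are invented, and real platform detection relies on far richer signals than posting cadence:

```python
from datetime import timedelta

# Illustrative heuristic only: flag accounts whose posting rhythm looks scripted.
# The thresholds here are invented; real detection systems use many more signals.
def looks_automated(timestamps, max_posts_per_day=144, min_gap=timedelta(seconds=5)):
    if len(timestamps) < 2:
        return False
    timestamps = sorted(timestamps)
    span_days = max((timestamps[-1] - timestamps[0]).days, 1)
    posts_per_day = len(timestamps) / span_days
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    burst_ratio = sum(g < min_gap for g in gaps) / len(gaps)
    # Sustained high volume or many near-instant consecutive posts suggest a script.
    return posts_per_day > max_posts_per_day or burst_ratio > 0.5
```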

The next generation of bots is rapidly evolving, however. Owing in large part to advances in natural-language processing—the same technology that makes possible voice-operated interfaces like Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana—these bots will behave a lot more like real people.

Admittedly, these conversational interfaces are still bumpy, but they’re getting better, and the benefits of being able to successfully decode human language are tremendous. Digital assistants are just one use of them—brands operate conversational chatbots for customer service, and publishers like CNN use them to distribute personalized media content.

Such chatbots openly declare themselves to be automated, but the propaganda bots won’t. They’ll present themselves as human users participating in online conversation in comment sections, group chats, and message boards.

Contrary to popular belief, this isn’t happening yet. Most bots merely react to keywords that trigger a boilerplate response, which rarely fits into the context or syntax of a given conversation. These responses are often easy to spot.
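A minimal sketch of such a keyword-triggered bot makes the weakness obvious: the same canned reply fires no matter what the surrounding conversation is about. The trigger words and replies here are invented examples:

```python
from typing import Optional

# Minimal sketch of a keyword-triggered bot: a canned reply fires whenever
# a trigger word appears, with no regard for context or syntax.
CANNED_REPLIES = {
    "election": "The election was rigged! Share if you agree.",
    "mainstream media": "The mainstream media is lying to you again.",
}

def reply_to(message: str) -> Optional[str]:
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    return None

# Fires the same boilerplate whether the post is about politics or pizza night.
print(reply_to("Anyone watching the election night pizza party?"))
```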

But it’s getting harder. Already, some simple preprogrammed bot scripts have been successful at misleading users. As bots learn how to understand context and intent, they become more adept at engaging in conversation without blowing their cover.

In a few years, conversational bots might seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyze a user’s data to deliver customized propaganda. Bots will point people toward extremist viewpoints and counter opposing arguments in a conversational manner.

Rather than broadcasting propaganda to everyone, these bots will direct their activity at influential people or political dissidents. They’ll attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.

Great for Google, great for bots

It’s worth taking a look at exactly how the AI techniques that power these kinds of bots are getting better, because the methods employed by tech companies also happen to be great for boosting the capabilities of political bots.

To work, natural-language processing requires substantial amounts of data. Tech companies like Google and Amazon get such data by opening their language-processing algorithms to the public via application programming interfaces, or APIs. Third parties—such as a bank, for example—that want to automate conversations with their customers can send raw data, such as the audio or text scripts of phone calls, to these APIs. Algorithms process the language and return machine-readable data ready to trigger commands. In return, the technology companies that provide these APIs get access to large amounts of conversational examples, which they can use to improve their machine learning and algorithms.
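In code, that exchange looks roughly like the sketch below. The endpoint, payload fields, and response shape are hypothetical stand-ins, since each provider defines its own API:

```python
import requests

# Hypothetical NLP API exchange: the URL, fields, and response structure
# are invented stand-ins for whatever a given provider actually defines.
API_URL = "https://nlp.example.com/v1/analyze"
API_KEY = "YOUR_API_KEY"

payload = {
    "text": "I'd like to dispute a charge on my last statement.",
    "features": ["intent", "entities", "sentiment"],
}
response = requests.post(
    API_URL, json=payload, headers={"Authorization": f"Bearer {API_KEY}"}
)
result = response.json()

# The machine-readable result drives the next step in the conversation flow,
# e.g. {"intent": "dispute_charge", "entities": [...], "sentiment": -0.2}.
if result.get("intent") == "dispute_charge":
    print("Routing the caller to the disputes workflow")
```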

In addition, almost all major technology companies make open-source algorithms for natural-language processing available to developers. The developers can use these to build new, proprietary applications—software for a voice-controlled robot, for example. As developers advance and refine the original algorithms, the technology companies profit from their feedback.
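A sketch of what that looks like from the developer’s side, using the open-source Hugging Face transformers library as one example (any comparable open-source toolkit would serve the same purpose):

```python
# One example of an open-source NLP toolkit: a pretrained sentiment model
# from the Hugging Face `transformers` library, downloaded and run locally.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("This candidate has betrayed everything we stand for!")[0]
print(result["label"], round(result["score"], 3))
```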

The problem is that such services are widely accessible to almost anyone—including the people building political bots. By providing a toolkit for automating conversation, tech companies are unwittingly teaching propaganda to talk.

The worst is yet to come

Bots versed in human language remain outliers for now. It still requires substantial expertise, computing power, and training data to equip bots with state-of-the-art language-processing algorithms. But it’s not out of reach. Since 2010 political parties and governments have spent more than half a billion dollars on social-media manipulation, turning it into a highly professionalized and well-funded sector.

There’s still a long way to go before a bot will be able to spoof a human in one-on-one conversation. Yet as the algorithms evolve, those capabilities will emerge.

As with any other innovation, once these AI techniques are out of the box, they’ll inevitably break free from the limited set of applications they were originally designed to perform.

Lisa-Maria Neudert is a doctoral candidate at the Oxford Internet Institute and a researcher with the Computational Propaganda Project.
