
Can a Social-Media Algorithm Predict a Terror Attack?

Researchers unpacked how terrorists use social media, then built an algorithm that could help us predict future events.

Monitoring social media seems like an obvious way of predicting events such as a protest or a terrorist attack, but it has so far proved challenging. For example, Brazil was largely unprepared for mass protests in 2013 even though they were organized on social media.

Such failures provided motivation for a study published today in Science. A team of researchers was able to characterize a fundamental way that terrorists and other groups use social media to organize themselves. The researchers then used this data to create an algorithm that may be able to predict the future behavior of these groups, including when their activity escalates in the lead-up to an event (see “Fighting ISIS Online”).

Most social-media platforms offer an easy way to set up a community or organization page where anyone can join, exchange information, and remain anonymous. These ad hoc groups, termed “aggregates” in this research, are being used by terrorist groups to communicate and build support. 

Neil Johnson, a physicist at the University of Miami, and his team focused on a Russia-based social platform called VKontakte, which boasts 360 million users worldwide. They manually identified 196 pro-ISIS aggregates involving 108,086 individuals, based on content that suggested a concrete connection to ISIS (rather than just keywords). The researchers saw that these aggregates grow over time, with larger ones developing from the coalescence of smaller ones. They tracked the aggregates over a six-month period to gather day-to-day behavioral data, which they then used to create a predictive algorithm.
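That coalescence dynamic, in which many small groups intermittently merge and occasionally dissolve, can be illustrated with a toy simulation. The sketch below is not the paper’s model; the parameters (such as frag_prob) and the size-weighted merging rule are illustrative assumptions.

```python
import random

def simulate(num_individuals=1000, steps=20000, frag_prob=0.05, seed=0):
    """Toy coalescence-fragmentation dynamics (an illustrative assumption,
    not the study's model): aggregates mostly merge over time, and
    occasionally one fragments back into lone individuals."""
    rng = random.Random(seed)
    aggregates = [1] * num_individuals  # everyone starts alone
    for _ in range(steps):
        if rng.random() < frag_prob:
            # Fragmentation: pick an aggregate (weighted by size) and dissolve it.
            i = rng.choices(range(len(aggregates)), weights=aggregates)[0]
            size = aggregates.pop(i)
            aggregates.extend([1] * size)
        elif len(aggregates) > 1:
            # Coalescence: pick two aggregates (weighted by size) and merge them.
            i, j = rng.choices(range(len(aggregates)), weights=aggregates, k=2)
            if i != j:
                merged = aggregates[i] + aggregates[j]
                for k in sorted((i, j), reverse=True):
                    aggregates.pop(k)
                aggregates.append(merged)
    return sorted(aggregates, reverse=True)

print(simulate()[:5])  # a handful of large aggregates emerge from many small ones
```

Run repeatedly, this kind of process yields a heavy-tailed mix of many small aggregates and a few large ones, which is the qualitative pattern the researchers describe.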

The research surfaces some fundamental characteristics of social groups that could be important for combating terrorism: for example, that it is more effective to identify aggregates than individuals (which are far more numerous and time-consuming to parse), and to target smaller, weaker aggregates before they combine into larger ones. The analysis also indicates that the rate of aggregate formation escalates ahead of big events, as it did before the 2013 protests in Brazil and the 2014 ISIS assault on Kobane, Syria.
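A standard way to quantify this kind of escalation is a progress-curve fit: model the gap between the appearance of the nth and (n+1)th new aggregate as T_n = T_1 · n^(−b), so a positive exponent b means creation is speeding up. The sketch below, using invented interval data, shows the general technique; it is not a reconstruction of the study’s code.

```python
import math

def fit_escalation(intervals):
    """Fit intervals T_n between successive new aggregates to a progress
    curve T_n = T_1 * n**(-b) via least squares in log-log space.
    b > 0 means the creation rate is escalating."""
    xs = [math.log(n) for n in range(1, len(intervals) + 1)]
    ys = [math.log(t) for t in intervals]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    b = -slope                          # escalation exponent
    t1 = math.exp(mean_y + b * mean_x)  # fitted first interval
    return b, t1

# Illustrative data: shrinking gaps (in days) between new aggregates.
intervals = [10.0, 6.1, 4.8, 3.9, 3.2, 2.9, 2.5, 2.3]
b, t1 = fit_escalation(intervals)
print(f"escalation exponent b = {b:.2f}")  # b > 0 signals escalation
```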

Johnson says that the information uncovered by their algorithm could be used to create a tool that aids anti-terrorism efforts (see “What Google and Facebook Can Do to Fight ISIS”). “It would be possible to create automated machinery that then looks across the different online media sites, and detects the aggregates, detects their dynamics, checks it out, looks for the escalation, and therefore heightens alerts when there’s an escalation of aggregate creation,” he says.
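As a hypothetical sketch of the kind of alerting rule Johnson describes (the window length and threshold here are invented for illustration), such a monitor could compare the recent rate of new-aggregate creation with the preceding period and raise a flag when it jumps:

```python
def escalation_alert(creation_times, window=14, ratio_threshold=2.0):
    """Toy alerting rule: compare the number of new aggregates detected in
    the most recent window of days against the preceding window, and alert
    if the rate has at least doubled. All thresholds are illustrative."""
    latest = max(creation_times)
    recent = sum(1 for t in creation_times if t >= latest - window)
    prior = sum(1 for t in creation_times
                if latest - 2 * window <= t < latest - window)
    rate_ratio = recent / max(prior, 1)
    return rate_ratio >= ratio_threshold, rate_ratio

# Illustrative timestamps (day numbers) at which new aggregates were detected.
times = [1, 5, 9, 14, 18, 21, 23, 24, 25, 26, 27, 27, 28]
alert, ratio = escalation_alert(times)
print(f"alert={alert}, recent/prior rate ratio={ratio:.1f}")
```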

Eliminating terrorist activity on social media presents a challenge: shutdowns often come from the platform itself, which must navigate the line between public safety and free speech. Facebook has a team that identifies and removes individuals or groups associated with terrorist content, and earlier this year Twitter suspended 125,000 accounts with links to ISIS. Individual hackers and government agencies may also intervene; last year, the hacktivist group Anonymous claimed to have taken down 20,000 Twitter accounts with ties to ISIS.

But some scientists question the value of the algorithm as a predictive tool for anti-terrorism efforts. Andrew Gelman, a professor of statistics and politics at Columbia University, thinks the idea of looking at aggregates is a good one, but the study’s analysis of the behaviors of aggregates may be more useful than its predictive algorithm.

“In theory there is some benefit from modeling,” he says, “but I don’t think they’re really there yet.”
