Monitoring social media seems like an obvious way of predicting events such as a protest or a terrorist attack, but it has so far proved challenging. For example, Brazil was largely unprepared for mass protests in 2013 even though they were organized on social media.
Such failures provided motivation for a study published today in Science. A team of researchers characterized a fundamental way that terrorists and other groups use social media to organize themselves. The researchers then used those data to create an algorithm that may be able to predict the groups' future behavior, including when their activity escalates leading up to an event (see “Fighting ISIS Online”).
Most social-media platforms offer an easy way to set up a community or organization page where anyone can join, exchange information, and remain anonymous. These ad hoc groups, termed “aggregates” in this research, are being used by terrorist groups to communicate and build support.
Neil Johnson, a physicist at the University of Miami, and his team focused on a Russia-based social platform called VKontakte, which boasts 360 million users worldwide. They manually identified 196 pro-ISIS aggregates involving 108,086 individuals, based on content that suggested a concrete connection to ISIS (rather than just keywords). The researchers saw that these aggregates grow over time, and that larger ones develop from the coalescence of smaller ones. They tracked the aggregates over a six-month period to gather day-to-day behavioral data, which they then used to build a predictive algorithm.
The research surfaces some fundamental characteristics of social groups that could be important for combating terrorism—for example, that it is more effective to identify aggregates rather than individuals (which are more numerous and time-consuming to parse), and to target smaller, weaker aggregates before they combine into larger ones. The analysis also indicates that the rate of aggregate formation escalates leading up to big events, as it did before the 2013 protests in Brazil and the 2014 ISIS assault on Kobane, Syria.
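The merging behavior described above can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical coalescence-fragmentation model (a common way to model such group dynamics); the function name, parameters, and merge/fragment rules are illustrative assumptions, not the study's actual algorithm. At each step, two randomly chosen groups merge, or one group dissolves back into individuals—over time, a few large aggregates tend to emerge from many small ones.

```python
import random

def simulate_coalescence(n_individuals=1000, steps=5000,
                         p_merge=0.95, seed=42):
    """Toy coalescence-fragmentation model (illustrative only).

    At each step, with probability p_merge two randomly chosen
    groups merge into one larger aggregate; otherwise a randomly
    chosen aggregate dissolves and its members become singletons.
    """
    random.seed(seed)
    groups = [1] * n_individuals  # everyone starts on their own
    for _ in range(steps):
        if len(groups) > 1 and random.random() < p_merge:
            # Coalescence: two aggregates combine into a larger one.
            a = groups.pop(random.randrange(len(groups)))
            b = groups.pop(random.randrange(len(groups)))
            groups.append(a + b)
        else:
            # Fragmentation: an aggregate shuts down; members scatter.
            g = groups.pop(random.randrange(len(groups)))
            groups.extend([1] * g)
    return sorted(groups, reverse=True)

sizes = simulate_coalescence()
print(sizes[:5])  # the handful of largest aggregates
```

Even this crude rule set reproduces the qualitative pattern the researchers observed: small groups continually appear, merge, and occasionally collapse, while a few large aggregates come to dominate.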
Johnson says that the information uncovered by their algorithm could be used to create a tool that aids anti-terrorism efforts (see “What Google and Facebook Can Do to Fight ISIS”). “It would be possible to create automated machinery that then looks across the different online media sites, and detects the aggregates, detects their dynamics, checks it out, looks for the escalation, and therefore heightens alerts when there's an escalation of aggregate creation,” he says.
Eliminating terrorist activity on social media presents a challenge—often shutdowns come from the platform itself, which must navigate the line between public safety and free speech. Facebook has a team that identifies and removes individuals or groups associated with terrorist content, and earlier this year Twitter suspended 125,000 accounts with links to ISIS. Individual hackers and government agencies may also intervene—last year, the online hacktivist group Anonymous removed 20,000 Twitter accounts with ties to ISIS.
But some scientists question the value of the algorithm as a predictive tool for anti-terrorism efforts. Andrew Gelman, a professor of statistics and politics at Columbia University, thinks the idea of looking at aggregates is a good one, but the study’s analysis of the behaviors of aggregates may be more useful than its predictive algorithm.
“In theory there is some benefit from modeling,” he says, “but I don’t think they’re really there yet.”