
The study of epidemics has flourished in recent years thanks to computer models that simulate the way infectious agents spread. Many of these models and the assumptions behind them have been validated with real data from the spread of disease. So there is a growing body of evidence showing that computer simulations are a valuable predictive tool that can help tackle and prevent epidemics.

But it’s not just disease that spreads in this way. Various researchers are using the same ideas to model the spread of ideas and opinions. One interesting question is how radical and extreme ideas spread through society and what measures can be taken to control and prevent this process.

Today, we get a fascinating insight into this problem thanks to the work of Friedrich August at the Technical University of Berlin and a few buddies. Their approach is to divide society into a number of subgroups and then suppose that there is a certain probability that a member of one subgroup will switch to another. They then simulate how the sizes of these subgroups change over time and as the probabilities vary.

That’s standard fare until you start labeling the groups and imagining how they interact. August and company hypothesize that within any society there is a subgroup of people who are actively radical, that is, they practice extreme behavior. They assume there is another subgroup of passive supporters who accept but do not practice extreme behavior. And finally there is a third subgroup of neutral individuals who are susceptible to being converted into passive supporters.

A crucial question here is how to assign probabilities for switching from one group to another. One important process, says August’s group, is the rate at which active radicals are removed from society by processes such as migration, deportation, arrest and death.

August and company assume that some of these removals have a radicalizing effect on the susceptible group. For example, the arrest or murder of an active radical turns some neutrals into passive supporters.

When this happens a feedback loop is set up: the removal of active radicals generates more passive supporters from which more active radicals can be recruited and so on.

Feedback loops are interesting because they lead to nonlinear behavior, where the ordinary intuitive rules of cause and effect no longer apply. So a small increase in one type of behavior can lead to a massive increase in another. In the language of physics, a phase transition occurs.

Sure enough, that’s exactly what happens in August’s model. They show that for various parameters in their model, a small increase in the removal rate of active radicals generates a massive increase in passive supporters, providing an almost limitless pool from which to recruit more active radicals.
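The feedback loop described above can be illustrated with a toy compartment model. To be clear, this is a minimal sketch, not the equations from August's paper: the compartment names, rate terms, and all parameter values below are illustrative assumptions. The key mechanism it encodes is the one the article describes, in which removing active radicals converts some neutrals into passive supporters, from whom new radicals are then recruited.

```python
def simulate(removal_rate, recruit_rate=0.1, conversions_per_removal=2.0,
             steps=5000, dt=0.01):
    """Euler-integrate a toy three-group model (not the paper's equations).

    N, P, A are population fractions: neutrals, passive supporters,
    and active radicals. Removed radicals leave the population entirely.
    """
    N, P, A = 0.98, 0.01, 0.01
    for _ in range(steps):
        removed = removal_rate * A                          # radicals taken out of society
        converted = conversions_per_removal * removed * N   # removals radicalize neutrals
        recruited = recruit_rate * P                        # passives become active radicals
        N += dt * (-converted)
        P += dt * (converted - recruited)
        A += dt * (recruited - removed)
    return N, P, A

# Sweep the removal rate to see how the passive-supporter pool responds.
for r in (0.0, 0.1, 0.3, 0.6):
    N, P, A = simulate(removal_rate=r)
    print(f"removal rate {r:.1f}: neutrals {N:.2f}, "
          f"passives {P:.2f}, actives {A:.2f}")
```

Sweeping a parameter like the removal rate in a sketch of this kind is how one would probe for the abrupt, nonlinear jump in passive support that the authors report, though reproducing their actual phase transition would require the specific model in the paper.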

What this model describes, of course, is the cycle of violence that occurs in so many of the world’s trouble spots.

That has profound implications for governments contemplating military intervention that is likely to cause “collateral damage.” If you replace the term “active radical” with “terrorist” then a clear prediction of this model is that military intervention creates the conditions in which terrorism flourishes.

They say that this feedback loop can be halted only if the removal of terrorists can be achieved without the attendant radicalizing side effects. As August and colleagues put it: “if this happened practically without casualties, fatalities, applying torture or committing terroristic acts against the local population.”

This is an interesting approach. It clearly shows that public opinion and behavior can change dramatically in ways that are difficult to predict.

But the work is by no means complete. These models and the phase changes they predict need to be studied in much more detail. For example, it’s likely that certain types of extreme behavior can drive away passive supporters so there may be important negative feedback effects that also need to be accounted for.

August and colleagues are strident in their conclusions, however. They say: “This strongly indicates that military solutions are inappropriate.” It’ll be interesting to see how their ideas spread.

Ref: arxiv.org/abs/1010.1953: Passive Supporters of Terrorism and Phase Transitions
