
Mathematics of Opinion Formation Reveals How Moderation Trumps Extremism

Historians know that the best way to replace an extremist view is with an equally extreme opposing view. Now mathematicians have discovered how to make moderation spread instead

History is littered with examples of new ideologies that have swept through populations, dramatically changing the way people think. 

More often than not, the process begins when a community’s way of thinking is dominated by a dogma that is an important part of the group’s institutions and common practices. Then a new way of thinking comes along, often backed by a small group of unwavering advocates.

This challenges the status quo and steadily wins over the population, eventually replacing the old ideas and becoming the dominant new ideology.

But there is something curious about this phenomenon, say Steve Strogatz at Cornell University in Ithaca and a few pals. In many cases, the new ideology is as extreme as the old. “Why is it that moderate positions so rarely prevail?” they ask.

Today, they provide an answer of sorts using a simplified model of the way ideas spread through a population.

Their model consists of a society of people who can hold the extreme opinion, A, or the opposing view, B. Some people, the moderates, hold neither A nor B (they are called ABs). People can change their view, but some fraction of the population will never change. These are the zealots.

Strogatz and co introduce rules for how people change their opinion. The model progresses in discrete steps of time. At each time step, they choose a speaker and a listener from the population at random. If the speaker is an A or B and talks to somebody with the opposing view, the listener becomes a moderate, holding neither view.

However, if an A or B talks to a moderate, then the listener is converted to that point of view (either A or B). In all other cases, there is no change.
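These conversation rules are simple enough to sketch as a toy agent-based simulation. The code below is our own illustration of the rules as described above, not the authors' code; the population size, zealot fraction, and number of steps are arbitrary choices.

```python
import random

A, B, AB = "A", "B", "AB"
OPPOSITE = {A: B, B: A}

def step(states, zealots, rng):
    """One conversation: a random speaker talks to a random listener."""
    n = len(states)
    speaker, listener = rng.randrange(n), rng.randrange(n)
    if speaker == listener or listener in zealots:
        return  # zealots never change their minds
    s, l = states[speaker], states[listener]
    if s in OPPOSITE and l == OPPOSITE[s]:
        states[listener] = AB   # hearing the opposing view makes the listener a moderate
    elif s in OPPOSITE and l == AB:
        states[listener] = s    # a moderate listener adopts the speaker's view
    # all other pairings: no change

def simulate(n=1000, zealot_frac=0.15, steps=200_000, seed=1):
    """Start with A-zealots facing an otherwise all-B population; return final shares."""
    rng = random.Random(seed)
    zealots = set(range(int(zealot_frac * n)))  # agents permanently committed to A
    states = [A] * len(zealots) + [B] * (n - len(zealots))
    for _ in range(steps):
        step(states, zealots, rng)
    return {v: states.count(v) / n for v in (A, B, AB)}
```

Because zealots never convert, the final share of A can never drop below the zealot fraction, whatever else happens in the population.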

Strogatz and co then explore various scenarios to see how the views spread, given different fractions of the starting population and other assumptions.

For example, they look at the case where the starting population consists of people who hold view B and zealots holding A. This is equivalent to B being the reigning view and A being the new dogma. 

This scenario depends crucially on the fraction of A zealots. Below some threshold, the reigning view, B, prevails. But when the fraction of zealots rises above the threshold, A quickly spreads through the population and takes over, leaving few Bs and almost no moderates.
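One way to see the tipping point is to write mean-field rate equations for the rules above and integrate them. The equations below are our own reading of the model, not taken from the paper, and the zealot fractions used are illustrative; for these particular equations the mixed state happens to disappear at a zealot fraction of about 13%.

```python
def final_shares(p, t_max=400.0, dt=0.01):
    """Euler-integrate mean-field rate equations for the opinion model.

    nA, nB: fractions of uncommitted people holding A or B;
    p: fraction of zealots permanently committed to A;
    moderates (AB) make up the rest: 1 - p - nA - nB.
    """
    nA, nB = 0.0, 1.0 - p  # start: every uncommitted person holds B
    for _ in range(int(t_max / dt)):
        nAB = 1.0 - p - nA - nB
        dA = (nA + p) * nAB - nB * nA   # moderates won over to A, minus As made moderate by Bs
        dB = nB * nAB - (nA + p) * nB   # moderates won over to B, minus Bs made moderate by As
        nA += dA * dt
        nB += dB * dt
    return nA + p, nB, 1.0 - p - nA - nB  # total A (incl. zealots), B, moderates
```

With 2% zealots, B remains the large majority; with 30% zealots, B collapses and A takes over almost completely, with the moderates all but vanishing.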

So an interesting question is how to increase the fraction of moderates. Strogatz and co look at various scenarios, such as making moderates less likely to convert by introducing a stubbornness factor.

It’s easy to imagine that making the moderates more stubborn increases their number. But in fact, exactly the opposite occurs. Increasing the moderates’ stubbornness makes them more vulnerable to being taken over by zealots.

The reason is subtle. Making moderates more stubborn certainly reduces their conversion rate to the new view, but, crucially, it also reduces the flow of moderates to the old opinion. In this way, the undecided population is steadily depleted, eventually causing it to collapse.
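This counterintuitive effect shows up even in the mean-field sketch if a moderate listener is allowed to resist conversion with probability s. Again, this is our own toy rendering of "stubbornness" with illustrative numbers, not the paper's exact model.

```python
def final_B_share(p, s, t_max=2000.0, dt=0.02):
    """Mean-field model where a moderate listener converts only with probability 1 - s.

    p: fraction of zealots committed to A; s: moderates' stubbornness in [0, 1).
    Returns the final share of uncommitted B-holders.
    """
    nA, nB = 0.0, 1.0 - p  # start: every uncommitted person holds B
    for _ in range(int(t_max / dt)):
        nAB = 1.0 - p - nA - nB
        dA = (1 - s) * (nA + p) * nAB - nB * nA  # stubbornness slows conversion of moderates
        dB = (1 - s) * nB * nAB - (nA + p) * nB  # ...to the new view AND back to the old one
        nA += dA * dt
        nB += dB * dt
    return nB
```

At a zealot fraction of 12%, pliable moderates (s = 0) let B survive, because enough moderates flow back to the old opinion. Stubborn moderates (s = 0.5) choke off that return flow, the undecided pool drains, and A takes over.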

Strogatz and co study seven different scenarios, looking for ways to increase the population of moderates. But the moderates are wiped out in all these scenarios.

Except one. Strogatz and co point out that their model suggests that the ability to convert others to your view plays an important role. So they create a scenario in which the moderates have the ability to evangelise.

In this scenario, the population of moderates can be maintained. However, the moderates can also disappear if the level of evangelism dies down.

Strogatz and co go on to show that an even better approach is to create a background level of evangelism, which tends to convert people to moderation without having to converse with moderates. This is equivalent to some kind of environmental factor such as a TV advertising campaign promoting moderation. And in this case, moderation prevails.

Strogatz and co are quick to point out that this is a simplified model. “By itself, this final assessment should be regarded with caution,” they say.

But it does suggest some interesting avenues for future research. And with extremist views playing an ever more important role in global stability, perhaps the time is right to examine these ideas in more detail.

Ref: Encouraging Moderation: Clues From A Simple Model Of Ideological Conflict 
