
How Quantum Probability Theory Could Explain Human Logical Fallacies

A quantum model of reasoning beats its classical counterpart in explaining why humans make errors in judging probabilities.

The conjunction and disjunction fallacies are famous for revealing the limits of human reasoning about probability.

These fallacies can be measured by telling people a short story about a character and then asking how likely certain statements about that character are. Take a look at this story about Linda (which I’ve taken from Wikipedia):

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.

  2. Linda is a bank teller and is active in the feminist movement.

It turns out that 85 per cent of people choose the second option. But the probability of two events occurring together (in conjunction) is always less than or equal to the probability of one of them alone.

This is the conjunction fallacy (humans make a mirror-image error with the probability of one event OR another being true, rating a single event as more likely than the disjunction that contains it; this is called the disjunction fallacy).
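In classical (Kolmogorov) probability, the bound is a one-line consequence of the product rule, since a conditional probability can never exceed 1:

```latex
P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)
```

So no matter how strongly the story suggests Linda is a feminist, “bank teller and feminist” cannot classically be more probable than “bank teller” alone.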

The question is how to explain why humans have trouble with this kind of reasoning. Until now, psychologists have turned to classical probability theory to study probability judgement errors, using it to build mathematical models of human reasoning that account for the mistakes.

But Jerome Busemeyer at Indiana University and buddies have a different take. They say that quantum probability theory leads to more realistic predictions about the type of errors humans make.

“Quantum probability theory is a general and coherent theory based on a set of (von Neumann) axioms which relax some of the constraints underlying classic (Kolmogorov) probability theory,” say the team.

That’s an interesting insight, to say the least. And if it pans out, it signals a fundamental shift in thinking about the brain.

What Busemeyer and co are saying is that the principles of quantum information processing, including the ideas of superposition and interference, lead to better models of the way humans make decisions.

What this idea needs, of course, is some kind of testable hypothesis that differentiates it from classical models. The team hint at this when describing how the principle of superposition applies to a voter who has to choose between two candidates.

According to classical theory, before the vote is cast, the voter is in a mixed state. But Busemeyer and co say that thinking about the voter in a superposition of states is a better model. That kind of thinking ought to lead to some testable predictions.
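To make the contrast concrete, here is a minimal sketch in Python (my own toy illustration, not the model from the paper) of how quantum probability can accommodate the Linda result. Beliefs are represented as a unit vector, yes/no questions as projectors, and a “conjunction” is judged by answering the questions in sequence; when the projectors do not commute, the sequence can come out more likely than one of its parts. The angles below are invented purely for illustration.

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto the ray at angle theta (radians)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Belief state after reading the Linda story, plus two incompatible
# yes/no questions encoded as rays at hypothetical angles.
psi = np.array([1.0, 0.0])                 # "who Linda seems to be"
P_feminist = projector(np.deg2rad(20))     # close to psi: feels likely
P_bank_teller = projector(np.deg2rad(70))  # far from psi: feels unlikely

# Single event: probability is the squared length of the projected state.
p_bank = np.linalg.norm(P_bank_teller @ psi) ** 2

# "Conjunction" judged as a sequence: the likely event first, then the
# unlikely one, with each projection updating the belief state.
p_fem_then_bank = np.linalg.norm(P_bank_teller @ P_feminist @ psi) ** 2

print(f"P(bank teller)                = {p_bank:.3f}")           # ~0.117
print(f"P(feminist, then bank teller) = {p_fem_then_bank:.3f}")  # ~0.365
```

Classically the second number could never exceed the first. Here it does because projecting onto “feminist” first rotates the belief state closer to “bank teller”, exactly the kind of interference effect that classical probability has no room for.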

Busemeyer and co are at pains to distance themselves from research that uses quantum mechanics to model the brain in an attempt to understand consciousness and memory. “We are not following this line,” they say. Instead, they keep their work far more abstract.

But inevitably the question will be asked: if the principles of quantum information processing better describe the way humans make decisions, what does that imply about the way the brain works?

There’s no telling where this kind of thinking will lead.

Ref: arxiv.org/abs/0909.2789: Quantum Probability Explanations for Probability Judgment ‘Errors’
