How Computational Complexity Will Revolutionize Philosophy

The theory of computation has had a profound influence on philosophical thinking. But computational complexity theory is about to have an even bigger effect, argues one computer scientist.

Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it’s hard to think of an idea that has had a bigger impact on philosophy.

And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.

At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.

Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the resources grow reasonably slowly, like n, n^2 or some other polynomial function of n. Or they grow unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
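To make that gap concrete, here is a minimal Python sketch (our illustration, not something from Aaronson's essay) that tabulates the step counts of a hypothetical n^2 algorithm and a hypothetical 2^n algorithm, along with the wall-clock time at a generous billion steps per second:

```python
# Compare polynomial and exponential step counts as the input size n grows.
# For scale: the Universe is roughly 1.4e10 years old.
SECONDS_PER_YEAR = 3.15e7

for n in (10, 20, 50, 100):
    poly, expo = n**2, 2**n
    years = expo / 1e9 / SECONDS_PER_YEAR  # wall time at 1e9 steps/second
    print(f"n={n:>3}  n^2={poly:>6}  2^n={expo:.2e}  ~{years:.1e} years")
```

At n = 100 the exponential algorithm needs around 4 x 10^13 years, thousands of times the age of the Universe, while the polynomial one finishes in an instant.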

So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it’ll take longer than the lifetime of the Universe.

That’s hugely significant. As Aaronson puts it: “Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.”
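A back-of-the-envelope calculation shows just how lopsided that comparison is. The page and alphabet sizes below are our own rough assumptions, not figures from the essay:

```python
# Roughly how many possible 400-page books are there? Assume ~2,500
# characters per page and a 50-symbol alphabet (both figures are guesses).
from math import log10

chars = 400 * 2500               # characters in one book
books_log10 = chars * log10(50)  # log10 of the number of possible books

print(f"possible 400-page books: ~10^{books_log10:,.0f}")  # ~10^1,698,970
print("steps to count to a thousand-digit number: ~10^999")
```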

He notes that it’s tempting to assume that once we know whether something is computable, the question of how long it takes is merely one of engineering rather than philosophy. But he then shows how the ideas behind computational complexity can extend philosophical thinking in many areas.

Take the problem of artificial intelligence and the question of whether computers can ever think like humans. In his book The Emperor’s New Mind, Roger Penrose famously argues that they can’t. Whatever a computer can do by following fixed formal rules, he says, it will never be able to ‘see’ the consistency of its own rules. Humans, on the other hand, can.

One way to measure the difference between a human and a computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and those given by a human, then there is no measurable difference.

But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.

In this way, a computer with a big enough lookup table can always hold a conversation that is essentially indistinguishable from one between humans.

“So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory,” says Aaronson.

Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the database (or lookup table) approach “works,” it requires computational resources that grow exponentially with the length of the conversation.
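Here is a minimal Python sketch of that idea (our illustration; the function name and toy data are assumptions, not details from the essay), along with the exponential table-size count that drives Aaronson's point:

```python
# A sketch of the lookup-table chatbot described above. The table must hold
# a reply for every possible conversation history, which is what makes the
# approach blow up.
from math import log10

def respond(table, history):
    """Answer by pure retrieval: look the whole conversation up in the table."""
    return table.get(history, "...")

# A tiny table: keys are full conversation histories, values are human replies.
table = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "How are you?"): "Fine, thanks.",
}
print(respond(table, ("Hello", "Hi there!", "How are you?")))  # Fine, thanks.

# The catch: over even a 30-symbol alphabet, covering every conversation of
# n characters needs about 30^n entries.
for n in (10, 100, 1000):
    print(f"n={n:>4}: ~10^{n * log10(30):,.0f} table entries")
```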

This, Aaronson says, leads to a powerful new way to think about the problem of AI. Penrose could grant that the lookup table approach is possible in principle, yet argue that it is effectively impossible because of the huge computational resources it requires.

By this argument, the difference between humans and machines is essentially one of computational complexity.

That’s an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.

Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.

But he argues that these criticisms do not give philosophers (or anybody else) license to dismiss the arguments of complexity theory out of hand. Indeed, many of these criticisms raise interesting philosophical questions in themselves.

Computational complexity theory is a relatively young discipline, building on advances made in the 1970s, ’80s and ’90s. And that’s why its biggest impacts are yet to come.

Aaronson points us in the direction of some of them in an essay that is thought-provoking, entertaining and highly readable. If you have an hour or two to spare, it’s worth a read.

Ref: arxiv.org/abs/1108.1791: Why Philosophers Should Care About Computational Complexity
