
The 50-year-old problem that eludes theoretical computer science

A solution to P vs NP could unlock countless computational problems—or keep them forever out of reach.

The Steiner tree problem: Connect a set of points with line segments of minimum total length.
DEREK BRAHNEY
October 27, 2021

1. On Monday, July 19, 2021, in the middle of another strange pandemic summer, a leading computer scientist in the field of complexity theory tweeted out a public service message about an administrative snafu at a journal. He signed off with a very loaded “Happy Monday.”

In a parallel universe, it might have been a very happy Monday indeed. A proof had appeared online at the esteemed journal ACM Transactions on Computational Theory, which trades in “outstanding original research exploring the limits of feasible computation.” The result purported to solve the problem of all problems—the Holy Grail of theoretical computer science, worth a $1 million prize and fame rivaling Aristotle’s forevermore.

This treasured problem—known as “P versus NP”—is considered at once the most important in theoretical computer science and mathematics and completely out of reach. It addresses questions central to the promise, limits, and ambitions of computation, asking: 

Why are some problems harder than others? 

Which problems can computers realistically solve? 

How much time will it take? 

And it’s a quest with big philosophical and practical payoffs. 

“Look, this P versus NP question, what can I say?” Scott Aaronson, a computer scientist at the University of Texas at Austin, wrote in his memoir of ideas, Quantum Computing Since Democritus. “People like to describe it as ‘probably the central unsolved problem of theoretical computer science.’ That’s a comical understatement. P vs NP is one of the deepest questions that human beings have ever asked.” 

One way to think of this story’s protagonists is as follows:

“P” represents problems that a computer can handily solve. 

“NP” represents problems that, once solved, are easy to check—like jigsaw puzzles, or Sudoku. Many NP problems correspond to some of the most stubborn and urgent problems society faces. 

The million-dollar question posed by P vs. NP is this: Are these two classes of problems one and the same? Which is to say, could the problems that seem so difficult in fact be solved with an algorithm in a reasonable amount of time, if only the right, devilishly fast algorithm could be found? If so, many hard problems are suddenly solvable. And their algorithmic solutions could bring about societal changes of utopian proportions—in medicine and engineering and economics, biology and ecology, neuroscience and social science, industry, the arts, even politics and beyond. 
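To make “easy to check” concrete, here is a minimal Python sketch of my own (nothing from the article): a verifier for a completed Sudoku grid. Confirming that every row, column, and 3×3 box contains the digits 1 through 9 takes a single pass over 81 cells, even though finding a solution from a sparse starting grid can require an enormous search.

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 Sudoku: every row, column, and 3x3 box
    must contain each digit 1 through 9 exactly once. Verification is
    fast, which is the sense in which Sudoku-style puzzles sit in NP."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(grid[r][c] for r in range(9)) for c in range(9)]
    boxes = [
        {grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)}
        for br in range(3) for bc in range(3)
    ]
    return all(group == digits for group in rows + cols + boxes)

# A filled-in grid (a standard textbook example) passes the check instantly.
solution = [
    [5, 3, 4, 6, 7, 8, 9, 1, 2],
    [6, 7, 2, 1, 9, 5, 3, 4, 8],
    [1, 9, 8, 3, 4, 2, 5, 6, 7],
    [8, 5, 9, 7, 6, 1, 4, 2, 3],
    [4, 2, 6, 8, 5, 3, 7, 9, 1],
    [7, 1, 3, 9, 2, 4, 8, 5, 6],
    [9, 6, 1, 5, 3, 7, 2, 8, 4],
    [2, 8, 7, 4, 1, 9, 6, 3, 5],
    [3, 4, 5, 2, 8, 6, 1, 7, 9],
]
print(is_valid_sudoku(solution))  # True
```

Whether a comparably fast method exists for finding such solutions in general is precisely the kind of question P vs. NP formalizes.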

Sometimes the classifications evolve—hard problems are revealed to be easy when researchers find more efficient solutions. Testing whether a number is prime, for instance, has been known to be in the class NP since the mid-1970s. But in 2002, three computer scientists at the Indian Institute of Technology Kanpur devised an unconditional proof and a clever algorithm that finally confirmed the problem was also in P. 

If all the tricky problems could be transformed with such algorithmic sleight of hand, the consequences for society—for humanity and our planet—would be enormous. 

For starters, encryption systems, most of which are based on NP problems, would be cracked. We’d need to find a completely different approach to sending secure communications. Protein folding, a 50-year-old grand challenge in biology, would become more tractable, unlocking newfound abilities to design drugs that cure or treat disease and discover enzymes that break down industrial waste. It would also mean finding optimal solutions to everyday hard problems, such as mapping out a road trip to hit all destinations with minimal driving, or seating wedding guests so that only friends share the same dinner table. 

Since the P vs. NP problem’s inception 50 years ago—emerging from the momentous intersection of mathematical logic and electronic computing technology—researchers around the world have made Herculean attempts at a solution. Some computer scientists have suggested that the efforts might be better likened to those of Sisyphus, who labored without resolution. But while those who first explored the problem are running out of time to see a solution, the newer generations are happily taking up the quest.

For Manuel Sabin, a computer scientist just finishing a PhD at UC Berkeley, the allure is in probing the impossibility of problems where “you won’t know the answer until the sun engulfs the earth.” The search might be quixotic, but Sabin would regret not tilting at these windmills. 

Timothy Gowers, a mathematician at the University of Cambridge, calls it “one of my personal mathematical diseases.” He lost the summer of 2013 to the pursuit, after he asked students for an essay about the subject on a test. As he recounted on his blog: “After marking the essays in June, I thought I would just spend an hour or two thinking about the problem again, and that hour or two accidentally turned into about three months.” 

The traveling salesman problem: Find the shortest possible route that visits each city once, ultimately returning to the city of origin.
DEREK BRAHNEY

The quest has even stumped the University of Toronto computer scientist Stephen Cook, who framed the problem and launched the field of computational complexity with a seminal paper in 1971. For this work, he won the Turing Award, computer science’s equivalent of the Nobel Prize. But he’s had no luck finding a solution. Cook says he never had any good ideas—“It’s just too difficult.”

2. Michael Sipser, an MIT computer scientist, estimates he’s spent, all told, as much as a decade on the problem. He got interested during grad school in the 1970s, and he bet his fellow student Len Adleman an ounce of gold that it would be solved by the end of the century (Sipser paid up). 

In the 1980s, he achieved a nice result solving a version of the problem with a “restricted” computational model—leading to an exciting period in the field with several beautiful results, giving cause for hope that a solution might not be too far off. 

Sipser still returns to the problem every now and then, and he’s a steadfast ambassador, delivering countless talks on the subject. 

The way he inches into an accessible explanation of P vs. NP is with a basic multiplication problem: 7 × 13 = ?

The answer, 91, is easy enough to compute in your head. Though multiplying bigger numbers isn’t as easy, it would still take a computer practically no time at all. 

But flipping those problems around is another matter. Consider, for example, finding the two 97-digit prime numbers that multiply to produce this very large number: 

310 7418240490 0437213507 5003588856 7930037346 0228427275 4572016194 8823206440 5180815045 5634682967 1723286782 4379162728 3803341547 1073108501 9195485290 0733772482 2783525742 3864540146 9173660247 7652346609 

This factoring problem was part of a challenge assessing the difficulty of cracking the RSA keys used in cryptography. Solving it took 80 processors five months of continuous computing, Sipser explains—which works out to roughly 33 years with only a single processor. Factoring is a hard problem because all current methods seek the answer via “brute force,” checking the astronomical number of possibilities one by one by one. Even for a computer, this is a slow process. 
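Sipser’s example can be made concrete with a few lines of Python. This is my own sketch of the crudest brute-force method, trial division (the RSA challenge team used far more sophisticated algorithms); the point is only that the work grows exponentially in the number of digits.

```python
import math

def trial_division_factor(n):
    """Find a nontrivial factor of n by brute-force trial division.

    The loop runs up to sqrt(n), so the running time grows roughly like
    10 to the power of d/2 for a d-digit number: exponential in the
    length of the input."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate  # smallest prime factor
    return None  # n is prime (or 1)

# Multiplying is instant, even by hand:
print(91 == 7 * 13)               # True
# Un-multiplying a small number is still fast...
print(trial_division_factor(91))  # 7
# ...but every extra digit multiplies the search space by about sqrt(10),
# so a 193-digit challenge number is hopeless for this loop.
```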

“The interesting question here is, do you really need to search?” Sipser says. “Or is there some way of solving the factoring problem that zooms in on the answer quickly without searching? We don’t know the answer to that question.”

Questions like this one get at the heart of computational complexity—a field full of beastly problems that researchers are trying to understand. Aaronson has assembled a “Complexity Zoo,” an online catalogue with 545 classes of problems (and counting). Each is classified according to its complexity, or difficulty, and the resources—time, memory, energy—required to find solutions. P and NP are the main attractions. 


P is “the class that started it all.” It is the class of problems that can be solved by a computer in a reasonable amount of time. More specifically, P problems are those for which the time it takes to find a solution can be described by a polynomial function, such as n^2. In polynomial-time algorithms, n is the size of the input, and growth against that input occurs at a reasonable rate (in this case, to the power of two).

By contrast, some hard NP problems might only be solvable by algorithms with run times defined by an exponential function, such as 2^n—producing an exponential growth rate (as with the spread of covid). NP, as Aaronson describes it, is “the class of dashed hopes and idle dreams.” He is, though, quick to clarify a common misconception: not all NP problems are difficult. The class NP in fact contains the class P—because problems with easy solutions are, of course, also easy to check.
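To see why complexity theorists draw the line where they do, here is a small back-of-the-envelope sketch of my own comparing the two growth rates at an assumed billion operations per second:

```python
# Rough running times at a billion (1e9) operations per second.
STEPS_PER_SECOND = 1e9

for n in (10, 50, 100):
    polynomial = n ** 2   # a P-style running time
    exponential = 2 ** n  # an NP-style worst case (brute-force search)
    print(f"n={n:3d}  n^2 -> {polynomial / STEPS_PER_SECOND:.1e} s   "
          f"2^n -> {exponential / STEPS_PER_SECOND:.1e} s")

# At n = 100, the polynomial takes about ten microseconds; the exponential
# takes about 4e13 years, roughly 3,000 times the age of the universe.
```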

NP’s more challenging problems often have momentous practical applications. For these problems, an exhaustive brute-force search for a solution would likely go on for an impractically long time—geologic time—before producing an answer. If a brute-force search algorithm is the best algorithm possible, then P does not equal NP. 

And among the cognoscenti, that’s apparently the consensus, which some liken more to religious belief: P ≠ NP. Most allow only a sliver of hope that the opposite will prove true. “I’d give it a 2 to 3% chance that P equals NP,” Aaronson says. “Those are the betting odds that I’d take.”

The result published in July presented a proof of exactly that long shot. But it was only the latest in a long tradition of proofs that don’t pass muster. Within a day of publication, in a turn of events worthy of Monty Python, the paper was removed from the online journal; then it seemed to reappear briefly before disappearing permanently. It was the most recent version of a paper that the author had posted more than 60 times to the arXiv preprint server over the last decade. The journal’s editor in chief explained on Twitter that the result had been rejected, but in a case of human error, the paper’s disposition had somehow changed from “reject” to “accept” and the proof had found its way to publication. 

3. In early August, when I met Steve Cook at his office on campus, he’d neither seen nor heard of that latest P vs. NP proof snafu. Now 81, he’d only recently retired, since his memory was failing. “That’s why we have James here,” he said—his son James, 36, also a computer scientist, had joined us for my visit. Steve was in the midst of clearing out his office. A giant recycling bin stood in the middle of the room, filling up with old yellowing issues of the Journal of Symbolic Logic, a stack of super-fat Toronto telephone books waiting nearby.

Over the years, Cook has seen many proofs purporting to solve the P vs. NP problem. In 2000, after the Clay Mathematics Institute named it one of the seven unsolved “Millennium Problems” (each worth a $1 million prize), he was inundated with messages from people who thought they’d triumphed. All the results were wrong, if not plainly bogus. About half claimed to have proved that P equals NP; the other half went in the opposite direction. Not too long ago, one person claimed to have proved both.

Cook, in his 1971 paper, conjectured that P does not equal NP (he phrased it using different terminology common at the time). He’s since invested a significant if indeterminate amount of time working to establish that that’s the case. “I don’t have a good memory of toiling away,” he says, but his colleagues recall that whenever they went into the department on the weekend, Steve was there in his office. 

Unless he’s racing sailboats, Cook is not one to rush; he likes to give an idea time. And his former students remember a distinct lack of swagger. The computer scientist Anna Lubiw, at the University of Waterloo, says that when Cook taught Cook’s theorem—part of that pioneering paper—he never referred to it by that name and never even hinted that he was the person who proved it. Maria Klawe, a mathematician and computer scientist and the president of Harvey Mudd College, says she would regularly correct Cook when he lost his way teaching proofs that he knew inside out: “He’d get stuck and say, ‘Okay. Tell me how the proof goes.’” Cook was also famously modest in grant applications and reports pertaining to his research—he’d confess: “Honestly, I have made little progress …”

He did make headway, however, in recruiting James to take up the cause. Early on, James displayed an interest in mathematics and computing—at age nine, he urged his dad to teach him Boolean algebra and logic. A couple of years ago, after earning a PhD at Berkeley and doing a stint at Google, he set off as an independent researcher focusing on miscellaneous projects, some of them indirectly connected to P vs. NP. And despite the track record, James, who bears a striking resemblance to his father, is undaunted at having inherited such a seemingly interminable quest. He regards it as he would any mathematical endeavor: it’s a fun puzzle. “There’s got to be an answer to these questions,” he says. “And it’s like, come on, somebody’s got to solve it. Let’s just get this figured out. It’s been a long time. It’s embarrassing that we don’t know the answer yet.” 

The lack of progress hasn’t stopped this community of happy Sisypheans from celebrating computational complexity’s 50th anniversary. The festivities began in 2019, when devotees from around the world gathered at the Fields Institute for Research in Mathematical Sciences, at the University of Toronto, for a symposium in Cook’s honor. Christos Papadimitriou, a computer scientist at Columbia University who has spent much of his career working on P vs. NP, opened the event with a public lecture, looking back not a half-century but millennia. 

He began by describing age-old quests for solutions—using algebraic tools or straightedge and compass, which he considered rudimentary forms of computation. Papadimitriou’s tale eventually arrived at Alan Turing, the British mathematician whose 1936 paper “On Computable Numbers” formalized the notions of “algorithm” and “computation.” Turing also showed—with his idea of a “universal computing machine”—that there is no “mechanical” way (that is, performed by a machine) to prove the truth or falsehood of mathematical statements; no systematic way to distinguish the provable from the unprovable. 

Papadimitriou said he considers Turing’s paper the birth certificate of computer science—“and the birth certificate says that computer science was born with a stark understanding of its own limitations.” He reckoned computer science is the only known field of scientific discourse born with such an awareness—“as opposed to other sciences, which understand their own limitations, like the rest of us, in late middle age.”

It wasn’t long after Turing’s ideas (and similar ideas from others) found embodiment in the first computers that scientists confronted questions about the machines’ inherent capabilities and limitations. In the early 1950s, John von Neumann, the Hungarian-American pioneer of the modern computer, “bragged about an algorithm of his being polynomial, compared to the exponential incumbent,” as Papadimitriou recalled—he’d outwitted a slow algorithm with a fast one. This was the dawn of a new theory: computational complexity theory. The crux of it was that only polynomial algorithms are in any sense good or practical or worth aiming at a problem, whereas an exponential algorithm, Papadimitriou said, “is the algorithmic equivalent of death.”

Cook first started thinking about complexity in the mid-1960s. While working on his PhD at Harvard, he contemplated whether it is possible to prove, given certain computational models, that multiplication is harder than addition (it remains an open problem). 

In 1967, according to a book about Cook forthcoming from the Association for Computing Machinery (ACM), while a postdoc at Berkeley, he drafted course notes that contained the seed of his big result. He’d worked out a formulation of the complexity classes that came to be known as P and NP, and he posed the question of whether P was equal to NP. (At around the same time, others, including the computer scientist Jack Edmonds, now retired from the University of Waterloo, were circling around the same ideas.)

But the field of computer science was only just beginning, and to most scientists and mathematicians such ideas were unfamiliar if not downright strange. After four years at Berkeley’s mathematics department, Cook was considered for tenure but not offered a position. He had advocates in the university’s new department of computer science, and they lobbied for him to be granted a position in their ranks, but the dean wasn’t inclined to give tenure to someone whom the illustrious mathematicians had denied. 


In 1970, Cook moved to the University of Toronto. The following year he published his breakthrough. Submitted to a symposium of the ACM held that May in Shaker Heights, Ohio, the paper sharpened the concept of complexity and defined a way to characterize the hardest problems in NP. It proved, in a flash of algorithmic alchemy, that one problem, known as the satisfiability problem (seeking a solution for a formula given a set of constraints), was in a sense the hardest problem in NP, and that all the other NP problems could be reduced to it. 

This was a crucial theorem: If there is a polynomial-time algorithm that solves the satisfiability problem, then that algorithm will serve as a skeleton key, unlocking solutions to all the problems in NP. And if there exists a polynomial-time solution for all the problems in NP, then P = NP. 
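The asymmetry behind Cook’s theorem can be sketched in a few lines of Python. This toy encoding of satisfiability is my own, not Cook’s construction: checking a proposed truth assignment is quick, while the obvious way to find one walks through all 2^n possibilities.

```python
from itertools import product

# A formula in conjunctive normal form: each clause is a list of literals,
# where the integer k means variable k and -k means its negation.
# This one encodes (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
formula = [[1, -2], [2, 3], [-1, -3]]

def satisfies(assignment, formula):
    """Checking a candidate solution is fast: linear in the formula size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, num_vars):
    """Finding a solution the obvious way tries all 2^n assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if satisfies(assignment, formula):
            return assignment
    return None  # unsatisfiable

print(brute_force_sat(formula, 3))  # {1: False, 2: False, 3: True}
```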

Among computer scientists, Cook’s theorem is iconic. Leslie Valiant, of Harvard, recalled at the 2019 symposium precisely where and when he first heard of it. After finishing undergraduate studies in math, he’d started a PhD in computer science. While there were courses and degrees in this fledgling field, he said, it felt ephemeral, perhaps lacking in deep intellectual content. “It was a serious worry for people doing computer science at the time,” he said. They asked, ‘Is this a field? Where is it going?’ One day, Valiant came upon Cook’s paper. He read it overnight. “I was transformed,” he said. “In an instant, my concerns about computer science were very much reduced. This paper—for me, it really made the field. I think it made computer science—made it into something of substance.”

And then, as the story goes, after Cook’s theorem came a deluge.

In 1972, Dick Karp, a computer scientist at Berkeley, having read Cook’s esoteric paper, demonstrated that many of the classic computational problems with which he was intimately acquainted—essentially every problem he didn’t know how to solve, drawn from mathematical programming, operations research, graph theory, combinatorics, and computational logic—possessed the same transformational property that Cook had found with the satisfiability problem. In total, Karp found 21 problems, including the knapsack problem (seeking the optimal way to pack a constrained space with the most valuable items), the traveling-salesman problem (finding the shortest possible route that visits each city once and returns to the city of origin), and the Steiner tree problem (seeking to optimally connect a set of points with line segments of minimum total length).

Karp showed that the problems in this special collection were all equivalent, which in turn demonstrated that the pattern identified by Cook was not an isolated phenomenon, but rather a classification methodology of surprising power and reach. It was a litmus test of sorts, identifying the class of what became known as “NP-complete” problems: a solution to any one would crack them all.
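The same pattern shows up across Karp’s problems. Below is a toy traveling-salesman sketch with made-up distances (my own illustration, not from the article): scoring any one proposed tour takes a handful of additions, but the naive search considers every ordering of the cities, a number that grows factorially.

```python
from itertools import permutations

# Symmetric distances between four cities, indexed 0..3 (made-up numbers).
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    """Verifying (scoring) one candidate tour is cheap: a single pass."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(n):
    """The obvious search tries all (n-1)! tours starting from city 0."""
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length((0,) + rest))
    return (0,) + best, tour_length((0,) + best)

print(brute_force_tsp(4))  # ((0, 1, 3, 2), 80) for these distances
```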

Papadimitriou thinks of NP-completeness as a versatile tool. “If you cannot solve a problem, try to prove it is NP-complete, because this will maybe save you a lot of time,” he said at the public lecture—you can give up on an exact solution and move on to solving an approximation or variation of the problem instead. 

In the grand sweep of history, Papadimitriou sees the phenomenon of NP-completeness and the P vs. NP quest as computer science’s destiny. Because as scientific serendipity would have it, a Soviet mathematician, Leonid Levin, converged on a result equivalent to Cook’s at more or less the same time. Levin, now at Boston University, did his work behind the Iron Curtain. After it received wider attention (he immigrated to America in 1978), the result became known as the Cook-Levin theorem.

And in a further coda a decade or so later, a “lost letter” was discovered in the Princeton archives of the Austrian logician Kurt Gödel. In 1956, he’d written to von Neumann asking whether a logic problem—which in modern parlance would be called NP-complete—could be solved in polynomial time. He opined that “this would have consequences of the greatest magnitude.”

The clique problem: Search for cliques in a graph, such as a certain subset of friends in a social network.
DEREK BRAHNEY

4. While a half-century of work hasn’t yielded anything close to a solution, some results at least capture the imagination: a paper in 2004 claimed a proof for P = NP using soap bubbles as a mechanism of analog computation (soap film, naturally aligning in the minimum-energy configuration, solves the NP-complete Steiner tree problem in a fashion).

These days it’s a rare bird of a computer scientist—for example, Ron Fagin, an IBM fellow—who tackles the problem head on. In the 1970s, he produced Fagin’s theorem, which characterized the class NP in terms of mathematical logic. And more than once he’s thought he’d solved the problem, but the results never stood for more than a few days before he found a bug. Fagin recently got funding for a P vs. NP project from IBM’s Exploratory Challenges program supporting adventurous research. In explaining why he keeps at it, he likes to quote Theodore Roosevelt, who said that it is far better to “dare mighty things” than to rank among those who “live in a gray twilight that knows neither victory nor defeat.”

But most complexity theorists dream a little smaller, opting instead for indirect approaches—tilting the problem, reshaping or reframing it, exploring related environs, and further whittling down the arsenal of tools that could be deployed against it (many are now known to be useless). 

Ryan Williams, a computer scientist at MIT, is trying to illuminate the problem both from above and from below—investigating the nature of “upper bounds” and “lower bounds” on core computational problems. An upper bound, in simple terms, is a specific mathematical claim that there exists a concrete algorithm that solves a particular problem without exceeding a certain amount of resources (time, memory, energy). A lower bound is the intangible opposite: it’s a general claim of impossibility, showing that no algorithm, however clever, can solve the problem with fewer resources than some threshold. One focus of Williams’s research is to make lower bounds constructive and concrete—mathematical objects with describable features. He believes that more constructive approaches to lower bounds are “precisely what we are missing from current approaches in complexity theory.”

Williams has pegged the likelihood that P ≠ NP at a fairly moderate 80%. But lately some researchers in the field are expressing doubts about even that level of certainty. “More and more, I’m starting to wonder whether P equals NP,” Toniann Pitassi, a computer scientist at the University of Toronto and a former PhD student of Cook’s, says. Her approach in circling around the problem is to study both scaled-up and scaled-down analogues, harder and easier models. “Sometimes generalizing the question makes it clearer,” she says. But overall, she hasn’t achieved clarity: “Most people think P doesn’t equal NP. And I don’t know. Maybe it’s just me, but I feel like it’s become less and less clear that that’s the truth.” 

Historically, Pitassi points out, surprising results have occasionally come out of nowhere—seeming impossibilities proved possible by smart algorithm designers. The same could happen with P vs. NP, maybe in another 50 years or a century. One of the most important results in all of complexity theory, for instance, was achieved by David Barrington, of the University of Massachusetts, Amherst, in 1989. The gist of it (for our purposes) is that he devised a clever algorithm, which set out to do something that seemingly should’ve required an unbounded amount of memory but in fact used an astonishingly small amount—just five bits of information, enough to specify a number between one and 32 (inclusive) or a two-letter word.

A more recent and related result, from 2014, took James Cook by surprise. Drawing from Barrington’s theorem, it uses memory in a wonderfully weird way. As hinted in the title of the paper, by the University of Amsterdam’s Harry Buhrman and collaborators, it’s about “computing with a full memory.” James can rattle off the paper’s introductory paragraph practically verbatim:

Imagine the following scenario. You want to perform a computation that requires more memory than you currently have available on your computer. One way of dealing with this problem is by installing a new hard drive. As it turns out you have a hard drive but it is full with data, pictures, movies, files, etc. You don’t need to access that data at the moment but you also don’t want to erase it. Can you use the hard drive for your computation, possibly altering its contents temporarily, guaranteeing that when the computation is completed, the hard drive is back in its original state with all the data intact? 

The answer, counterintuitively, is yes. 
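As a toy illustration of that contract only (it is nowhere near the actual construction in the Buhrman paper), the sketch below borrows a “full” block of memory, alters it with reversible XOR updates, and then undoes them, so the borrowed data comes back exactly as it was. The hard part of the real result, doing genuinely useful computation under this constraint with almost no free memory of one’s own, is what makes the theorem surprising.

```python
import random

def borrow_scratch_space(drive, x, y):
    """Toy demo of the contract only: temporarily alter borrowed, 'full'
    memory, then guarantee it is returned bit-for-bit intact.
    XOR is its own inverse, so applying the same update twice undoes it."""
    drive[0] ^= x       # scribble our working values into the borrowed cell
    drive[0] ^= y
    scratch = drive[0]  # use the altered cell as temporary state
    drive[0] ^= y       # ...then reverse the changes
    drive[0] ^= x
    return scratch

# A drive "full" of someone else's data:
drive = [random.getrandbits(32) for _ in range(4)]
before = list(drive)
borrow_scratch_space(drive, 7, 13)
assert drive == before  # the hard drive is back in its original state
```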

James thinks of it as “borrowed memory.” After the shock of this result sank in, he had fun figuring out how to apply it to a particular problem—picking up where his dad had left off. 

A couple of decades ago, Steve Cook moved on to other related problems in complexity theory. With one problem, he made a conjecture about the amount of memory an algorithm would need to solve the problem—honing it to the absolute minimum. In 2019, James, together with Ian Mertz, one of Pitassi’s PhD students, deployed the poetic idea of borrowing memory and proved that even less memory was needed. The result didn’t go all the way to refuting his dad’s conjecture, but it’s a bit of progress in the grand complexity quest nonetheless.

And problems in complexity theory, James observes, sometimes have a domino effect—if there’s a proof in one critical corner, then all the dominoes fall. The breakthrough results, the most important ones, come from a long line of work, by a lot of different people, making incremental progress and establishing connections between different questions, until finally a big result emerges.

He also mentions a caveat: while a truly devilishly fast P = NP algorithm would be earth-shattering, there is also a scenario in which P = NP might be a letdown. It might turn out that a P algorithm capable of solving an NP-complete problem runs on a time scale of, say, n^100. “Technically that falls under P: it’s a polynomial,” says James. “But n^100 is still very impractical”—it would mean any sizable problems would still be out of reach on human time scales. 
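A quick back-of-the-envelope calculation (my numbers, not James’s) shows how hollow such a “polynomial” victory could be: even at a wildly optimistic 10^18 operations per second, an n^100 algorithm is already hopeless for tiny inputs.

```python
# An n^100 "polynomial-time" algorithm at an optimistic 10^18 operations
# per second (well beyond today's largest supercomputers):
OPS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 3.15e7

for n in (2, 3, 10):
    years = n ** 100 / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:2d}: about {years:.1e} years")

# n = 2 already takes about 4e4 years; n = 10 takes about 3e74 years.
```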

That is, of course, assuming we can find the algorithm in the first place. Donald Knuth, an algorithmist at Stanford, in recent years changed his mind—he “flipped the bit.” His intuition is that P does indeed equal NP, but that we’ll probably never be able to make use of that fact, practically speaking—because we won’t actually know any of the algorithms that happen to work. There are mind-boggling numbers of algorithms out there, he explains, but most of them are beyond our ken. So whereas some researchers might insist that no P = NP algorithm exists, Knuth contends that “it’s more likely that no polynomial-time algorithm will ever be embodied—actually written down as a program—by mere mortals.” 

For Papadimitriou, any answer would quench a lifelong obsession. He believes the P vs. NP problem belongs in the realm of fundamental scientific conundrums such as the origin of life and the unification of nature’s force fields. It’s the kind of profound, consequential puzzle, “concrete yet universal,” he said, “that adds meaning not only to science, but to human life itself.”

“Imagine that we are lucky, and we are able to squeeze another couple of thousand years out of this planet, against the odds and despite the oddballs,” he said. “And we don’t solve these problems. What’s the point?!”
