Google thinks it’s close to “quantum supremacy.” Here’s what that really means.

It’s not the number of qubits; it’s what you do with them that counts.
Seventy-two may not be a large number, but in quantum computing terms, it’s massive. This week Google unveiled Bristlecone, a new quantum computing chip with 72 quantum bits, or qubits—the fundamental units of computation in a quantum machine. As our qubit counter and timeline show, the previous record holder is a mere 50-qubit processor announced by IBM last year.

John Martinis, who heads Google’s effort, says his team still needs to do more testing, but he thinks it’s “pretty likely” that this year, perhaps even in just a few months, the new chip can achieve “quantum supremacy.” That’s the point at which a quantum computer can do calculations beyond the reach of today’s fastest supercomputers.

When Google or another team finally declares success, expect a flood of headlines about the dawn of a new and exciting era. Quantum computers are supposed to help us discover new pharmaceuticals and create new materials, as well as turn cryptography on its head.

But the reality is more complicated. “You’ll struggle to find any [researcher] who likes the term ‘quantum supremacy,’” says Simon Benjamin, a quantum expert at Oxford University. “It’s very catchy, but it’s a bit confusing and oversells what quantum computers will be able to do.”

Quantum building blocks 

To understand why, some brief background. The magic of quantum computers lies in those qubits. Unlike the bits in classical computers, which store information as either 1 or 0, qubits can exist in multiple states of 1 and 0 at the same time—a phenomenon known as superposition. They can also influence one another even when they’re not physically connected, via a process known as entanglement.
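Superposition and entanglement can be made concrete with a tiny state-vector simulation. This is an illustrative sketch in plain Python, not how Google's hardware works: two qubits are represented by four complex amplitudes, a Hadamard gate puts the first qubit into superposition, and a CNOT gate entangles the pair into a Bell state, so that measuring one qubit instantly fixes the other.

```python
import math
import random

# Two-qubit state: four amplitudes for the basis states |00>, |01>, |10>, |11>.
state = [1, 0, 0, 0]  # start in |00>

# Hadamard on the first qubit: an equal superposition of 0 and 1.
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]),
         h * (state[1] + state[3]),
         h * (state[0] - state[2]),
         h * (state[1] - state[3])]

# CNOT (control = first qubit): flips the second qubit whenever the first
# is 1, producing the entangled Bell state (|00> + |11>) / sqrt(2).
state = [state[0], state[1], state[3], state[2]]

# Sample measurement outcomes from the amplitudes' squared magnitudes.
probs = [abs(a) ** 2 for a in state]
outcomes = random.choices(["00", "01", "10", "11"], weights=probs, k=1000)
# Only "00" and "11" ever appear: each qubit alone looks random, but the
# two always agree -- the signature of entanglement.
```

Note that the two middle gates never touch the qubits "individually"; they transform the whole four-amplitude vector at once, which is exactly what makes classical simulation expensive as qubits are added.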

What this boils down to is that even though a few extra bits make only a modest difference to a classical computer’s power, adding extra qubits to a quantum machine can increase its computational power exponentially. That’s why, in principle, it doesn’t take all that many qubits to outgun even the most powerful of today’s supercomputers.
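The scaling argument above can be put in numbers with a back-of-envelope calculation (pure arithmetic, not tied to any particular machine): fully describing an n-qubit state takes 2**n complex amplitudes, so every added qubit doubles the memory a brute-force classical simulation needs.

```python
# Memory needed to store a full n-qubit state vector, assuming 16 bytes
# per amplitude (a double-precision complex number).
for n in (49, 50, 72):
    amplitudes = 2 ** n
    pib = amplitudes * 16 / 2 ** 50   # bytes -> pebibytes
    print(f"{n} qubits: 2^{n} amplitudes, about {pib:,.0f} PiB")
```

By this naive measure a 49-qubit state already needs around 8 PiB, far beyond any single machine's RAM, which suggests why classical simulations at that scale rely on cleverer tricks than storing the full vector, and why the supremacy threshold is a moving target.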

Creating qubits, however, requires prodigious feats of engineering, such as building superconducting circuits kept at temperatures colder than outer space (the approach Google uses). That’s necessary to insulate them from the outside world. Changes in temperature or the slightest vibrations—phenomena known as “noise”—can cause qubits to “decohere,” or lose their fragile quantum state. As that happens, errors quickly creep into calculations.

And the greater the number of qubits, the more errors there are. They can be corrected using additional qubits or clever software, but that saps a lot of the machine’s computational capacity. In the past few years, advances in super-cooling technology and other areas have boosted the number of qubits that can be spun up and managed effectively. But it remains a constant battle between power and complexity.
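A loose classical analogy shows how error correction eats capacity. This is a sketch only: real quantum error correction uses stabilizer codes rather than simple copying, since qubits cannot be cloned, but the trade-off is the same, with redundancy buying a lower logical error rate at the cost of extra (qu)bits.

```python
import random

def encode(bit):
    # One logical bit costs three physical bits -- the capacity overhead.
    return [bit, bit, bit]

def noisy(bits, p):
    # Flip each bit independently with probability p ("noise").
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote corrects any single flip.
    return int(sum(bits) >= 2)

p = 0.05          # per-bit error probability
trials = 100_000
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy(encode(0), p)) for _ in range(trials))
# Redundancy drives the error rate from p = 5% down to roughly 3*p^2 (~0.7%),
# at the price of tripling the number of bits.
print(raw_errors / trials, coded_errors / trials)
```

In the quantum setting the overhead is far steeper: error-corrected schemes are expected to need many physical qubits per logical qubit, which is why correction "saps a lot of the machine's computational capacity."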

Hopes of reaching quantum supremacy have been dashed before. For some time, researchers thought that a 49-qubit machine would be enough, but last year researchers at IBM were able to simulate a 49-qubit quantum system on a conventional computer (see “New twists in the road to quantum supremacy”). Nor are conventional computers standing still: China, in particular, has been investing heavily in the technology and now boasts the world’s two most powerful machines.

Google’s big moment

Still, says Daniel Gottesman of the Perimeter Institute for Theoretical Physics in Canada, while better algorithms and digital computers could shift the threshold of supremacy a bit, it would probably only require a few additional qubits for a quantum machine to really outstrip them. With Bristlecone’s 72 qubits, there is plenty of firepower to play with. 

Using Bristlecone, Martinis and his colleagues plan to run a test that seeks to demonstrate quantum supremacy. The strict definition of the benchmark is that the task should be impossible for a conventional computer to perform. But this raises a thorny issue: how do you really know if a quantum computer has produced a correct answer if you can’t check it with one that uses silicon bits?

To deal with this, the Google team plans to go just to the edge, using a quantum machine to run an algorithm at the very limit of the capabilities of today’s supercomputers. “You can also show that the algorithm is exponentially complicated,” explains Martinis. Adding just one more qubit would then take the quantum device well beyond what a conventional machine could handle in any reasonable time.

Name game

Even if Google reaches the magic benchmark, though, the complexity and cost of managing quantum machines will limit how useful they can be.

Though there are some potentially promising applications, such as precisely designing molecules (see “10 Breakthrough Technologies 2018”), classical machines are still going to be better, faster, and far more economical at solving most problems. “Using a quantum computer would be like chartering a jumbo jet to cross the road,” says Oxford University’s Benjamin.

He suggests that rather than “quantum supremacy,” we should be talking about attaining “quantum inimitability”—in other words, specific tasks that only quantum computers can do. Other researchers have suggested names like “quantum advantage” or “quantum ascendancy.”

The semantics matter. Technologies such as AI went through multiple hype cycles before they really took off. There’s a risk that if expectations are raised too high now, quantum machines will fail to live up to them (see “Serious quantum computers are finally here. What are we going to do with them?”). That could trigger an exodus of investors, who have been pumping millions of dollars into quantum startups.

Even the originator of “quantum supremacy” is trying to tamp down the buzz he helped create. John Preskill, a theoretical physicist at the California Institute of Technology, coined the term in a speech in 2011. In January of this year he published a paper in which he said quantum computing was about to enter a phase he called NISQ, or “noisy intermediate-scale quantum,” where machines will have 50 to a few hundred qubits. “‘Noisy,’” he wrote, “means that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term.” Preskill said he’s still convinced quantum computers will have a transformative effect on society, but that transformation, he concedes, “may still be decades away.”

The noise problem is a contentious issue. Gil Kalai, a professor at the Hebrew University of Jerusalem, has argued that the challenges posed by noise are so great they will prevent quantum machines from ever becoming really useful. Many experts disagree. “Noise can be managed,” says Andrew Childs, co-director of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “You just need to understand how much of it you can tolerate.”

Google’s Martinis is also aware that expectations need to be managed. The algorithm that his team is planning to use is a very specific one for testing quantum machines’ capabilities rather than for achieving anything practical. “As soon as we get to quantum supremacy,” he says, “we’re going to want to show that a quantum machine can do something really useful.”
