
Q&A: D-Wave’s Geordie Rose

Did D-Wave really demonstrate “the world’s first commercial quantum computer”?
April 6, 2007

On February 13, D-Wave Systems, a startup based in Burnaby, British Columbia, claimed to have demonstrated “the world’s first commercial quantum computer.”

Quantum coldness: Orion’s 16-qubit processor (top image) is built from a superconducting metal called niobium. When the chip is chilled to 5 millikelvin, or 0.005 degrees above absolute zero, the electrons in the niobium form particles called Cooper pairs, which can occupy the same quantum state. A Leiden Cryogenics dilution fridge (bottom image) cools the Orion chip to that temperature.

At the Computer History Museum in Mountain View, CA, Geordie Rose, the company’s founder and chief technology officer, showed how the Orion computer could search for a protein in a database and find the closest match, discover the optimal seating arrangement for the guests at a wedding reception, and solve a Sudoku puzzle.

Quantum computing, first proposed by the physicists Paul Benioff and Richard Feynman in the early 1980s, works by exploiting the strange ambiguities of quantum mechanics. According to its laws, the state of a particle such as an electron can be undecided: its spin, say, neither definitely up nor definitely down.

In a quantum computer, each quantum bit of information, or qubit, can therefore be unfixed, a mere probability; this in turn means that in some mysterious way, a qubit can have the values one and zero simultaneously, a phenomenon called superposition. Two qubits can thus represent four different values at once (00, 01, 10, and 11 in binary notation); four qubits can represent sixteen values; and so on. In theory, a quantum computer could solve certain problems in less than a minute that would take a classical computer millennia.
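
In standard notation (textbook quantum mechanics, not anything particular to Orion), a qubit’s state is a weighted combination of 0 and 1, and n qubits share 2^n such weights at once:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
\]
\[
|\Psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\,|x\rangle,
\qquad \sum_{x} |\alpha_x|^2 = 1
\]

For n = 2 the sum runs over exactly the four strings 00, 01, 10, and 11, matching the counting above.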

To date, most quantum computers have been more or less successful science experiments. None have harnessed more than 12 qubits, and the problems the machines have solved have been trivial. Quantum computers have been complicated, finicky machines, employing delicate lasers, vacuum pumps, and other exotic machinery to shepherd their qubits.

D-Wave (which has raised $44 million from investors such as the venture capital firm Draper Fisher Jurvetson) claims that it has succeeded in constructing a practical quantum computer by employing a simple design, derived from technologies already used to make standard computer chips. The company describes the Orion as a 16-qubit “adiabatic quantum computer,” built around a chip made from a metal called niobium, which becomes a superconductor when sufficiently cold. When the chip is chilled in a bath of liquid helium to nearly −273 °C, the electrons in the niobium form Cooper pairs, which can occupy the same quantum state, permitting the Orion to run quantum algorithms.

Herb Martin, D-Wave’s chief executive, says this uncomplicated design will allow the Orion to “scale” to a 512-qubit machine later this year and to a 1,024-qubit computer by the middle of 2008.

But computer scientists who specialize in quantum computing have been profoundly skeptical of D-Wave’s demonstration. D-Wave provided no evidence to back up its claims: it has released only the sketchiest details about the inner workings of Orion. What computer scientists do know does not impress them.

Scott Aaronson, a theoretical computer scientist at the Institute for Quantum Computing in Waterloo, Ontario, and the author of a much-read blog called Shtetl-Optimized, began the obloquies when he denounced the Orion for being as useful at solving problems as “a roast-beef sandwich.”

Of Geordie Rose’s claims to having built the first practical quantum computer, Aaronson wrote in an e-mail, “Whatever else D-Wave might or might not have done, this can be instantly rejected as hype. If by ‘practical’ he means able to solve practical problems faster than existing classical computers, then this is clearly false. If he means able to solve tiny demonstration problems, he’s been beaten by loads of people. So I can’t think of any interpretation under which he’s telling the truth.”

Aaronson’s surly tone was typical. Umesh Vazirani, a professor of computer science at the University of California, Berkeley, said, “D-Wave is misleading the public by calling their device ‘a practical quantum computer.’ The whole point of quantum computing is achieving a large speedup over classical computers, something that D-Wave hasn’t accomplished.”

Something solved the problems at the demonstration, but it was not necessarily a quantum computer. In particular, computer scientists do not know how well the Orion corrects for the crescendo of errors, caused by thermal noise and the decoherence of qubits, that attends any quantum computation. These errors must be carefully managed if a quantum computer is to work. Indeed, according to all the computer scientists to whom Technology Review spoke, because the Orion can function as a rather slow analog computer, it is possible that the Orion was not really performing quantum operations at all when it was demonstrated at the Computer History Museum.

“Has D-Wave really implemented a 16-qubit quantum computer, or do their qubits decohere so quickly that they are in effect implementing a classical algorithm?” asked Vazirani. “D-Wave has not provided any evidence to favor the first possibility over the second.”

The most generous quantum computing scientists will grant that D-Wave has made an interesting gamble.

“I don’t know much about business, but I imagine that the reasoning at D-Wave is something like the following,” said Seth Lloyd, a professor of mechanical engineering at MIT, who proposed the first technologically feasible design for a quantum computer. “Say the odds are 10-to-1 against adiabatic quantum computing working, so the venture is likely to fail. But if it succeeds, then they’ll clean up. What D-Wave is doing is unlikely to succeed, but it is not quixotic.”

We asked Geordie Rose to defend the Orion to his critics.

Jason Pontin: Did you, in fact, demonstrate the world’s first practical quantum computer?

Geordie Rose: Yes.

JP: Well, that’s blunt. Is a truly fault-tolerant adiabatic computer a quantum computer?

GR: Yes.

JP: That raises this question, I’m afraid: is the Orion fault-tolerant?

GR: Yes, it is.

JP: Really?

GR: If you’d like me to elaborate, I can.

JP: That would be nice.

GR: There are two different concepts here. Fault tolerance is first and foremost about whether the processor will continue to function as it was designed in the presence of faults. In the system that we operated during the demo, the chip had two broken components out of 56, and the thing operated beautifully in the presence of those faults. So the Orion is absolutely fault-tolerant. There’s no question. We’ve demonstrated that. But I think you’re really asking about decoherence.

JP: I am.

GR: The presence of noise in a quantum computer can cause errors. If you want to run a quantum computer coherently, to be able to do anything that a quantum computer possibly can do, you need to actively remove errors. In our approach, the adiabatic model, the physics of the device is quite different from conventional quantum computers like gate models. In order for an error to happen at all in our approach, you need to supply a certain amount of energy which physicists call an “energy gap.” If the noise doesn’t have at least that amount of energy, it can’t do anything bad. So if you don’t supply that amount of energy, there’s a natural gap that protects the system from noise. Adiabatic quantum computers are known to be much more robust to noise than other approaches.
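
For context, the protection Rose describes is usually stated as the adiabatic condition, a textbook result rather than a figure D-Wave has published: the machine stays in its lowest-energy state provided the run time T is long compared with the inverse square of the minimum energy gap, and thermal noise cannot excite errors so long as its energy is small compared with that gap:

\[
T \;\gg\; \frac{\max_{s}\bigl|\langle E_1(s)|\,\partial_s H(s)\,|E_0(s)\rangle\bigr|}{\Delta_{\min}^{2}},
\qquad
\Delta_{\min} = \min_{s}\bigl(E_1(s) - E_0(s)\bigr),
\qquad
k_B T_{\text{bath}} \ll \Delta_{\min}.
\]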

JP: Are you really claiming that the Orion can solve NP-complete problems? [NP-complete problems, of which the most famous is the traveling salesman problem, are the hardest problems in the class NP, the set of problems whose solutions can be efficiently verified. Very common in real life, they are hard to solve because finding a solution seems to require considering every permutation of a set of variables, which takes time that increases exponentially with the number of variables. Computer scientists doubt that NP-complete problems can be solved exactly in any reasonable amount of time. Rose has attracted controversy by claiming that the Orion could create approximate solutions that are good enough for business.]

GR: It solves them in the sense that it provides approximate solutions that are good enough to meet the requirements of the user. These classes of problems are ubiquitous in business. It’s suspected that no machine, regardless of the kind, can efficiently and exactly solve these types of problems, at least in the worst case. But that’s an overly restrictive definition of what “solving” means. Generally, if a business has one of these problems embedded in its daily operations, it uses what is called a heuristic to solve it: a set of rules of thumb that gives good approximate solutions quickly. Our machine is intended to compete with those heuristics. We’re not claiming that we can exactly, efficiently solve worst-case problems, no, but we are claiming that this thing will be competitive with, and eventually surpass, all conventional approaches to solving these sets of problems.
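
As an illustration of the kind of heuristic Rose means, here is a minimal nearest-neighbor sketch for the traveling salesman problem in Python; it is a generic textbook heuristic on a made-up instance, with no connection to D-Wave’s hardware or software:

import math

def nearest_neighbor_tour(cities):
    # Greedy heuristic: start at city 0 and always hop to the closest unvisited city.
    # Fast, and the tour it returns is approximate, with no guarantee of optimality.
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    # Length of the closed loop, including the hop back to the starting city.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (2, 1), (1, 4), (5, 2), (4, 5)]  # made-up toy instance
tour = nearest_neighbor_tour(cities)
print(tour, round(tour_length(cities, tour), 2))

The greedy pass takes roughly n-squared steps, whereas checking every possible tour would take factorial time; that gap is the whole reason such heuristics exist.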

JP: I am not a mathematician, but I play one on TV. What about the PCP theorem, which implies that for some of these problems even finding a good approximate solution is as difficult as finding the best one?

GR: It depends on what you mean by “approximate.”

JP: Well, tell me what you mean by “approximate.” Is Geordie Rose using “approximate” in some special sense that no one else uses?

GR: “Approximate” means something specific in computer science. It’s not the way the term is used conventionally in business. So, say that you have yet to pick a route through a bunch of cities–

JP: The traveling salesman problem?

GR: Yes, for example. Any route is a solution. Any route is also an approximate solution. How good the approximate solution is comes down to the difference between the one that you’ve got and the best possible one. So, as the solutions get better and better, they get less and less approximate. What computer scientists tend to mean by “approximate” in these cases is something very specific about how good the approximation is, and they tend to mean something that is very close to exact.
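
In that specific computer-science usage (a standard definition, supplied here for context), a minimization algorithm is a ρ-approximation if its answer is within a factor ρ of optimal on every instance:

\[
\mathrm{ALG}(x) \;\le\; \rho \cdot \mathrm{OPT}(x) \quad \text{for every instance } x, \qquad \rho \ge 1,
\]

so a smaller ρ means less approximate, and ρ near 1 means nearly exact.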

JP: You mean that when you use “approximate” in this sense, you are using the word as businesspeople would use it, and not as computer scientists use it?

GR: It’s the same sense that people use when they solve these problems today. You need a solution; you’d prefer the best one possible with the resources you have available to you, and that is by any definition an approximate solution. You’d like it to be better, but better is not available to you because of the nature of the problem. So this particular machine that we’ve built is designed to compete with the machines that give those types of solutions.

JP: Scott Aaronson said that the Orion was as useful as “a roast-beef sandwich.” You obviously feel that is insulting; but wouldn’t you admit that your computer isn’t very useful since it solves problems more slowly than a classical computer?

GR: The purpose of the demo was not to show one-to-one performance superiority over conventional systems. The purpose was to do a systems proof of concept and to run commercially relevant applications on a quantum computer, which has never even been done before, not even close. This is way above the state of the art. So in terms of the actual time it takes to solve problems, Orion as it currently stands is about 100 times slower than a PC running the best algorithms. If you were an expert, you could find a good algorithm on the Web, spend $1,000 on a PC, and you could beat the system by a factor of 100. So in that sense, Scott’s right, though that’s kind of not the point.

JP: Well, what’s the point, then?

GR: The point is that the demonstration shows a clear path from where we are today into the future. Those future machines will be significantly better.

JP: The plan is to demonstrate a 1,024-qubit machine in 2008?

GR: Yes, by mid-2008. But prior to that, we’re going to have an online system that people can use to program applications.

JP: That seems implausibly rapid. How will you do it?

GR: Well, there are three things that need to be done.

The first is that the design that you are using for the processor, specifically the input-output systems, needs to be scalable, not just in principle but in practice. Most of the proposals that have been put forward for quantum-computing architectures, in fact all of them so far, are not scalable in that sense. In our case, we believe we’ve found a path to real scalability in the hardware. The primary thing that needs to be overcome is the issue of how you get information into and out of the chip. We think we’ve found a way around that problem.

The second thing is how you build it, and that’s a fabrication issue. Part of the reason why we picked the approach that we picked is that the circuits that we’re using as the basis for these things can be designed, built, and tested using standard semiconductor procedures. So we don’t need to invent any new fabrication technology except for getting the process running in the first place.

The third thing, which is probably the most difficult question to answer, is this: given that we can build it and send information in and out of it, will it in fact continue to operate as a quantum computer? That’s a point that we simply cannot answer at the present time, because no one has been able to model systems at that level with any predictive capability whatsoever. It’s too complicated. It’s a question that can only be answered empirically. So our philosophy is, do a new processor every month. Say we have 12 generations per year; if something doesn’t appear to be working, we can fix it through iterative redesign.

JP: How is your commercial approach different from the academy’s?

GR: The academic approach is not necessarily worse than ours, but it’s different. Our approach is to throw as many qubits as you possibly can onto a chip, get it solving real problems, and then use the performance on those problems as the metric for gauging what’s better and what’s worse. So when you increase the capability of the machine, you are increasing its capability to solve problems faster, and to solve bigger problems. Compared with the academic approaches, ours is quick and dirty, although I don’t think it’s any less careful.

JP: What kinds of things could I do with a 1,024-qubit quantum computer?

GR: There are lots and lots of existing commercial applications that require an optimal solution to a problem with a lot of variables. For example, in chip design, a lot of the issues that have to do with hardware design verification are of this sort. There are also lots of applications in financial engineering that investment banks have been very interested in pursuing with us: things like portfolio optimization, risk reduction, and selecting and pricing derivatives. In addition, every single scheduling problem that exists in the world is one of these problems. You can imagine somebody like an airline or a federal-government organization that has to schedule lots of people, where there are all sorts of issues about who works where and who gets access to what and why. These problems create massive conflict-resolution scenarios that simply can’t be managed nowadays; they’re too hard to solve in the lengths of time that people want to solve them in. I think that the way it’s going to look in the future is that anybody who has a significant scheduling, routing, or planning application will port it to our machines, which will be available online.
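
Adiabatic machines of the kind Rose describes are generally characterized as minimizers of an energy function over binary variables, a so-called Ising or QUBO form; that framing is background knowledge rather than anything stated in this interview. A minimal, hypothetical Python sketch of casting a two-worker shift conflict as such a function, solved here by classical brute force:

from itertools import product

# Hypothetical toy schedule: x[i] = 1 means worker i takes the night shift.
h = {0: -1.0, 1: 0.5}   # worker 0 prefers nights; worker 1 prefers days
J = {(0, 1): 2.0}       # conflict penalty if both take the night shift

def energy(x):
    # E(x) = sum_i h[i]*x[i] + sum_{i<j} J[i,j]*x[i]*x[j]; lower energy = better schedule.
    return (sum(h[i] * x[i] for i in h)
            + sum(J[i, j] * x[i] * x[j] for (i, j) in J))

# Brute force over all 2^n assignments; an annealer is meant to do this minimization natively.
best = min(product((0, 1), repeat=len(h)), key=energy)
print(best, energy(best))  # -> (1, 0): worker 0 takes nights, worker 1 takes days

The brute-force loop only makes the objective concrete; at the 1,024 variables Rose is targeting, that 2^n enumeration is exactly what becomes classically intractable.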
