
And The World’s Fastest Supercomputer in 2012 Is…

…already under construction in the United States.
November 9, 2010

Last week, China stunned the world by unveiling the world’s fastest supercomputer. At 2.5 petaflops, the machine is 40% more powerful than the previous record-holder, the AMD-powered Cray Jaguar at the National Center for Computational Sciences. (Although it’s not clear that an apples-to-apples comparison can be made between the speed of the Tianhe-1A and its competitors.)
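That 40% figure can be sanity-checked with a line of arithmetic. Note the Jaguar number below is an assumption for illustration: the article gives only Tianhe-1A’s 2.5 petaflops, so Jaguar’s Linpack score of roughly 1.75 petaflops is supplied here, not taken from the text.

```python
tianhe_1a_pflops = 2.5   # Tianhe-1A's reported Linpack performance, per the article
jaguar_pflops = 1.75     # assumed: Jaguar's approximate Linpack score (not stated in the article)

# Fractional speedup of Tianhe-1A over Jaguar
speedup = tianhe_1a_pflops / jaguar_pflops - 1
print(f"{speedup:.0%}")  # prints 43%, in line with the article's roughly-40% gap
```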

A water-cooled IBM Power7 processor node, courtesy NCSA

What made the announcement particularly surprising was that while experts in high performance computing have been watching China’s steady ascent up the Top500 rankings, the country’s efforts have been little remarked on by the non-technical press. Despite the speed at which the technology for high performance computing moves, the world’s supercomputers are in general a long time coming, with planning and design preceding first benchmarks by years.

Which is precisely why it’s possible to predict with some confidence the world’s fastest supercomputers – even, perhaps, the single fastest supercomputer – in the year 2012. According to Jack Dongarra, the keeper of the official Top500 list of the world’s fastest systems, there are five systems in the planning or construction phases that will exceed the Tianhe-1A in power.

Some of the systems that are in the works represent more than merely incremental improvement on existing systems.

The University of Illinois’ National Center for Supercomputing Applications (NCSA) will begin to take delivery of one such system in the first half of 2011, ramping up its massive scientific data crunching in the fall of 2011, says Thom Dunning, director of NCSA. In 2012, the system, called Blue Waters, will be complete, and the entire machine will support a full range of scientific research.

Blue Waters will be unique in a number of ways. The first is that it will use the latest IBM Power chip, the Power7. Even more essential to its performance will be the supercomputer’s new interconnect – the ultra-fast network that allows all of the processor cores of the computer to communicate with one another. This interconnect has significantly higher bandwidth and lower latency than previous interconnects, such as the InfiniBand interconnect that is standard on many other supercomputers.

Blue Waters will also be unusually compact - each 39-inch-wide, 6-foot-deep cabinet holds only three racks. (A set of three racks, called a building block, is approximately 10 feet wide and 6 feet deep.)

Dunning admits that it’s still unknown whether, by 2012, Lawrence Livermore National Laboratory will have a BlueGene system, also built by IBM, that will perform better than Blue Waters on a broad range of science and engineering applications.

Lawrence Livermore’s system is being built with a very specific application in mind - using simulation to help safeguard the United States’ aging stockpile of nuclear weapons.

“There’s how well is it going to perform on the Linpack benchmark, and then how well will it perform on a broad range of science and engineering applications,” says Dunning. “We know of no project in the U.S. that will perform as well as Blue Waters on a broad range of science and engineering projects.”
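The Linpack benchmark Dunning refers to ranks machines by how fast they solve a single dense system of linear equations. As a rough illustration only (this is a toy sketch, not the actual HPL benchmark code), the same idea can be shown in miniature: time a Gaussian-elimination solve and divide the standard flop count by the elapsed time.

```python
import random
import time

def flop_count(n):
    # Standard Linpack flop count for solving a dense n-by-n system:
    # 2/3 n^3 for the factorization plus 2 n^2 for the triangular solves.
    return (2 / 3) * n**3 + 2 * n**2

def solve_dense(a, b):
    """Gaussian elimination with partial pivoting; works on copies of A and b."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        # Pivot on the largest entry in column k to keep the solve stable.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back-substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

n = 120
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]

t0 = time.perf_counter()
x = solve_dense(a, b)
elapsed = time.perf_counter() - t0

print(f"{flop_count(n) / elapsed / 1e6:.1f} MFLOPS (pure Python)")
```

Pure Python manages only megaflops; the gap between that and Tianhe-1A’s 2.5 petaflops is roughly nine orders of magnitude, which is why Dongarra’s rankings hinge on hardware, not software cleverness. Dunning’s point is that a single solve like this is a narrow yardstick compared to a broad portfolio of science applications.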

This is part 2 of a three-part series. Part 1: Why China’s New Supercomputer Is Only Technically the World’s Fastest

Follow Mims on Twitter or contact him via email.
