
Experts are projecting that the rate of growth in supercomputing power is about to plateau, making predictions based on extrapolating past trends obsolete

High-performance computing expert Thomas Sterling would like you to know that a computing goal you’ve never heard of will probably never be reached. The reason you should care is that it means the end of Moore’s Law, which says that roughly every 18 months the amount of computing you get for a buck doubles.

Or at least, the end of Moore’s Law-style advances in the processing power of the world’s biggest supercomputers. For a while now, every 11 years or so, the planet’s smartest and best-funded computer scientists have managed to produce a supercomputer that’s 1,000 times faster than its predecessor. In 1999, we reached teraflops-scale computing, or a trillion (10^12) floating point operations per second. In 2008, Los Alamos’ Roadrunner supercomputer reached petascale computing, or a quadrillion (10^15) floating point operations per second.
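To put those milestones in perspective, here’s a quick back-of-the-envelope sketch (my arithmetic, not Sterling’s): a 1,000-fold jump every 11 years or so works out to a doubling time of roughly 13 months, comfortably inside Moore’s Law territory.

```python
import math

# The supercomputing milestones mentioned above, in FLOPS
# (floating point operations per second).
scales = {
    "terascale":  1e12,  # reached in the late 1990s
    "petascale":  1e15,  # Roadrunner, 2008
    "exascale":   1e18,  # the next milestone Sterling discusses
    "zettascale": 1e21,  # the milestone he says may never be reached
}
for name, flops in scales.items():
    print(f"{name:>10}: {flops:.0e} FLOPS")

# A 1,000-fold jump every ~11 years implies this doubling time, in months:
doubling_months = 11 * 12 / math.log2(1000)
print(f"implied doubling time: {doubling_months:.1f} months")  # ~13 months
```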

In a mind-blowingly jargon-rich interview with HPC Wire, Sterling doesn’t just assert that zettascale (10^21 FLOPS) computing is impossible, he also makes it seem pretty unlikely we’re going to reach the next milestone, exascale computing (10^18 FLOPS), without ripping apart our existing ways of building supercomputers, root and branch. Emphasis mine:

“[I]ndustry will deliver the systems that will be used in the next decade. There is no other choice. It is clear that vendors would prefer not to have to retool and this is true for users as well. To do so will involve a degree of disruption that would be best avoided if it were possible. And for a portion of the overall workload, even at exascale, this may prove to be possible. But such systems are a placebo to an ailing HPC community that if not in triage, is already showing symptoms of underlying conditions that require attention.”

If you read the whole interview, Sterling’s point is basically that we’re not going to reach the next supercomputing milestone through more incremental improvements to existing systems, which is how we reached the last two. Indeed, he says that without “innovative ways of managing vertical and lateral data movement,” current estimates posit that future exascale machines will use roughly ten times more power than is considered feasible.
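To get a feel for why that power projection is such a hard wall, here is a rough sketch using a commonly cited figure of about 20 megawatts as the practical ceiling for a single machine; the 20 MW number is my assumption for illustration, not something taken from Sterling’s interview.

```python
# Back-of-the-envelope power math for an exascale machine. The ~20 MW
# budget is a widely cited feasibility target, assumed here for
# illustration rather than quoted from Sterling.
exaflops = 1e18            # 10^18 floating point operations per second
power_budget_watts = 20e6  # roughly 20 megawatts

required_flops_per_watt = exaflops / power_budget_watts
print(f"required efficiency: {required_flops_per_watt / 1e9:.0f} GFLOPS per watt")  # ~50

# "Roughly ten times more power than is considered feasible" would mean
# a machine drawing on the order of:
print(f"projected draw without new approaches: {power_budget_watts * 10 / 1e6:.0f} MW")
```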

If Sterling is right, and he is one of the deans of high-performance computing, it seems likely that we’ll never reach the next supercomputing milestone in silicon. Physicist Michio Kaku says Moore’s Law will collapse in about ten years. Sterling agrees, and says it comes down to the basic physical restrictions of working with atoms.

What will the supercomputers of the future be made from, then?

Kaku mentions machines based on protein, DNA, and optical devices as possible replacements. When the time comes to transition to a new medium, he thinks the world will migrate to 3-dimensional chips. That technology would be followed by molecular computers and, eventually, by quantum computers around the end of the 21st century.
