
A View from Christopher Mims

The U.S. Approach to Supercomputing May Be a Dead End

It’s a minor miracle that computer scientists reached the petascale. What comes next is anyone’s guess.

November 12, 2010

To try to express the speed of modern supercomputers in conventional notation is an exercise in absurdity: a petaflop is one quadrillion floating point operations per second, while the next milestone, an exaflop, is a million trillion calculations per second. Got that?

The Millennium Simulation, an attempt to model the history of the universe, is one example of a scientific problem that will benefit from exascale computing (cc) Rich Murray.

The easiest way to compare them is to note that an exaflop is a thousand times as large as a petaflop. An exascale computer would be a thousand times as powerful as the fastest supercomputers on earth today, each of which cost in excess of $100 million to construct.
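The arithmetic behind those prefixes is straightforward, and a back-of-the-envelope sketch makes the gap concrete (the constants below simply restate the figures above):

```python
# Back-of-the-envelope comparison of supercomputing scales.
PETAFLOP = 10**15   # one quadrillion floating-point operations per second
EXAFLOP = 10**18    # one quintillion (a million trillion) ops per second

# An exascale machine is a thousand times faster than a petascale one.
ratio = EXAFLOP // PETAFLOP
print(ratio)  # -> 1000

# Put differently: work an exascale machine finishes in one second
# would occupy a petascale machine for over a quarter of an hour.
seconds = EXAFLOP / PETAFLOP
print(f"{seconds:.0f} seconds, about {seconds / 60:.1f} minutes")
```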

“To me the major happening over the past few years is that we were able to reach petascale using conventional technology,” says Thom Dunning, head of the National Center for Supercomputing Applications. “It was an evolution of a technology path we’ve been on for the past five years. We can’t reach exascale with the same technology path.”

No one knows how we’re going to get to that scale, but just about every high-performance computing scientist seems to view it as inevitable. The Defense Advanced Research Projects Agency (DARPA), the government body tasked with funding the riskiest, most far-out research projects, thinks getting there will require that we “reinvent computing.”

Dunning speculates that a move from conventional architectures (CPUs) to GPUs, while difficult, might be a piece of the puzzle. “It may be that GPUs and other new technology may be useful in the longer term, but it’s going to require a very substantial effort to rewrite all the [current library of] code to make effective use of that technology,” he says.

A hybrid CPU / GPU approach with the GPUs doing most of the heavy lifting is exactly the strategy that was adopted for the world’s newly crowned champion supercomputer, China’s Tianhe 1A.

This approach is risky: GPUs can have explosive peak performance, but their sustained performance tends to be relatively sluggish because their memory systems simply can’t feed them data fast enough. More importantly, existing supercomputer software wasn’t written to take advantage of their novel brand of parallelism, which was originally designed for processing high-end graphics.
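To illustrate the kind of rewrite Dunning describes — this is a toy sketch, not actual supercomputer code — here is a serial loop recast in the data-parallel style GPUs favor, using NumPy arrays as a stand-in for a GPU array library:

```python
import numpy as np

# A toy one-dimensional stencil update, written the way legacy CPU code
# often is: one element at a time.
def serial_update(u, dt):
    out = u.copy()
    for i in range(1, len(u) - 1):
        out[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1])
    return out

# The same update recast as a single data-parallel array expression —
# the style needed to keep thousands of GPU cores busy at once.
def parallel_update(u, dt):
    out = u.copy()
    out[1:-1] = u[1:-1] + dt * (u[:-2] - 2 * u[1:-1] + u[2:])
    return out

u = np.linspace(0.0, 1.0, 8)
assert np.allclose(serial_update(u, 0.1), parallel_update(u, 0.1))
```

The two functions compute identical results; the difference is that the second exposes every element's update as independent work, which is what GPU-style parallelism requires.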

Dunning is doubtful that, at least at first, the Tianhe 1A will have the kind of general utility for scientific applications that more conventional supercomputers possess.

Regardless, those machines are running up against the limits of current technology in terms of operations per watt. Incremental improvements can be wrung from further shrinking of the “process technology” – literally, the size of the individual features on the chip, which is now approaching 22 nanometers in the lab.

But basic physics – quantum tunneling, the difficulty of keeping electrons in their paths once the barriers between features are only tens of atoms thick – is likely to put the kibosh on further improvements gained simply by endlessly shrinking everything on a chip.
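The operations-per-watt wall can be made concrete with rough 2010-era figures (the numbers below are approximate and for illustration only: a top petascale machine of the day sustained on the order of 2.5 petaflops while drawing roughly 4 megawatts):

```python
# Rough illustration of the exascale power wall (all figures approximate).
petaflops = 2.5   # sustained performance of a circa-2010 top machine
megawatts = 4.0   # its approximate power draw

flops_per_watt = (petaflops * 1e15) / (megawatts * 1e6)
print(f"{flops_per_watt / 1e6:.0f} megaflops per watt")

# Scaling to an exaflop at the SAME efficiency:
exaflop_watts = 1e18 / flops_per_watt
print(f"an exaflop would draw about {exaflop_watts / 1e6:.0f} megawatts")
```

At that efficiency an exascale machine would draw on the order of a gigawatt and a half — roughly the output of a large power plant — which is why exascale demands far better operations per watt, not just more hardware.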

GPUs have the advantage of being a substantially different way of accomplishing the same thing as CPUs, but not every problem is amenable to their variety of parallelism, so the quest continues.

Will exascale computing be realized via optical computing? Quantum computing? Or some other technology yet to be envisioned?

Check out additional posts in this series of conversations with Thom Dunning of NCSA:

And The World’s Fastest Supercomputer in 2012 Is…

Why China’s New Supercomputer Is Only Technically the World’s Fastest

Follow Mims on Twitter or contact him via email.
