Supercomputer Salvo

Two U.S. installations will boost science and surpass Japan
September 1, 2004

When Japan’s Earth Simulator surged to life two years ago as the world’s most powerful supercomputer, it heightened concerns that computing efforts in the United States were falling behind (see “Supercomputing Resurrected,” TR February 2003). The machine performs more than 35 trillion operations per second, or 35 teraflops, at its peak speed. Now, two contenders that will vastly outperform the Earth Simulator are waiting in the wings: a 360-teraflops IBM-built machine at California’s Lawrence Livermore National Laboratory, scheduled for completion in 2005, and a 100-teraflops Cray system at Tennessee’s Oak Ridge National Laboratory, due to be up and running in 2006 and possibly expanding to 250 teraflops the following year.

While the Lawrence Livermore machine will be used primarily to project how well materials in nuclear stockpiles age, the Oak Ridge system will be open to research proposals. Likely projects for the superfast computer range from simulated protein-folding experiments to research in nanotechnology, aerospace, and energy.

One possible payoff: carmakers could run computer models of crashes, reducing their reliance on expensive vehicle crash tests. General Motors alone spends $500,000 on each crash test.

The new, ultrafast computers will also be able to more accurately predict when a material is likely to crack, an insight critical to the safety of everything from aircraft to nuclear-power-plant reactor vessels. Current simulations model individual atoms in an area no more than a few micrometers wide for no more than a millisecond. With today’s supercomputers, “You could never observe something as simple as ice melting,” says Don Dossa, program manager for the Lawrence Livermore machine. The new machine, he says, will model atoms in an area thousands of times larger for nearly one second, helping explain phenomena that people can actually see, such as a crack forming. It’s an advance that might seal the fissure in U.S. supercomputing.
