
Is Internet history about to repeat itself?

Maybe. Back in the 1980s, the National Science Foundation created the NSFnet: a communications network intended to give scientific researchers easy access to its new supercomputer centers. Very quickly, one smaller network after another linked in, and the result was the Internet as we now know it. The scientists whose needs the NSFnet originally served are barely remembered by the online masses.

Fast-forward to 2002. This summer, the National Science Foundation will begin to install the hardware for the TeraGrid, a transcontinental supercomputer that should do for computing power what the Internet did for documents. First, clusters of high-end microcomputers will be set up at four sites: the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign; the U.S. Department of Energy’s Argonne National Laboratory outside Chicago; Caltech in Pasadena, CA; and the San Diego Supercomputer Center at the University of California, San Diego. Then, by early next year, those four clusters will be networked together so tightly that they will behave as a single entity.

This virtual computer will rip through problems at up to 13.6 trillion floating-point operations per second, or 13.6 teraflops, eight times faster than the most powerful academic supercomputer available today. Such speed will enable scientists to tackle some of the most computationally intensive tasks on the research docket, from protein-folding problems that will form the basis for new drug designs, to climate modeling, to deducing the content and behavior of the cosmos from astronomical data.

But more than that, the TeraGrid will be a prime example of what has come to be known as “grid computing”: the massive integration of computer systems to offer performance unattainable by any single machine. The integration of these systems will be so transparent that users will no more notice they are on a network than motorists pay attention to which cylinder is firing at any given moment. To people logging onto the TeraGrid, the system will look like just another set of programs running on their office computers. But that look will be deceptive: what appear to be applications that reside on the local desktop machine might actually be data analysis tools running on the cluster at San Diego, or visualization software crunching bits at Argonne. The “files” TeraGrid users are working on might consist of databases scattered all over the country, containing thousands of gigabytes (that is, terabytes).
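To make that location transparency concrete, here is a minimal conceptual sketch in Python (not the Globus Toolkit's actual interface; the task and data are hypothetical). A program submits work to an executor and collects the results as if the calls were local; on the TeraGrid, middleware would play the executor's role, quietly routing each job to a cluster at Illinois, Argonne, Caltech, or San Diego.

```python
# Conceptual sketch only: a local process pool stands in for grid middleware.
# On a real grid, submit() would hand each task to a remote cluster; the caller
# would neither know nor care which site actually ran it.
from concurrent.futures import ProcessPoolExecutor


def fold_protein(sequence: str) -> float:
    """Placeholder for a compute-heavy task, such as scoring a protein fold."""
    return sum(ord(ch) for ch in sequence) / len(sequence)


if __name__ == "__main__":
    sequences = ["MKTAYIAKQR", "GAVLIMCFYW", "HKRDESTNQP"]

    # The caller sees ordinary function calls and futures; the "where" is hidden.
    with ProcessPoolExecutor() as grid:
        futures = [grid.submit(fold_protein, seq) for seq in sequences]
        results = [f.result() for f in futures]

    for seq, score in zip(sequences, results):
        print(f"{seq}: {score:.2f}")
```

The point is the calling pattern, not the arithmetic: the same submit-and-collect interface works whether the workers are cores on one desktop or clusters thousands of miles apart.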

Grid computing visionaries hope that this will be only the beginning: that the $53 million TeraGrid will catalyze a new era of grid computing for the masses, much as the NSFnet broke down barriers that led to the blossoming of the Internet. Just within the past year or two, dozens of such projects have been announced in Europe, Asia and the United States, with more likely to come. And the developers of grid computing are now settling on a single standard, called the Globus Toolkit, that will help grid projects under development all around the world coalesce into a worldwide network of tappable computer power.

“Completely transformational” is how Larry Smarr, director of the California Institute for Telecommunications and Information Technology, sums up grid computing. Smarr, renowned for his role in developing the communications system that evolved into the Internet’s backbone, says the technology is what the Internet has been building toward for the past three decades. “In the first phase,” he explains, “we got the wires up and hooked in all the computers. Then with the World Wide Web, we started hooking in all the online documents.” Now, he says, with grid computing, we’ll be hooking in everything else (see “Planet Internet,” TR March 2002).

This means that users will begin to experience the Internet as a seamless computational universe. Software applications, databases, sensors, video and audio streams: all will be reborn as services that live in cyberspace, assembling and reassembling themselves on the fly to meet the tasks at hand. Once plugged into the grid, a desktop machine will draw computational horsepower from all the other computers on the grid. “What we’re seeing,” says Smarr, “is the emergence of a new infrastructure upon which first science, and then the whole economy, will be built.”
