
Because high-speed single-core chips now require such complex designs, multiple simpler cores can deliver the same amount of processing power while consuming less electricity. Less electricity generates less heat. What’s more, the use of multiple cores spreads out whatever heat there is.

Most computer programs, however, weren’t designed with multiple cores in mind. Their instructions are executed in a linear sequence, with nothing happening in parallel. If your computer seems to be doing more than one thing at a time, that’s because the processor switches between activities more quickly than you can perceive. The easiest way to use multiple cores has thus been through a division of labor–for example, running the operating system on one core and an application on another. That doesn’t require a whole new programming model, and it may work for today’s chips, which have two or four cores. But what about tomorrow’s, which may have 64 cores or more?
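To make that division of labor concrete, here is a minimal sketch, not drawn from the article, of two unrelated jobs launched as separate operating-system processes that a multicore machine is free to schedule on different cores. The function names and workloads are invented purely for illustration.

```python
# A hypothetical division-of-labor sketch: two independent, purely
# sequential jobs run as separate OS processes, so a multicore machine
# can schedule each on its own core. Neither job is parallel internally.
from multiprocessing import Process

def encode_video():
    # stand-in for one self-contained, linear task
    total = 0
    for frame in range(1_000_000):
        total += frame
    return total

def index_files():
    # a second, unrelated linear task
    count = 0
    for doc in range(1_000_000):
        count += 1
    return count

if __name__ == "__main__":
    jobs = [Process(target=encode_video), Process(target=index_files)]
    for p in jobs:
        p.start()   # the operating system may place each process on its own core
    for p in jobs:
        p.join()    # wait for both jobs to finish
```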

Revisiting Old Work
Fortunately, says Leslie Valiant, a professor of computer science and applied mathematics at Harvard University, the fundamentals of parallelism were worked out decades ago in the field of high-performance computing–which is to say, with supercomputers. “The challenge now,” says Valiant, “is to find a way to make that old work useful.”

The supercomputers that inspired multicore computing were second-generation devices of the 1980s, made by companies like Thinking Machines and Kendall Square Research. Those computers used off-the-shelf processors by the hundreds or even thousands, running them in parallel. Some were commissioned by the U.S. Defense Advanced Research Projects Agency as a cheaper alternative to Cray supercomputers. The lessons learned in programming these computers are a guide to making multicore programming work today. So Grand Theft Auto might soon benefit from software research done two decades ago to aid the design of hydrogen bombs.

In the 1980s, it became clear that the key problem of parallel computing is this: it’s hard to tear software apart so that it can be processed in parallel by hundreds of processors, and then put it back together in the proper sequence without letting the intended result be corrupted or lost. Computer scientists discovered that while some problems could easily be parallelized, others could not. Even when problems could be parallelized, parallel operations might finish in the wrong order, producing what was called a “race condition.” Imagine two operations running in parallel, one of which needs to finish before the other for the overall result to be correct. How do you ensure that the right one wins the race? Now imagine two thousand or two million such processes.
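The hazard is easy to reproduce in a few lines of ordinary threaded code. The example below is an illustrative sketch rather than anything from the researchers quoted here: two threads update one shared balance, and unless each update is protected by a lock, their read-modify-write steps can interleave and updates are silently lost.

```python
# An illustrative race condition: two threads each add to one shared
# balance. "balance += 1" is really a read, an add, and a write; if the
# lock is skipped, those steps from both threads can interleave, updates
# are silently lost, and the final value depends on who wins the race.
import threading

balance = 0
lock = threading.Lock()

def deposit(times, use_lock=True):
    global balance
    for _ in range(times):
        if use_lock:
            with lock:
                balance += 1        # synchronized: no updates are lost
        else:
            balance += 1            # unsynchronized: updates can be lost

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # 200000 with the lock; typically less with use_lock=False
```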

“What we learned from this earlier work in high-performance computing is that there are problems that lend themselves to parallelism, but that parallel applications are not easy to write,” says Marc Snir, codirector of the Universal Parallel Computing Research Center (UPCRC) at the University of Illinois at Urbana-Champaign. Normally, programmers use specialized programming languages and tools to write instructions for the computer in terms that are easier for humans to understand than the 1s and 0s of binary code. But those languages were designed to represent linear sequences of operations; it’s hard to organize thousands of parallel processes through a linear series of commands. To create parallel programs from scratch, what’s needed are languages that allow programmers to write code without thinking about how to make it parallel–to program as usual while the software figures out how to distribute the instructions effectively across processors. “There aren’t good tools yet to hide the parallelism or to make it obvious [how to achieve it],” Snir says.
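The article names no specific language or tool, but the goal Snir describes, writing ordinary code while a library decides how to spread the work across cores, can be suggested with a small sketch built on Python’s standard-library process pool. The scoring function and its inputs are assumptions made for illustration only.

```python
# A sketch of "hiding the parallelism": the programmer writes an ordinary
# sequential function, and the standard-library process pool decides how
# to distribute the calls across however many cores the machine has.
from multiprocessing import Pool

def score(n):
    # ordinary code, written with no thought about cores or scheduling
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:                     # defaults to one worker per CPU core
        results = pool.map(score, range(1, 65))
    print(results[:5])
```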


Credits: Illustration by The Heads of State, ©Thinking Machines Corporation, 1987. Photo: Steve Grohe
Video by Robert Brilliant

Tagged: Computing
