
Bright lights: In 1987, Thinking Machines released its CM-2 supercomputer, in which 64,000 processors ran in parallel. The company declared bankruptcy in 1994, but its impact on computing was significant.

To help solve such problems, companies have called back into service some graybeards of 1980s supercomputing. David Kuck, for example, is a University of Illinois professor emeritus well known as a developer of tools for parallel programming. Now he works on multicore programming for Intel. So does an entire team hired from the former Digital Equipment Corporation; in a previous professional life, it developed Digital’s implementation of the Message Passing Interface (MPI), the dominant software standard for multimachine supercomputing today.
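For readers who have never seen MPI code, the sketch below shows its basic shape, using mpi4py, a Python binding chosen here for brevity (the article names only the standard itself; production supercomputing code is more often written in C or Fortran). Every machine runs the same program and learns its role from its rank:

```python
# A minimal MPI sketch using the mpi4py binding; an illustrative choice,
# not anything described in the article. Run with, e.g.:
#   mpiexec -n 4 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD       # all cooperating processes
rank = comm.Get_rank()      # this process's ID within the group
size = comm.Get_size()      # total number of processes running

# Each process sums its own slice of the work...
partial = sum(range(rank, 1_000_000, size))

# ...and one collective call combines the partial results on process 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a total of {total}")
```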

In one sense, these old players have it easier than they did the last time around. That’s because many of today’s multicore applications are very different from those imagined by the legendary mainframe designer Gene Amdahl, who theorized that the gain in speed achievable by using multiple processors was limited by the degree to which a given program could be parallelized.
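Amdahl’s observation reduces to a simple formula: if a fraction p of a program’s work can be parallelized, the best possible speedup on N processors is 1 / ((1 - p) + p/N). A few lines of arithmetic (a back-of-the-envelope sketch, not anything from the article) show how punishing even a small serial fraction is:

```python
def amdahl_speedup(p, n):
    """Best-case speedup on n processors when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1,000 processors, a program that is 95% parallelizable speeds up
# by less than a factor of 20; the 5% serial remainder dominates.
print(amdahl_speedup(0.95, 1000))   # ~19.6
print(amdahl_speedup(0.999, 1000))  # ~500.2: near-perfect parallelism pays off
```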

Computers are handling larger volumes of data than ever before, but many of their processing tasks are so well suited to parallelization that the constraints of Amdahl’s Law, described in 1967, are beginning to feel like no constraints at all. The simplest example of a massively parallel task is the brute-force recovery of an unknown password by trying all possible character combinations: divide the candidates among 1,000 processors, and the search runs very nearly 1,000 times faster. The same goes for today’s processor-intensive applications for encoding video and audio data; compressing movie frames in parallel is almost perfectly efficient. But if parallel processing is easier to find uses for today, it’s not necessarily much easier to do. Making it easier will require a concerted effort from chip makers, software developers, and academic computer scientists. Indeed, Illinois’s UPCRC is funded by Microsoft and Intel, the two companies that have the most to gain if multicore computing succeeds and the most to lose if it fails.
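To make the brute-force password example above concrete, here is one way such a search might be split among workers; the alphabet, password length, and target hash below are invented for illustration. Each worker owns every fourth candidate, so the workers never need to coordinate until one finds a match:

```python
import hashlib
import itertools
from multiprocessing import Pool

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = hashlib.sha256(b"zzzz").hexdigest()  # the unknown password's hash
WORKERS = 4

def search(worker_id):
    """Test every WORKERS-th 4-character candidate, starting at worker_id."""
    for i, chars in enumerate(itertools.product(ALPHABET, repeat=4)):
        if i % WORKERS != worker_id:
            continue  # another worker owns this candidate
        guess = "".join(chars)
        if hashlib.sha256(guess.encode()).hexdigest() == TARGET:
            return guess
    return None

if __name__ == "__main__":
    with Pool(WORKERS) as pool:
        for result in pool.imap_unordered(search, range(WORKERS)):
            if result is not None:
                print("found:", result)
                pool.terminate()  # stop the remaining workers
                break
```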

Inventing New Tools
If software keeps getting more complex, it’s not just because more features are being added to it; it’s also because the code is built on more and more layers of abstraction that hide the complexity of what programmers are really doing. This is not mere bloat: programmers need abstractions in order to make basic binary code do the ever more advanced work we want it to do. When it comes to writing for parallel processors, though, programmers are using tools so rudimentary that James Larus, director of software architecture for the Data Center Futures project at Microsoft Research, likens them to the lowest-level and most difficult language a programmer can use.

“We couldn’t imagine writing today’s software in assembly language,” he says. “But for some reason we think we can write parallel software of equal sophistication with the new and critical pieces written in what amounts to parallel assembly language. We can’t.”

That’s why Microsoft is releasing parallel-programming tools as fast as it can. F#, for example, is Microsoft’s parallel-friendly dialect of the general-purpose ML programming language. Not only does it let certain functions run in parallel; it also prevents those functions from interacting improperly, so parallel software becomes easier to write.
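The underlying idea is that functions which share no mutable state can be run in parallel mechanically, with no need to police their interactions. The sketch below illustrates that idea in Python rather than F# (a translation chosen for consistency with the earlier examples, not Microsoft’s API): because encode_frame depends only on its argument, swapping the sequential map for a parallel one is safe.

```python
from concurrent.futures import ProcessPoolExecutor

def encode_frame(frame):
    """A pure function: the result depends only on the input and nothing
    shared is mutated, so calls can safely run in parallel."""
    return bytes(b ^ 0xFF for b in frame)  # stand-in for real compression

if __name__ == "__main__":
    frames = [bytes([i]) * 1024 for i in range(100)]

    # Sequential and parallel runs produce identical results; the pure,
    # no-shared-state style is what makes the swap mechanical.
    sequential = list(map(encode_frame, frames))
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(encode_frame, frames))

    assert sequential == parallel
```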

Intel, meanwhile, is sending Ghuloum abroad one week per month to talk with software developers about multicore architecture and parallel-programming models. “We’ve taken the philosophy that the parallel-programming ‘problem’ won’t be solved in the next year or two and will require many incremental improvements (and a small number of leaps) to existing languages,” Ghuloum says. “I also tend to think we can’t do this in a vacuum; that is, without significant programmer feedback, we will undoubtedly end up with the wrong thing in some way.”


Credits: ©Thinking Machines Corporation, 1987. Photo: Steve Grohe
