
Yesterday, in a teleconference, Intel and Microsoft announced that they will start to fund serious efforts to make parallel computing mainstream. The companies are injecting $20 million into computer-science research at the University of California, Berkeley, and at the University of Illinois, Urbana-Champaign, dubbing them the Universal Parallel Computing Research Centers (UPCRC).

For decades, parallel computing, the tricky job of dividing a program's work among a number of processors, has been the province of academics who write programs for supercomputers. But with the advent of multicore chips (chips with more than one processing core), personal computers are on track to become supercomputers as well.

The main problem outlined at yesterday's teleconference is that there is still no consensus on the best way to easily program general-purpose chips with a large number of cores, like the ones in consumer computers. Historically, programmers could simply wait for the next generation of chips to make their programs run faster. Now that the next generation of chips adds cores instead of clock speed, it's not obvious how to divvy up tasks among those cores to achieve the same gains.
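To make the divvying-up problem concrete, here is a minimal sketch (not from the article) of one common approach: splitting a single computation into chunks, one per core, and combining the partial results. The function names and chunking scheme are illustrative assumptions, not anything Intel or Microsoft proposed.

```python
# Illustrative sketch: dividing one task among multiple cores.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Each worker computes the sum of squares over its own sub-range.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Divide the range [0, n) into one contiguous chunk per worker.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Run the chunks on separate cores and combine the partial sums.
        return sum(pool.map(partial_sum, chunks))
```

Even in this toy case, the programmer, not the chip, has to decide how to partition the work and merge the results; for irregular problems that decision is far harder, which is the gap the research centers aim to close.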

“Programmers have no choice: if they want fast programs, they’re going to have to write parallel programs,” says Dave Patterson, the director of the UPCRC and a computer-science professor at Berkeley.

Although no details were given on specific research projects, Tony Hey of Microsoft Research mentioned some areas of interest, which include developing parallel code libraries with chunks of code that are ready to use, testing different memory approaches, and exploring different types of parallel languages.
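One reason memory approaches are on that research list is that shared mutable state is where parallel programs most often go wrong. As a hedged illustration (my own example, not one discussed at the teleconference), the classic hazard looks like this: several threads updating one counter, which stays correct only if access is synchronized.

```python
# Illustrative sketch: the shared-memory hazard that motivates research
# into memory models and ready-to-use parallel libraries.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:           # without the lock, increments from different
            counter += 1     # threads can interleave and lose updates

threads = [threading.Thread(target=add_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; unpredictable without it
```

Ready-made library chunks of the kind Hey describes would package patterns like this (and far more sophisticated ones) so ordinary programmers don't have to get the locking right themselves.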

Below is a roundup of the most interesting quotes from the teleconference.

“The shift in hardware technology from a single core to multicore will have a profound effect in the way we do programming in the future. We’re really in the midst of a revolution in the computing industry.” –Tony Hey, Microsoft Research

“Parallelism is coming not just to high-end systems, but across a very broad slice of computer science, and we expect [it] to permeate every corner of computer science going forward.” –Andrew Chien, director of Intel Research

“The technology revolution that Andrew [Chien] outlined means that every computer will be a parallel computer. Every problem must be a parallel problem … We want to democratize parallelism.” –Marc Snir, codirector of the UPCRC and a professor of computer science at the University of Illinois

“My perspective here is, there’s not that much that universities haven’t invented [in parallel computing], but there was no strong pull from industry because parallelism wasn’t pervasive; it was a small subset of the industry … Now that the pipeline is unclogged, now that there’s strong interest from Microsoft and Intel … work that’s been going on for many years will become much more fruitful.” –Marc Snir
