
Multicore Processors Create Software Headaches

April 20, 2010

For decades, improving processor performance meant cranking up a chip’s clock speed. The payoff was immediately obvious to users: applications ran faster. But a faster chip consumes more electricity, draining batteries dry in mobile devices. Consequently, chip makers moved to energy-saving multicore designs, where multiple low-power processors on a single chip combine to replicate the performance of a single, faster processor (see “Designing for Mobility”).

Unfortunately, applications on multicore systems don’t get faster automatically as cores are added. Software has to be written to take advantage of the parallel processing power. And writing programs that run efficiently and stably across multiple cores is hard. Unless we solve this programming problem, says Prith Banerjee, Hewlett-Packard’s senior vice president of research, users won’t see any speed advantage in new microprocessors. Banerjee adds, “This is a very fundamental problem.”

DATA SHOT

8.2 gigahertz
The current speed record for a desktop microprocessor, achieved by enthusiasts who “overclocked” a chip designed to run at three gigahertz. To prevent the chip from melting, it was cooled with liquid nitrogen.

A promising potential solution is to take human programmers out of the loop as much as possible: rather than have individual programmers work out how to make their applications run across two, four, or more cores, the messy details could be left to compilers, the software used to convert high-level programming languages into the machine code a computer can understand. All the major software and chip companies, along with many academic researchers, are working to develop compilers that can handle such tasks. The biggest obstacle is that it’s difficult to identify the parts of a program that don’t depend on other parts, so that a core won’t be left idle while it waits for some piece of data. Simply persuading developers to write cleaner programs, with well-defined interfaces between blocks of code, would make the job much easier, says Wen-mei Hwu, a professor of electrical and computer engineering at the University of Illinois. But he estimates that it will be five years before multicore-friendly compilers and matching programming practices diffuse through the computer industry.
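The dependence analysis described above can be illustrated with a toy example (a hypothetical sketch; the function names are invented for illustration). In the first loop, every iteration reads and writes only its own element, so a parallelizing compiler could safely split the iterations across cores. In the second, each iteration needs the result of the previous one, so the iterations must run in order:

```python
def independent(values):
    # Each iteration reads only its own input element and writes only
    # its own output slot: no iteration depends on another, so a
    # compiler could run them on separate cores in any order.
    out = [0] * len(values)
    for i in range(len(values)):
        out[i] = values[i] * values[i]
    return out

def dependent(values):
    # Each iteration reads the value produced by the previous one
    # (a loop-carried dependency), forcing sequential execution:
    # a second core would sit idle waiting for the data.
    out = [0] * len(values)
    acc = 0
    for i in range(len(values)):
        acc = acc * 2 + values[i]
        out[i] = acc
    return out

def independent_reversed(values):
    # Running the independent loop's iterations in reverse order gives
    # the same result -- the property a parallelizing compiler must
    # prove before it can distribute the work across cores.
    out = [0] * len(values)
    for i in reversed(range(len(values))):
        out[i] = values[i] * values[i]
    return out
```

The "cleaner programs" Hwu asks for are ones where such independence is easy to establish, rather than hidden behind shared state that forces the compiler to assume the worst.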
