Making Multicore Fly

Before multiple-core processors can help PCs soar, the industry must solve some tricky software and hardware challenges.

This article is part 2 of a two-part series on the advent of multicore processing in consumer PCs; part 1 appeared on December 15.

Throwing an extra engine into a car won’t make it go twice as fast. Engineers designing microprocessors with multiple cores (two or more central processing units on a single chip), as well as the PCs that will take advantage of them, face a similar reality: before PCs can make real use of multiple cores, the industry will need to modify hardware subsystems and revise application software.

One of the biggest hardware challenges with multicore processors will be picking the right memory technologies. Traffic on the chip’s system bus, which shuttles data and instructions between the processor and the rest of the system, must also be optimized. “If you can’t keep the cores fed fast enough from memory, you haven’t gained anything,” says AMD chief technology officer Phil Hester. “We’re trying to bridge as much of that gap as possible.”

On the software side, multithreaded applications (which are written to recognize and use multiple cores) require more development time than usual. “It will take years before programmers have all the tools and training to make multithreaded code the norm – and not the hand-crafted exception,” says Kevin Krewell, editor-in-chief of InStat’s Microprocessor Report.
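
What makes such code slower to develop? A minimal, invented sketch in C with POSIX threads (not code from Intel, AMD, or any shipping application) shows one recurring chore: when two threads running on two cores update the same running total, the programmer has to add explicit synchronization, or the result is silently wrong.

/* Minimal sketch of one chore that makes multithreaded code slower to write:
 * two threads on two cores add to the same total, so every update needs an
 * explicit lock. Invented example; not code from the article. */
#include <pthread.h>
#include <stdio.h>

#define PER_THREAD 1000000L

static long total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < PER_THREAD; i++) {
        pthread_mutex_lock(&lock);     /* without the lock, increments from  */
        total += 1;                    /* the two cores can overwrite each   */
        pthread_mutex_unlock(&lock);   /* other and the count comes up short */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[2];

    for (int t = 0; t < 2; t++)
        pthread_create(&threads[t], NULL, worker, NULL);
    for (int t = 0; t < 2; t++)
        pthread_join(threads[t], NULL);      /* wait for both cores to finish */

    printf("total = %ld (expected %ld)\n", total, 2 * PER_THREAD);
    return 0;
}

Built with something like gcc -pthread, the program prints the expected total only because of the lock; take the lock calls out and the two cores occasionally overwrite each other’s updates. Single-threaded code never has to think about this.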

Some PC parts won’t need a major revamping for multicore designs, though. For example, the motherboards that house multicore processors won’t be radically different from today’s versions, although they will be smaller, since the chips draw less power, according to Jeff Austin, marketing manager for Intel’s Digital Enterprise Group. (The chips also won’t require huge cooling systems on the motherboards.) That difference will help PC makers who want to design small or unusual cases.

Memory, though, will require a serious makeover. “You need more memory to support the additional cores, and you need memory that can keep up with the speed,” says Shane Rau, program manager of semiconductor research for IDC.

The amount of cache memory included directly on a processor can make a huge difference to the performance of application software. Cache memory holds frequently used chunks of application data and instructions close to the CPU, so they can be handed over quickly when an application makes a request. Today’s chips include two levels of cache, L1 and L2, which serve as holding areas for that frequently accessed data.

For multicore chips, Intel and AMD are trying new cache arrangements. Today, Intel’s dual-core chips have two L2 caches. Sometimes one core will need data residing in the other core’s L2 cache, which takes more time to grab. For the company’s next-generation Merom and Conroe dual-core chips, for notebooks and desktops, expected next fall, designers have included a shared L2 cache, so that both cores have continuous full access to the cache.

Intel is also working on a technique to allow direct transfer of data from one core’s L1 cache to the other’s, a method that should greatly reduce the amount of traffic on the system bus, says Intel’s Austin.
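
How much that cross-core traffic can cost shows up even in simple software. The following invented C sketch (POSIX threads, with a 64-byte cache line assumed; exact timings vary by processor) demonstrates “false sharing”: two threads update separate counters that happen to sit in the same cache line, so the line shuttles back and forth between the cores’ private caches. Padding the counters into separate lines removes the traffic.

/* Sketch of "false sharing": two threads update separate counters that
 * happen to share a cache line, so the line bounces between the cores'
 * private caches. Invented example; not code from the article. */
#define _POSIX_C_SOURCE 200112L        /* for clock_gettime */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000L

/* Two counters packed into one cache line: heavy cross-core traffic. */
static struct { volatile long a, b; } tight;

/* Each counter padded out to its own (assumed 64-byte) line: little traffic. */
static struct { volatile long value; char pad[64 - sizeof(long)]; } spread[2];

static void *bump(void *arg)
{
    volatile long *counter = arg;
    for (long i = 0; i < ITERS; i++)
        (*counter)++;
    return NULL;
}

/* Run two threads, each hammering one counter, and return the elapsed time. */
static double timed_pair(volatile long *x, volatile long *y)
{
    pthread_t t[2];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    pthread_create(&t[0], NULL, bump, (void *)x);
    pthread_create(&t[1], NULL, bump, (void *)y);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    printf("same cache line:      %.2f s\n", timed_pair(&tight.a, &tight.b));
    printf("separate cache lines: %.2f s\n",
           timed_pair(&spread[0].value, &spread[1].value));
    return 0;
}

On most multicore hardware the padded case finishes noticeably faster; gaps of that kind are what shared caches and direct core-to-core transfers aim to narrow.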

For its multicore chips, AMD plans to increase the size of the L1 and L2 caches and improve their efficiency, Hester says. The company is also considering a third, shared cache to help handle application data requests and graphics rendering.

As for PC main memory, today’s standard DDR SDRAM (double-data-rate synchronous dynamic random-access memory) may not prove fast enough for dual- and quad-core systems. DDR2, an emerging successor, should solve that dilemma, Hester says. DDR already works faster than plain SDRAM because it “double pumps” data, transferring on both the rising and falling edges of the clock signal to increase memory bandwidth. DDR2 adds further tricks, including deeper prefetch buffers, to run even faster.

A competing technology, fully buffered DIMMs (dual in-line memory modules), takes up less space on motherboards and could speed communication between memory and the memory controller. It may help out later; for now, it costs more and draws more power than DDR2.

Power efficiency also ranks as a principal concern for multicore systems. Today, PC makers want to keep power requirements to a minimum as they create, say, small machines that could reside in a kitchen.

“Some apps won’t exploit more than one or two cores,” says Hester. “You’d like the processor to realize that,” he says, and power down a few cores when they’re not in use. This would also help keep the machine quiet – another priority as computing devices get smaller.

Hardware gurus must also spend time worrying about software. Tomorrow’s multicore PC users will need a wide variety of multithreaded programs. Yet both the toolkits for building such applications and the developers with the skills to write them remain scarce.

“Multithreaded software is hard [to write],” says AMD’s Hester. “A lot of programming disciplines don’t teach you a lot about writing multithreaded code. That’s starting to change.” Learning to write efficient algorithms that process information in parallel on two or more cores will be the next major focus for developers of PC software, he says.

Even when developer education and tools catch up, though, not all software will benefit from multicore processing. Some applications will run faster with more cores available, while for others more cores won’t make much difference. Image filtering, for example, can be parallelized very efficiently, since different regions of an image can be processed independently; video decoding, in which each frame depends on the ones before it, can be sped up only so far. Multicore also may not make much difference for favorites like Microsoft Word and Excel.
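
A rough illustration of why filtering scales so well: each output pixel depends only on the input image, so rows can be split across cores with no coordination at all. The sketch below is invented (C with POSIX threads, a trivial brightness filter on a made-up image buffer), not taken from any real imaging application.

/* Sketch: a brightness filter with the image's rows split across threads.
 * Each output pixel depends only on the input image, so the threads never
 * touch each other's rows and need no locks -- the kind of work that keeps
 * scaling as cores are added. Invented example; not from the article. */
#include <pthread.h>
#include <stdio.h>

#define WIDTH    1024
#define HEIGHT   768
#define NTHREADS 2                       /* one thread per core, say */

static unsigned char input[HEIGHT][WIDTH];
static unsigned char output[HEIGHT][WIDTH];

struct band { int first_row, last_row; };

static void *filter_band(void *arg)
{
    struct band *b = arg;
    for (int y = b->first_row; y < b->last_row; y++)
        for (int x = 0; x < WIDTH; x++) {
            int v = input[y][x] + 40;    /* brighten, clamped to 255 */
            output[y][x] = v > 255 ? 255 : v;
        }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    struct band bands[NTHREADS];
    int rows_per = HEIGHT / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        bands[t].first_row = t * rows_per;
        bands[t].last_row  = (t == NTHREADS - 1) ? HEIGHT : (t + 1) * rows_per;
        pthread_create(&threads[t], NULL, filter_band, &bands[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    printf("filtered %d rows on %d threads\n", HEIGHT, NTHREADS);
    return 0;
}

Adding cores to a job like this mostly just means cutting the bands smaller; that is the property that serial, frame-by-frame stages of video decoding lack.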

Another software issue: Will vendors have to revise their programs for dual-core processors, then again for processors with more than two cores? Probably not, says Intel’s Austin; instead, most vendors will try to code in support for two or more cores from the start.

For years now, the PC industry has dreamed of a revolutionary product that would compel people to buy tens of thousands of new PCs. Will multicore chips that require new software be the ticket? “I don’t see additional cores leading to fundamental software breakthroughs,” says Rau at IDC. “Multicores will enable us to do what we’re doing now – but better.”
