A View from Christopher Mims
Why CPUs Aren't Getting Any Faster
Making computers faster means relying on the central processing unit (CPU) less than ever before.
The Central Processing Unit (CPU)–the component that has defined the performance of your computer for many years–has hit a wall.
In fact, the next generation of CPUs, including Intel’s forthcoming Sandy Bridge processor, has to contend with multiple walls: the memory wall (the limited bandwidth of the channel between the CPU and a computer’s memory); the instruction-level parallelism (ILP) wall (the difficulty of finding enough discrete parallel instructions to keep a multi-core chip busy); and the power wall (the chip’s overall temperature and power consumption).
Of the three, the power wall is now arguably the defining limit on the power of the modern CPU. As CPUs have become more capable, their energy consumption and heat production have grown rapidly. It’s a problem so tenacious that chip manufacturers have been forced to create “systems on a chip”–conurbations of smaller, specialized processors. These systems are so sprawling and diverse that they’ve caused long-time industry observers like Linley Gwennap of Microprocessor Report to question whether the original definition of a CPU even applies to today’s chips.
In releasing Sandy Bridge, Gwennap observes, Intel has little to tout in terms of improved CPU performance:
Sure, they found a few places to nip and tuck, picking up a few percent in performance here and there, but it is hard to improve a highly out-of-order four-issue CPU that already has the world’s best branch prediction.
Instead, Intel is touting the chips’ new integrated graphics capabilities and improved video handling, both of which are accomplished with parts of the chip dedicated to those tasks–not the CPU itself, which would otherwise be forced to handle them in software, burning up a much larger share of the chip’s power and heat budget in the process.
And what of general-purpose computing tasks? Gwennap explains that here, paradoxically, the key to conquering the power wall isn’t more power–it’s less. Fewer watts per instruction means more instructions per second in a chip that is already running as hot as it possibly can:
The changes Intel did make were more often about power than performance. The reason is that Intel’s processors (like most others) are against the power wall. In the old days, the goal was to squeeze more megahertz out of the pipeline.
Today’s CPUs have megahertz to burn but are throttled by the amount of heat that the system can pull out. Reduce the CPU power by 10% and you can push the clock speed up to compensate, turning power into performance gains. Most CPU design teams are now more focused on the power budget than on the timing budget.
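To see why a power saving translates into a clock-speed gain, consider the standard dynamic-power approximation for CMOS chips (not stated in the article, but the usual back-of-the-envelope model): power scales with capacitance times voltage squared times frequency, and since voltage must scale roughly with frequency, power grows roughly as the cube of clock speed. A minimal sketch, with purely illustrative numbers:

```python
# Hedged illustration of trading a power saving for clock speed.
# Assumes the textbook dynamic-power model P ~ C * V^2 * f, with supply
# voltage V scaling roughly linearly with frequency f, so P ~ f^3.
# This is a standard approximation, not a claim from the article.

def clock_gain_from_power_savings(power_savings: float) -> float:
    """Fractional clock-speed headroom gained by spending a fractional
    power saving on frequency, assuming P scales as f**3."""
    return (1.0 / (1.0 - power_savings)) ** (1.0 / 3.0) - 1.0

# Under this model, a 10% power reduction buys only a few percent of
# extra clock speed -- which is why design teams chase every watt.
gain = clock_gain_from_power_savings(0.10)
print(f"{gain:.1%}")
```

Under this (simplified) model the payoff is sublinear: a 10% power saving yields roughly a 3–4% clock bump, which helps explain why, as Gwennap notes, design teams now budget watts as carefully as they once budgeted timing.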
This means that, at least with this generation of chips, Intel is innovating anywhere but in the CPU itself.
As task-specific processors become more and more common, one can’t help but notice parallels with the world’s most powerful computer–the human brain.
The brain is full of highly specialized processing cores as well as general computational capabilities. If silicon continues to follow the trend laid down by evolution, we can expect future “CPUs” in which the “central” processing unit is less and less important, and task-specific processors proliferate until systems on a chip come to resemble the brain’s tangled, sprawling metropolis–an architecture that appears to have conquered massive parallelization in a way computer scientists can now only dream of.