To wrest their chips from outdated instructions, microprocessor designers will periodically throw everything out and start over with an entirely new chip, complete with a brand-new instruction set. It's a process Intel is struggling through with its much-delayed Itanium microprocessor, which will be the company's first chip to route data around in digital swaths of 64 bits; that is, on a 64-bit-wide "bus." Freeing designers from the current generation's 32-bit design should produce a great leap forward in performance. But starting over also produces a chip that initially has no software to run on it, hardly an ideal state. And even if software developers cooperate and begin writing code for the new instruction set, the approach works only once: then you're back where you started, with legacy software and a years-long cycle to make any fundamental change.
Ditzel has tried the start-over approach to chip design more than once in his career. Two decades ago, as a graduate student at the University of California at Berkeley, he co-authored a paper entitled "The Case for the Reduced Instruction Set Computer." This seminal work inspired an entire school of microprocessor design; today, so-called RISC chips are everywhere.
After graduate work on RISC design at Berkeley, he moved on to design a RISC-chip variation called CRISP at Bell Labs; CRISP, however, never gained wide support from software developers. Ditzel then made a third attempt at a new microprocessor when he worked on a gallium arsenide chip at Sun that was never produced. "It was like I was telling people: 'Look! You can use this great new microprocessor; all you have to do is throw out all your software and start over!'" Ditzel said. "I've fought that fight for 20 years, and I've given up."
But he didn’t really give up. Instead, he found a way out.
While at Sun in the early '90s, Ditzel was influenced by the work of Russian supercomputer expert Boris Babayan, with whom he had informally collaborated and whom he names as a key mentor in his thinking about chip design. At the time, Babayan and his company Elbrus were experimenting with a technique known as dynamic binary translation and compilation (which Transmeta has given the much more market-friendly name "code-morphing," a term it has since trademarked).
Writing software that lets programs built for one kind of hardware run on another is an old idea: IBM, for example, did it back in the 1960s. The results of those attempts, however, were always hopelessly sluggish. But chips were getting faster all the time. By the early 1990s, designers were postulating that there might be a way to translate from one instruction set to another so rapidly that performance would barely suffer. Instead of being a static, one-to-one translation of each instruction, the technique could be dynamic: examining the application for inefficiencies in real time, correcting them and remembering the corrections.
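The idea above can be sketched in miniature. The toy "guest" instruction set, register names, and program below are invented for illustration and bear no relation to Transmeta's actual design; the point is only the shape of the technique: translate a block of foreign instructions into native code the first time it runs, remember the translation in a cache, and reuse it on every later execution.

```python
# A minimal sketch of dynamic binary translation with a translation cache.
# The guest "instruction set" here is hypothetical, invented for illustration.

GUEST_PROGRAM = [
    ("load", "a", 5),   # a = 5
    ("load", "b", 7),   # b = 7
    ("add", "a", "b"),  # a = a + b
    ("halt",),
]

translation_cache = {}  # guest address -> translated (host) function


def translate_block(pc, program):
    """Translate one guest basic block into a host-native function, once."""
    ops = []
    while True:
        ins = program[pc]
        ops.append(ins)
        pc += 1
        if ins[0] == "halt":
            break

    def native(regs):
        # The translated code: a straight-line host routine for the block.
        for ins in ops:
            if ins[0] == "load":
                regs[ins[1]] = ins[2]
            elif ins[0] == "add":
                regs[ins[1]] += regs[ins[2]]
        return regs

    return native


def run(program):
    regs = {}
    pc = 0
    block = translation_cache.get(pc)
    if block is None:                    # first execution: translate...
        block = translate_block(pc, program)
        translation_cache[pc] = block    # ...and remember the correction
    return block(regs)                   # later runs reuse the cached code


print(run(GUEST_PROGRAM))  # {'a': 12, 'b': 7}
```

The cache is what makes the scheme "dynamic" rather than a one-shot conversion: the translation cost is paid once per block, then amortized across every subsequent execution of that block.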
It is counterintuitive to think that putting an additional layer of software between an application and a CPU wouldn't slow things down; it's like saying a curved line between two points is shorter than a straight line. But the relationship between software and hardware is no longer a straight line: because of the inefficiencies accumulated over years of developing around the same instruction set, dynamic translation could, in theory, improve performance. On the hardware side, the process of jamming more and more circuits onto a chip to eke out the last performance gains can actually backfire, slowing things down. Software, too, is rarely as efficient out of the box as it could be: applications developers with an eye on a ship date freeze code when it works, not when it's perfect. Dynamic translation could theoretically find the slack and tighten it.
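One concrete kind of "slack" a dynamic translator can tighten is a dead store: an instruction whose result is overwritten before anyone reads it. The sketch below, using a hypothetical tuple-based instruction format invented for illustration, drops such instructions from a recorded trace; a real translator applies many such rewrites, but the flavor is the same.

```python
# A toy trace optimizer: remove dead stores from a straight-line trace.
# Instruction format is hypothetical: ("load", reg, const) writes reg;
# ("add", dst, src) reads dst and src and writes dst.


def optimize_trace(trace):
    """Drop a 'load' whose target register is overwritten before any use."""
    out = []
    for i, ins in enumerate(trace):
        if ins[0] == "load":
            dead = False
            for later in trace[i + 1:]:
                if later[0] == "load" and later[1] == ins[1]:
                    dead = True   # overwritten before being read
                    break
                if ins[1] in later[1:]:
                    break         # the value is read first: it's live
            if dead:
                continue          # slack found: skip the useless store
        out.append(ins)
    return out


trace = [("load", "a", 1),   # dead: "a" is reloaded before any use
         ("load", "a", 2),
         ("add", "a", "b")]
print(optimize_trace(trace))  # [('load', 'a', 2), ('add', 'a', 'b')]
```

Because the translator sees the program as it actually executes, it can apply rewrites like this to the hot paths that matter and remember the improved versions, which is how an extra software layer can, in principle, give performance back rather than take it away.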