Novel Chip Architecture Could Extend Moore's Law
Hewlett-Packard researchers have designed a faster, more energy-efficient chip by packing in more transistors, without shrinking them.
In the chip-making industry, the best way to increase the speed of electronics and make them cheaper has always been to shrink a chip’s transistors to create room for more. But now researchers at Hewlett-Packard (HP) Labs have announced a radically different approach: a design that creates room for eight times more transistors on a chip, while avoiding the need to make the transistors smaller.
“For a long time, we in the industry have been obsessed with this idea that higher capacity [chips] and lower cost equals smaller transistors, and we’ve been investing the bulk of our efforts in this area,” says Stanley Williams, senior fellow and director of quantum-science research at HP Labs. The new research, Williams says, “is the first proof that it’s possible to dramatically improve integrated circuits without shrinking transistors.”
Chip components have steadily gotten smaller since the 1960s, following Moore’s Law: the prediction that the number of transistors on an integrated circuit doubles approximately every two years, with corresponding gains in speed. However, engineers know that transistor size will reach its physical limit within the next decade or so. HP’s new design could extend Moore’s Law years beyond that, says Williams.
The problem with today’s chip architecture is that a large percentage of silicon isn’t actually used for transistors. Instead, much of the silicon real estate is populated with aluminum-wire interconnects that supply power and instructions to the circuit. So to make room for more transistors, Williams and HP researcher Greg Snider designed a chip with the wires on top, instead of between transistors. The research will be published in the January 24 issue of Nanotechnology.
This top layer of wiring is based on a “crossbar” structure, a sort of nanoscale wire mesh, that researchers at HP Labs have been developing for molecular memory devices since the 1990s. At each junction in the mesh, Williams says, is a switch that controls the flow of electrons to and from the transistor beneath it.
The HP work follows research done by Konstantin Likharev, a professor of physics at Stony Brook University in New York, who first proposed connecting wires atop transistors. However, Likharev’s scheme required atomic manipulation of the nanowires, a manufacturing impossibility, says Williams. In contrast, he says, HP’s design has the potential to be easily integrated into a chip-making facility.
Currently, HP researchers are developing a laboratory prototype using the design, and Williams expects it to be complete by the end of the year. By 2010, he says, the technology should be ready for manufacturing.
The first application of the technology will most likely be in a type of chip called field-programmable gate arrays (FPGAs), which have the flexibility to be programmed to complete a variety of tasks. FPGAs are typically used in the design stages of electronics and communication systems. However, once the bugs are worked out of the design, manufacturers replace FPGAs with faster, cheaper chips called application-specific integrated circuits (ASICs). Reducing the size and cost of FPGAs and increasing their speed has the potential to shift the balance between FPGAs and ASICs, says Williams.