Moore’s Law, which celebrated its 40th anniversary this spring, has been the semiconductor industry’s greatest blessing. In 1965, Intel cofounder Gordon Moore projected that the number of transistors on a computer chip would keep doubling at regular intervals, a pace he later pegged at roughly every two years. At the time, a chip held just a few dozen transistors. Today, Intel’s high-end chip contains more than 1.7 billion transistors, and that number is expected to exceed 10 billion by 2012. This steady four-decade march has fueled the modern computer revolution and made Intel a tech powerhouse.
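As a sanity check on those figures, the doubling arithmetic takes only a few lines of Python. The 2005 starting point of 1.7 billion transistors comes from the article; the two-year doubling cadence is the usual formulation of Moore's Law.

```python
# Illustrative projection of Moore's Law: transistor counts doubling
# every two years. The 1.7-billion figure for 2005 is from the article;
# the rest is simple exponential arithmetic.

def projected_transistors(start_year: int, start_count: float,
                          target_year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward assuming steady doubling."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

count_2012 = projected_transistors(2005, 1.7e9, 2012)
print(f"Projected transistor count in 2012: {count_2012:.2e}")
```

Starting from 1.7 billion in 2005, three and a half doublings by 2012 yield roughly 19 billion transistors, comfortably past the 10 billion the article anticipates.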
But the ability to pack more and more transistors and other circuitry onto chips is exacerbating a host of problems that could, if they become severe enough, threaten the growth of the existing silicon-based digital economy. Just a few of the trouble areas: heat buildup, electrical currents leaking out of circuits, electrical crosstalk between neighboring wires. The latest CPUs for desktop computers, for example, consume 100 watts of power. Laptop CPUs are generally more efficient, since they’re intended to maximize battery life. But even they now consume as much as 75 watts. “It’s like putting a toaster on your lap,” says Pat Gelsinger, an Intel senior vice president. One fix that is expected to become widespread is to boost the number of transistors on a chip not by making them smaller, but simply by plunking down the same circuit pattern two or more times on the same slab of silicon. Intel released its first such “dual core” chips this spring. And Intel executives envision a future of “many core” chips, with up to a thousand processors side by side.
But there’s a rub. The copper wires that convey the stream of digital 1s and 0s into and out of a computer, and between processors in some computers, can carry only so much data so quickly. “If I double the performance [of a processor], I need to double the performance on and off the chip,” Gelsinger says. “Copper, our traditional interconnect technology, is running out of speed.”
The problem is that electrical pulses traveling through a copper wire encounter electrical resistance, which, together with the wire’s capacitance, attenuates and smears the pulses, degrading the information they carry. As a result, data bits traveling through copper must be spaced far enough apart, and move slowly enough, that devices on the other end of the wire can still pick them up. This limitation is already producing data traffic jams on local-area networks that use copper wires to connect computers, and many experts predict it will create bottlenecks for data traffic between multiple processors within individual computers. The upshot is that even if Moore’s Law continues to hold, computers will no longer be able to take full advantage of the increased power it delivers, because they won’t be able to move data onto and off chips quickly enough to keep the processors fed. It’s a fundamental challenge: computers need a faster way to move vast amounts of data both within and between chips.
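The spacing constraint can be made concrete with a standard back-of-the-envelope model that is not from the article: treat the copper trace as an RC low-pass filter, and apply two common engineering rules of thumb, a rise time of about 2.2 RC and a usable bandwidth of about 0.35 divided by the rise time. The resistance and capacitance values below are illustrative placeholders, not measurements of any real interconnect.

```python
# Toy model of a copper interconnect as an RC low-pass filter.
# Rules of thumb: rise_time ~ 2.2 * R * C, usable bandwidth ~ 0.35 / rise_time.
# The component values are illustrative, not real measurements.

def max_bit_rate(resistance_ohms: float, capacitance_farads: float) -> float:
    """Approximate the highest clean signaling rate for an RC-limited wire."""
    rise_time = 2.2 * resistance_ohms * capacitance_farads  # seconds
    return 0.35 / rise_time                                 # hertz

# A longer or thinner wire has proportionally more resistance and
# capacitance, so its maximum bit rate drops as the product R * C grows.
short_wire = max_bit_rate(50, 1e-12)     # 50 ohms, 1 pF
long_wire = max_bit_rate(500, 10e-12)    # 10x resistance, 10x capacitance

print(f"Short wire: {short_wire / 1e9:.1f} GHz-class signaling")
print(f"Long wire:  {long_wire / 1e6:.1f} MHz-class signaling")
```

Because the limit scales with the product of resistance and capacitance, a wire with ten times the resistance and ten times the capacitance supports only one hundredth the bit rate, which is why bits on copper must slow down as distances grow.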
Enter the silicon laser. Optical connections can carry thousands of times more data per second than copper wires can. But existing optical components, which are made out of such exotic semiconductors as gallium arsenide and indium phosphide, are far too expensive for use in individual computers or even local networks. If you could make optical devices out of silicon, which is cheap and, at least for a company like Intel, easy to manufacture, that would change everything. The move to silicon optics would add a basic new capability to silicon chips: the ability to manipulate and respond to light. Companies would likely exploit that capability first by replacing copper connections with optical links in networks. But eventually, silicon photonics might also replace copper wires between processors within a single chip. Chip designers also envision using silicon optics in the internal clocks that microprocessors use to pace the execution of instructions, dramatically increasing clock speeds, and thus computing speeds.
Until recently, all that speculation about the potential of silicon optics was hypothetical: suitable silicon lasers didn’t exist. But things changed last winter when the lab of Intel scientist Mario Paniccia reported the first continuous all-silicon laser. Built using the same manufacturing methods that produce silicon chips, the experimental device turned out a steady stream of infrared photons, an achievement that many researchers had believed impossible in silicon.
It’s still early days for silicon photonics. But the Intel result, which built on findings reported over the past year in a flurry of papers describing advances in silicon-based optical components, is convincing many experts that it could become practical to closely link optical and electronic technology at the computer level. The progress made by Paniccia’s team has been remarkable, says Graham Reed, a silicon-photonics pioneer at the University of Surrey in England. “Now all of the skeptics are starting to believe that silicon will have a real impact on optics.”
Anticipated advances in silicon technology will almost certainly keep Moore’s Law going for the foreseeable future, creating ever faster computers. By speeding immense amounts of data into and out of chips and between machines, silicon photonics could help people access this vast computational power.