Intel’s new three-dimensional transistor design, announced early this week, is the culmination of more than a decade of research and development work that began in a lab at the University of California, Berkeley, in 1999.
The 22-nanometer transistors, which Intel says will make chips 37 percent faster while using half the power, will be used for every element on the company’s 22-nanometer chips, including both the logic and memory circuits. Processors that use the “tri-gate” transistors have been demonstrated in working systems, and the company will begin volume production in the second half of this year. It’s unclear just how device makers will take advantage of the chips, but they’re likely to enable longer battery life and greater sophistication in portable devices, as well as faster processing for desktops and servers.
Intel turned to the new design because conventional planar transistors have begun running up against a performance roadblock. A conventional transistor is made up of a metal structure called a gate that’s mounted on top of a flat channel of silicon. The gate controls the flow of current through the channel from a source electrode to a drain electrode. With every generation of chips, the channel has gotten smaller and smaller, enabling companies like Intel to make faster chips by packing in more transistors. But it has become more difficult for the gate to fully cut off the flow of current. Leaky transistors that don’t turn off completely waste power.
The tri-gate transistors use rectangular silicon channels that stick up from the surface of the chip, allowing the gate to contact the channel on three sides, instead of just one. This more intimate contact means the gate can turn the transistor off nearly completely even at the 22-nanometer scale, which is responsible for the energy-efficiency gains in Intel’s new chips. It’s also possible to make tri-gate transistors with more than one silicon channel connected to each gate in order to increase the amount of current that can flow through each transistor, enabling higher performance.
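The effect of better gate control can be seen with the standard exponential subthreshold model: below its switching threshold, a transistor’s current drops by one decade for every “subthreshold swing” millivolts of gate voltage, and a gate that wraps the channel on three sides achieves a swing closer to the ~60 mV/decade room-temperature limit. The sketch below uses purely hypothetical numbers (not Intel’s figures) to show how even a modest improvement in swing cuts off-state leakage dramatically:

```python
# Illustrative model only -- the current values and swings below are
# invented for the example, not taken from Intel's 22-nanometer parts.

def off_current(i_at_threshold_uA, v_threshold_mV, swing_mV_per_decade):
    """Leakage (microamps) with the gate at 0 V, i.e. v_threshold_mV below
    threshold, using the standard exponential subthreshold model: current
    falls by one decade per 'swing' millivolts of gate voltage."""
    decades_of_suppression = v_threshold_mV / swing_mV_per_decade
    return i_at_threshold_uA * 10 ** (-decades_of_suppression)

# Hypothetical comparison: a planar gate with weakened electrostatic
# control (~100 mV/decade) versus a three-sided gate closer to the
# ~60 mV/decade ideal, both 300 mV below threshold when "off."
planar = off_current(1.0, 300, 100)
trigate = off_current(1.0, 300, 70)

print(f"planar leakage:   {planar:.2e} uA")
print(f"tri-gate leakage: {trigate:.2e} uA")
print(f"leakage reduced roughly {planar / trigate:.0f}x")
```

The point of the exercise: because leakage is exponential in the swing, wrapping the gate around the channel pays off far more than the geometry alone suggests, which is why Intel can claim large efficiency gains without new materials.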
Intel didn’t invent this transistor design, but the company is the first to get it into production. If Intel had stuck with planar transistors in the move from 32 to 22 nanometers, the chips would have shown gains of only 20 to 30 percent in efficiency and performance, says industry analyst Linley Gwennap. There had been speculation that the company would use the new transistor design for memory elements but not logic, and so would not completely eliminate planar transistors. By using the tri-gate technology for both memory and logic, says Gwennap, “Intel is really swinging for the fences and seeing a large improvement in performance, which could be a huge advantage” over its competitors.
These three-dimensional transistors were first imagined and built by three researchers at the University of California, Berkeley, in the late 1990s, in response to a call from the United States Defense Advanced Research Projects Agency for designs that would allow transistors to scale below 25 nanometers, an order of magnitude smaller than the ones in production at the time. Chenming Hu wrote out the technical specs for the new transistor on a plane ride to Japan in 1996. A Berkeley group made up of Hu, Jeffrey Bokor, and Tsu-Jae King Liu first made these transistors, which they called FinFETs, in 1999.
“It was an instant hit,” says Hu. The university opted to release the intellectual property into the public domain instead of patenting it; as the Berkeley researchers kept refining the designs, Hu presented the work at several companies, including Intel. By 2002, the FinFET and a second Berkeley design, known as “silicon on insulator,” were the devices favored by the International Technology Roadmap for Semiconductors as the technologies likely to meet the industry’s needs over the next 15 years. But at Intel, at least, the FinFET pulled ahead of the second design, which relies on adding a very thin layer of silicon to a transistor. Until about two years ago, the companies that make silicon wafers weren’t able to make that active layer thin enough. The French company Soitec can now manufacture the necessary wafers, and Hu says Intel’s competitors may at some point adopt this alternate design.
Getting the promising three-dimensional device design out of the lab and into production took about a decade. Intel hasn’t disclosed many details of the fab upgrades needed to make the new transistors, but since no new materials or machines are apparently required, and the company promises a marginal increase in production cost of just 2 to 3 percent, the changes appear to be minor. The company has said that making the three-dimensional channels requires only an extra etching step.
Hu says the Berkeley researchers decided from the start that their new design would have to be compatible with the industry’s existing infrastructure, and that has proved to be the case. The main hurdle in getting the technology ready for volume production, says Hu, was likely dealing with reliability: getting the dimensions of the very thin channel under control when billions of them must be made on every single wafer.
Hu says the Berkeley group designed these transistors so that they would not require circuit designers to completely redesign chip architectures. That’s part of the reason why Intel can get products out so quickly. Hu’s group has been working on circuit-simulation tools for the tri-gate transistors for the past five years.
Still, circuit designers see new opportunities that could open up with these transistors. They offer new ways of tuning the behavior of individual gates, which “gives designers new knobs to play with in order to further improve power efficiency and reliability,” says Subhasish Mitra, professor of electrical engineering and computer science at Stanford University. Seeing a totally new transistor go into volume production within the span of about a decade is an encouraging sign that the industry “is not stale” and that good technology ideas can still make it out of academic labs, Mitra adds.