Moore’s Law Lives Another Day

The three-dimensional transistors of Intel’s new generation of chips continue the 50-year trend of faster, more densely packed processors.
April 24, 2012

“[Gordon] Moore is my boss, and if your boss makes a law, then you’d better follow it,” says Mark Bohr, who leads Intel’s efforts to make advances in microchip design practical to manufacture. Moore’s Law, of course, was first proposed by Bohr’s boss in 1965, when Moore pointed out that the number of transistors on a chip was doubling every year. The law’s current form dates to 1975, when Moore revised the pace to a doubling every two years. Remarkably, the computer industry has maintained that pace ever since, training us in the process to expect computers to become ever faster.
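As a rough illustration, the two-year doubling rule can be written as N(t) = N0 * 2^((t - t0) / 2). The minimal Python sketch below applies it, using Intel’s 4004 (about 2,300 transistors in 1971) as a reference point; that baseline comes from outside this article and is assumed here purely for illustration.

```python
# A minimal sketch of the two-year doubling rule described above.
# The baseline (Intel's 4004: ~2,300 transistors in 1971) is an
# illustrative assumption, not a figure from this article.
def projected_transistors(year, base_year=1971, base_count=2_300):
    """Transistor count projected under a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for y in (1971, 1991, 2012):
    print(y, f"{projected_transistors(y):,.0f}")
```

For 2012 the projection lands in the billions, the same order of magnitude as the 1.4 billion transistors Intel actually fits on an Ivy Bridge die.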

After Monday’s launch of Intel’s newest line of processors, named Ivy Bridge, Moore’s prediction is still looking sound. The chips are the first available from any company with features as small as 22 nanometers (the finest details on other current chips are 32 nanometers), allowing transistors to be made smaller and packed more densely. Ivy Bridge chips offer 37 percent more processing speed than the previous generation, or can match its performance while using just half the energy.

Transistors on an Ivy Bridge processor are packed considerably more densely than on the most recent line of Intel chips: 1.4 billion on a 160-square-millimeter die, versus 1.16 billion on a 212-square-millimeter die. Upholding Moore’s Law like that required a significant redesign of the transistor, the tiny electronic switch that digital chips are built from. Existing transistor designs, little changed in decades, could not simply be shrunk to 22-nanometer features: they would become leaky, letting some current flow even when a transistor is switched off. Intel got around that by adding an extra dimension to its transistors, which for decades have been made as a stack of flat layers of material, one on top of another.
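Those die figures can be sanity-checked directly, as in the short sketch below, which uses only the numbers quoted above. The per-die gain works out to about 1.6 times, while the shrink from 32- to 22-nanometer features by itself roughly doubles achievable density, since density scales with the inverse square of the feature size.

```python
# Back-of-the-envelope check using the die figures quoted above.
ivy = 1.40e9 / 160   # Ivy Bridge: transistors per square millimeter
old = 1.16e9 / 212   # previous generation, for comparison

print(f"Ivy Bridge: {ivy:,.0f} transistors/mm^2")   # ~8,750,000
print(f"Previous:   {old:,.0f} transistors/mm^2")   # ~5,470,000
print(f"Per-die density gain: {ivy / old:.2f}x")    # ~1.60x

# The 32 -> 22 nm feature shrink alone roughly doubles density,
# since density scales as the inverse square of feature size:
print(f"Feature-shrink factor: {(32 / 22) ** 2:.2f}x")  # ~2.12x
```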

A transistor’s basic design comprises separate electrodes for incoming and outgoing current, known as the source and drain; material connecting the two, known as the channel; and a third electrode known as the gate, which controls the flow of current. Rather than being a flat layer, the channel of Intel’s reinvented transistors is a long “fin” that protrudes up into the gate electrode above, creating a more intimate electrical connection between the layers. Intel refers to its three-dimensional transistors as having a “tri-gate” design.

Similar designs were first proposed in Japan in the 1980s and were developed over many years at the University of California, Berkeley, starting in the 1990s. Intel began investigating the design around 2000, says Bohr, and committed to using it in 2008. “It’s one thing to make a lab device, but a very different thing to make sure it can produce chips at low cost and high volume,” says Bohr. He says Intel is reusing many existing factory processes; as a result, patterning a silicon wafer with Ivy Bridge designs costs only around 2 percent more than it did for Intel’s previous generation of chips.

Intel’s launch of desktop Ivy Bridge chips this week leaves it technologically ahead of its competitor AMD, which has no public plans to adopt three-dimensional transistors or 22-nanometer technology. Laptop versions of the new chips are due in the summer, but more important to Intel may be the potential for the technology to help it break into the market for the energy-efficient processors needed in tablets and smart phones.

Intel’s three-dimensional transistors will debut in the company’s Atom line of mobile processors in 2013. Intel wants those chips to power smart phones and tablets, and has signed deals with Lenovo and Motorola toward that end.

As for the future prospects for Moore’s Law, Bohr says that his group is already working on manufacturing processes for a version of the three-dimensional transistors with 14-nanometer features, scheduled for production in 2014. “It’s becoming more challenging, but I don’t see the end [to Moore’s Law],” says Bohr.
