
The Path Not Taken

The founders of modern computer technology contemplated asynchronous design as early as 1946. But these early computer engineers chose instead to go with a clock. “At the time, it was the right choice,” says Jo Ebergen, a senior staff engineer at Sun who works in an asynchronous research group headed by Sun fellow and vice president Ivan Sutherland. (In 1989, Sutherland, best known as a pioneer in computer graphics, wrote a paper that nearly single-handedly reignited interest in clockless-chip technology.) “The circumstances in which they had to design, using vacuum tubes and relay circuits, meant that they really couldn’t build a reliable computer without a clock governing the whole thing,” he adds. By using a clock, engineers could build in fail-safe measures that made computers reliable even when the parts they were made from weren’t.

From that first choice came the steamroller effect of Moore’s Law, wherein nearly all research, development and production in the semiconductor industry has focused on clocked chips. By the 1960s, the notion of clockless chips had all but disappeared, kept alive only by an esoteric paper or two coming out of universities.

In today’s chips, therefore, the clock remains the key part of the action. As a microprocessor performs a given operation, electronic signals travel along microscopic strips of metal, forking, intersecting again, encountering logic gates, until they finally deposit the results of the computation in a temporary memory bank called a register. Let’s say you want to multiply 4 by 6. If you could slow down the chip and peek into the register as this calculation was being completed, you might see the value changing many times, say, from 4 to 12 to 8, before finally settling into the correct answer, 24. That’s because the signals transmitted to perform the operation travel along many different paths before arriving at the register; only after all signals have completed their journey is the correct value assured. The role of the clock is to guarantee that the answer will be ready at a given time. The chip is designed so that even the slowest path through the circuit, the path with the longest wires and the most gates, is guaranteed to reach the register within a single clock tick.
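
To make the idea concrete, here is a minimal Python sketch of the same principle. Three logic paths with made-up delays feed a single register, and the clock period is set to cover the slowest path, so whatever the register holds at the tick is the settled answer. The path names, delays, transient values and the 4-times-6 example are illustrative assumptions, not real chip timings.

```python
# Toy model of a clocked design: three logic paths with different propagation
# delays feed one register. The clock period is chosen to cover the slowest
# path, so the value present at the clock tick is always the settled one.
# All delays and transient values are illustrative, not real chip behavior.

path_delays = {"short": 3, "medium": 5, "long": 7}   # arbitrary time units
clock_period = max(path_delays.values())             # cover the worst case

def multiply_one_cycle(a, b):
    """Watch the register 'settle' over one clock cycle of a toy multiplier."""
    correct = a * b
    # Pretend the register shows meaningless intermediate values until the
    # slowest path has arrived -- like the 4, 12, 8 sequence described above.
    transients = [a, a * 3, a * 2]
    for t in range(clock_period + 1):
        arrived = [p for p, d in path_delays.items() if d <= t]
        value = correct if len(arrived) == len(path_delays) else transients[len(arrived)]
        print(f"t={t}: paths arrived={arrived or 'none'}, register shows {value}")
    # Only at the clock tick (t == clock_period) is the value guaranteed correct.
    return correct

assert multiply_one_cycle(4, 6) == 24
```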

With a central timepiece governing the action, engineers don’t have to worry about the varying lengths of millions of infinitesimally small wires; signals can arrive at the register in any order, as long as they all settle in before the clock next ticks. Teams of hundreds of engineers can coordinate their work around the unifying principle of the clock. And we all benefit: the discipline of clock-based design has enabled the magic of exponential growth in chip performance to endure for more than 30 years. “The clock has to go down as one of the most brilliant ideas in design,” says Kevin Normoyle, a Distinguished Engineer at Sun who works on the design of Sun’s Sparc microprocessors. “It’s so simple, and yet it’s an approach that has scaled up and now works for millions of transistors.”

But after a point, cranking up the clock speed becomes an exercise in diminishing returns. That’s why a one-gigahertz chip doesn’t run twice as fast as a 500-megahertz chip. The clock, through the work it must do to coordinate millions of transistors on a chip, generates its own overhead, and the faster the clock, the greater that overhead becomes. The clock in a state-of-the-art microprocessor can consume up to 30 percent of the chip’s computing capability, a share that grows ever faster as clock speeds rise. It’s as if a factory became overrun with stopwatch-wielding supervisors who improved efficiency but also took up more and more of the space needed by the workers and machines.

Clocked chips are becoming serious power hogs, too: coordinating tens of millions of transistors at a billion ticks per second consumes a great deal of energy, most of which ends up as heat. Patrick Gelsinger, chief technology officer at Intel, referred to the problem in his keynote speech at the International Solid-State Circuits Conference last February. Gelsinger was only half-joking when he said that if microprocessors continue to be run by ever faster clocks, then by 2005 a chip would run as hot as a nuclear reactor.

Perhaps the most pressing problem with conventional microprocessors, though, is that you can only speed up the chip’s clock so much before banging into some inconvenient physical realities. In today’s one-gigahertz chips, electronic pulses signifying binary ones and zeroes can, just barely, make it across the chip within a single beat of the clock. But in the two-gigahertz chips expected to arrive in the next couple of years, that will no longer be true. The role the clock plays now, synchronizing all the work on a chip, will begin to break down.
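
For a rough sense of the numbers, here is a back-of-envelope sketch. The effective on-chip signal speed and die width below are loose assumptions chosen only to illustrate the scaling; real wire delays are dominated by RC effects and routing, not a single propagation speed.

```python
# Back-of-envelope check: how far can a signal travel in one clock period?
# The speed and die width are assumptions for illustration only.

signal_speed_cm_per_ns = 2.0   # assumed effective on-chip propagation speed
die_width_cm = 2.0             # assumed die width a signal must cross

for clock_ghz in (0.5, 1.0, 2.0):
    period_ns = 1.0 / clock_ghz
    reach_cm = signal_speed_cm_per_ns * period_ns
    verdict = "crosses the die" if reach_cm >= die_width_cm else "falls short"
    print(f"{clock_ghz} GHz: period {period_ns:.1f} ns, "
          f"signal travels ~{reach_cm:.1f} cm -> {verdict}")
```

Under these assumptions, a one-gigahertz clock period is just long enough for a signal to cross the die, while a two-gigahertz period is not, which is the squeeze the paragraph above describes.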
