MIT Technology Review

In the mid-’90s, Blinder says, “the rate of computer deflation moved from minus 10 percent to minus 25 percent per annum. And although the computer industry is a small fraction of GNP (less than 2 percent), the drop in costs has been so severe that as a matter of arithmetic it knocks a noticeable piece off the overall price index.” In fact, the recent declines in the price of computers are so big that Gordon, the economist at Northwestern, argues that they largely explain the bump in productivity: outside of durable goods manufacturing, he contends, the economy is stagnant.
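Blinder’s “matter of arithmetic” can be made concrete with a back-of-the-envelope sketch. The 2 percent share and minus 25 percent deflation rate come from the passage; the 2 percent inflation assumed for the rest of the economy, and the simple share-weighted index, are illustrative assumptions, not official statistical methodology.

```python
# Stylized share-weighted price index: how a small sector's steep
# price decline drags on the aggregate. Computer figures are from
# the passage; the rest-of-economy rate is an assumed placeholder.

def aggregate_inflation(sectors):
    """Share-weighted sum of (share, inflation_rate) pairs."""
    return sum(share * rate for share, rate in sectors)

computers = (0.02, -0.25)   # ~2% of GNP, prices falling 25%/year
rest      = (0.98,  0.02)   # hypothetical 2% inflation elsewhere

overall = aggregate_inflation([computers, rest])
drag = computers[0] * computers[1]   # computers alone: -0.5 points

print(f"overall inflation: {overall:+.3%}")   # +1.460%
print(f"computer drag:     {drag:+.3%}")      # -0.500%
```

Even at a 2 percent weight, a minus 25 percent sector shaves half a percentage point off the aggregate index — which is the “noticeable piece” Blinder describes.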

Gordon’s argument is “too extreme,” in the view of Chris Varvares, president of Macroeconomic Advisers, an economic modeling firm in St. Louis. “Why would business invest in all this equipment if they didn’t have the expectation of a return? And since it’s gone on for so long, why wouldn’t they have the reality?” Instead, he says, computers and the Internet are finally paying off in ways that statistics can measure. When banks introduce automated teller machines, the benefits don’t show up in government statistics. Bank customers are better off, because they can withdraw and deposit money at any time and in many more places. But the bank itself is still doing what it did before. “The benefits are captured by consumers, and don’t show up in the bottom line as output,” says Varvares. Only recently, he argues, did computers hit a kind of critical mass; workers had so much digital power on their desks that it muscled its way into the statistics.

Not every economist agrees. “You’d like to be able to tell yourself a story about how something could be true,” Thurow says. “In this case, are we saying that people suddenly figured out how to use computers in 1996?” No, other economists say, but businesses do need time to accommodate new technologies. Electricity took more than two decades to exert an impact on productivity, according to Stanford University economic historian Paul A. David. Computers simply encountered the same lag. But by now, Brynjolfsson says, “computers are the most important single technology for improving living standards. As long as Moore’s Law continues, we should keep getting better off. It will make our children’s lives better.”

The explosion in computer power has become so important to the future, these economists say, that everyone should be worried by the recent reports that Moore’s Law might come to a crashing halt.

The end of Moore’s Law has been predicted so many times that rumors of its demise have become an industry joke. The current alarms, though, may be different. Squeezing more and more devices onto a chip means fabricating features that are smaller and smaller. The industry’s newest chips have “pitches” as small as 180 nanometers (billionths of a meter). To accommodate Moore’s Law, according to the biennial “road map” prepared last year for the Semiconductor Industry Association, the pitches need to shrink to 150 nanometers by 2001 and to 100 nanometers by 2005. Alas, the road map admitted, to get there the industry will have to beat fundamental problems to which there are “no known solutions.” If solutions are not discovered quickly, Paul A. Packan, a respected researcher at Intel, argued last September in the journal Science, Moore’s Law will “be in serious danger.”
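The roadmap targets quoted above imply a punishing pace. A quick calculation shows the compound annual shrink rate each step requires, and the density gain it buys (device density scales roughly as one over the pitch squared). The pitch figures are those in the passage; the 1999 starting point for the 180-nanometer generation is an assumption.

```python
# Implied shrink rates from the roadmap numbers in the passage:
# 180 nm -> 150 nm by 2001, then 150 nm -> 100 nm by 2005.
# Density is approximated as proportional to 1/pitch^2.

def annual_shrink(p0, p1, years):
    """Compound annual rate of pitch change (negative = shrinking)."""
    return (p1 / p0) ** (1 / years) - 1

steps = [(180, 150, 2), (150, 100, 4)]   # (from_nm, to_nm, years)
for p0, p1, yrs in steps:
    rate = annual_shrink(p0, p1, yrs)
    density_gain = (p0 / p1) ** 2
    print(f"{p0} -> {p1} nm over {yrs} yr: "
          f"{rate:+.1%}/yr pitch, {density_gain:.2f}x density")
```

Both steps demand pitches shrinking roughly 9 to 10 percent a year, every year — the treadmill the roadmap says has “no known solutions” to stay on.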

Packan identified three main challenges. The first involved the use of “dopants,” impurities that are mixed into silicon to increase its ability to hold areas of localized electric charge. Although transistors can shrink in size, the smaller devices still need to maintain the same charge. To do that, the silicon has to have a higher concentration of dopant atoms. Unfortunately, above a certain limit the dopant atoms begin to clump together, forming clusters that are not electrically active. “You can’t increase the concentration of dopant,” Packan says, “because all the extras just go into the clusters.” Today’s chips, in his view, are very close to the maximum.

Second, the “gates” that control the flow of electrons in chips have become so small that they are prey to odd, undesirable quantum effects. Physicists have known since the 1920s that electrons can “tunnel” through extremely small barriers, magically popping up on the other side. Chip gates are now smaller than two nanometers, small enough to let electrons tunnel through them even when they are shut. Because gates are supposed to block electrons, quantum mechanics could render smaller silicon devices useless. As Packan says, “Quantum mechanics isn’t like an ordinary manufacturing difficulty; we’re running into a roadblock at the most fundamental level.”
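The textbook physics behind this leakage can be sketched with a standard estimate: for a thin rectangular barrier, the tunneling probability falls off exponentially with thickness. This is a simplified model, not Packan’s analysis; the 3-electron-volt barrier height is an assumed, roughly oxide-like illustrative value.

```python
import math

# Rectangular-barrier tunneling estimate: T ~ exp(-2 * kappa * d),
# with kappa = sqrt(2 * m * V) / hbar. Barrier height V = 3 eV is an
# assumed illustrative value, not a figure from the article.

HBAR = 1.0545718e-34   # J*s
M_E  = 9.1093837e-31   # electron mass, kg
EV   = 1.6021766e-19   # J per eV

def tunneling_prob(barrier_ev, thickness_nm):
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

for d in (3.0, 2.0, 1.0):
    print(f"{d:.1f} nm barrier: T ~ {tunneling_prob(3.0, d):.2e}")
```

Halving the barrier thickness doesn’t halve the leakage — it multiplies it by many orders of magnitude, which is why gates below two nanometers stop blocking electrons reliably.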
