The Very Large Impact of the Very Small

A Bell Labs veteran looks into the future at the start of the microelectronics age.
April 20, 2010

Every year since 1960, when the integrated circuit industry began, the number of components per chip has about doubled. This phenomenal rate of progress has brought us to the era of “very large scale integrated” (VLSI) circuits. Today, over 150,000 components can be fabricated and interconnected on a single silicon chip about one-tenth the size of a postage stamp, and the number of components per chip can be expected to grow dramatically for at least another 10 to 15 years.

Jam-Packed: This state-of-the-art (for 1982) Intel chip held 134,000 transistors.

So it went in 1981, when John S. Mayo, a researcher and executive at Bell Labs who was part of the team that built the first computer to use transistors instead of vacuum tubes, described how VLSI was making powerful microelectronics possible. What Mayo described is known as Moore’s Law–the prediction, in 1965, by Intel cofounder Gordon Moore that the number of components on a chip would double every two years. (Initially, Moore predicted an annual doubling.) In the 29 years since Mayo’s essay, Moore’s Law has held up nicely: the latest chips feature more than two billion components.
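The arithmetic behind that claim is easy to check. A minimal sketch, using the figures given in this article (134,000 transistors on the 1982 Intel chip, a doubling every two years, and the 29 years between Mayo's essay and this piece); the round starting count is the only input not taken directly from the text above:

```python
# Project Moore's Law forward from the early-1980s Intel chip mentioned above.
transistors_1981 = 134_000       # components on the state-of-the-art 1982 chip
doubling_period_years = 2        # Moore's revised (1975) doubling period
years_elapsed = 29               # from Mayo's 1981 essay to this 2010 article

doublings = years_elapsed / doubling_period_years   # 14.5 doublings
projected = transistors_1981 * 2 ** doublings

print(f"Projected count after {years_elapsed} years: {projected:,.0f} transistors")
# The projection lands in the low billions, consistent with the "more than
# two billion components" on the latest chips.
```

The compounding is so steep that the exact starting point barely matters: being off by a factor of two at the start shifts the answer by only one doubling period.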

Even then, Mayo saw that the microprocessor industry would sustain its pace of innovation: engineers could use the power of the computer chips themselves to design chips that were even more powerful. Complex, efficient computers could be used to design computers that were more complex and efficient still.

A primary example is computer-aided design (CAD). In the last five years, integrated circuits have become so complex that without the advanced analyses and extensive simulation techniques available through CAD, it would be virtually impossible to design VLSI chips. But with CAD, even complex circuits can be designed in a few months with no real increase in labor.

The advances in fabrication and design drove down the cost of a digital logic gate to the point where the average person in 1981 could buy a cheap pocket calculator to do the same work that previously would have required a huge, expensive computer. Microprocessors were on their way to becoming ubiquitous.

Some discrete high-frequency components have been built with submicron dimensions, and there are indications that dimensions in the tens of nanometers (one-billionth of a meter) are technologically feasible. Such dimensions are 100 times smaller (10,000 times smaller in area) than the current chips and may lead to packing more than 1 billion components on each chip.
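The scaling in that passage follows from simple geometry: shrinking a linear feature size by a factor k shrinks the area each component occupies by k², so component density rises by k². A quick sketch using round numbers consistent with the quote (the specific feature sizes here are illustrative assumptions, not figures from the article):

```python
# Linear shrink vs. area shrink. Feature sizes are assumed round numbers:
# roughly micron-scale in 1981 vs. the tens-of-nanometer dimensions
# Mayo calls technologically feasible.
feature_1981_nm = 3_000    # ~3 microns (illustrative)
feature_future_nm = 30     # tens of nanometers (illustrative)

linear_shrink = feature_1981_nm / feature_future_nm   # 100x smaller dimensions
area_shrink = linear_shrink ** 2                      # 10,000x smaller area

components_1981 = 150_000  # per-chip figure from the article's opening
implied_capacity = components_1981 * area_shrink

print(f"{linear_shrink:.0f}x smaller dimensions, "
      f"{area_shrink:.0f}x more components per unit area")
print(f"Implied capacity: {implied_capacity:,.0f} components per chip")
# 150,000 x 10,000 = 1.5 billion, in line with Mayo's "more than
# 1 billion components on each chip."
```

The square relationship is why Mayo's forecast of a billion components required only a hundredfold reduction in dimensions, not a millionfold one.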

Mayo had joined Bell Labs in 1955, eight years after the first transistor had been invented there, so he had an insider’s view of what cramming a billion transistors onto a chip would mean. He was especially excited about distributed computing–that is, networking–and telecommunications.

VLSI will make possible systems even more complex in hardware and simpler in software, opening a new area of software science–distributed software systems, wherein a whole family of computers of many sizes operates under centralized software control. …

The impact of microelectronics on communications is nearly as profound. … Electronic switchers now make it possible to forward calls automatically, to reach frequently called numbers through abbreviated dialing codes, to notify users of other incoming calls, and to conduct three-way conference calls.

The full implications for communications were beyond anything Mayo could have predicted. Cell phones, wireless networks, downloadable apps–in other words, many of the things we take for granted (see Briefing)–were all made possible by the increasing power and availability of microprocessors. And yet Mayo clearly sensed the developments to come, even pointing out an early foray by the media into interactive content:

Other communications concepts are on the horizon. Advanced Mobile Telephone Service can provide service to large numbers of people in vehicles and is working well on a trial basis in Chicago. And the VIEWTRON system, on trial in Coral Gables, Fla., enables one to display on a television screen some 15,000 “frames” of information–some interactive–transmitted via telephone lines from a data bank.
