
Computing After Silicon

How will computers be built after 2015? Hewlett-Packard’s Stan Williams thinks he has a good recipe. It’s not perfect, but that’s the beauty of it.
September 1, 1999

Four years ago, UCLA chemistry professor R. Stanley Williams and computer giant Hewlett-Packard (HP) made mid-career changes at the same time. The company had grown into one of the world’s leading computer and microprocessor makers, but it still didn’t have a fundamental research group. Williams had spent the previous fifteen years in academia and feared he was losing contact with the realities of the business (earlier in his career he had worked for several years at Bell Laboratories). The solution: a basic research lab at HP directed by Williams.

As head of the lab, Williams is chiefly concerned with the future of computing. The progressive miniaturization of silicon-based integrated circuits has led to smaller, cheaper, more powerful machines. State-of-the-art chips now have features as small as several hundred nanometers across (a nanometer is a billionth of a meter). That’s small. But according to Williams’ calculations, the ability to keep shrinking silicon-based devices is likely to grind to a halt somewhere around 2010. Such predictions are hardly shocking; other Silicon Valley experts have reached similar conclusions. What is surprising is that Williams believes he and his collaborators at HP and UCLA have hit on a solution: a viable heir to silicon.

If Williams is right, computing will one day rely on nanometer-scale components cheaply and easily assembled using simple chemistry. Instead of today’s technique of precisely carving features onto silicon chips to create complex and near-perfect patterns, technicians will dip substrates into vats of chemicals. And if the mix is right, wires and switches will chemically assemble themselves from these materials. That would make possible tiny, inexpensive and immensely powerful computers. It is a fascinating vision. Then again, Silicon Valley (and the popular press) is full of fascinating visions of the future of computing. What makes the concoctions Williams is cooking up at HP more compelling is that they’re not just ideas. Last year Williams and his co-workers published a report in Science describing a computer architecture that could make chemically assembled circuits feasible, and this July the group published a second Science paper, this time describing the synthesis of a first potential component of their computer: molecular electronic switches. The results made headlines in newspapers around the country.

In the weeks before the media frenzy, TR Senior Editor David Rotman chatted with Williams about computing after silicon, basic research in high-tech corporations, and his own personal transition from the university to the private sector.

TR: You came to HP in 1995 to establish a basic research lab after being a professor at UCLA. What was your mission?
WILLIAMS: Hewlett-Packard never really had a basic research group. In the past, there had been discussions within HP in which people said, we really ought to be doing more basic research, we really ought to be somehow returning knowledge to the well: those kinds of philosophical discussions. And there were always a few people doing some fundamental work. But HP realized that it had to create a separate group, more isolated from the daily demands of product research, to sustain the effort. I was contacted and asked if I would be interested in trying to bootstrap up a basic research group. I firmly believed, and in fact I believe even more strongly now, that fundamental research has real value for a corporation.

TR: How do you demonstrate that value?
WILLIAMS: There are several ways. One is to provide a vision for what electronics and computing are going to look like in a 10-year time frame. We also act as a technology radar. We often hear about developments before the people in the trenches, and we can alert them that there are interesting opportunities or perhaps threats that are coming along. Also, we’re working on such fundamental issues that if we do succeed the payoff for the company is going to be enormous. And they know it. Every intelligent investment portfolio has a few long shots.

TR: Have things worked out as you expected since starting up the lab?
WILLIAMS: When I came to HP, I had very nebulous ideas about the electronics of the future. Now we have a roadmap. That has been amazing. There are a couple of things that haven’t worked out as I expected. I had hoped to have several joint research projects with the more applied labs. Even though the researchers themselves are interested in working with us and their managers encourage them to do so, when people have deadlines to meet, those collaborations can’t be sustained. Another issue is that we’ve been in competition for funding with a lot of economically crucial projects and so basic research has not grown as fast as was envisioned when I was hired. We’re just starting to grow a little bit.

TR: How well is the high-tech industry doing in carrying out basic research? Is it achieving the right balance of providing for fundamental science while watching out for the bottom line?
WILLIAMS: In general, no. In today’s viciously competitive environment, any high-tech company can go bankrupt within three years, or considerably less now that the industry runs on Internet time. It’s very difficult to pay attention to the long term, which for the boards of directors of some companies is the quarter after next. Even in corporate research labs, the pressure to align more closely with product divisions, shorten research and development cycles, and fight day-to-day fires has collapsed the view of most managers and researchers to just a few years out.

TR: What does that mean for the computer industry?
WILLIAMS: I think that having a strong basic research component in a corporate laboratory is becoming a strategic advantage. This is especially the case for the high-tech companies that depend on advances in electronics. There will be a huge economic reward for the companies and countries that are successful in harnessing nanometer-scale structures and quantum phenomena for computation, communication and measurement applications. These are all still at the level of basic research, but they will be the foundations of technology long before I am ready to retire. Companies that are not keeping up with the developments will not be able to catch up later. The Fortune 100 will look much different in ten years than it does now, and a significant differentiator will be investments in basic research.

TR: Let’s talk about the future of computing more specifically. You often refer to the limits of silicon-based computing. What are those limits?
WILLIAMS: There are two very different issues facing the semiconductor industry over the next decade. One is economic. The cost of building factories to fabricate each new generation of silicon chip has been increasing by a factor of about two every three years. A $10 billion fabrication plant, or “fab,” is not far off. By 2010 a fab is likely to cost $30 billion. The second issue, which is one of the main reasons for the first, is that silicon-based transistors are starting to run into fundamental physics and materials limitations as they get smaller and smaller. For example, the number of electrons used to switch a field-effect transistor (the mainstay of today’s computers) on and off is getting down into the hundreds; as that number drops much lower, statistical fluctuations could randomly turn the device on and off. There are also the issues associated with the physics of traditional lithography [the use of light to etch patterns on silicon chips], such as how to accurately position wafers with a precision of a few nanometers. Each of these problems has a technological fix that can squeeze out one or two more generations of shrinkage, but the fact that so many issues now have to be addressed simultaneously is nearly overwhelming.
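(Williams’s cost arithmetic is simple exponential growth, and it is easy to check. Here is a back-of-the-envelope Python sketch; the 1999 baseline of roughly $2.4 billion is an illustrative assumption chosen so the curve passes through the figures he quotes, while the three-year doubling period is his.)

    # Fab-cost projection under Williams's rule of thumb:
    # cost doubles roughly every three years.
    BASE_YEAR = 1999
    BASE_COST_BILLIONS = 2.4  # illustrative assumption, not a figure he quotes
    DOUBLING_YEARS = 3.0

    def fab_cost(year):
        """Projected fab cost in billions of dollars."""
        return BASE_COST_BILLIONS * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

    for year in (2005, 2010):
        print(year, round(fab_cost(year)))
    # 2005 -> 10  (the "$10 billion" fab that is "not far off")
    # 2010 -> 30  (his $30 billion estimate)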

TR: Will silicon-based technology suddenly hit a wall?
WILLIAMS: From the physics standpoint, there are no reasons why the industry can’t get down to devices as small as 50 nanometers. But the problem is that getting there is becoming more and more challenging and more and more expensive. Rather than try to play the game, many companies will make an economic decision that they’re not going to make state-of-the-art chips. I’ve been preaching this for some time, and even I’m surprised at how fast this is happening. National Semiconductor (here’s a company with “semiconductor” right in its name) is not going to make next-generation microprocessors anymore. In fact, Hewlett-Packard announced recently that it will have its advanced processors built in a foundry (foundries are fabs that produce devices on a contract basis). Eventually there will be one or two fabs in the world building devices at the state of the art, and those fabs will probably be financed in large part by governments. Which means it probably won’t happen in the United States.

TR: And at this rate, how long will that take?
WILLIAMS: My guess is that it will be before 2012. It’s a big game of chicken. Who’s willing to spend the money for a new fab?

TR: How will rapidly rising production costs, and the resulting exit of companies from manufacturing, affect microelectronics?
WILLIAMS: The prices for the items we are buying today will not go up substantially, but we will not see the dramatic improvements in performance and decreases in cost for silicon-based devices that we have seen in the past. And the fact that so many big companies are getting out of silicon process research will definitely hurt innovation in microelectronics for a while. However, this is also going to open the door for a lot of small-scale entrepreneurs and inventors looking to create entirely new electronic devices and fabrication processes. I think the next decade will provide one of the greatest explosions of creativity we have seen since the invention of the transistor.

TR: You have predicted that, at the current rate of shrinkage, silicon-based devices will start to reach fundamental limits around 2010. In terms of finding and developing new technologies to replace silicon, it’s really not that far in the future, is it?
WILLIAMS: It’s frighteningly close. There is not yet a definite heir to silicon technology. To have a new technology ready by then, we have to be working hard right now. At HP, we have what we think is a pretty good candidate, but I think that technology and the future economics of this country would be a lot better off if there were more than one heir, if there were several groups with unique ideas competing. There are a few good ideas out there, but not enough.

TR: I’m surprised that there are not more, given what’s at stake.
WILLIAMS: A lot of the research is at the level of discrete devices. But there’s very little architectural-scale work going on. Instead of looking at discrete basic units, we’re looking at the function of an entire circuit.

TR: Rather than trying to make things at a nanometer scale and then worrying about how you might be able to use them, you already have in mind…
WILLIAMS: A potential overall structure. Most of the people who are working in this area are essentially trying to figure out how to make a molecular analogue of an existing electronic device; then they’re hoping they’ll figure out how to connect all these things to make a circuit or a system. People are essentially working hard to make a single brick and hoping that once they make it they can figure out how to build something out of it. On the other hand, we have the architectural drawing of the entire building, and we’re looking for the best materials to construct that building.

TR: Your ambition is to use this blueprint to build an entirely new type of computer, one fabricated using chemistry rather than lithography, isn’t it?
WILLIAMS: Our goal is to manufacture circuits in simple chemical fume hoods using beakers and normal chemical procedures. Instead of making incredibly complex and perfect devices that require very expensive factories, we would make devices that are actually very simple and prone to manufacturing error. They would be extraordinarily inexpensive to make, and most of the economic value would come in their programming.

TR: It seems slightly counterintuitive that the way to make microelectronics even smaller and more powerful is to allow them to be defective.
WILLIAMS: A year ago we published a paper in Science in which we laid out what would be required to make a computer using chemical assembly. The answer was that you need a computing architecture that allows the system to have a lot of manufacturing defects, a lot of mistakes. We call that architecture defect-tolerant. We discussed an example of a computer that has been built here at Hewlett-Packard called Teramac. This is our computer archetype; we think that, in the future, machines based on molecular-scale or nanometer-scale objects will have to adopt these defect-tolerant designs as one of their organizing principles, because it’s going to be impossible to make such small things perfectly.

TR: Tell us a little about the origins of your interest in Teramac.
WILLIAMS: James Heath, a UCLA chemistry professor, and I spent at least a year and a half studying it before we were ready to build anything. We were having a series of discussions with a computer architect at HP, Philip Kuekes, about defect tolerance, and Phil started talking to us about this computer that he had helped build. They had decided to build it from imperfect or defective silicon components, because those would be much cheaper, and to deal with whatever problems came up by using clever software.

TR: In other words, you pay for a material’s perfection.
WILLIAMS: Absolutely. Perfection costs a lot of money. And as you get more and more complex, the cost of perfection gets higher and higher. That’s the main reason why the cost of fabs is increasing exponentially. What we’re saying is that if we can make things that are imperfect but still work perfectly then we can build them a lot more cheaply.

TR: How do you make something that is imperfect work perfectly?
WILLIAMS: Teramac has an architecture that relies on very regular structures called crossbars, which allow you to connect any input with any output. If any particular switch or wire in the system is defective, you can route around it. You can avoid the problems. It turned out that Teramac had a huge bonus. Not only could it compensate for manufacturing mistakes, but it could also be programmed very rapidly, and it executed those programs with blinding speed because it had this huge communications bandwidth.
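(The routing idea is easy to see in miniature. The toy Python sketch below is hypothetical, an illustration of the principle rather than Teramac’s actual software: two cascaded crossbars give every input-output pair many alternative paths, so a connection fails only if defects block every one of them.)

    # Toy model of defect tolerance in a crossbar network: two cascaded
    # N x N crossbars with ~20% of junctions marked defective at random.
    # A hypothetical illustration, not Teramac's routing software.
    import random

    random.seed(1)
    N = 8
    stage1 = {(i, m) for i in range(N) for m in range(N) if random.random() < 0.2}
    stage2 = {(m, o) for m in range(N) for o in range(N) if random.random() < 0.2}

    def route(inp, out):
        """Return an intermediate wire giving a defect-free path, or None."""
        for mid in range(N):
            if (inp, mid) not in stage1 and (mid, out) not in stage2:
                return mid  # path: input -> mid (stage 1), mid -> output (stage 2)
        return None

    print(route(2, 5))
    # Each of the 8 candidate paths works with probability 0.8 * 0.8 = 0.64,
    # so the chance that all of them fail is 0.36**8, about 0.03 percent.
    # That redundancy is what the clever routing software exploits.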

TR: As constructed, Teramac uses silicon chips, albeit defective ones. But your interest is in using this architecture to build a computer using chemical processes. Why is it so promising for that application?
WILLIAMS: Teramac was built as a tool to demonstrate the utility of defect tolerance for building complex systems more cheaply. Even though it was a success, a desktop Teramac is not yet economically viable. It may be that Teramac-like architectures will help to extend silicon integrated circuits a generation or so by making fabs cheaper to build, but we see the huge potential for this architecture in chemical manufacture of integrated circuits. Assembling devices and ordering them by chemical means will be an inherently error-prone process. However, we now have proof that a highly defective system can operate perfectly.

TR: So this architecture could provide a practical way to do computing?
WILLIAMS: It’s real. The hardware was built, tested and programmed. The concepts are very well understood and very robust. Now the second stage of all this is to see if we can use the ideas coming out of basic research in nanotechnology (self-assembly, constructing little regular units using chemical procedures) to actually make something that would be useful. Our Science paper this July is, we believe, the first major step in that direction: it demonstrates that molecular electronic switching is possible.

TR: What’s next?
WILLIAMS: Within two years, we hope to chemically assemble an operational 16-bit memory that fits in a square 100 nanometers on a side. Today, one bit in a silicon memory is much larger than a square micrometer. So we’re looking at a scale-up of at least three orders of magnitude in memory density. Our longer-term goal, frankly, is to build an entire computer using nothing but chemical processes. That particular goal is 10 years away if everything goes well, and even then we’ll be making fairly simple circuits. But it’s got to start someplace.
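(His density arithmetic checks out. A quick sanity check in Python, taking one bit per square micrometer as a deliberately generous stand-in for the late-1990s silicon figure he calls “much larger” than that:)

    # Sanity check on the claimed memory-density scale-up.
    proposed_bits = 16
    proposed_area_um2 = 0.1 * 0.1   # a square 100 nm on a side is 0.01 um^2
    silicon_bits_per_um2 = 1.0      # generous assumption; the real figure is lower

    density = proposed_bits / proposed_area_um2   # 1600 bits per um^2
    print(density / silicon_bits_per_um2)         # 1600.0
    # A gain of at least 1600x: the three orders of magnitude he cites.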
