Nano-Hype

Just as chip manufacturers reach the limits of silicon’s abilities, nanotechnology will save the day with self-assembling “molecular computers.” Sound too good to be true? It is.

Imagine a microscopic computer that assembles itself, atom by atom, then calculates at a speed faster than today’s zippiest electronic chips. Now imagine this same computer is unbelievably cheap-dirt cheap, in fact. Sounds too good to be true? Well, some people think it’s real. This is the idea behind nanotechnology: that individual molecules can serve as digital switches and, acting in concert with billions of other molecular switches, replace digital computers.

If this vision can be realized, molecular computers could in one swoop destroy the enormous investment by the semiconductor industry in “fabs,” the plants that fabricate chips. The most advanced plants today cost billions of dollars. Not only would molecular computers disrupt what’s probably the world’s most important manufacturing industry next to cars, they would also solve a looming “problem” presented by the laws of nature. Chip makers face physical limits in etching circuits on the wafer-thin material called silicon. One widely accepted estimate says the limits of silicon will be reached by 2014.

So molecular computers-or talk of them in the nation’s most prestigious newspapers and magazines, including this one (see “Computing After Silicon,” TR September/October 1999)-appear to be coming along just in the nick of time. Like the cavalry in a John Wayne movie, they will rescue high technology from the specter of stagnation. This is a beautiful story, one that warms the hearts of the capitalists who pay for each new round of innovation in computing and other fields. There is only one problem with this story: It’s a lie. And not a small lie either. In journalism, the story of molecular computing is a Big Lie.

The fact that it’s a lie isn’t all that surprising, however. For as long as innovators have been around, they’ve lied. Lied about the possible obstacles to further innovation. Lied about the utility of their innovations. Lied about the economic advantages of their breakthroughs. Lied about the breakthroughs themselves-all in the service of promoting their innovative technologies.

Remember artificial intelligence? Computers were going to automatically translate from one language to the next. Take dictation. Run factories without human intervention. Lead space missions. And we’re not talking about predictions made a couple of years ago. These fanciful ideas were promoted way back in the middle of the last century: in the 1960s. How about the energy that was going to result from nuclear power: “too cheap to meter,” one enthusiast crowed. Or the nuclear-powered airplane. Does that ring any bells?

This impulse to misrepresent is natural. Innovators have it tough. There are no sure things. They must battle for “mind share.” Especially when an innovation attacks an existing technology-as most do-it takes a lot of sizzle to get consumers to pay attention. And investors don’t like to spend their money on losers. So every new technology must be a winner, which is how little lies grow up to be big ones.

In the claims for molecular computing-claims that have periodically erupted since the 1980s but without hard evidence-the little lies are growing up at a rapid clip.

Begin with economics. Though molecular computers have only been crudely demonstrated, leading researchers already are touting their presumed efficiency. The molecular computer will not just be cheap, says Mark Reed, head of electrical engineering at Yale, “it will be dirt cheap.” We’ve heard that before. Or consider the perennial problem of scaling up from a simple molecular device to a real working computer. It’s one thing to demonstrate a single molecular switch, which has been done. But no one has yet shown how to tie together gangs of billions of switches with “wires” only a dozen atoms thick.

Still, there’s enough potential here to create a buzz. Hewlett-Packard, a top computer maker, is experimenting with both molecular switches and molecular “wires” in its labs. Academic research teams are doing the same. Papers are getting published. The Clinton administration is even talking about launching a national nanotech initiative. And predictably, the Pentagon is already taking a few bows, boasting of the foresight of its Defense Advanced Research Projects Agency, which has financed much of the early work in this field.

Don’t get me wrong: The advances in molecular computing deserve attention. But that attention should be balanced by a tough-minded skepticism. And that’s not happening. Unfortunately, the suspension of disbelief that lets us believe the tall tales is being fueled by the mania for the “new new thing” (to quote the title of a recent book by Michael Lewis) and the abject fear that some unheralded innovation will change the world as we know it-but that we will have missed seeing it coming.

The over-the-top quality of the current nanotech hoopla seeps out in odd ways. One sign that the nano-bubble will burst is the admission by a leading nano-advocate that until this latest news flurry, he and his fellow travelers harbored their own doubts. “Although we believed in some rational way this was the way to go,” he told The New York Times, “among ourselves we were continually forced to reassure ourselves that we weren’t crazy.”

Excuse me, but maybe you are crazy.
