Gadgets in the Superchip Age

Novel chip designs and manufacturing techniques keep the 40-year computing explosion going strong. What consumer devices will they enable?

In a lab at Philips Electronics in the Netherlands, researchers are stalking the solution to one of the great problems of modern life: having to hunt through hundreds of television channels for something you’d like to watch. The lab’s answer is a TV that recognizes you when you walk into the room, knows you like occult thrillers, finds one it recorded at three in the morning, and puts it up on the screen. Alongside will be smaller images of a British news report on the company you just invested in, the Web page carrying the eBay auction you bid in, and the high-resolution video scene you recorded on your cell phone earlier in the day. Ready to switch channels? Just speak up and tell the TV what you want.

Perhaps the best thing about this talented device is that you’ll be able to buy it in about seven years for about what you’d pay for a dumb television today. Philips has already demonstrated these sorts of capabilities in its lab and recently rolled out a semi-intelligent prototype. “We can already produce a mostly digital television that allows you to add functions through software and that will cost in the ballpark of a conventional analog set,” says Theo Claasen, chief technology officer for the company’s semiconductor group.

We’ve come to take for granted that the electronics industry keeps hurling new and improved products at us, and it’s a solid bet that this won’t slow down in the near future. Electronic products are largely defined by the microprocessors inside them, and the power and speed of these chips continue to climb exponentially. The amazing resiliency of Moore’s Law, Intel cofounder Gordon Moore’s prediction nearly 40 years ago that the number of transistors on a chip would keep doubling at a steady pace (originally every year, later revised to every two years), means that chips have gone from having a few thousand transistors three decades ago to over 100 million today, while the price per transistor has dropped from $1 to a millionth of a cent. And since transistor density roughly translates to computing and communications speed, you can thank Moore’s Law for innovations like online shopping, in-car navigation systems, and cheap cell phones. “Transistors are free,” says Krishnamurthy Soumyanath, director of communications-circuits research at Intel. “We can solve problems by throwing more transistors at them.”
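The arithmetic behind that trajectory is easy to check. Here is a minimal Python sketch of exponential doubling; the starting count and doubling period are illustrative assumptions, not figures quoted in the article:

```python
# Back-of-the-envelope check of Moore's Law doubling.
# Starting count and doubling period are illustrative assumptions.

def transistors_after(start_count, years, doubling_period_years=2.0):
    """Project a transistor count forward under steady exponential doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# A chip with ~5,000 transistors in the early 1970s, projected 30 years
# out at one doubling every two years (15 doublings):
projected = transistors_after(5_000, 30)
print(f"{projected:,.0f}")  # ~164 million, in line with "over 100 million today"
```

Fifteen doublings multiply the count by 2^15 = 32,768, which is how a few thousand transistors become more than a hundred million in three decades.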

Despite skeptics’ perennial warnings that Moore’s Law will peter out, the industry is set to hew to it for at least the next three generations of microprocessors, expected to come out over the next six years. Right now the smallest standard features of the fastest silicon transistors are 90 nanometers wide. Before the end of 2005, manufacturers expect to make 65-nanometer transistors. And blueprints for reducing that to 45 nanometers by 2007 are in the works.

Miniaturization means that more transistors can be squeezed onto a chip. This makes microprocessors faster, in part because electrons have less distance to travel between transistors. It also makes memory chips more capacious. Today, the fastest consumer microprocessors have about 180 million transistors and operate at a speed of about three gigahertz (roughly speaking, three billion simple operations per second), while the adjacent random-access memory chips hold two gigabytes of data or more. By 2007, processors will pack more than a billion transistors, hit speeds approaching 10 gigahertz, and be backed up by several gigabytes of RAM. With that kind of power and memory, PCs will be able to transport you to ultrarealistic online virtual worlds, hold up their end of a conversation (on certain topics, anyway), and quickly search through hours of your vacation videos for that bit where Uncle Arnold capsizes his canoe.

Predicting what other sorts of gadgets will result from this explosion in computing power is, of course, the $64,000 question (make that the $64 billion question). For all his prescience about chips, Gordon Moore himself failed to foresee the PC or the Internet, never mind the personal digital assistant or smart cell phone. Home videophones and pen-based computers, on the other hand, have managed to stay off consumers’ radar screens despite decades of hype. “If ten years ago someone told you about the World Wide Web, MP3 players, and video cameras that fit in the palm of your hand, you wouldn’t have believed them,” says Jeffrey Bokor, a professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. “What we’re going to see over the coming years will be equally hard to imagine.”

Movies on Your Phone

But plenty of experts are willing to take a stab at it. Near the top of everyone’s list is the cell phone, which appears to be due for a serious makeover. For starters, says Peter Kastner, chief researcher at the Aberdeen Group, a market research firm in Boston, cell phones will pack in the electronics needed to communicate via a number of different frequencies and data-encoding schemes, so that they can constantly hunt for the channels that will give them the best data transfer rates at the lowest costs.

That means these new phones will receive data 20 or more times faster than today’s mobile phones, without sending service bills through the roof. To handle these data transfer speeds, the phones will operate at frequencies in the two-gigahertz range and above, well beyond the frequency range of most cell phones today. That hasn’t been cost-effective until recently because the analog circuits that process traditional audio and visual signals require specialized transistor designs and materials. Analog circuits are also sensitive to the electronic “noise” from digital circuits, meaning they’re usually stuck on separate chips, a costly and inefficient arrangement that limits devices’ ability to handle ultrafast signals. But now, thanks to the performance boost that comes from more densely packed transistors, digital circuits are becoming quick enough to mimic many of the functions of analog circuits, including dealing with fast-changing, high-bandwidth radio signals. “We can take an analog radio signal right off an antenna and quickly move it into digital logic,” says Dennis Buss, vice president of silicon-technology development at Texas Instruments, which is already rolling out integrated, single-chip wireless devices based on the new techniques.

With their near-broadband connections, these new phones will enable fast, high-resolution Web surfing, and even passable real-time video, meaning that they could incorporate video cameras for recording, videoconferencing, and sophisticated game playing, possibly even movie watching. They’ll be smarter, too, assuming more and more of the functions of PDAs and even PCs, including online shopping, e-mail and calendar features, and navigation aids with detailed maps, all accessed via a voice interface. Right now, about half of the transistors in a cell phone go toward interacting with the user rather than processing calls, but Philips’s Claasen says the number of transistors dedicated to the user interface will increase by a factor of 10 over the next several years. That will “drive a new cycle of cell-phone buying,” predicts Kastner.

And it’s not just cell phones that will benefit from microprocessor enhancements. PCs and gadgets will also become friendlier. As devices and the network gain intelligence, they’ll require less attention from you. That’s critical to their acceptance, says James Meindl, director of the Microelectronics Research Center at the Georgia Institute of Technology. “Until now, we haven’t had enough electronics to make the operations of these machines completely simple,” he says.

Take televisions. In the sets Philips is planning, says Claasen, fully 80 percent of the computing power on the main chip will be used, not for image-processing chores, but for an adaptive interface that will assemble content from multiple sources geared to your viewing habits and present you with choices in whatever format you’re most comfortable with. TVs will become so dependent on computing power, says Claasen, that consumers will soon be shopping for them the way they now select PCs: according to processing speeds, memory size, and communications capabilities rather than their functionality, which will be provided by software and will upgrade itself automatically over the Internet.

And say goodbye to annoyances like having to wrestle your way through four screens of menus to get your PDA to cough up the name you’re looking for. Most usability problems will go away, says Aberdeen’s Kastner, when electronics start understanding plain English (or Finnish or Mandarin) commands. Speech recognition is often portrayed as a software problem, he notes, but it can in fact be solved with the vast increases in processing power and memory that will be afforded by the coming generation of chips. Appliances and handheld devices that can handle simple spoken commands are already hitting the shelves, and according to Kastner, machines should be able to engage in rudimentary conversation with us by 2010. “With all that power, you can throw multiple algorithms at the problem,” he explains. “We won’t have all the capabilities of HAL from 2001, but we’ll be a lot closer.”

Patrick Gelsinger, chief technology officer at Intel, says the company has already achieved significant improvements in speech recognition in its labs by using multiple microphones to add directionality to incoming sound information and adding lip-reading capabilities via video camera. “If homes are going to go from having four computers to having 400, we’ve got to make those other 396 a lot easier to use,” he says. That increased user-friendliness, he adds, will result in large part from the improvements in microprocessor speed coming down the pike.

Silicon Magic

What sort of huge breakthroughs will allow the semiconductor industry to make these leaps? Actually, none. The experts all pretty much agree: the next three generations of microprocessors, at least, will simply extend the familiar properties of silicon. It’s not that there aren’t plenty of dramatic innovations at the ready, including more-exotic semiconducting materials like germanium and indium phosphide and techniques for stacking layers of transistors into three-dimensional chips. It’s just that the industry can do it with silicon, so it will, because silicon is cheaper. “Each time someone develops workable new materials or exotic device structures, silicon researchers keep catching up,” says Berkeley’s Bokor. “There’s a very strong interest in industry in making the least-radical change possible.”

Chip makers will still have to make a few key modifications to today’s methods, starting with the photolithographic process used to chemically etch circuit patterns onto chips. In photolithography machines, lenses focus ultraviolet light through a stencil-like “mask” onto silicon wafers coated with a photosensitive material. The photolithography machines used to produce today’s chips aren’t precise enough to project 65-nanometer features. But new, higher-resolution techniques are being worked out; for example, ultrafine gratings that break up and recombine the light beams so that they reinforce each other at the tiny spots where light is needed and cancel each other out everywhere else. To get to 45 nanometers and below, manufacturers may switch to machines now under development that use either extreme ultraviolet light, which has a shorter wavelength and can therefore be used to etch smaller features, or beams of electrons, which can be finely controlled to etch patterns onto silicon directly, without a mask.

New forms of silicon will also lend a hand. For instance, chips will get a speed boost from silicon that has been deposited over a layer of silicon germanium, whose atoms cause the slightly misaligned atoms of pure silicon to stretch out a bit. This “strained” silicon speeds the journey of electrons through transistors. An additional boost will come from adding a layer of insulating material underneath the semiconducting layers, further enhancing their electrical properties. Microprocessor maker AMD has reported speed jumps of up to 25 and 30 percent, respectively, for the two techniques. IBM and Intel have already begun making chips with strained silicon, and IBM says products combining strained silicon with “silicon-on-insulator” designs could be on the market within several years.

Transistors are also getting a makeover. As the features of transistors shrink, electrons are more likely to stray off their intended course and leak across barriers, even when the transistor is supposed to be off. This leakage wastes power and interferes with transistors’ ability to switch between their 0 and 1 states reliably, and it’s going to get worse. To plug the leak, the industry is turning to a slightly different transistor design, one pioneered by Bokor and his Berkeley colleagues Tsu-Jae King and Chenming Hu in the late 1990s.

In a conventional transistor, the main point of leakage is a channel of material squeezed between the source and the drain, two larger blocks of silicon that define electrons’ principal entry and exit points. A structure called a gate lies atop the channel, like a pontoon bridge across a canal. When a positive voltage is applied to the gate, negatively charged electrons are drawn toward it, opening up a pathway for more electrons to flow through the channel from the source to the drain. The problem, as transistors get smaller, is that electrons can sneak through the thin channel even when the gate isn’t charged. The Berkeley group’s “fin” design ameliorates leakage by raising the whole transistor above the silicon’s surface and reshaping the channel as a narrow, vertical fin that stretches from source to drain like the crossbar of an H. The fin sits on an insulating material, which reduces electron leakage, and the gate drapes over the fin, touching it on both vertical surfaces, which doubles the effect of the positive voltage. Intel is already turning to a variation of this design, which should start showing up in microprocessors by 2007.

As a bonus, higher-performing materials and transistor designs make it possible to run chips at lower voltages. This reduces power consumption and, consequently, the risk of overheating, which rises as chips get denser.

The Fab Reality

New generations of far more powerful microprocessors are not a done deal. Even if the chips come off assembly lines with all the hoped-for performance, the industry might have trouble keeping their costs low enough that the cell phones and televisions they go into will still seem like bargains. The culprit is the mushrooming cost of constructing a leading-edge chip factory, which is already about $3 billion-out of reach for all but perhaps a dozen companies worldwide.

Of course, the makers of the best-selling electronic products will be able to spread those up-front costs over tens of millions of chips, keeping prices down for at least some products. But rising fab costs could lead to yet another problem for consumers: finding products in stock. Capital investment in the semiconductor industry has fallen by about half in the down economy of the past few years, and observers have issued predictions that the industry will face a shortage of chip-making capacity just as consumer demand for new-wave devices skyrockets.

“Everyone assumes the industry is capable of coming up with whatever capacity is required,” says Richard Gordon, vice president of research for the semiconductor group at market research firm Gartner. “But bringing on more capacity is difficult, and with production concentrated in a handful of companies, there’s going to be a problem.”

But there’s good reason to bet the industry will dodge these and any other bullets that come its way. After all, the chip world’s ability to prove Moore right year after year without making the daunting leap away from silicon has defied even optimistic expectations. “No matter what the constraints, this industry always pulls off miracles,” says Steve Jurvetson, managing director of venture capital firm Draper Fisher Jurvetson.

Just tell your cell phone to keep you posted on the latest developments.
