When the first transistor was created, in 1947, few could have imagined the eventual impact of this device—the switch that lies at the heart of logic chips.
We have silicon to thank for computing’s great takeover. Add a minute pinch of impurities to the element, and silicon forms a material almost ideal for transistors in computer chips.
For more than five decades, engineers have shrunk silicon-based transistors over and over again, creating progressively smaller, faster, and more energy-efficient computers in the process. But the long technological winning streak—and the miniaturization that has enabled it—can’t last forever. “There is a need for technology to beat silicon, because we are reaching tremendous limitations on it,” says Nicholas Malaya, a computational scientist at AMD in California.
What could this successor technology be? There has been no shortage of alternative computing approaches proposed over the last 50 years. Here are five of the more memorable ones. All had plenty of hype, only to be trounced by silicon. But perhaps there is hope for them yet.
Computer chips are built around strategies to control the flow of electrons—more specifically, their charge. In addition to charge, however, electrons also have angular momentum, or spin, which can be manipulated with magnetic fields. Spintronics emerged in the 1980s, with the idea that spin can be used to represent bits: one direction could represent 1 and the other 0.
In theory, spintronic transistors can be made small, allowing for densely packed chips. But in practice it has been tough to find the right substances to construct them. Researchers say that a lot of basic materials science still needs to be worked out.
Nevertheless, spintronic technologies have been commercialized in a few very specific areas, says Gregory Fuchs, an applied physicist at Cornell University in Ithaca, New York. So far, the biggest success for spintronics has been nonvolatile memory, the sort that prevents data loss in the case of power failure. STT-RAM (for “spin transfer torque random access memory”) has been in production since 2012 and can be found in cloud storage facilities.
Classic electronics is based on three components: capacitor, resistor, and inductor. In 1971, the electrical engineer Leon Chua theorized a fourth component he called the memristor, for “memory resistor.” In 2008, researchers at Hewlett-Packard developed the first practical memristor, using titanium dioxide.
It was exciting because memristors can in theory be used for both memory and logic. The devices “remember” the last applied voltage, so they hold onto information even if powered down. They also differ from ordinary resistors in that their resistance can change depending on the amount of voltage applied. Such modulation can be used to perform logic operations. If done within a computer’s memory, those operations can cut down on how much data needs to be shuttled between memory and processor.
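The “memory” behavior can be captured in a few lines of code. Below is a toy simulation based on the linear ion-drift model Hewlett-Packard researchers used to describe their titanium dioxide device; all parameter values are illustrative, not measurements of any real memristor.

```python
# Toy memristor simulation (HP linear ion-drift model).
# Parameter values are illustrative only.

R_ON, R_OFF = 100.0, 16_000.0   # fully doped / undoped resistance (ohms)
MU, D = 1e-14, 1e-8             # ion mobility (m^2/(V*s)), film thickness (m)
DT = 1e-6                       # simulation time step (s)

def simulate(voltages, w=0.5):
    """Apply a voltage sequence; return the final normalized state w in [0, 1]."""
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # resistance depends on the state
        i = v / r                          # Ohm's law gives the current
        w += MU * R_ON / D**2 * i * DT     # state drifts as charge flows through
        w = min(max(w, 0.0), 1.0)          # state is physically bounded
    return w

# A positive pulse train nudges the state up (lowering resistance);
# with no applied voltage, the state stays put -- the device "remembers."
w1 = simulate([1.0] * 1000)
w2 = simulate([0.0] * 1000, w=w1)
```

The key property is visible in the last two lines: the state `w1` reached after the pulses is exactly preserved when no voltage is applied, which is what makes the device nonvolatile.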
Memristors made their commercial debut as nonvolatile storage, called RRAM or ReRAM, for “resistive random access memory.” But the field is still moving forward. In 2019, researchers developed a 5,832-memristor chip that can be used for artificial intelligence.
Carbon isn’t an ideal semiconductor. But under the right conditions it can be made to form nanotubes that are excellent ones. Carbon nanotubes were first crafted into transistors in the late 1990s, and studies showed they could be 10 times more energy efficient than silicon.
In fact, of the five alternative transistors discussed here, carbon nanotubes may be the farthest along. In 2013, Stanford researchers built the world’s first functional computer powered entirely by carbon nanotube transistors, albeit a simple one.
But carbon nanotubes tend to roll into little balls and clump together like spaghetti. What’s more, most conventional synthesis methods make semiconducting and metallic nanotubes in a messy mix. Materials scientists and engineers have been researching ways to correct and work around these imperfections. In 2019, MIT researchers used improved techniques to make a 16-bit microprocessor with more than 14,000 carbon nanotube transistors. That’s still far from a silicon chip with millions or billions of transistors, but it’s progress nonetheless.
In 1994, Leonard Adleman, a computer scientist at the University of Southern California in Los Angeles, made a computer out of a soup of DNA. He showed that DNA could self-assemble in a test tube to explore all possible paths through a small network of cities—an instance of the Hamiltonian path problem, a close cousin of the famous “traveling salesman” problem. Experts predicted DNA computing would beat silicon-based technology, particularly with massively parallel computing. Later, researchers concluded that DNA computing isn’t fast enough to do that.
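Adleman’s wet-lab trick—generating every candidate route at once and filtering out the invalid ones—has a simple in-silico analogue. The sketch below brute-forces Hamiltonian paths through a small made-up graph; the graph is illustrative, not Adleman’s actual seven-city instance.

```python
# Brute-force search for Hamiltonian paths (visit every node exactly once),
# the kind of problem Adleman's DNA experiment attacked by generating all
# candidate paths in parallel. The directed graph below is illustrative.

from itertools import permutations

EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)}

def hamiltonian_paths(n, edges):
    """Yield every ordering of n nodes whose consecutive pairs are edges."""
    for perm in permutations(range(n)):
        if all((a, b) in edges for a, b in zip(perm, perm[1:])):
            yield perm

paths = list(hamiltonian_paths(5, EDGES))
```

Checking all orderings takes time that explodes factorially with the number of nodes, which is exactly why a test tube full of trillions of self-assembling strands looked so appealing.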
But DNA holds some advantages. Researchers have shown that it’s possible to encode poetry, GIFs, and digital movies into the molecules. The potential density is staggering. All of the world’s digital data could be stored in a coffee mug full of DNA, biological engineers at MIT estimated in a paper earlier this year. The catch is cost: one coauthor later said that DNA synthesis would need to be six orders of magnitude cheaper to compete with magnetic tape.
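The basic idea behind encoding digital files into molecules can be sketched in a few lines: DNA has four bases, so each base can carry two bits. The mapping below (00→A, 01→C, 10→G, 11→T) is a common textbook convention; real DNA-storage codecs add error correction and avoid long runs of repeated bases.

```python
# Minimal sketch of binary-to-DNA encoding at two bits per base.
# The mapping is a textbook convention, not a production codec.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")   # 2 bytes -> 16 bits -> 8 bases
```

At two bits per base, every byte of data costs four synthesized bases—which is why synthesis cost, not capacity, is the bottleneck.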
Unless researchers can cut the cost of DNA storage, the stuff of life will stay stuck in cells.
It’s a compelling vision: transistors keep getting smaller and smaller, so why not jump ahead and make them out of individual molecules? Nanometer-scale switches would make for a supremely cost-effective, densely packed chip. The chips might even be able to assemble themselves thanks to interactions between molecules.
Groups at Hewlett-Packard and elsewhere in the early 2000s raced to make the chemistry and electronics work together.
But after decades of work, the dream of molecular electronics is still just that. Researchers have found that single molecules can be finicky, working as transistors under only very narrow conditions. “No one has shown how single-molecule devices can be reliably integrated into massively parallel microelectronics,” says Richard McCreery, a chemist at the University of Alberta.
The dream of molecular electronics has not completely died, but these days it is largely relegated to the chemistry and physics labs, where researchers continue struggling to make endlessly fickle molecules behave.
What comes next?
Silicon still reigns supreme, but time is running out for everyone’s favorite semiconductor. The latest International Roadmap for Devices and Systems (IRDS) suggests that transistors will stop shrinking after 2028 and that integrated circuits will need to be stacked in three dimensions if chips are to keep getting faster and more efficient.
This might be the time when other computing devices find an opening, but only in conjunction with silicon technology. Researchers are exploring hybrid approaches to making chips. In 2017, researchers who had made progress with carbon nanotube transistors integrated them with layers of nonvolatile memristors and silicon devices—a prototype for an approach to improving speed and energy consumption in computing by moving away from traditional architecture.
Classic silicon-based chips will still make some progress, says AMD’s Malaya. But, he adds, “I think the future will be heterogeneous, in which all the technologies are used probably in a complementary way to traditional computing.”
In other words, the future will still be silicon. But it will be other things as well.
Lakshmi Chandrasekaran is a freelance science writer based in Chicago.