Computer scientist Carver Mead gave Moore’s Law its name around 1970 and played a crucial role in making sure it has held true in the decades since. He pioneered an approach to designing complex silicon chips, called very large scale integration (VLSI), that’s still influential today. Mead was responsible for a string of firsts in the semiconductor industry, and as a professor at the California Institute of Technology he taught many of Silicon Valley’s most famous technologists. In the 1980s, frustration with the limitations of standard computers led him to begin building chips modeled on mammalian brains—creating a field known as neuromorphic computing, which is now gaining new momentum. Now 79, Mead retains an office at Caltech, where he told MIT Technology Review why computer engineers should be investigating new forms of computing.
What are the big challenges for the chip industry today?
One problem I’ve been talking about for years is power dissipation. Chips are getting too hot to keep running them faster and faster.
It’s a common theme in technology evolution that what makes a group or company or field successful becomes an impediment to the next generation. This is an example of that. Everyone was richly rewarded for making things run faster and faster with lots of power. Going to multicore chips helped, but now we’re up to eight cores and it doesn’t look like we can go much further. People have to crash into the wall before they pay attention.
Power dissipation was one reason I started thinking about neuromorphic designs. I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals. We built lots of systems. We did retinas, cochleas—a lot of things worked. A lot of my students are still working on this. But it’s a much bigger task than I had thought going in.
More recently you’ve been working on a new, unified framework to explain both electromagnetic and quantum systems, summarized in your book Collective Electrodynamics. Do you think that could help discover new kinds of electronics?
The personal preface to that is I got frustrated because what people are doing now is basically a bunch of hacks. You do this problem this way, and you do that problem that way, and to me that’s a symptom of not having a coherent conceptualization of everything. It’s frustrating to me because I’ve always loved this subject.
The optics guys have sort of found a way through all that, in spite of the way that quantum mechanics is taught. Charlie Townes [inventor of the maser, precursor to the laser] went and visited Heisenberg, Bohr, and von Neumann, and they basically said, “Sonny, you don’t seem to understand how quantum mechanics works.” Well, it wasn’t Charlie that didn’t understand. Optical communication has just bypassed everything we’re doing electronically, because it’s so much more effective—working deep in the quantum limit has really paid off.
We don’t know what a new electronic device is going to be. But there’s very little quantum about transistors. I’m not close to it, but I’m generally supportive of these people doing what they call quantum computing. People have got into trying to build real things based on quantum coupling, and any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.
Quantum computing and neuromorphic computing are still such tiny, peripheral things compared to the semiconductor industry, though.
It always starts that way. The transistor was a tiny little wart off a big industry, and people said, “Oh, well, you can make hearing aids out of them.” You never know when something’s going to click.
I remember the guy from GE’s vacuum tube plant showing me their integrated circuits, which were little stacks of vacuum tubes each about the size of a pencil. It was called a thermionic integrated micromodule, TIMM. They would package them, put the little tabs that hooked to the cathode and the grid at different angles, and then they would run wires along and braze the whole thing together so they had a little integrated system.
It was an extremely clever technology. If the semiconductor things hadn’t come along, we’d still be flying to Mars with these thermionic integrated micromodules; they were extremely reliable, although they weren’t very power efficient. Well, it didn’t play out that way.
It could be that in a hundred years we still have integrated circuits pretty much as we have them today for a lot of things, and there will be other things for different applications. When a technology doing real work in the real world gets to a certain point, the evolution doesn’t stop but it becomes sort of logarithmic [levels off], and the technology becomes part of the infrastructure we take for granted.