Three Questions for Computing Pioneer Carver Mead

Carver Mead christened Moore’s Law and helped make it come true. Now he says engineers should experiment with quantum mechanics to advance computing.
November 13, 2013

Computer scientist Carver Mead gave Moore’s Law its name around 1970 and played a crucial role in making sure it’s held true in the decades since. He pioneered an approach to designing complex silicon chips, called very large scale integration (VLSI), that’s still influential today. Mead was responsible for a string of firsts in the semiconductor industry, and as a professor at the California Institute of Technology he taught many of Silicon Valley’s most famous technologists. In the 1980s, frustration with the limitations of standard computers led him to begin building chips modeled on mammalian brains—creating a field known as neuromorphic computing, which is now gaining new momentum. Now 79, Mead retains an office at Caltech, where he told MIT Technology Review why computer engineers should be investigating new forms of computing.

[Photo caption] Quantum leap: Carver Mead says computer scientists ought to focus on quantum phenomena to advance their field.

What are the big challenges for the chip industry today?

One problem I’ve been talking about for years is power dissipation. Chips are getting too hot to keep running them faster and faster.

It’s a common theme in technology evolution that what makes a group or company or field successful becomes an impediment to the next generation. This is an example of that. Everyone was richly rewarded for making things run faster and faster with lots of power. Going to multicore chips helped, but now we’re up to eight cores and it doesn’t look like we can go much further. People have to crash into the wall before they pay attention.

Power dissipation was one reason I started thinking about neuromorphic designs. I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals. We built lots of systems. We did retinas, cochleas—a lot of things worked. A lot of my students are still working on this. But it’s a much bigger task than I had thought going in.

More recently you’ve been working on a new, unified framework to explain both electromagnetic and quantum systems, summarized in your book Collective Electrodynamics. Do you think that could help discover new kinds of electronics?

The personal preface to that is I got frustrated because what people are doing now is basically a bunch of hacks. You do this problem this way, and you do that problem that way, and to me that’s a symptom of not having a coherent conceptualization of everything. It’s frustrating to me because I’ve always loved this subject.

The optics guys have sort of found a way through all that, in spite of the way that quantum mechanics is taught. Charlie Townes [inventor of the maser, precursor to the laser] went and visited Heisenberg, Bohr, and von Neumann, and they basically said, “Sonny, you don’t seem to understand how quantum mechanics works.” Well, it wasn’t Charlie that didn’t understand. Optical communication has just bypassed everything we’re doing electronically, because it’s so much more effective—working deep in the quantum limit has really paid off.

We don’t know what a new electronic device is going to be. But there’s very little quantum about transistors. I’m not close to it, but I’m generally supportive of these people doing what they call quantum computing. People have got into trying to build real things based on quantum coupling, and any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.

Quantum computing and neuromorphic computing are still such tiny, peripheral things compared to the semiconductor industry, though.

It always starts that way. The transistor was a tiny little wart off a big industry, and people said, “Oh, well, you can make hearing aids out of them.” You never know when something’s going to click.

I remember the guy from GE’s vacuum tube plant showing me their integrated circuits, which were little stacks of vacuum tubes each about the size of a pencil. It was called a thermionic integrated micromodule, TIMM. They would package them, put the little tabs that hooked to the cathode and the grid at different angles, and then they would run wires along and braze the whole thing together so they had a little integrated system.

It was an extremely clever technology. If the semiconductor things hadn’t come along, we’d still be flying to Mars with these thermionic integrated micromodules; they were extremely reliable, although they weren’t very power efficient. Well, it didn’t play out that way.

It could be that in a hundred years we still have integrated circuits pretty much as we have them today for a lot of things, and there will be other things for different applications. When a technology doing real work in the real world gets to a certain point, the evolution doesn’t stop but it becomes sort of logarithmic [levels off], and the technology becomes part of the infrastructure we take for granted.
