
Intelligent Machines

Three Questions for Computing Pioneer Carver Mead

Carver Mead christened Moore’s Law and helped make it come true. Now he says engineers should experiment with quantum mechanics to advance computing.

Carver Mead was responsible for several crucial inventions that enabled the development of today’s computers.

Computer scientist Carver Mead gave Moore’s Law its name around 1970 and played a crucial role in making sure it has held true in the decades since. He pioneered an approach to designing complex silicon chips, called very large scale integration (VLSI), that’s still influential today. Mead was responsible for a string of firsts in the semiconductor industry, and as a professor at the California Institute of Technology he taught many of Silicon Valley’s most famous technologists. In the 1980s, frustration with the limitations of standard computers led him to begin building chips modeled on mammalian brains—creating a field known as neuromorphic computing, which is now gaining new momentum. Now 79, Mead retains an office at Caltech, where he told MIT Technology Review why computer engineers should be investigating new forms of computing.

Quantum leap: Carver Mead says computer scientists ought to focus on quantum phenomena to advance their field.

What are the big challenges for the chip industry today?

One problem I’ve been talking about for years is power dissipation. Chips are getting too hot to keep running them faster and faster.

It’s a common theme in technology evolution that what makes a group or company or field successful becomes an impediment to the next generation. This is an example of that. Everyone was richly rewarded for making things run faster and faster with lots of power. Going to multicore chips helped, but now we’re up to eight cores and it doesn’t look like we can go much further. People have to crash into the wall before they pay attention.

Power dissipation was one reason I started thinking about neuromorphic designs. I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals. We built lots of systems. We did retinas, cochleas—a lot of things worked. A lot of my students are still working on this. But it’s a much bigger task than I had thought going in.

More recently you’ve been working on a new, unified framework to explain both electromagnetic and quantum systems, summarized in your book Collective Electrodynamics. Do you think that could help discover new kinds of electronics?

The personal preface to that is I got frustrated because what people are doing now is basically a bunch of hacks. You do this problem this way, and you do that problem that way, and to me that’s a symptom of not having a coherent conceptualization of everything. It’s frustrating to me because I’ve always loved this subject.

The optics guys have sort of found a way through all that, in spite of the way that quantum mechanics is taught. Charlie Townes [inventor of the maser, precursor to the laser] went and visited Heisenberg, Bohr, and von Neumann, and they basically said, “Sonny, you don’t seem to understand how quantum mechanics works.” Well, it wasn’t Charlie that didn’t understand. Optical communication has just bypassed everything we’re doing electronically, because it’s so much more effective—working deep in the quantum limit has really paid off.

We don’t know what a new electronic device is going to be. But there’s very little quantum about transistors. I’m not close to it, but I’m generally supportive of these people doing what they call quantum computing. People have got into trying to build real things based on quantum coupling, and any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.

Quantum computing and neuromorphic computing are still such tiny, peripheral things compared to the semiconductor industry, though.

It always starts that way. The transistor was a tiny little wart off a big industry, and people said, “Oh, well, you can make hearing aids out of them.” You never know when something’s going to click.

I remember the guy from GE’s vacuum tube plant showing me their integrated circuits, which were little stacks of vacuum tubes each about the size of a pencil. It was called a thermionic integrated micromodule, TIMM. They would package them, put the little tabs that hooked to the cathode and the grid at different angles, and then they would run wires along and braze the whole thing together so they had a little integrated system.

It was an extremely clever technology. If the semiconductor things hadn’t come along, we’d still be flying to Mars with these thermionic integrated micromodules; they were extremely reliable, although they weren’t very power efficient. Well, it didn’t play out that way.

It could be that in a hundred years we still have integrated circuits pretty much as we have them today for a lot of things, and there will be other things for different applications. When a technology doing real work in the real world gets to a certain point, the evolution doesn’t stop but it becomes sort of logarithmic [levels off], and the technology becomes part of the infrastructure we take for granted.
