Carver Mead's Natural Inspiration

Caltech researcher and Silicon Valley legend Carver Mead explains his secret for founding successful companies: let the science lead the way.
September 1, 2005

Conventional wisdom descries a black hole between the infinite uncertainty of modern theoretical physics and the can-do spirit of entrepreneurship and engineering. One more reason to ignore conventional wisdom, says Carver Mead, who became a technology legend by working both sides of what often seems an uncrossable divide. A Caltech stalwart – he is the emeritus Gordon and Betty Moore Professor of Engineering and Applied Science – Mead is one of the seminal figures in the story of Silicon Valley, with a résumé stretching back to integrated-circuit pioneer Fairchild Semiconductor and more than 20 startups to his credit.

Mead’s early work in “electron tunneling” provided insights crucial to the development of solid-state electronics. His calculation of the theoretical potential for shrinking transistors gave Intel cofounder Gordon Moore the basis for his eponymous law, which predicts the steadily increasing power of microchips. And in the early 1980s, Mead and Caltech colleague Richard Feynman, the late Nobel laureate physicist, took circuitry into a new dimension by exploring “neuromorphic” electronics modeled on living organisms. Along the way Mead has stacked up prizes, including the $500,000 Lemelson-MIT Prize for invention and innovation and the National Medal of Technology in 2003. But his proudest achievement is a string of companies that includes touch pad maker Synaptics and the revolutionary image-sensor and camera startup Foveon, both outgrowths of his work in neuromorphic computing.

Spencer Reiss talked with Mead, who turned 70 this year, at his house among the redwoods in Woodside, CA.

Technology Review: You’re famous for saying, “Listen to the technology.”

CARVER MEAD: To understand reality, you have to understand how things work. If you do that, you can start to do engineering with it, build things. And if you can’t, whatever you’re doing probably isn’t good science. To me, engineering and science aren’t separate endeavors. It’s like, “Are you a husband or a father?”

TR: How do you decide what to pursue?

MEAD: Are you kidding? Research is a matter of love. It’s not a left-brain thing. Once you figure out something, then you construct an elaborate rationale – the talks you eventually give that make it all sound so simple. Until then, I get angry when people ask me what I’m working on, because I have no way yet to express it.

TR: Is that what venture capitalists are for – to be cold-blooded about what to put resources into?

MEAD: All my favorite VC types – I know that sounds like an oxymoron, but actually I do like some of those guys – say the same thing: they go with their gut. Does the technology have enough potential applications to score at least one? Spreadsheets won’t answer that.

TR: What about looking at the marketplace?

MEAD: Sure, you can analyze the marketplace, talk to customers, do all the things they teach you in business school. The problem with “demand pull” is that by the time you have a real product, the market will have moved on. You’re doomed to playing catch-up. I prefer “technology push” – find an interesting new technology and try to come up with uses for it. “A solution looking for a problem” is supposed to be a terrible epithet, but in my experience it works.

TR: For example?

MEAD: Impinj, a company started by a former student of mine at Caltech, Chris Diorio. I’m on the board. Starting out with something completely unrelated – neurally inspired computing – he came up with a very precise and low-power way to put a charge on a floating-gate transistor, which is the basis for flash memory. It was a classic “solution looking for a problem,” which is turning out to be RFID, the little [radio frequency] identity tags to put on things. They’re the ultimate low-power device – picowatts, whatever you can get out of a little antenna. So instead of just having a “dumb” tag that can tell you its name and nothing more, you get a smart one that updates itself as it goes. You get a package or a product that can tell you its whole history, right there.

TR: Peter Drucker says, don’t solve problems, seize opportunities.

MEAD: Right. If Impinj had looked around and said, “Hey, let’s do RFID,” they would have ended up with a nonrewritable tag. Just like a dozen other companies out there now.

TR: RFID tags for Wal-Mart are a long way from trying to reverse-engineer computers from biological models…

MEAD: When you’ve finally got a product, the fact that you were inspired to go that way by thinking about touch and vision and hearing or whatever doesn’t matter much. You’re on to making products, and everything that led up to that falls away.

TR: That’s a little sad, no?

MEAD: Of course it is, but it’s what happens when you start a company. The unlimited potential of your new technology – it’s a huge high just thinking about it. But once it’s manifest, once it becomes a product, it’s not a myriad of anything; it’s one thing. So inevitably, there is a huge postpartum – a sense of all the things you weren’t able to do.

TR: Is that when you pull up stakes?

MEAD: It’s happened with every company I’ve worked with. They get to the point where they’re successful, they’re on a track, and there’s less and less that someone like me can contribute. You actually become a distraction: they’re trying to focus, and you’re wandering around thinking about all these interesting new questions. That’s when it’s time to leave.

TR: Some people think young technologists need to spend more time learning how to market their ideas.

MEAD: Science is not just about self-expression; you have to be able to explain what you’re doing. Dick Feynman was one of the best marketers I have ever met. He never wanted to admit it – in his day, anything entrepreneurial was socially unacceptable for an academic – but he was able to position physics as something exciting, in a way that has survived to this day.

TR: You and Feynman were behind a big neuromorphic-computing project launched at Caltech in the ’80s. What happened?

MEAD: Part of the problem was the refusal of the CS [computer science] community to have a new thought – the fact that there might be inherently more powerful ways to do computing. People said, “Everything’s a Turing machine, and that’s that.” No matter that we already have a working example of a massively parallel machine – the animal brain. And meanwhile, now, the quantum computing guys have come along and showed yet another alternative model – one that in theory will solve problems that are exponentially unsolvable by a Turing machine. I’m making no statement about the realization of quantum computers – we still don’t know about that. I’m just talking about our understanding of computing in the abstract. You need a fundamentally new conception of that if you want to try to make a better machine.

TR: Another neurally inspired company you’ve been involved with, Sonic Innovations, makes advanced hearing aids.

MEAD: The thought process there came from thinking about how human hearing works, but again the actual device is just a little digital signal processor. The same thing happened with the idea of neural networks, by the way. They became just another algorithm for existing computers.

TR: What about Foveon, the camera company you founded in 1997? Most people probably don’t realize that its roots are in studies of the eye.

MEAD: We started out making models of the retina, which by itself might make a big difference to a few people, but it’s not enough of a commercial opportunity to justify big investment. What we realized was that if you took what we were doing and stripped out the retina part, you’d have a really good image sensor – so let’s do that. Foveon technology captures light directly, consuming less power and requiring far less processing than a conventional digital camera’s sensor. But when we explain it today, we don’t have any reference to anything neural.

TR: So we’re still at square one with neuromorphic computing?

MEAD: Actually, quite a lot of progress has been made. One of the exciting things that grew out of neuromorphic thinking is Lloyd Watts’s company Audience. They’ve got a working cochlear model that builds a significant portion of the auditory pathway – including precision signal recovery and sophisticated analysis – into a chip-level component. It’s more than just a better microphone; think of it as the auditory front end for any device that wants to use sound as an input.

TR: Voice recognition lives!

MEAD: Voice recognition as we know it is really brain dead. I shouldn’t say brain dead – a lot of smart people have worked on it for many years. But it’s an old paradigm. It’s advancing logarithmically with processing power; that’s about it. And yet we have these incredible working models right here – our own eyes and ears. That’s where we want to be looking.

TR: Hearing, vision – the same problems you picked out nearly 20 years ago are still interesting problems.

MEAD: They’re even more interesting, because we’re starting to know enough about them to make some progress. It’s taken this long to get the engineering-oriented people talking to the physiology people. Lawyers talk about “Chinese walls” in organizations; well, the barriers between scientific disciplines have been fierce.

TR: Is it the inherent difficulty of adapting digital technologies to our mostly analog human world?

MEAD: Digital abstraction is a wonderful thing. It substitutes a very simple set of logic operations – “and,” “or,” and “not” – for an infinite set of physical things. Working in analog is much harder, because there are essentially countless ways for the thing to go wrong. You’re working with the physics itself, rather than with some very small set of circuits that have been crafted to show digital behavior.
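To make the abstraction concrete, here is a minimal sketch – not from the interview, with purely illustrative names – showing how the three operations Mead lists, “and,” “or,” and “not,” compose into useful digital behavior, in this case a four-bit adder:

```python
# A minimal sketch of the digital abstraction: everything below is built
# from just three logic operations. Names are illustrative, not from any
# real library.

def NOT(a: int) -> int:
    return 1 - a

def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

def xor(a: int, b: int) -> int:
    # "Exclusive or" expressed with only the three primitives:
    # (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit adder: returns (sum bit, carry out)."""
    partial = xor(a, b)
    return xor(partial, carry_in), OR(AND(a, b), AND(partial, carry_in))

def add4(x: int, y: int) -> int:
    """Add two 4-bit numbers using nothing but gate logic."""
    result, carry = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add4(5, 6) == 11
```

The point of the sketch is the one Mead makes: once the physical devices can be trusted to behave like these idealized gates, everything above that line is logic rather than physics.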

TR: We can’t let you get away without asking about Moore’s Law. You get a lot of credit for its formulation.

MEAD: Gordon had observed what was happening and asked me how far things could go, how small you could make the transistors. We did some work in the lab, and the answer turned out to be 0.15 microns [150 nanometers], maybe smaller. That was shocking at the time, but it turns out to have been conservative.

TR: So how far can it go?

MEAD: I looked at things again a few years ago, and if you don’t do anything differently, you can get down to 30 nanometers – a factor of five from what we originally said was going to be easy, and still a long ways from where things are today. So it’s certainly not going to stop.

And at the same time, we don’t have to keep doing things exactly the way we are doing them today. I for one certainly hope we don’t.
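To put numbers on that factor of five – a back-of-the-envelope sketch, not a calculation Mead presents in the interview:

```python
# Rough scaling arithmetic for the figures above (illustrative only).
original_limit_nm = 150    # the ~0.15-micron limit from the original estimate
revisited_limit_nm = 30    # the figure Mead quotes from his later look

linear_shrink = original_limit_nm / revisited_limit_nm
print(f"Linear feature shrink: {linear_shrink:.0f}x")      # 5x

# To first order, transistor density scales as the inverse square of the
# feature size, so a 5x linear shrink is roughly a 25x gain in density.
density_gain = linear_shrink ** 2
print(f"Approximate density gain: {density_gain:.0f}x")    # 25x
```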

Salisbury, CT-based writer Spencer Reiss likes to interview people smarter than he is. The last time he did it for TR was with venture capitalist Michael Moritz, the man behind Google (April 2004).
