MIT Technology Review

Gordon Moore is most famous for coining Moore’s Law, his 1965 prediction that the number of transistors that could be packed into an integrated circuit would double every year. A decade later, he revised that estimate to every two years, a prediction that has held remarkably true ever since and is often used as a baseline for evaluating performance in other spheres of computing. But the semiconductor pioneer and cofounder of Intel claims he has never really been good at predicting the future. In fact, he says he’s pretty bad at it.

In a wide-ranging conversation from his vacation home on Hawaii’s Big Island, the 72-year-old avid fisherman and pragmatic environmentalist spoke with Technology Review editor at large Robert Buderi about the future of Moore’s Law, which he’s revising again. He also discussed the newly created Gordon E. and Betty I. Moore Foundation, which he plans to endow with $5 billion worth of Intel stock (about half of his estimated wealth). The new foundation will be geared toward supporting far-out university research and advancing programs that protect the environment.

Finally, Moore reveals why he and a few pals named Hewlett and Packard picked up the tab for the Search for Extraterrestrial Intelligence, and shares how it’s possible to succeed wildly in business without having a crystal ball.

TR: Where do you think we are heading technologically? What excites you most?
Moore: I calibrate my ability to predict the future by saying that in 1980 if you’d asked me about the most important applications of the microprocessor, I probably would have missed the PC. In 1990 I would have missed the Internet. So here we are just past 2000, I’m probably missing something very important.
Certainly the things going on in molecular biology these days are exciting. We really are understanding increasingly how all the life processes work. In the information sciences, the increasing capability of networks is changing the way we do everything. It’s going to result in all of us having a lot of bandwidth whenever we want it, to go with a lot of computing power. As for what we’re going to do with all the computing power, I’m sure I’ll miss the most important things. But the one capability that to me will make a qualitative difference in how we do things is truly good speech recognition. That is, a machine that can recognize whether you mean t-o, or t-w-o, or t-o-o by understanding in context what you’re saying.

Once a machine understands in context like that, you can actually hold an intelligent conversation with a machine. That will dramatically change the way people interact with machines, and broaden the number of people that can use them. I have no idea how far away that is. At the level I’m talking, it’s probably 20 to 50 years away. But there’s certainly nothing impossible about it.

TR: What about the future of Moore’s Law?
Moore: We’re at a time where we’re kind of doubling about every two years, which is where we’ve been for the last 25 years. But sometime probably between 2010 and 2020, we lose the largest single factor that lets us continue on that curve: our ability to make things smaller [For a complete look at this issue and possible future computing technologies, see TR May/June 2000]. You run into the problem that materials are made of atoms, and we’re getting down to small enough dimensions where they no longer behave like bulk materials. So we’ll have to depend on other factors to continue on this curve. And that means finding better ways to pack things. Some people are talking about moving into the third dimension. A little bit of that has been done. You could also make bigger chips, losing some of the economic advantages. But maybe the doubling time will change from every two years to every four or five years. It’s not the end of progress by any stretch of the imagination. We’ll be putting a billion transistors on a logic chip; it’ll keep designers busy for decades figuring out what they can do with that.
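The doubling arithmetic Moore describes is easy to make concrete. The sketch below (not from the interview; the starting transistor count of 42 million is an assumed round figure for a circa-2000 high-end logic chip) compares where a chip ends up after ten years if the doubling period stays at two years versus slowing to five:

```python
def transistors(start_count, years, doubling_period):
    """Transistor count after `years` of doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

start = 42_000_000  # assumed circa-2000 logic-chip transistor count
horizon = 10        # years into the future

at_two_years = transistors(start, horizon, 2)   # 2^5 = 32x growth
at_five_years = transistors(start, horizon, 5)  # 2^2 = 4x growth

print(f"Doubling every 2 years: {at_two_years:,.0f} transistors")
print(f"Doubling every 5 years: {at_five_years:,.0f} transistors")
```

At the historical two-year pace the chip crosses a billion transistors within the decade, matching Moore's "billion transistors on a logic chip" remark; at a five-year pace it reaches only a few hundred million.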
