
All this is very different from the early days of artificial intelligence, in the ’50s and ’60s, when researchers made bold predictions about matching human ability and tried to use high-level rules to create intelligence. Are your machine-learning systems working out those same high-level rules for themselves?

Horvitz: Learning systems can derive high-level situational rules for action, for example, to take a set of [physiological] symptoms and test results and spit out a diagnosis. But that isn’t the same as general rules of intelligence.
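
To make that concrete, here is a minimal sketch of the kind of learned diagnostic rule Horvitz describes, assuming scikit-learn; every feature name, value, and label below is invented for illustration.

```python
# A minimal sketch of a learned diagnostic rule: the classifier induces
# its own decision rules from symptom/test data rather than being given
# them by an expert. All features, values, and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [temperature_C, cough (0/1), white_cell_count]; labels are diagnoses.
X = [
    [39.1, 1, 14.0],
    [36.8, 0,  6.5],
    [38.5, 1, 12.2],
    [37.0, 0,  7.1],
]
y = ["pneumonia", "healthy", "pneumonia", "healthy"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[38.9, 1, 13.1]]))  # -> ['pneumonia']
```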

It may be that the lower-level work we do today will one day meet the top-down ideas from the bottom up. The revolution that Peter and I were part of in AI was the recognition that decision making under uncertainty is so important, and that it can be done with probabilistic approaches. Along with the probabilistic revolution in AI comes perspective: we are very limited agents, and incompleteness is inescapable.

Norvig: In the early days, it was logic that set artificial intelligence apart, and the question was how to use it. The field became the study of what those tools were good for, like chess. But with logic you can only have things that are true or false, and you can't do a lot of the things we want to do, so we moved toward probability. It took the field a while to recognize that those other fields, like probability and decision theory, were out there. Bringing the two approaches together is still a challenge.
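
The shift Norvig describes, from true/false to degrees of belief, can be shown in a few lines with Bayes' rule; the numbers here are invented for illustration.

```python
# Bayes' rule: the step beyond true/false. A logical rule would say
# "positive test -> disease"; probability says the disease is still
# unlikely when the condition is rare. Numbers are invented.
p_disease = 0.01            # prior: 1% of patients have the disease
p_pos_given_disease = 0.9   # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.154: positive, yet probably healthy
```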

As AI shows up more directly in real life, Siri for example, it seems that a kind of design problem has been created. People creating AIs need to make them palatable to our own intelligence.

Norvig: That's actually a set of problems at various levels. We understand the human visual system and what making buttons different colors might mean, for example. At a higher level, our expectations of something and how it should behave are based on what we think it is and how we think of its relationship to us.

Horvitz: AI is intersecting more and more with the field of human-computer interaction [which studies the psychology of how we use and think about computers]. The idea that we will have more intelligent things working closely with people really focuses attention on the need to develop new methods at the intersection of human intelligence and machine intelligence.

What do we need to know more about to make AIs more compatible with humans?

Horvitz: One thing my research group has been pushing to give computers is a systemwide understanding of human attention, so they know when it is best to interrupt a person. It's been a topic of collaboration between our researchers and the product teams.
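
One way to frame that interruption decision is as a toy expected-utility rule; the utilities and probabilities below are made up, and a real system of the kind Horvitz describes would estimate them from sensed context.

```python
# A toy expected-utility rule for when to interrupt, in the spirit of
# attention-sensitive notification. Values here are invented; a real
# system would learn them from sensed context (calendar, activity, etc.).
def should_interrupt(value_of_message: float,
                     p_user_busy: float,
                     cost_if_busy: float) -> bool:
    """Interrupt only if the message's value exceeds the expected
    cost of breaking the user's focus."""
    expected_cost = p_user_busy * cost_if_busy
    return value_of_message > expected_cost

print(should_interrupt(value_of_message=2.0, p_user_busy=0.9, cost_if_busy=5.0))  # False: defer
print(should_interrupt(value_of_message=2.0, p_user_busy=0.1, cost_if_busy=5.0))  # True: notify now
```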

Norvig: I think we also want to understand the human body a lot more, and you can see in Microsoft’s Kinect a way to do that. There’s lots of potential to have systems understand our behavior and body language.

Is there any AI in Kinect?

Horvitz: There's quite a lot of machine learning at the core of it. I think the idea that we can take leading-edge AI and develop a consumer device that sold faster than any other in history says something about the field of AI. Machine learning also plays a central role in Bing search, and I can only presume it is also important in Google's search offering. So people searching the Web use AI in their daily lives.

One last question: Can you tell me one recent demo of AI technology that impressed you?

Norvig: I recently read a paper on unsupervised learning by someone at Google who is about to go back to Stanford. That's an area where our improvement curves over time have not looked so good. But he's getting some really good results, and it looks like learning when you don't know anything in advance could be about to get a lot better.
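
The paper's method isn't specified here, so as a generic illustration of the setting Norvig means (finding structure in data with no labels at all), here is k-means clustering on synthetic data, assuming NumPy and scikit-learn.

```python
# Generic unsupervised learning: k-means finds cluster structure in
# unlabeled data. This only illustrates the setting, not the method
# in the paper Norvig mentions. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic blobs; the algorithm never sees which point came from which.
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(5, 0.5, (50, 2))])

km = KMeans(n_clusters=2, n_init=10).fit(data)
print(km.cluster_centers_)  # recovers centers near (0, 0) and (5, 5)
```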

Horvitz: I've been very impressed by apprenticeship learning, where a system learns by example. It has lots of applications. Berkeley and Stanford both have groups really advancing that: for example, helicopters that learn to fly on their backs [upside down] from [observing] a human expert.
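
The simplest form of learning from demonstration is behavioral cloning: fit a supervised model from expert states to expert actions. The helicopter work Horvitz cites uses more sophisticated techniques (such as inverse reinforcement learning), so the sketch below only shows the basic idea, with invented data.

```python
# Behavioral cloning: the simplest form of learning from demonstration.
# Fit a supervised model mapping expert states to expert actions.
# (Invented data; the helicopter results rest on far more advanced methods.)
import numpy as np
from sklearn.linear_model import LinearRegression

# Pretend expert demonstrations: the expert's control law is roughly
# action = -2 * error, observed with a little noise.
rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, (200, 1))                    # observed error signal
actions = -2.0 * states[:, 0] + rng.normal(0, 0.05, 200)  # expert's responses

policy = LinearRegression().fit(states, actions)
print(policy.predict([[0.5]]))  # ~ -1.0, imitating the expert's control law
```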


Credit: Bart Nagel (top); Microsoft (bottom)
