Intelligent Machines

Will Machines Eliminate Us?

People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science.

Yoshua Bengio leads one of the world’s preëminent research groups developing a powerful AI technique known as deep learning. The startling capabilities that deep learning has given computers in recent years, from human-level voice recognition and image classification to basic conversational skills, have prompted warnings about the progress AI is making toward matching, or perhaps surpassing, human intelligence. Prominent figures such as Stephen Hawking and Elon Musk have even cautioned that artificial intelligence could pose an existential threat to humanity. Musk and others are investing millions of dollars in researching the potential dangers of AI, as well as possible solutions. But the direst statements sound overblown to many of the people who are actually developing the technology. Bengio, a professor of computer science at the University of Montreal, put things in perspective in an interview with MIT Technology Review’s senior editor for AI and robotics, Will Knight.

Should we worry about how quickly artificial intelligence is advancing?

There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical.

Is there a risk that AI researchers might accidentally “unleash the demon,” as Musk has put it?

It’s not like somebody found some magical recipe suddenly. Things are much more complicated than the simple story some people would like to tell. Journalists would sometimes like to tell the story that someone in their garage will have this amazing idea, and then we have a breakthrough and have AI. Similarly, companies want to tell a nice little story that “Oh, we have this revolutionary technology that’s going to change the world—AI is almost here, and we are the company that’s going to deliver it.” That’s not at all how it works.

What about the idea, central to these concerns, that AI could somehow start improving itself and then become difficult to control?

It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.
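
To make that “painstaking, slow process” concrete, here is a minimal sketch of a supervised learner improving one example at a time; the toy data, the linear model, and the learning rate are all invented for illustration and are not anything from Bengio’s lab. Each labeled example nudges the parameters a tiny amount, and competence accumulates only over tens of thousands of such updates.

```python
# Illustrative only: invented data, a deliberately simple linear classifier,
# and stochastic gradient descent, to show how "improvement" in machine
# learning is a slow accumulation of tiny, specialized parameter updates.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labelled examples: 2-D points, label 1 if the coordinates sum to > 0.
n = 100_000
X = rng.normal(size=(n, 2)) + np.where(rng.random(n) < 0.5, -1.0, 1.0)[:, None]
y = (X.sum(axis=1) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.01  # model parameters and learning rate

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # logistic output in (0, 1)

for i in range(n):
    p = predict(X[i])
    grad = p - y[i]            # gradient of the logistic loss for this one example
    w -= lr * grad * X[i]      # a tiny, specialized update...
    b -= lr * grad             # ...repeated tens of thousands of times
    if (i + 1) % 20_000 == 0:
        acc = ((predict(X) > 0.5) == y).mean()
        print(f"after {i + 1:>6} examples: accuracy = {acc:.3f}")
```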

What are some of the big unsolved problems with AI?

Unsupervised learning is really, really important. Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is in an image, even at the pixel level. For autonomous driving, humans label huge numbers of images taken from cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.
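
As an illustration of the gap Bengio describes, the sketch below contrasts the supervised setting, where every image needs human-made labels, with an unsupervised objective that learns structure from the pixels alone by trying to reconstruct them. The tiny “images” and the one-layer linear autoencoder are assumptions made up for this example, not a description of his lab’s methods.

```python
# Hypothetical contrast between supervised and unsupervised learning;
# the toy "images" and the linear autoencoder are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Supervised setting: every training image comes with human-provided labels,
# here a per-pixel class map (e.g. 0 = road, 1 = pedestrian).
image = rng.random((8, 8))                    # an 8x8 toy "image"
label_map = rng.integers(0, 2, size=(8, 8))   # the costly human annotation
print("supervised example:", image.size, "pixels plus", label_map.size, "human labels")

# Unsupervised setting: only images, no labels. A linear autoencoder learns to
# compress and reconstruct them, finding structure in the pixels without anyone
# saying what is in the picture.
X = rng.random((2000, 64))                    # 2000 unlabelled toy images, flattened
W_enc = rng.normal(scale=0.1, size=(64, 8))   # encoder: 64 pixels -> 8 features
W_dec = rng.normal(scale=0.1, size=(8, 64))   # decoder: 8 features -> 64 pixels
lr = 0.01

for epoch in range(50):
    H = X @ W_enc                 # compressed representation
    X_hat = H @ W_dec             # reconstruction of the input
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: reconstruction error {np.mean(err ** 2):.4f}")
```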

Another big challenge is natural language understanding. We’ve been making pretty fast progress in the past few years, so it’s very encouraging. But it’s still not at the level where we would say the machine understands. That would be when we could give it a paragraph and then ask any question about it, and the machine would answer in a reasonable way, as a human would. We are still far from that.
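
To show the shape of that test, here is a toy sketch: given a paragraph and a question, return an answer. The “baseline” simply picks the sentence sharing the most words with the question, which is exactly the kind of shallow trick that falls far short of the understanding Bengio means. The passage and questions are made up for this example.

```python
# A toy reading-comprehension "baseline": pick the passage sentence with the
# largest word overlap with the question. Illustrative only; not a real QA system.
def answer(paragraph: str, question: str) -> str:
    """Return the sentence that shares the most words with the question."""
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    q_words = set(question.lower().replace("?", "").split())
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

paragraph = (
    "Yoshua Bengio leads a research group in Montreal. "
    "The group studies deep learning. "
    "Deep learning has improved speech recognition and image classification."
)

print(answer(paragraph, "Who leads a research group in Montreal?"))
print(answer(paragraph, "What has deep learning improved?"))
```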

What approaches beyond deep learning will be needed to create a true machine intelligence?

Traditional endeavors, including reasoning and logic—we need to marry these things with deep learning in order to move toward AI. I’m one of the few people who think that machine learning people, and especially deep learning people, should pay more attention to neuroscience. Brains work, and in many ways we still don’t know why. Improving that understanding has great potential to help AI research.

And I think that neuroscience people would gain a lot from keeping track of what we do and trying to fit what they observe of the brain with the kinds of concepts we are developing in machine learning.

Did you ever think you’d have to explain to people that AI isn’t about to take over the world? That must be odd.

It’s certainly a new concern. For so many years, AI has been a disappointment. As researchers we fight to make machines slightly more intelligent, but they are still so stupid. I used to think we shouldn’t call the field artificial intelligence but artificial stupidity. Really, our machines are dumb, and we’re just trying to make them less dumb.

Now, because of these advances that people can see in demos, we can say, “Oh, gosh, it can actually say things in English, it can understand the contents of an image.” Well, now we connect these things with all the science fiction we’ve seen and it’s like, “Oh, I’m afraid!”

Okay, but surely it’s still important to think now about the eventual consequences of AI.

Absolutely. We ought to be talking about these things. The thing I’m more worried about, in the foreseeable future, is not computers taking over the world. I’m more worried about the misuse of AI: things like bad military uses, manipulating people through really smart advertising, and the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.
