A View from Henry Lieberman
A Worthwhile Contest for Artificial Intelligence
IBM’s computer can achieve a victory even if it doesn’t beat the humans on Jeopardy.
The face of Watson. Credit: IBM
If IBM’s Watson machine defeats people on TV’s Jeopardy this week, does that mean that computers are smarter than humans? Maybe not. But the performance could tell us something far more interesting.
For a person, Jeopardy is a test of how much specific knowledge you know and how fast you can access it. For the computer, storing a lot of stuff and accessing it quickly are a piece of cake. What’s hard is understanding what the question is really asking for. What’s hard is the wordplay and puns in many Jeopardy questions. What’s hard is that you can’t prepare a database in advance that is certain to have the subject matter. Type a Jeopardy question to a search engine and you’re unlikely to get the answer directly.
To me, the most interesting part would be if the Jeopardy computer, like people, made essential use of commonsense knowledge about everyday life. That kind of knowledge is automatic to humans, yet often absent from a computer. Take, for example, the following Jeopardy question:
Category: Rhyme Time
Clue: It’s where Pele stores his ball.
Answer: What’s a soccer locker?
Knowing that Pele is a soccer player is the easy part. The rhyming, no problem. But how would it know that an athlete stores belongings in a locker? And how would it figure out that this particular fact was the essential one? That’s the hard part, and that’s also one of the main topics of my research.
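The asymmetry can be made concrete with a toy sketch (hypothetical, not Watson's actual method): the rhyming step is mechanical once you have a pronunciation table, while picking the right rhyming word requires commonsense knowledge that rarely exists in machine-readable form. All words, pronunciation keys, and candidates below are illustrative assumptions.

```python
# Toy pronunciation table mapping a word to its final stressed-vowel tail,
# a stand-in for a real pronouncing dictionary. Two words rhyme if their
# tails match.
RHYME_KEY = {
    "soccer": "AA1 K ER0",
    "locker": "AA1 K ER0",
    "rocker": "AA1 K ER0",
    "ball": "AO1 L",
    "goal": "OW1 L",
}

def rhymes(a: str, b: str) -> bool:
    """Distinct words rhyme if they share the same stressed-vowel tail."""
    return a != b and a in RHYME_KEY and RHYME_KEY.get(a) == RHYME_KEY.get(b)

# The easy part: filter candidate nouns down to those that rhyme with "soccer".
candidates = ["ball", "locker", "goal", "rocker"]
rhyming = [w for w in candidates if rhymes(w, "soccer")]
print(rhyming)  # -> ['locker', 'rocker']

# The hard part: choosing "locker" over "rocker" requires the commonsense
# fact that athletes store belongings in lockers -- knowledge a person
# applies automatically but a lookup table does not encode.
```

The filter happily returns both "locker" and "rocker"; nothing in the table says which one Pele would actually use, which is exactly the gap commonsense knowledge has to fill.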
What becomes clear from an article in AI Magazine last fall is the astonishing breadth of techniques employed. The project leader, David Ferrucci, cast a wide net, not only within IBM, but also to other researchers of widely varying (and sometimes conflicting) approaches. Watson, above all, is a feat of engineering—not so much in introducing new techniques as in making sense of the range of existing ones and seeing whether they will play nicely with each other. It's healthy for the field to have somebody, once in a while, try to do that.
If successful, it may provide support for Marvin Minsky’s contention that the true secret to intelligence is to have lots of different problem-solving methods. You need good ways of figuring out which one is appropriate for which sort of problem. As Minsky says, “If you understand something in only one way, you don’t understand it at all.”
Let’s be clear: winning a particular contest is not, by itself, a scientific achievement. Science is not a contest. Science advances by learning general problem-solving principles. If it happens that scientists introduce new, general principles that enable them to win a particular contest, then a contest can serve as a public demonstration of their prowess. It’s great PR. But sometimes contests can be won by tricks or specialized techniques that don’t cause scientists to learn anything really new. It all depends on how it’s done. Scientists judge by the principles and techniques, not by the contest results. Even after the Jeopardy event, we won’t really know “who won” until all the details of how it was done are published in the scientific literature.
In the past few years, there’s been a fad for contests, “challenges,” “grand prizes,” etc. in scientific and engineering fields. I have no objection if it’s only good, clean fun between consenting adults. But on the whole, I think this fad has been detrimental to science. Contests encourage competitive attitudes and secrecy between contestants. They focus people on incremental progress in very specialized areas, for one-shot tests. Science needs exactly the opposite—collaboration between researchers, openness, a diversity of approaches and “out of the box” and long-term thinking. It needs the freedom to choose what problem to work on, rather than have it dictated by the arbitrary rules of the contest.
Contests also encourage a gambling mentality, sometimes pathological. Happy stories of the winners are trumpeted, but the vast majority of losers merely have their time and money wasted. A few years ago the Defense Advanced Research Projects Agency (DARPA), which has had a glorious history of funding innovative work in artificial intelligence, became enamored of contests. It browbeat researchers into participating, turning off many creative people who refused to "gamble with the rent money." It set the field back by years. I myself have declined countless invitations to participate in contests.
But as contests go, this one is pretty good. IBM is the only research contestant, so it isn’t pitting researchers against each other. And the IBM team has been pretty open about its methods. Though not every detail was revealed, and the team has asked certain collaborators to sign non-disclosure agreements, I assume that more details will be forthcoming in the scientific literature after the contest.
Artificial Intelligence has suffered from the perception that it is tackling an impossible problem, tilting at windmills—that human intelligence can’t be understood in detail. That attitude sometimes discourages young people from entering the AI field.
My hope would be that a Jeopardy victory now, or victories in future computer-versus-human contests, will make the researchers who win them, like Ferrucci and his team, role models for young researchers—like the astronauts of years past who inspired youngsters to embark on science careers. The point isn't to show that computers are smarter than humans. It's to show that the intelligence that enables a person to play Jeopardy is something that's worth studying; that it can be understood and put to work to help people. That would be the real victory. Go Watson!
Henry Lieberman is a research scientist who works on artificial intelligence at the Media Laboratory at MIT.