
Computer science is of two minds about artificial intelligence (AI). Some computer scientists believe in so-called “Strong” AI, which holds that all human thought is completely algorithmic, that is, it can be broken down into a series of mathematical operations. What logically follows, they contend, is that AI engineers will eventually replicate the human mind and create a genuinely self-conscious robot replete with feelings and emotions. Others embrace “Weak” AI, the notion that human thought can only be simulated in a computational device. If they are right, future robots may exhibit much of the behavior of persons, but none of these robots will ever be a person; their inner life will be as empty as a rock’s.

Past predictions by advocates of Strong and Weak AI have done little to move the debate forward. For example, Herbert Simon, professor of psychology at Carnegie Mellon University, perhaps the first and most vigorous adherent of Strong AI, predicted four decades ago that machines with minds were imminent. “It is not my aim to surprise or shock you,” he said. “But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

On the other side of the equation, Hubert Dreyfus, a philosophy professor at Berkeley, bet the farm two decades ago that symbol-crunching computers would never even approach the problem-solving abilities of human beings, let alone an inner life. In his book, What Computers Can’t Do (HarperCollins 1978), and again in the revised edition, What Computers Still Can’t Do (MIT Press 1992), he claimed that formidable chess-playing computers would remain forever in the realm of fiction, and dared the AI community to prove him wrong.

The victory last spring by IBM’s Deep Blue computer over the world’s greatest human chess player, Garry Kasparov, obliterated Dreyfus’s prediction. But does it also argue for Strong rather than Weak AI? Kasparov himself seems to think so. To the delight of Strong AI supporters, Kasparov declared in Time last March that he “sensed a new kind of intelligence” fighting against him.

Moreover, the well-known philosopher Daniel Dennett of Tufts University would not find such a reaction hyperbolic in light of Deep Blue’s triumph. Ever the arch-defender of Strong AI, Dennett believes that consciousness is at its core algorithmic, and that AI is rapidly reducing consciousness to computation.

But in their exultation, Kasparov, Dennett, and others who believe that Deep Blue lends credence to Strong AI are overlooking one important fact: from a purely logical perspective chess is remarkably easy. Indeed, as has long been known, invincible chess can theoretically be played by a mindless system, as long as it follows an algorithm that traces out the consequences of each possible move until either a mate or draw position is found.
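In outline, that mindless procedure is nothing more than a full-depth game-tree search. Here is a minimal sketch in Python; the position object and its methods (legal_moves, play, is_mate, is_draw) are hypothetical stand-ins for a real chess engine, not Deep Blue's code.

```python
# Exhaustive search: trace every legal move until a mate or draw is reached.
# Returns the game-theoretic value for the side to move:
# +1 = can force a win, 0 = can force at least a draw, -1 = loses with best play.
def best_outcome(position):
    if position.is_mate():     # the side to move has been checkmated
        return -1
    if position.is_draw():     # stalemate, repetition, insufficient material...
        return 0
    # Try every legal move; the opponent's best reply (seen from the
    # opponent's side) determines how good each of our moves really is.
    return max(-best_outcome(position.play(move))
               for move in position.legal_moves())
```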

Of course, while this algorithm is painfully simple (undergraduates in computer science routinely learn it), it is computationally complex. In fact, if we assume an average of about 32 options per play, this yields a thousand options for each full move (a move is a play by one side followed by a play in response). Hence, looking ahead five moves yields a quadrillion (10^15) possibilities. Looking ahead 40 moves, the length of a typical game, would involve 10^120 possibilities. Deep Blue, which examines more than 100 million positions per second, would take nearly 10^112 seconds, or about 10^104 years, to examine every move. By comparison, there have been fewer than 10^18 seconds since the beginning of the universe, and the consensus among computer-chess cognoscenti is that our sun will expire before even tomorrow’s supercomputers can carry out such an exhaustive search.
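Readers who want to verify the arithmetic can do so in a few lines of Python, using only the assumptions stated above (about 32 options per play, a 40-move game, and a machine examining roughly 100 million positions per second); the results are rough orders of magnitude, not exact counts.

```python
# Order-of-magnitude check of the search-space figures quoted above.
options_per_ply   = 32                    # ~32 legal options per play
per_full_move     = options_per_ply ** 2  # ~1,000 per full move (play + reply)
five_moves        = per_full_move ** 5    # ~10^15, a quadrillion
whole_game        = per_full_move ** 40   # ~10^120
positions_per_sec = 100_000_000           # Deep Blue's rough search speed

seconds = whole_game // positions_per_sec   # ~10^112 seconds
years   = seconds // (60 * 60 * 24 * 365)   # ~10^104 years

print(f"five moves ahead: ~10^{len(str(five_moves)) - 1} positions")
print(f"a whole game:     ~10^{len(str(whole_game)) - 1} positions")
print(f"time to search:   ~10^{len(str(years)) - 1} years")
```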

But what if a computer can look very far ahead (powered, say, by the algorithm known as alpha-beta minimax search, Deep Blue’s main strategy), as opposed to all the way? And what if it could combine this processing horsepower with a pinch of knowledge of some basic principles of chess, for example those involving king safety, which, incidentally, were installed in Deep Blue just before its match with Kasparov? The answer, as Deep Blue resoundingly showed, is that a machine so armed can best even the very best human chess player.
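Looking very far ahead, but not all the way, amounts roughly to a depth-limited alpha-beta search that falls back on a heuristic evaluation (material, king safety, and so on) when the depth budget runs out. A minimal Python sketch follows; the position interface and the evaluate() function are hypothetical illustrations, not Deep Blue’s actual implementation.

```python
# Depth-limited alpha-beta search (negamax form).
# evaluate() is a heuristic scoring function for the side to move,
# e.g. material balance plus a king-safety term.
def alpha_beta(position, depth, alpha, beta):
    if depth == 0 or position.is_terminal():
        return evaluate(position)
    for move in position.legal_moves():
        # The opponent's score in the resulting position is the
        # negative of ours, searched one ply shallower.
        score = -alpha_beta(position.play(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta            # this line is already too good; the opponent avoids it
        alpha = max(alpha, score)  # best score guaranteed so far
    return alpha
```

The cutoff when score reaches beta is what lets such a program prune whole subtrees and reach depths an exhaustive search could never approach.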
