
Chess is Too Easy

Forget about Big Blue vs. Kasparov: the best test of artificial intelligence is to ask a computer to write a story. Meet Brutus.1, a software agent that creates short tales of betrayal, self-deception, and evil worthy of a human creator.

Computer science is of two minds about artificial intelligence (AI). Some computer scientists believe in so-called “Strong” AI, which holds that all human thought is completely algorithmic, that is, it can be broken down into a series of mathematical operations. What logically follows, they contend, is that AI engineers will eventually replicate the human mind and create a genuinely self-conscious robot replete with feelings and emotions. Others embrace “Weak” AI, the notion that human thought can only be simulated in a computational device. If they are right, future robots may exhibit much of the behavior of persons, but none of these robots will ever be a person; their inner life will be as empty as a rock’s.

Past predictions by advocates of Strong and Weak AI have done little to move the debate forward. For example, Herbert Simon, professor of psychology at Carnegie Mellon University, perhaps the first and most vigorous adherent of Strong AI, predicted four decades ago that machines with minds were imminent. “It is not my aim to surprise or shock you,” he said. “But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

On the other side of the equation, Hubert Dreyfus, a philosophy professor at Berkeley, bet the farm two decades ago that symbol-crunching computers would never even approach the problem-solving abilities of human beings, let alone an inner life. In his book, What Computers Can’t Do (HarperCollins 1978), and again in the revised edition, What Computers Still Can’t Do (MIT Press 1992), he claimed that formidable chess-playing computers would remain forever in the realm of fiction, and dared the AI community to prove him wrong.

The victory last spring by IBM’s Deep Blue computer over the world’s greatest human chess player, Garry Kasparov, obliterated Dreyfus’s prediction. But does it also argue for Strong rather than Weak AI? Kasparov himself seems to think so. To the delight of Strong AI supporters, Kasparov declared in Time last March that he “sensed a new kind of intelligence” fighting against him.

Moreover, the well-known philosopher Daniel Dennett of Tufts University would not find such a reaction hyperbolic in light of Deep Blue’s triumph. Ever the arch-defender of Strong AI, Dennett believes that consciousness is at its core algorithmic, and that AI is rapidly reducing consciousness to computation.

But in their exultation, Kasparov, Dennett, and others who believe that Deep Blue lends credence to Strong AI are overlooking one important fact: from a purely logical perspective, chess is remarkably easy. Indeed, as has long been known, invincible chess can theoretically be played by a mindless system, as long as it follows an algorithm that traces out the consequences of each possible move until either a mate or a draw position is found.
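
In outline, that mindless procedure is almost embarrassingly small. Here is a minimal sketch in Python, written against a hypothetical game interface (legal_moves, apply, is_mate, and is_draw are illustrative names, not any real engine’s API); no actual program runs this, precisely because the tree it walks is astronomically large:

```python
# A minimal sketch of exhaustive game-tree search: label a position
# WIN, DRAW, or LOSS for the side to move by tracing every line of
# play to its end. The position interface is hypothetical.

def solve(position):
    """Return 'WIN', 'DRAW', or 'LOSS' for the player to move."""
    if position.is_mate():       # the side to move is checkmated
        return 'LOSS'
    if position.is_draw():       # stalemate, repetition, and so on
        return 'DRAW'
    # Each child's result is from the opponent's point of view.
    outcomes = [solve(position.apply(m)) for m in position.legal_moves()]
    if 'LOSS' in outcomes:       # some move leaves the opponent lost
        return 'WIN'
    if 'DRAW' in outcomes:       # failing that, settle for a draw
        return 'DRAW'
    return 'LOSS'                # every move leads to an opponent win
```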

Of course, while this algorithm is painfully simple (undergraduates in computer science routinely learn it), it is computationally complex. In fact, if we assume an average of about 32 options per play, this yields a thousand options for each full move (a move is a play by one side followed by a play in response). Hence, looking ahead five moves yields a quadrillion (10^15) possibilities. Looking ahead 40 moves, the length of a typical game, would involve 10^120 possibilities. Deep Blue, which examines more than 100 million positions per second, would take nearly 10^112 seconds, or about 10^104 years, to examine every move. By comparison, there have been fewer than 10^18 seconds since the beginning of the universe, and the consensus among computer-chess cognoscenti is that our sun will expire before even tomorrow’s supercomputers can carry out such an exhaustive search.
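
These figures are easy to verify. A quick back-of-the-envelope check in Python, using the article’s own round numbers (32 options per play, rounded to a thousand per full move):

```python
# Back-of-the-envelope check of the estimates above, using the
# article's round figures: ~32 options per play, so ~1,000 options
# per full move (one play by each side).

def mag(n: int) -> int:
    """Order of magnitude of a positive integer: 10**mag(n) <= n."""
    return len(str(n)) - 1

PER_MOVE = 1_000                     # ~32 * 32, rounded down

five_moves = PER_MOVE ** 5           # positions five moves deep
forty_moves = PER_MOVE ** 40         # positions over a 40-move game

POSITIONS_PER_SECOND = 100_000_000   # Deep Blue's quoted speed
seconds = forty_moves // POSITIONS_PER_SECOND
years = seconds // (60 * 60 * 24 * 365)

print(f"five moves : about 10^{mag(five_moves)} positions")    # 10^15
print(f"forty moves: about 10^{mag(forty_moves)} positions")   # 10^120
print(f"exhaustive : 10^{mag(seconds)} s, ~10^{mag(years)} years")
# prints 10^112 seconds, ~10^104 years, matching the text
```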

But what if a computer can look very far ahead (powered, say, by the algorithm known as alpha-beta minimax search, Deep Blue’s main strategy), as opposed to all the way? And what if it could combine this processing horsepower with a pinch of knowledge of some basic principles of chess, for example, those involving king safety, which, incidentally, were installed in Deep Blue just before its match with Kasparov? The answer, as Deep Blue resoundingly showed, is that a machine so armed can best even the very best human chess player.
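
For the curious, here is what that general technique looks like in miniature: a depth-limited alpha-beta minimax sketch in Python, again over a hypothetical position interface (legal_moves, apply, is_terminal, evaluate). The heuristic evaluate function is where chess knowledge such as king safety would live. This is an illustration of the standard algorithm, not Deep Blue’s actual code:

```python
import math

# Depth-limited alpha-beta minimax. Alpha and beta bound the score
# each side can already guarantee, letting the search skip ("prune")
# branches that cannot change the outcome. The position interface is
# hypothetical; evaluate() stands in for chess knowledge.

def alphabeta(pos, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    if depth == 0 or pos.is_terminal():
        return pos.evaluate()        # heuristic replaces full lookahead
    if maximizing:
        value = -math.inf
        for move in pos.legal_moves():
            value = max(value, alphabeta(pos.apply(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # opponent would never allow this line
                break                # prune the remaining moves
        return value
    else:
        value = math.inf
        for move in pos.legal_moves():
            value = min(value, alphabeta(pos.apply(move), depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```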

Creativity Ex Machina?

But the kind of thinking that goes into chess, stacked against the full power and range of the human mind, is far from the whole story. The nineteenth-century mathematician Ada Byron, known as Lady Lovelace, was perhaps the first to suggest that creativity is the essential difference between mind and machine: the defining essence that goes beyond what even the most sophisticated algorithm can accomplish. Lovelace argued that computing machines, such as that contrived by her contemporary, Charles Babbage, can’t create anything, for creation requires, minimally, originating something. Computers can originate nothing; they can merely do that which we order them, via programs, to do.

A century later, Alan Turing, the grandfather of both AI and computer science, responded to Lady Lovelace’s objection by inventing the now-famous Turing Test, which a computer passes if it can fool a human into thinking that it is a human. Unfortunately, while chess is too easy, the Turing Test is still far too difficult for today’s computers. For example, deception, which a potent computer player in the Turing Test should surely be capable of, is an incredibly complex concept. To urge a person to mistakenly accept a false notion requires that the computer understand not only that the idea is false, but also the myriad subtle connections that exist between the idea and that person’s beliefs, attitudes, and countless other ideas.

Though the Turing Test is currently out of the reach of the smartest of our machines, there may be a simpler way of deciding between the strong and weak forms of AI, one that highlights creativity, which may well be the real issue in the Strong vs. Weak clash. The test I propose is simple: Can a machine tell a story?

Although the virtue of this test might not seem obvious at first glance, there are some interesting reasons for thinking that it’s a good index of “mindedness.” For example, the dominant test of creativity in use in psychology, the Torrance Tests of Creative Thinking, asks subjects to produce narratives.

Nor is the presence of narrative in these tests arbitrary; many cognitive scientists plausibly argue that narrative is at the very heart of human cognition. Roger Schank, a well-known cognitive scientist at Northwestern University, boldly asserts that “virtually all human knowledge” is based on stories. His fundamental claim is that when you remember the past, you remember it as a set of stories, and when you communicate information you also deliver it in the form of stories.

But perhaps most significant for this discussion, the story game would strike right to the heart of the distinction between Strong and Weak AI. Humans find it impossible to produce literature without adopting the points of view of characters, that is, without feeling what it’s like to be these characters; hence human authors generate stories by capitalizing on the fact that they are conscious in the fullest sense of the word-which is to be conscious simultaneously of oneself, of another person, and of the relation (or lack thereof) between the two persons.

Deep Story

It looks as though a “story game” would therefore be a better test of whether computers can think than the chess and checkers games that currently predominate at AI conferences. But what would the story game look like? In the story game, we would give both the computer and a master human storyteller a relatively simple sentence, say: “Gregor woke to find that his abdomen was as hard as a shell, and that where his right arm had been, there now wiggled a tentacle.” Both players must then fashion a story designed to be truly interesting; the more literary in nature (in terms of rich characterization, lack of predictability, and interesting language), the better. We could then have a human judge the stories so that, as in the Turing Test, when such a judge cannot tell which response is coming from the mechanical muse and which is from the human, we say that the machine has won the game.

How will future machines fare in such a game? I think the length of the story is a key variable. A story game pitting mind against machine in which the length and complexity of the narrative is open-ended would certainly seal the machine’s defeat for centuries to come. Though advocates of Strong AI would hold that a machine could eventually prevail in a contest to see whether mind or machine could produce a better novel, even they would agree that trying to build such a machine today is unthinkable. The task would be so hard that no one would even know where to begin.

In short, though the Turing Test is, as noted, too hard to provide the format for mind-machine competition at present, many people think they can imagine a near future in which a machine will hold its own in this test. When it comes to the unrestricted story game, however, such a future simply can’t be conceived. We can of course imagine a future in which a computer prints out a novel, but we can’t imagine the algorithms that would be in operation behind the scenes.

So, just to give Strong AI supporters a fighting chance, I would restrict the competition to the shortest of short stories, say, less than 500 words in length. This version of the game should prove a tempting challenge to Strong AI engineers. And, like the full version, it demands creativity from those (mind or machine) who would play it.

How then might future machines stack up against human authors when each is given that one sentence as the jumping-off point toward a short short story?

I am perhaps not badly positioned to make predictions. With help from the Luce Foundation, Apple Computer, IBM, Rensselaer Polytechnic Institute (RPI), and the National Science Foundation, I have spent the past seven years (and about three-quarters of a million dollars) working with a number of researchers (most prominently Marie Meteer, a scientist at Bolt, Beranek and Newman; David Porush, a professor at RPI; and David Ferrucci, a senior scientist at IBM’s T.J. Watson Research Center) to build a formidable artificial author of short short stories.

Part of what drives me and other researchers in the quest to create such synthetic Prousts, Joyces, and Kafkas is a belief that genuinely intelligent stand-alone entertainment systems of the future will require, among other things, AI systems that know how to create and direct stories. In the virtual story worlds of the future, replete with artificial characters, things will unfold too quickly in real time for a human to be guiding the process. The gaming industry currently walks a fine line between rigidly prescripting a game and letting things happen willy-nilly when humans make choices. What is desperately needed is an artificial intelligence that is able to coax events into a continuous narrative thread while at the same time allowing human players to play in a seemingly infinite space of plot trajectories.

The most recent result of my toil in this regard (in collaboration with Ferrucci and Adam Lally, a software engineer with Legal Knowledge Systems of Troy, N.Y.) is an artificial agent called Brutus.1, so named because the literary concept it specializes in is betrayal. Unfortunately, Brutus.1 is not capable of playing the short short story game. It has knowledge about the ontology of academia (professors, dissertations, students, classes, and so forth), but it would be paralyzed by a question outside its knowledge base. For instance, it doesn’t know anything about insect anatomy. Therefore, the sentence involving Gregor would draw a blank.

Nonetheless, Brutus.1 is capable of writing short short stories, provided the stories are based on the notion of betrayal (as well as self-deception, evil, and to some extent voyeurism). These are not unpromising literary conceits (see the sidebar “Betrayal,” by Brutus.1; consider also Richard III, Macbeth, and Othello).

Such near-belletristic feats are possible for Brutus.1 only because Ferrucci and I were able to devise a formal mathematical definition of betrayal and endow Brutus.1 with the concept (see sidebar, “The Mathematization of Betrayal”). But to adapt Brutus.1 to play well in a short short story game, it would certainly need to understand not only betrayal but other great literary themes as well: unrequited love, revenge, jealousy, patricide, and so on.
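
The sidebar with the actual definition is not reproduced here, but the general flavor of such a mathematization can be sketched: betrayal reduces to a conjunction of checkable conditions over agents’ wants, beliefs, and deeds. The toy Python below is illustrative only; its particular clauses and character names are stand-ins, not the formal definition Brutus.1 actually uses:

```python
from dataclasses import dataclass, field

# An illustrative sketch only: these clauses are NOT the published
# definition behind Brutus.1, just the flavor of reducing "betrayal"
# to checkable conditions over wants, beliefs, and deeds.

@dataclass
class Agent:
    name: str
    wants: set = field(default_factory=set)         # desired states
    believes: set = field(default_factory=set)      # propositions held true
    brings_about: set = field(default_factory=set)  # states actually realized

def betrays(betrayer: Agent, victim: Agent, p: str) -> bool:
    """True if, on this toy definition, betrayer betrays victim over p."""
    return (
        p in victim.wants                                    # victim wants p
        and f"{betrayer.name} will bring about {p}" in victim.believes
        and f"{victim.name} wants {p}" in betrayer.believes  # betrayer knows it
        and p not in betrayer.brings_about                   # ...yet withholds p
    )

# Toy instance (hypothetical names echoing Brutus.1's academic setting):
hart = Agent("Hart", believes={"Striver wants degree"})
striver = Agent("Striver", wants={"degree"},
                believes={"Hart will bring about degree"})
print(betrays(hart, striver, "degree"))  # True
```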

Forever Unconscious

I have three more years to go on my ten-year project to build a formidable silicon Hemingway. At this point, however, even though Brutus.1 is impressive and even though our intention is to craft descendants of Brutus.1 that can understand a full complement of literary concepts and more, it seems pretty clear that computers will never best human storytellers in even a short short story competition.

It is clear from our work that to tell a truly compelling story, a machine would need to understand the “inner lives” of its characters. And to do that, it would need not only to think mechanically in the sense of swift calculation (the forte of supercomputers like Deep Blue), but also to think experientially in the sense of having subjective or phenomenal awareness. For example, a person can think experientially about a trip to Europe as a kid, remember what it was like to be in Paris on a sunny day with an older brother, smash a drive down a fairway, feel a lover’s touch, ski on the edge, or need a good night’s sleep. But any such example, I claim, will demand capabilities no machine will ever have.

Renowned human storytellers understand this concept. For example, playwright Henrik Ibsen said: “I have to have the character in mind through and through, I must penetrate into the last wrinkle of his soul.” Such a modus operandi is forever closed off to a machine.

Supporters of Strong AI, should they hope to prevail in the short short story game, must therefore strive to build precisely what distinguishes Strong from Weak AI: a conscious machine. Yet in striving for such a machine, Strong AI researchers are waiting for a culmination that will forever be arriving, never present.

Believers in Weak AI, like myself, will seek to engineer systems that, lacking Ibsen’s capacity to look out through the eyes of another, will create richly drawn characters. But though I expect to make headway, I expect that, unlike chess playing, first-rate storytelling, even at the humble length of short short stories, will always be the sole province of human masters.

Still, I’ll continue with the last three years of my project, largely because I expect to have a lot of fun, as well as to be able to say with some authority that machines can’t be creative and conscious (given that I’m using state-of-the-art techniques), and to produce working systems that will have considerable scientific and economic value.

Kasparov no doubt will return soon for another round of chess with Deep Blue or its descendants, and he may well win. In fact, I suspect it will be another 10 years before machine chess players defeat grand masters in tournament after tournament. Soon enough, however, Kasparov and those who take his throne will invariably lose.

But such is not the case when we consider the chances of those who would seek to humble not only great chess players, but great authors. I don’t believe that John Updike or his successors will ever find themselves in the thick of a storytelling game, sweating under lights as bright and hot as those that shone down on Garry Kasparov.
