
Brilliance Proves Brittle

To a great extent, artificial-intelligence researchers had no choice but to exchange their dreams of understanding intelligence for a more utilitarian focus on real-world applications. “People became frustrated because so little progress was being made on the scientific questions,” says David L. Waltz, an artificial-intelligence researcher who is president of the NEC Research Institute in Princeton, NJ. “Also, people started expecting to see something useful come out of A.I.” And “useful” no longer meant “conscious.”

For example, the Turing test (a traditional trapping of A.I. based on British mathematician Alan Turing’s argument that to be judged truly intelligent a machine must fool a neutral observer into believing it is human) came to be seen by many researchers as “a red herring,” says Lenat. There’s no reason a smart machine must mimic a human being by sounding like a person, he argues, any more than an airplane needs to mimic a bird by flapping its wings.

Of course, the idea that artificial intelligence may be on the verge of fulfilling its potential is something of a chestnut: A.I.’s 50-year history is nothing if not a chronicle of lavish promises and dashed expectations. In 1957, when Herbert Simon of Carnegie Tech (now Carnegie Mellon University) and colleague Allen Newell unveiled Logic Theorist (a program that automatically derived logical theorems, such as those in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, from given axioms), Simon asserted extravagantly that “there are now in the world machines that think, that learn and that create.” Within 10 years, he continued, a computer would beat a grandmaster at chess, prove an “important new mathematical theorem” and write music of “considerable aesthetic value.”

“This,” as the robotics pioneer Hans Moravec would write in 1988, “was an understandable miscalculation.” By the mid-1960s, students of such artificial-intelligence patriarchs as John McCarthy of Stanford University and Marvin Minsky of MIT were producing programs that played chess and checkers and managed rudimentary math; but they always fell well short of grandmaster caliber. Expectations for the field continued to diminish, so much so that the period from the mid-1970s to the mid-1980s became known as the “A.I. winter.” The best expert systems, which tried to replicate the decision-making of human experts in narrow fields, could outperform humans at certain tasks, like the solving of simple algebraic problems, or the diagnosis of diseases like meningitis (where the number of possible causes is small). But the moment they moved outside their regions of expertise they tended to go seriously, even dangerously, wrong. A medical program adept at diagnosing human infectious diseases, for example, might conclude that a tree losing its leaves had leprosy.
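In miniature, that brittleness looks something like the sketch below: a handful of hand-written if-then rules, and no model of the world beyond them. The rules and symptoms are invented for illustration and are not drawn from any real diagnostic system.

```python
# A toy rule-based "expert system": a few if-then rules and nothing else.
# Rules and symptoms are invented for illustration only.

RULES = [
    ({"fever", "stiff neck", "headache"}, "meningitis"),
    ({"skin lesions", "numbness", "tissue loss"}, "leprosy"),
    ({"fever", "cough"}, "influenza"),
]

def diagnose(symptoms: set) -> str:
    # Pick the rule whose conditions best overlap the reported symptoms.
    best_rule = max(RULES, key=lambda rule: len(rule[0] & symptoms))
    return best_rule[1]

# Inside its narrow domain the system looks competent...
print(diagnose({"fever", "stiff neck", "headache"}))  # meningitis

# ...but it has no notion that its patient might not be human. A tree
# shedding leaves shares one symptom with one rule, so the system
# confidently reports leprosy.
print(diagnose({"tissue loss", "yellowing leaves"}))  # leprosy
```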

Even in solving the classic problems there were disappointments. The IBM system Deep Blue finally won Simon’s 40-year-old wager by defeating chess grandmaster Garry Kasparov in 1997, but not in the way Simon had envisioned. “The earliest chess programs sought to duplicate the strategies of grandmasters through pattern recognition, but it turned out that the successful programs relied more on brute force,” says David G. Stork, chief scientist at Ricoh Innovations, a unit of the Japanese electronics firm, and the editor of HAL’s Legacy, a 1996 collection of essays assessing where the field stood in relation to that paradigmatic, if fictional, intelligent machine. Although Deep Blue did rely for much of its power on improved algorithms that replicated grandmaster-style pattern recognition, Stork argues that the system “was evaluating 200 million board positions per second, and that’s a very un-humanlike method.”

Many A.I. researchers today argue that any effort to replace humans with computers is doomed. For one thing, it is a much harder task than many pioneers anticipated, and for another there is scarcely any market for systems that make humans obsolete. “For revenue-generating applications today, replacing the human is not the goal,” says Patrick H. Winston, an MIT computer scientist and cofounder of Ascent Technology, a private company based in Cambridge, MA, that develops artificial-intelligence applications. “We don’t try to replace human intelligence, but complement it.”

Commonsense Solutions

“What we want to do is work toward things like cures for human diseases, immortality, the end of war,” Doug Lenat is saying. “These problems are too huge for us to tackle today. The only way is to get smarter as a species: through evolution or genetic engineering, or through A.I.”

We’re in a conference room at Cycorp, in a nondescript brick building nestled within an Austin, TX, industrial park. Here, teams of programmers, philosophers and other learned intellectuals are painstakingly inputting concepts and assertions into Cyc in a Socratic process similar to that of the anthrax dialogue above. Surprisingly, despite the conversational nature of the interaction, the staff seems to avoid the layman’s tendency to anthropomorphize the system.

“We don’t personalize Cyc,” says Charles Klein, a philosophy PhD from the University of Virginia who is one of Cycorp’s “ontologists.” “We’re pleased to see it computing commonsense outputs from abstract inputs, but we feel admiration toward it rather than warmth.”

That’s a mindset they clearly absorb from Lenat, a burly man of 51 whose reputation derives from several programming breakthroughs in the field of heuristics, which concerns rules of thumb for problem solving: procedures “for gathering evidence, making hypotheses and judging the interestingness” of a result, as Lenat explained later. In 1976 he earned his Stanford doctorate with Automated Mathematician, or AM, a program designed to “discover” new mathematical theorems by building on an initial store of 78 basic concepts from set theory and 243 of Lenat’s heuristic rules. AM ranged throughout the far reaches of mathematics before coming to a sudden halt, as though afflicted with intellectual paralysis. As it happened, AM had been equipped largely with heuristics from finite-set theory; as its discoveries edged into number theory, for which it had no heuristics, it eventually ran out of discoveries “interesting” enough to pursue, as ranked by its internal scoring system.
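In outline, AM’s control structure was an agenda of candidate concepts ranked by a heuristic “interestingness” score; when nothing on the agenda scored high enough, discovery simply stopped, which is roughly what happened once it strayed beyond its finite-set-theory heuristics. The sketch below is a loose, hypothetical rendering of that loop, not AM’s actual code; the concepts, heuristics, and scores are invented.

```python
import heapq

# A loose sketch of an AM-style discovery loop: candidates sit on an
# agenda ranked by an "interestingness" score computed from heuristic
# rules, and the loop halts when nothing scores above a threshold.
# Everything here is invented for illustration.

def interestingness(concept, heuristics):
    # Each heuristic votes on how promising a concept looks.
    return sum(h(concept) for h in heuristics)

def discover(seeds, heuristics, expand, threshold=1, steps=8):
    # heapq is a min-heap, so store negated scores to pop the best first.
    agenda = [(-interestingness(c, heuristics), c) for c in seeds]
    heapq.heapify(agenda)
    found = []
    for _ in range(steps):
        if not agenda:
            break
        neg_score, concept = heapq.heappop(agenda)
        if -neg_score < threshold:
            break  # nothing left that the heuristics find interesting
        found.append(concept)
        for new_concept in expand(concept):  # derive new candidates
            heapq.heappush(
                agenda, (-interestingness(new_concept, heuristics), new_concept))
    return found

# Toy run: "concepts" are integers, the heuristics favor small even
# numbers, and expansion doubles or increments a concept.
heuristics = [lambda n: 2 if n % 2 == 0 else 0,
              lambda n: 1 if n < 50 else 0]
print(discover([2, 3], heuristics, expand=lambda n: [n * 2, n + 1]))
```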

AM was followed by Eurisko (the present tense of the Greek eureka, and root of the word heuristic), which improved on Automated Mathematician by adding the ability to discover not only new concepts but new heuristics. At the 1981 Traveller Trillion Credit Squadron tournament, a sort of intellectuals’ war game, Eurisko defeated all comers by outmaneuvering its rivals’ lumbering battleships with a fleet of agile little spacecraft no one else had envisioned. Within two years the organizers were threatening to cancel the tournament if Lenat entered again. Taking the cue and content with his rank of intergalactic admiral, he began searching for a new challenge.

The task he chose was nothing less than to end A.I.’s long winter by overcoming the limitations of expert systems. The reason a trained geologist is easier for a computer system to replicate than a six-year-old child is no secret: the computer lacks the child’s common sense, that collection of intuitive facts about the world that are hard to reduce to logical principles. In other words, it was one thing to infuse a computer with data about global oil production or meningitis, but quite another to teach it all the millions of concepts that humans absorb through daily life: for example, that red is not pink, or that rain will moisten a person’s skin but not his heart. “It was essentially like assembling an encyclopedia, so most people spent their time talking about it, rather than doing it,” Lenat says.
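At the level of data, teaching a machine common sense means writing such facts down explicitly and giving it rules for chaining them together. The fragment below is a hypothetical illustration in that spirit, built from the article’s own examples; it is not Cyc’s representation language.

```python
# Commonsense knowledge as explicit assertions plus a trivial rule.
# The predicates and facts are invented illustrations, not CycL.

FACTS = {
    ("distinct", "red", "pink"),
    ("part_of", "skin", "body_exterior"),
    ("part_of", "heart", "body_interior"),
    ("moistens", "rain", "body_exterior"),
}

def rain_wets(part: str) -> bool:
    # Rain moistens whatever is part of the body's exterior;
    # interior organs such as the heart stay dry.
    return any(("part_of", part, place) in FACTS
               and ("moistens", "rain", place) in FACTS
               for place in ("body_exterior", "body_interior"))

print(rain_wets("skin"))   # True
print(rain_wets("heart"))  # False
```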

And so Cyc was born. Lenat abandoned a tenure-track position at Stanford to launch Cyc under the aegis of the Microelectronics and Computer Technology Corporation, an Austin-based research consortium. Now, 18 years later, Cyc ranks as by far the most tenacious artificial-intelligence project in history, and one far enough advanced, finally, to have generated several marketable applications. Among these is CycSecure, a program to be released this year that combines a huge database of computer network vulnerabilities with assumptions about hacker activities to identify security flaws in a customer’s network before outsiders can exploit them. Lenat expects Cyc’s commonsense knowledge base eventually to underpin a wide range of search engines and data-mining tools, providing the sort of filter that humans employ instinctively to discard useless or contradictory information. If you lived in New York and queried Cyc about health clubs, for example, it would use what it knows about you to find information about clubs near your home or office, screening out those in Boston or Bangor.
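The health-club example amounts to a commonsense filter over search results: the system knows which cities the user frequents and which city each result sits in, and discards the mismatches a person would never consider. The sketch below is purely hypothetical; the data and the filtering function are not part of any Cyc product.

```python
# A hypothetical commonsense filter over search results: screen out hits
# in cities the user has no connection to. Names and data are invented.

USER_CITIES = {"New York"}  # where the user lives and works, say

SEARCH_HITS = [
    {"name": "Midtown Fitness",  "city": "New York"},
    {"name": "Back Bay Gym",     "city": "Boston"},
    {"name": "Penobscot Health", "city": "Bangor"},
]

def relevant(hits, user_cities):
    # Keep only results in a city the user actually frequents.
    return [h for h in hits if h["city"] in user_cities]

print([h["name"] for h in relevant(SEARCH_HITS, USER_CITIES)])
# ['Midtown Fitness']
```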

Numerous other promising applications of the new A.I., such as advanced robotics and the “Semantic Web,” a sophisticated way of tagging information on Web pages so that it can be understood by computers as well as human users (see “A Smarter Web,” TR November 2001), share Lenat’s real-world focus and add to the field’s fresh momentum. Searching the World Trade Center wreckage, for example, provided a telling test for the work of the Center for Robot-Assisted Search and Rescue in Littleton, CO, a nonprofit organization founded by West Point graduate John Blitch, who believes that small, agile robots can greatly aid search-and-rescue missions where conditions remain too perilous for exclusively human operations. Having assembled for a DARPA project a herd of about a dozen robots, with lights, video cameras and tanklike treads mounted on bodies less than 30 centimeters wide, he brought them to New York just after September 11. Over the next week Blitch deployed the robots on five forays into the wreckage, during which their ability to combine data arriving from multiple sensors helped find the bodies of five buried victims.
