
It was the spring of 2000. The scene was a demonstration of an advanced artificial-intelligence project for the U.S. Department of Defense; the participants were a programmer, a screen displaying an elaborate windowed interface and an automated “intelligence,” the software application animating the display. The subject, as the programmer typed on his keyboard, was anthrax.

Instantly the machine responded: “Do you mean Anthrax (the heavy-metal band), anthrax (the bacterium) or anthrax (the disease)?”

“The bacterium,” was the typed answer, followed by the instruction, “Comment on its toxicity to people.”

“I assume you mean people (Homo sapiens),” the system responded, reasoning, as it informed its programmer, that asking about People magazine “would not make sense.”

Through dozens of similar commonsensical exchanges, the system gradually absorbed all that had been published in the standard bioweapons literature about a bacterium then known chiefly as the cause of a livestock ailment. When the programmer’s input was ambiguous, the system requested clarification. Prompted to understand that the bacterium anthrax fit somewhere into the higher ontology of biological threats, it issued queries aimed at filling out its knowledge within that broader framework, assembling long lists of biological agents and gauging their toxicity and the strategies for their use and counteruse. In the process, as its proud creators watched, the system came tantalizingly close to that crossover state in which it knew what it did not know and sought, without being prompted, to fill those gaps on its own.

The point of this exercise was not to teach or learn more about anthrax; the day when the dread bacterium would start showing up in the mail was still 18 months in the future. Instead, it was to demonstrate the capabilities of one of the most promising and ambitious A.I. projects ever conceived, a high-performance knowledge base known as Cyc (pronounced “psych”). Funded jointly by private corporations, individual investors and the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, Cyc represents the culmination of an 18-year effort to instill common sense into a computer program. Over that time its creator, the computer scientist Douglas B. Lenat, and his cadres of programmers have infused Cyc with 1.37 million assertions, including names, abstract concepts, descriptions and root words. They’ve also given Cyc a common-sense inference engine that allows it, for example, to distinguish among roughly 30 definitions of the word “in” (being in politics is different from being in a bus).
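
Cyc’s actual representation language is proprietary and far richer than anything shown here, but the flavor of disambiguation by typed assertions can be sketched in a few lines. In this hypothetical Python toy (none of these names come from Cyc’s real API or ontology), each concept carries an asserted type, and each sense of “in” is licensed only when its arguments have the right types:

```python
# A hypothetical toy, not Cyc's real API: typed assertions plus a one-step
# inference that picks the sense of "in" licensed by its arguments' types.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sense:
    name: str          # human-readable label for the sense
    subject_type: str  # type the subject must have for this sense to apply

# A tiny assertion store: each concept is asserted to belong to one type.
IS_A = {
    "senator": "Person",
    "politics": "Occupation",
    "bus": "Vehicle",
}

# Each sense of "in" is indexed by the type of the preposition's object.
SENSES_OF_IN = {
    "Occupation": Sense("in-Occupation (engaged in a field)", "Person"),
    "Vehicle": Sense("in-Vehicle (physically inside)", "Person"),
}

def disambiguate_in(subject: str, obj: str) -> str:
    """Return the sense of 'in' whose type constraints both arguments satisfy."""
    sense = SENSES_OF_IN.get(IS_A.get(obj, ""))
    if sense is None or IS_A.get(subject) != sense.subject_type:
        return f"'{subject} in {obj}': no applicable sense"
    return f"'{subject} in {obj}' -> {sense.name}"

print(disambiguate_in("senator", "politics"))  # the engaged-in-a-field sense
print(disambiguate_in("senator", "bus"))       # the physically-inside sense
```

In a real system the type hierarchy runs deep (a senator is a person, a person is an animal, an animal is a physical object), and it is constraints of this general kind that let the system in the demonstration judge that People magazine “would not make sense” as the referent of “people.”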

Cyc and its rival knowledge bases are among several projects that have recently restored a sense of intellectual accomplishment to A.I., a field that once inspired dreams of sentient computers like 2001: A Space Odyssey’s HAL 9000 and laid claim to the secret of human intelligence, only to be forced to back off from its ambitions after years of experimental frustrations. Indeed, there is a palpable sense among A.I.’s faithful, themselves survivors of a long, cold research winter, that their science is on the verge of new breakthroughs. “I believe that in the next two years things will be dramatically changing,” says Lenat.

It may be too early to declare that a science with such a long history of fads and fashions is experiencing a new springtime, but a greater number of useful applications are being developed now than at any time in A.I.’s more than 50-year history. These include not only technologies to sort and retrieve the vast quantity of information embodied in libraries and databases, so that the unruly jungle of human knowledge can be tamed, but also improvements in system interfaces that allow humans and computers to communicate faster and more directly with each other through, for instance, natural language, gesture or facial expression. And artificial-intelligence-driven devices are not only venturing into places that might be unsafe for humans (one fleet of experimental robots with advanced A.I.-powered sensors assisted the search for victims in the World Trade Center wreckage last September); they’re also showing up in the most mundane of all environments, the office. Commercial software soon to reach the market boasts “smart” features that employ A.I.-based Bayesian probability models to prioritize e-mails, phone messages and appointments according to a user’s known habits and (presumed) desires.
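
That kind of “smart” prioritization can be illustrated with a naive Bayes classifier, one common form of Bayesian probability model. The sketch below is purely illustrative: the training messages, the choice of words as features and the urgency labels are all invented for this example, not drawn from any actual product.

```python
# A minimal sketch of Bayesian message prioritization: a naive Bayes model of
# whether a message is urgent, trained on a user's past behavior.
# All messages and labels below are invented for illustration.

import math
from collections import Counter

def train(messages: list[tuple[list[str], bool]]):
    """messages: (word list, was_handled_immediately). Returns model params."""
    urgent_words, routine_words = Counter(), Counter()
    n_urgent = sum(1 for _, urgent in messages if urgent)
    for words, urgent in messages:
        (urgent_words if urgent else routine_words).update(words)
    return n_urgent / len(messages), urgent_words, routine_words

def p_urgent(words, prior, urgent_counts, routine_counts):
    """Posterior probability the message is urgent, with add-one smoothing."""
    vocab = len(set(urgent_counts) | set(routine_counts))
    log_u, log_r = math.log(prior), math.log(1 - prior)
    for w in words:
        log_u += math.log((urgent_counts[w] + 1) / (sum(urgent_counts.values()) + vocab))
        log_r += math.log((routine_counts[w] + 1) / (sum(routine_counts.values()) + vocab))
    return 1 / (1 + math.exp(log_r - log_u))

history = [
    (["server", "down", "call", "now"], True),
    (["meeting", "moved", "to", "3pm"], True),
    (["newsletter", "weekly", "digest"], False),
    (["sale", "ends", "friday"], False),
]
prior, u, r = train(history)

# Rank the inbox: highest posterior urgency first.
inbox = [["weekly", "sale"], ["server", "alert"]]
for msg in sorted(inbox, key=lambda m: -p_urgent(m, prior, u, r)):
    print(msg, round(p_urgent(msg, prior, u, r), 2))
```

Sorting the inbox by the resulting posterior is all that “prioritization” means here; the commercial systems the article describes presumably fold many more signals into the same basic probabilistic machinery.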

These and other projects are the talk of artificial-intelligence labs around the United States. What one does not hear much about anymore, however, is the traditional goal of understanding and replicating human intelligence.

“Absolutely none of my work is based on a desire to understand how human cognition works,” says Lenat. “I don’t understand, and I don’t care to understand. It doesn’t matter to me how people think; the important thing is what we know, not how do we know it.”

One might call this trend the “new” A.I., or perhaps the “new new new” A.I., for in the last half-century the field has redefined itself too many times to count. The focus of artificial intelligence today is no longer on psychology but on goals shared by the rest of computer science: the development of systems to augment human abilities. “I always thought the field would be healthier if it could get rid of this thing about consciousness,” says Philip E. Agre, an artificial-intelligence researcher at the University of California, Los Angeles. “It’s what gets its proponents to overpromise.” It is the scaling back of its promises, oddly enough, that has finally enabled A.I. to start scoring significant successes.
