MIT Technology Review

For decades we’ve thought of computers as centralized intelligences; our model was the human brain. But lately, a growing number of researchers have been talking about a shift in the core metaphor for computing, from the notion of “artificial intelligence” to something that might be called “artificial biology.” Forget the dream of creating bug-free software. Just as bugs regularly afflict any biological system (I have a cold as I write this), they should also be expected in software. So software needs to be designed to survive its bugs. It should have the biological properties of redundancy and regeneration: parts should be able to “die off” without affecting the whole.

This shift is not only transforming the research of leading academic groups at places like Stanford University, the University of California, Berkeley, and the University of Virginia but also influencing the development of commercial products: IBM and Oracle have both introduced products such as Web servers and database programs that they describe as “self-healing,” a term typically applied to biological organisms. In 2001 Paul Horn, head of IBM Research, wrote a widely read white paper describing the notion of “autonomic computing” and calling on the computing research community to begin thinking of ways to design systems that could anticipate problems and heal themselves.

The point of autonomic computing, and by extension of self-healing software, is to give networks and computer systems the ability to regulate and repair things that now require human thought and intervention. For example, servers need to be rebooted now and then to keep them working. That can be necessary because of “memory leaks” created by software bugs, explains Steve White, who heads IBM’s autonomic-computing research at the T. J. Watson Research Center in Hawthorne, NY. “A program will take up more and more memory to run,” he says, “so eventually it breaks. Start over, and it will work.” At the moment, users need to recognize problems themselves and physically reboot their systems. But with autonomic computing, “You can make it possible to reboot a system easily and automatically,” says White.
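The reboot-before-it-breaks idea White describes can be sketched in a few lines. This is a toy model, not IBM’s implementation: the `LeakyWorker`, its memory numbers, and the 100 MB limit are all hypothetical, chosen only to show a watchdog restarting a leaking process without human intervention.

```python
class LeakyWorker:
    """Simulates a program whose memory footprint grows with every request (a leak)."""
    def __init__(self):
        self.memory_mb = 50          # baseline footprint after a fresh start

    def handle_request(self):
        self.memory_mb += 10         # each request leaks a little memory

    def restart(self):
        self.memory_mb = 50         # rebooting reclaims the leaked memory


def autonomic_loop(worker, requests, limit_mb=100):
    """Serve requests, restarting the worker whenever it nears the memory limit."""
    restarts = 0
    for _ in range(requests):
        if worker.memory_mb >= limit_mb:
            worker.restart()         # the system reboots itself; no human involved
            restarts += 1
        worker.handle_request()
    return restarts


worker = LeakyWorker()
restarts = autonomic_loop(worker, requests=20)
print(restarts)  # the watchdog stepped in 3 times over 20 requests
```

The point of the sketch is the division of labor: the worker keeps its bug, and a separate supervising loop absorbs the consequences, which is exactly the move from human-triggered reboots to automatic ones.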

In the future, the biological metaphor may even affect the way we program to begin with. Software could eventually “heal” some of its own bugs, supplementing catch-all fixes, like automatic rebooting, that don’t get at the core problem. But that will require an entirely new approach to programming.

“We need to move towards a programming philosophy where we look at the global system and understand what properties it needs to have, rather than thinking about programming as a sequence of instructions,” says David Evans, who is pursuing biologically inspired programming methods as a computer science professor at the University of Virginia. “It’s really a different way of approaching problems.”

Evans notes that software today is written linearly, with each step depending on the previous one, more or less guaranteeing that bugs will wreak havoc: in biological terms, organisms with no redundancy don’t survive long if their one means of accomplishing a task fails. More robust software would include many independent components, so that the whole continues to work even if several of them fail.
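The redundancy Evans describes can be illustrated with a small sketch: several independently written components all compute the same result, and the system survives when one of them fails. The three “implementations” below, including the deliberately broken one, are invented for illustration; they are not from Evans’s research.

```python
def sum_by_loop(values):
    """One independent implementation: an explicit accumulation loop."""
    total = 0
    for v in values:
        total += v
    return total

def sum_by_broken_component(values):
    """A buggy component, standing in for a part that has 'died off'."""
    raise RuntimeError("simulated bug")

def sum_by_recursion(values):
    """Another independent implementation, written a different way."""
    return 0 if not values else values[0] + sum_by_recursion(values[1:])

def robust_sum(values):
    """Try each redundant component in turn; tolerate any that fail."""
    components = (sum_by_broken_component, sum_by_loop, sum_by_recursion)
    for component in components:
        try:
            return component(values)
        except Exception:
            continue              # let the failed part die off and move on
    raise RuntimeError("all redundant components failed")

print(robust_sum([1, 2, 3, 4]))  # 10, despite the first component failing
```

This is the opposite of the linear style Evans criticizes: no single step is load-bearing, so a bug in one path degrades the system instead of destroying it.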

Even today, programs such as Microsoft’s Windows XP operating system are beginning to exhibit the biologically inspired ability to detect problems and fix them, albeit in a simple way, by storing models of their original configurations. The programs can then be restored to their original states if bugs corrupt them later. And good compilers (the programs that translate human-readable languages into machine-readable code) will identify potential errors and return error messages along with suggested fixes. But these methods still require programmers to predict problems and write code that guards against them to begin with, and we predict flaws in our software about as well as Dr. Frankenstein predicted the flaws of his artificial man.
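The stored-configuration trick is simple enough to sketch. The following is a minimal model in the spirit of what the article describes, not Windows XP’s actual mechanism: keep a snapshot of a known-good configuration plus its checksum, and roll the live copy back whenever a check shows it has been corrupted. The configuration contents here are hypothetical.

```python
import copy
import hashlib
import json

def checksum(config):
    """A stable fingerprint of a configuration dictionary."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

class SelfRepairingConfig:
    def __init__(self, config):
        self.live = config
        self.snapshot = copy.deepcopy(config)     # model of the original state
        self.good_sum = checksum(config)

    def verify_and_repair(self):
        """Return True if corruption was detected and the snapshot restored."""
        if checksum(self.live) == self.good_sum:
            return False
        self.live = copy.deepcopy(self.snapshot)  # restore the known-good model
        return True

cfg = SelfRepairingConfig({"dns": "10.0.0.1", "timeout": 30})
cfg.live["dns"] = "0.0.0.0"          # a bug corrupts the running configuration
repaired = cfg.verify_and_repair()
print(repaired, cfg.live["dns"])     # True 10.0.0.1
```

Note the limitation the article goes on to name: this only heals corruption of state the programmer thought to snapshot in advance, not flaws nobody anticipated.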

How close have we come to writing software that, like the human body, can identify and correct problems we haven’t thought of?

“We haven’t developed anything that is very persuasive yet for healing unanticipated conditions,” says Tom DeMarco, longtime software pundit and principal with the Atlantic Systems Guild, an international software training and consulting group. “You have to remember that software doesn’t break. It is flawed to begin with. So for software to self-heal, you have to find a way to have the program create things that were not there when the program was written.

“We’ll get there someday.”


Tagged: Computing, Biomedicine
