Deep Space One’s grace under pressure marked an early triumph for immobotkind. But in truth, the stuck power switch wasn’t an authentic crisis: ground controllers deliberately misled the craft’s control software so they could see how Remote Agent, the system developed by Williams and his NASA colleagues, would respond. It was all part of an extensive field test of a programming philosophy known as “model-based reasoning,” the core principle of immobot design.
To make machines behave autonomously, most practitioners in robotics and control engineering have long used “heuristic” programs that amount to lists of rules for accomplishing a goal and dealing with contingencies. For example, “If A is true, then do B. If C is true, then do D.” The trouble, many artificial-intelligence experts assert, is that traditional, hand-coded software can be either reliable or affordable. Not both.
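In code, that heuristic style amounts to a hand-written rule list, something like the sketch below. The component names and thresholds are invented for illustration; the point is that every contingency must be anticipated by a programmer in advance:

```python
def heuristic_controller(sensors):
    """A hand-coded rule list: 'If A is true, then do B.'
    Any situation the programmer did not anticipate falls through
    unhandled. (All names and thresholds here are illustrative.)"""
    if sensors["battery_low"]:
        return "shed_nonessential_loads"
    if sensors["thruster_temp"] > 400:
        return "close_fuel_valve"
    if sensors["camera_on"] and sensors["power_margin"] < 5:
        return "power_down_camera"
    # ...hundreds more rules in a real controller...
    return "no_action"

action = heuristic_controller(
    {"battery_low": False, "thruster_temp": 450,
     "camera_on": True, "power_margin": 2}
)
print(action)  # the thruster rule fires first: close_fuel_valve
```

Each new failure scenario means another rule, and rules can interact in ways the programmer never foresaw.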
The Polar Lander mishap and several other software-related failures that marred recent NASA missions demonstrate just how complex the situations we entrust to software can be. Human programmers working within tight schedules and limited budgets simply cannot write code that anticipates every contingency. When they try, the software more often than not turns out so convoluted and slapdash that it contains hidden, potentially fatal bugs (see “Why Software Is So Bad,” TR July/August 2002).
A model-based program that reasons like Remote Agent isn’t built that way at all. It looks like a picture of the machine it was designed to control, painted in the logical language of computers. Both mobile and immobile robots can use this picture to model themselves and choose the fastest, safest, or most cost-effective way to implement an operator’s instructions or deal with an emergency. “The idea is very simple,” Williams explains. “Provide the program with a physical plan of the system and let the software deduce what to do.”
The key to Remote Agent’s real-world reliability, says Robert Rasmussen, chief architect of the Mission Data System project at NASA’s Jet Propulsion Laboratory, was its collection of very simple models. Each model defined one of Deep Space One’s mechanical and electrical components in terms of its possible states. A valve, for example, might be represented by one of two operating modes, “open” or “closed,” and one of two failure modes, “stuck open” or “stuck closed.” Mathematical rules outlined the possible transitions between modes and the probabilities associated with each one. It’s very unlikely that a valve would go from stuck open to stuck closed, for example, so the software knew that it should spend less time investigating such possibilities in the case of a failure.
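A component model of the kind Rasmussen describes can be sketched as a small state machine whose transitions carry probabilities. The structure and numbers below are purely illustrative, not Remote Agent's actual values:

```python
# Illustrative model of one valve: two operating modes, two failure
# modes, and a probability for each transition (numbers are made up).
VALVE_MODEL = {
    "modes": ["open", "closed", "stuck_open", "stuck_closed"],
    "transitions": {
        # (from_mode, to_mode): probability per commanded cycle
        ("open", "closed"): 0.989,
        ("open", "stuck_open"): 0.01,
        ("open", "stuck_closed"): 0.001,
        ("closed", "open"): 0.989,
        ("closed", "stuck_closed"): 0.01,
        ("closed", "stuck_open"): 0.001,
        # A jump from one failure mode to the other is vanishingly rare:
        ("stuck_open", "stuck_closed"): 1e-6,
        ("stuck_closed", "stuck_open"): 1e-6,
    },
}

def rank_failure_hypotheses(model, current_mode):
    """After a failed command, examine likelier failure modes first."""
    candidates = [
        (prob, to_mode)
        for (frm, to_mode), prob in model["transitions"].items()
        if frm == current_mode and to_mode.startswith("stuck")
    ]
    return [mode for prob, mode in sorted(candidates, reverse=True)]

# From "open", the valve is far likelier stuck open than stuck closed,
# so diagnosis checks that hypothesis first:
print(rank_failure_hypotheses(VALVE_MODEL, "open"))
```

The probabilities are what let the software spend its diagnostic effort on the likely failures and skip the implausible ones.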
At the software’s higher levels, hundreds of component models were strung together according to the spacecraft’s blueprints. In actual operation, the software would begin with no more than a general goal provided by operators, as well as a picture of the spacecraft’s current state as indicated by sensors that monitor each valve, relay, gyroscope, fuel tank, and camera. Building from its knowledge of the craft’s innards, the software would create a step-by-step plan for reaching the goal or working around problems.
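The planning step can be caricatured as a search over the composed model's state graph: start from the state the sensors report, and find a command sequence that reaches the operators' goal. This toy sketch uses just two invented valves where a real system would compose hundreds of component models:

```python
from collections import deque

# Toy composed model: each component's allowed mode transitions.
# (Two invented valves stand in for a full spacecraft blueprint.)
TRANSITIONS = {
    "main_valve": {"closed": ["open"], "open": ["closed"]},
    "backup_valve": {"closed": ["open"], "open": ["closed"]},
}

def plan(current, goal):
    """Breadth-first search from the sensed state to the goal state,
    returning a step-by-step command sequence (or None if unreachable)."""
    start = tuple(sorted(current.items()))
    target = tuple(sorted(goal.items()))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == target:
            return steps
        for name, mode in state:
            for nxt in TRANSITIONS[name].get(mode, []):
                merged = dict(state)
                merged[name] = nxt
                new = tuple(sorted(merged.items()))
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + [f"set {name} to {nxt}"]))
    return None  # no command sequence reaches the goal

steps = plan(
    current={"main_valve": "closed", "backup_valve": "closed"},
    goal={"main_valve": "open", "backup_valve": "closed"},
)
print(steps)  # ['set main_valve to open']
```

Because the plan is derived from the model rather than from hand-written rules, the same search can route around a failed component: mark the failure mode in the sensed state, and the planner finds whatever path remains.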
The advantage of this kind of programming is that software developers don’t have to lay out every detail of an operation, or imagine and prepare for all possible mishaps. “Engineers thinking about every bad thing they can think of and making sure the spacecraft can respond to those situations is a very time-consuming chess game,” says Rasmussen. Model-based programming, by contrast, “provides us with a way to make systems behave the way we want, without spending years and millions of dollars.”