Is Software a Special Case?
The potential risks of bad software were grimly illustrated between 1985 and 1987, when the Therac-25, a computer-controlled radiation therapy machine manufactured by the government-backed Atomic Energy of Canada Limited, massively overdosed patients in the United States and Canada, killing at least three. In an exhaustive examination, Nancy Leveson, now an MIT computer scientist, assigned much of the blame to the manufacturer’s inadequate software-engineering practices. Because the program used to set radiation intensity was not designed or tested carefully, simple typing errors triggered lethal blasts.
Despite this tragic experience, similar machines running software made by Multidata Systems International, of St. Louis, massively overdosed patients in Panama in 2000 and 2001, leading to eight more deaths. A team from the International Atomic Energy Agency attributed the deaths to “the entering of data” in a way programmers had not anticipated. As Leveson notes, simple data-entry errors should not have lethal consequences. So this failure, too, may be due to inadequate software.
Programming experts tend to agree that such disasters are distressingly common. Consider the Mars Climate Orbiter and the Mars Polar Lander, both destroyed in 1999 by familiar, readily prevented coding errors. But some argue that software simply cannot be judged, measured, and improved in the same way as other engineering products. “It’s just a fact that there are things that other engineers can do that we can’t do,” says Shari Lawrence Pfleeger, a senior researcher at the RAND think tank in Washington, DC, and author of the 2001 volume Software Engineering: Theory and Practice. If a bridge survives a 500-kilogram weight and a 50,000-kilogram weight, Pfleeger notes, engineers can assume that it will bear all the values between. With software, she says, “I can’t make that assumption; I can’t interpolate.”
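Pfleeger's point can be made concrete with a toy sketch (the function and its defect are invented for illustration, not drawn from any real system): a program can pass its tests at two widely spaced inputs while still failing somewhere in between, because software behavior is not continuous the way a physical structure's is.

```python
def load_test(weight_kg: int) -> bool:
    """Hypothetical routine standing in for any piece of software.

    A latent branch bug makes it fail only inside a narrow input band,
    so passing at 500 and 50,000 proves nothing about the values between.
    """
    if 1000 <= weight_kg < 1010:  # the hidden defect
        return False
    return True

print(load_test(500))     # True  -> passes the light-load test
print(load_test(50_000))  # True  -> passes the heavy-load test
print(load_test(1005))    # False -> fails in between: no interpolation
```

A bridge that bears both 500 and 50,000 kilograms cannot secretly collapse at 1,005; this hypothetical function can, which is why exhaustive reasoning or careful design, rather than spot checks at the extremes, is needed to trust software.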
Moreover, software makers labor under extraordinary demands. Ford and General Motors have been manufacturing the same product, a four-wheeled box with an internal-combustion engine, for decades. In consequence, says Charles H. Connell, former principal engineer of Lotus Development (now part of IBM), they have been able to improve their products incrementally. But software companies are constantly asked to create products unlike anything seen before: Web browsers in the early 1990s, new cell phone interfaces today. “It’s like a car manufacturer saying, ‘This year we’re going to make a rocket ship instead of a car,’” Connell says. “Of course they’ll have problems.”
“The classic dilemma in software is that people continually want more and more and more stuff,” says Nathan Myhrvold, former chief technology officer of Microsoft. Unfortunately, he notes, the constant demand for novelty means that software is always “in the bleeding-edge phase,” when products are inherently less reliable. In 1983, he says, Microsoft Word had only 27,000 lines of code. “Trouble is, it didn’t do very much,” something customers today wouldn’t accept. If Microsoft had not kept pumping up Word with new features, the product would no longer exist.
“Users are tremendously non-self-aware,” Myhrvold adds. At Microsoft, he says, corporate customers often demanded that the company simultaneously add new features and stop adding new features. “Literally, I’ve heard it in a single breath, a single sentence. ‘We’re not sure why we should upgrade to this new release (it has all this stuff we don’t want), and when are you going to put in these three things?’ And you say, ‘Whaaat?’” Myhrvold’s sardonic summary: “Software sucks because users demand it to.”