Constant Learning: The Blessings of Ambiguity

Members of the Berkeley project have studied not just aircraft carriers and nuclear power plants but also air traffic control systems and the operation of large electric power grids, and they detect a pattern.

A layered organizational structure, for instance, seems basic to the effectiveness of these institutions. Depending on the demands of the situation, people will organize themselves into different patterns. This is quite surprising to organizational theorists, who have generally assumed that an organization takes on a single structure: some are bureaucratic and hierarchical, others professional and collegial, still others geared to emergency response. Management theory has no place for an organization that switches among these modes according to the situation.

The realization that such organizations exist opens a whole new set of questions: How are such multi-layered organizations set up in the first place? And how do their members know when it's time to switch from one mode of behavior to another? The discovery may also have practical implications. Although La Porte cautions that his group's work is "descriptive, not prescriptive," the research may still offer insights into avoiding accidents with other complex and hazardous technologies.

In particular, high-reliability organizations seem to provide a counterexample to Yale sociologist Charles Perrow's argument that some technologies, by their very nature, pose inherent contradictions for the organizations running them. Concerning technologies such as nuclear power and chemical plants, Perrow writes: "Because of the complexity, they are best decentralized; because of the tight coupling, they are best centralized. While some mix might be possible, and is sometimes tried (handle small duties on your own, but execute orders from on high for serious matters), this appears to be difficult for systems that are reasonably complex and tightly coupled, and perhaps impossible for those that are highly complex and tightly coupled." But if Diablo Canyon and the aircraft carriers are to be believed, such a feat is not impossible at all. Those organizations show that operations can be both centralized and decentralized, hierarchical and collegial, rule-bound and learning-centered.

Besides the layered structure, high-reliability organizations emphasize constant communication, far in excess of what would be thought useful in normal organizations. The purpose is simple: to avoid mistakes. On a flight deck, everyone announces what is going on as it happens, increasing the likelihood that someone will notice, and react, if things start to go wrong. In an air traffic control center, although one operator is responsible for controlling and communicating with certain aircraft, he or she receives help from an assistant and, in times of peak load, one or two other controllers. The controllers constantly watch out for one another, looking for signs of trouble, trading advice, and offering suggestions for the best way to route traffic.

Poor communication and misunderstanding, often in the context of a strict chain of command, have played a prominent role in many technological disasters. The Challenger accident was one: the levels of the space shuttle organization communicated mostly through formal channels, so the concerns of engineers never reached top management. The 1982 crash of a Boeing 737 during takeoff from Washington National Airport, which killed 78 people, was another. The copilot had warned the captain of possible trouble several times (icy conditions were causing false readings on an engine-thrust gauge), but he had not spoken forcefully enough, and the pilot ignored him. The plane crashed into a bridge on the Potomac River.

When a 747 flown by the Dutch airline KLM collided with a Pan Am 747 on a runway at Tenerife airport in the Canary Islands in 1977, killing 583 people, a post-crash investigation found that the young copilot believed the senior pilot had misunderstood the plane's position but assumed the pilot knew what he was doing, and so clammed up. And the Bhopal accident, in which thousands of people died when an explosion at an insecticide plant released a cloud of deadly methyl isocyanate gas, would never have happened had the plant operators, who began flushing out pipes with water, communicated with the maintenance staff, which had not inserted a metal disk into the valve to keep water from coming into contact with the methyl isocyanate in another part of the plant.

Besides communication, high-reliability organizations also emphasize active learning: employees not only know why the procedures are written as they are but can challenge them and look for ways to make them better. The purpose behind this learning is not so much to improve safety (although this often happens) as to keep the organization from regressing. Once people begin doing everything by the book, operations quickly go downhill. Workers lose interest and become bored; they forget, or never learn, why the organization does things certain ways; and they begin to feel more like cogs in a machine than integral parts of a vibrant institution. Effective organizations need to find ways to keep their members fresh and focused on the job at hand.

Any organization that emphasizes constant learning will have to tolerate a certain amount of ambiguity, Schulman notes. There will always be times when people are unsure of the best approach or disagree even on what the important questions are. This may be healthy, Schulman says, but it can also be unsettling to managers and employees who think a well-functioning organization should always know what to do. He tells of a meeting with Diablo Canyon managers at which he described some of his findings. “What’s wrong with us that we have so much ambiguity?” one manager asked. The manager had completely missed the point of Schulman’s research. A little ambiguity was nothing to worry about. Instead, the plant’s managers should be concerned if they ever thought they had all the answers.

Schulman offers one more observation about high-reliability organizations: they do not punish employees for making mistakes when trying to do the right thing. Punishment may work, or at least not be too damaging, in a bureaucratic organization where everyone goes by the book, but it discourages workers from learning any more than they absolutely have to, and it kills communication.

If an organization succeeds in managing a technology so that there are no accidents or threats to public safety, it may face an insidious threat: call it the price of success. The natural response from the outside (whether upper management, regulators, or the public) is to begin to take that performance for granted. And as the possibility of an accident seems less and less real, the cost of eternal vigilance seems harder and harder to justify.

But organizational reliability, though expensive, is just as crucial to the safety of a technology as is the reliability of the equipment. If we are to keep our technological progress from backfiring, we must be as clever with our organizations as we are with our machines.
