
On Risk

How should technologists think about precautions?
June 22, 2010

I have been thinking about risk.

As I write this column in early June, BP is still struggling to contain its leaking well in the Gulf of Mexico. After the company’s drilling rig, the Deepwater Horizon, exploded on April 20, as much as 19,000 barrels of oil (roughly 800,000 gallons) spewed into the Gulf every day. A cap is now capturing a little more than 400,000 gallons a day. It is the worst environmental disaster in the history of the United States, and there may be no solution until BP completes two “relief wells” in August.

In this issue of Technology Review, David Talbot writes about the increasing incidence of cyber crime and espionage, and the real (if still speculative) risk of outright cyber warfare. In “Moore’s Outlaws,” he quotes Stewart Baker, the former general counsel of the National Security Agency and a former policy chief at the U.S. Department of Homeland Security: “What we’ve been seeing, over the last decade or so, is that Moore’s Law is working more for the bad guys than the good guys. It’s really ‘Moore’s outlaws’ who are winning this fight. Code is more complex, and that means more opportunity to exploit the code. There is more money to be made in exploiting the code, and that means there are more and more sophisticated people looking to exploit vulnerabilities. If you look at things like malware found, or attacks, or the size of the haul people are pulling in, there is an exponential increase.”

Talbot describes experts’ concerns that computer viruses have turned millions of machines into “enslaved armies,” or botnets, that await instructions from malefactors. In the days leading up to April 1, 2009, a worm called Conficker was expected to receive an update from its unknown creator, but no one knew what it would do: “A tweak to Conficker’s code might cause the three million or so machines … to start attacking the servers of some company or government network, vomit out billions of pieces of spam, or just improve the worm’s own ability to propagate.” It’s scary stuff.

In the first case, a complex system of technologies (whose purpose is to extract crude oil from miles beneath the ocean’s surface) failed; in the second, a still more complex system (a global computer network whose purposes are incomprehensibly various, but upon which our technological civilization depends) is failing. These failures are not so much predictable as unsurprising: we expanded our use of vulnerable technologies because we had become dependent upon them. How should we think about the risks inherent in technologies, particularly new ones?

One possible intellectual tool, popular with environmentalists and policy makers, is the Precautionary Principle, which holds that when something is suspected of being harmful, the burden of proving it safe rests with its proponents. The principle has a stronger and a weaker formulation. The stronger calls for regulating any potentially harmful activity, or refraining from it altogether, until there is consensus that it is safe. The weaker demands neither regulations nor bans; it weighs costs against likelihoods. The former says, “Better safe than sorry”; the latter, “Be prepared.”

Although a handful of international agreements endorse the strong formulation, and although it enjoys quasi-legal status in European Union law, it is in fact seldom applied. (A notable exception is the management of global fisheries.) Certainly, governments, corporations, and individuals routinely ignore it when thinking about new technologies. That’s because the idea is “paralyzing,” according to Cass Sunstein, the administrator of the White House Office of Information and Regulatory Affairs (and for many years a professor of law at the University of Chicago, where he wrote about behavioral economics and risk). No one knows how new technologies will be used in the future. There is never any consensus about risks. Crises accompany the development of any new, complex system, but their exact form tends to take us by surprise.

But if the strong formulation of the Precautionary Principle is paralyzing, the weak formulation is almost no help at all: it provides little guidance for thinking about unlikely but potentially catastrophic risks. We need an entirely new principle, one that guides our investment in precautionary technology. When a technology fails or proves unsustainable, we should be able to feel rationally confident that a fix or an alternative exists or soon will, because ingenious humans have already devised other technologies that can mitigate the crisis or stand in the outmoded technology’s place. Government has a justified role in requiring such precautionary investment in new technologies.

In the absence of a new principle, we have mere optimism.

David Talbot’s feature, which accepts that we’re not likely to build an entirely new, more secure Internet, describes research into new technologies that may make our networks safer. For now, perhaps that’s the best risk management we can expect. Read about them, and write and tell me what you think at jason.pontin@technologyreview.com.
