
The Problem with Waiting for Catastrophes

Human systems are not infinitely adaptable.
June 21, 2011

A few months ago, in a conversation about cybersecurity I was moderating at MIT’s CIO Symposium, Erik Brynjolfsson, the director of MIT’s Center for Digital Business, listed the complicated and controversial things that he felt would be necessary to make the Internet secure. He concluded by saying we might need to build a second, secure Internet, because the existing public network was not designed to be our global network for communications and commerce.

This sort of talk is not unusual. Beginning on page 46 of this issue, David Talbot describes a “perfect scam” in which Web users are shown bogus warnings that announce “Your Computer Is Infected!” and are then urged to buy fake antivirus software that provides no protection and sometimes conscripts the user’s computer into a botnet controlled by malefactors. It’s a billion-dollar criminal industry, and it’s aided by the insecurity of basic parts of the overall networking infrastructure. Talbot has argued for years that we might need a new architecture that apportions security roles to various elements in the system (see “The Internet Is Broken,” December 2005/January 2006).

Alas, discussions of new architectures inevitably run up against their prohibitive costs. What could inspire civilization to pay the price? At the CIO Symposium, Brynjolfsson had an idea: “Unfortunately, none of this is going to happen until there is a major disaster.”

I was taken aback. I asked, “What? Are you saying that the public network will be grossly insecure until a digital catastrophe forces companies and governments to invest in new technologies and to agree to new mandates?” Brynjolfsson, still cheerful, answered, “Well, that would be consistent with human nature.”

I should not have been shocked. Many economists consider it respectable to wait until a catastrophe strikes. Until something goes wrong, you don’t know the scale of a problem: any preëmptive action will tend to allocate resources inefficiently. In addition, a precautionary response necessarily involves application of some variety of the precautionary principle—about which I wrote at this time last year, on the occasion of the explosion of the Deepwater Horizon drilling rig in the Gulf of Mexico (see “On Risk,” July/August 2010). The precautionary principle states that when something new is suspected of being harmful, the burden of proof that it is not harmful rests with its proponents, and economists have always been suspicious of it. In its weak formulation (which calls for users of new things to be prepared), it is unhelpful. In its stronger form (which calls for regulation or abstention), it has been a recipe for inaction. As I wrote, “No one knows how new technologies will be used in the future. There is never any consensus about risks. Crises accompany the development of any new, complex system, but their exact form tends to take us by surprise.”

The blithe acceptance of catastrophes as spurs to action makes a sort of economic sense, although it can seem cold and unfeeling when we read, for instance, of Victorian administrators responding only after the fact to Bengali famines. But the real problem with what can be called “ameliorative catastrophism” is that it assumes that human beings are infinitely adaptable.

In fact, the archeological record is replete with stories of societies that have not adapted to crises. In Collapse: How Societies Choose to Fail or Succeed (2005), the UCLA geographer Jared Diamond describes how a series of societies, from the Viking settlements in Greenland to the population of Easter Island, collapsed because their environmental strategies, appropriate at one time and place, were maladapted when circumstances changed. (I wrote about Collapse in April 2005 in “Let’s Go Dutch.”)

In this issue, David Rotman, Technology Review’s editor, interviews Nicholas Stern, a former World Bank chief economist and the author of the 2006 Stern Review. No ameliorative catastrophist, Stern argued in his 700-page analysis that the costs of climate change, if not addressed, will be the equivalent of losing 5 to 20 percent of the global gross domestic product “each year, now and forever.” Climate change, Stern wrote, “is the greatest market failure the world has ever seen” and could threaten hundreds of millions of people with hunger, water shortages, and poverty. Preventing such disasters, according to the report, would require investments equivalent to 1 percent of global GDP over each of the next 10 to 20 years.

In his interview, Rotman asks Stern whether the risks of climate change are so great that we must respond now even if we don’t know how best to allocate resources. Stern’s response? “You can’t afford not to make those investments: the risks are too great, and the rewards are high if you do.”

But write and tell me what you think at jason.pontin@technologyreview.com.
