In 1992, Hurricane Andrew devastated Florida’s southern coast, killing dozens of people and causing more than $25 billion in damage. The storm also exposed critical weaknesses in the way property insurers quantified the potential cost of such a natural catastrophe. Many insurance companies took heavy losses in the months that followed the storm, and several failed.
Today, insurers are struggling to understand the economic scope of a new sort of potential catastrophe, this one man-made: a devastating cyberattack. Some of the lessons of 1992 apply, but in other ways, this is a very different kind of problem to solve.
Big insurers including AIG and Chubb have offered cyber policies since the late 1990s, and today approximately 80 companies sell them, most focused on data breaches. The market for cyber insurance has recently begun to grow quickly as a series of high-profile attacks have convinced top executives that hackers pose a serious concern. PricewaterhouseCoopers estimates companies will be paying $7.5 billion for cyber insurance in 2020, up from an estimated $2.75 billion in 2015.
Yet insurers are still struggling to grasp the nature of cyber risk, and to understand how to structure their policies in ways that won’t leave them vulnerable to catastrophic losses.
People are starting to view cybersecurity as a business risk instead of an IT problem, says Arvind Parthasarathi, CEO of Cyence, a three-year-old firm that helps insurers model cyber risks. That means recognizing this is not a problem with a clear solution, but a risk that can be managed, though not eliminated. Now, says Parthasarathi, executives are asking, “How much risk am I comfortable keeping?”
Insurers are asking the same question as they try to determine how to price new cybersecurity policies. The modern cyber threat is complex and rapidly evolving. The most pressing challenge is quantifying the risk of a cyber catastrophe hitting many policyholders at once, estimating the maximum loss in the worst-case scenario. That’s what insurers failed to do before Hurricane Andrew.
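To make the worst-case question concrete, here is a toy Monte Carlo sketch of a "probable maximum loss" calculation, the kind of estimate insurers failed to make before Andrew. Every figure is hypothetical and invented for illustration, not drawn from any insurer's actual model: 1,000 policies with a $1 million limit each, a 2 percent annual chance of a catastrophe that triggers 40 percent of the policies at once, plus ordinary independent breach claims at 0.5 percent per policy per year.

```python
import random

def simulate_annual_loss(n_policies=1000, limit=1.0, p_cat=0.02,
                         hit_fraction=0.4, p_indep=0.005, rng=None):
    """One simulated year of aggregate claims, in $M (all parameters hypothetical)."""
    rng = rng or random.Random()
    loss = 0.0
    if rng.random() < p_cat:                    # correlated catastrophe year:
        loss += n_policies * hit_fraction * limit  # many policies claim at once
    for _ in range(n_policies):                 # independent "attritional" claims
        if rng.random() < p_indep:
            loss += limit
    return loss

def probable_maximum_loss(trials=10_000, percentile=0.99, seed=42):
    """Estimate the aggregate annual loss exceeded in only 1% of simulated years."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(rng=rng) for _ in range(trials))
    return losses[int(percentile * trials) - 1]
```

Because the catastrophe strikes in about 2 percent of simulated years, the 99th-percentile year is a catastrophe year (around $400 million here), while the median year sees only a handful of independent claims. That gap between the typical year and the tail is precisely what a worst-case estimate has to capture.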
A cyber disaster comparable in scale with Hurricane Andrew is hard to model in part because one hasn’t happened yet. Last October, we got a glimpse of one way such a calamity might unfold when hackers used a network of commandeered webcams, DVRs, and other Internet of Things devices to launch a massive denial-of-service attack on Dyn, a provider of Domain Name System services that direct traffic to many major websites. The attack left prominent sites including Amazon, Netflix, and Spotify unavailable to millions of users in the United States for hours (see “10 Breakthrough Technologies 2017: Botnets of Things”).
The cost of the Dyn attack is not yet clear, but a recent four-hour outage of Amazon’s S3 cloud storage system (which was not the result of a cyberattack) cost S&P 500 companies at least $150 million, according to an estimate from Cyence. It is not hard to imagine a large-scale attack on a cloud service causing billions in losses.
A cyberattack on traditional physical infrastructure, like the one that took out a substantial portion of the grid in Kiev, Ukraine, in December, is also a concern. Some have attributed the attack to Russian state-sponsored hackers. The insurance market Lloyd’s of London recently analyzed a hypothetical scenario in which a blackout in the northeastern U.S. leaves 93 million people without power. It concluded that an event like that could cost insurers anywhere between $21 billion and $71 billion, illustrating how challenging it is to pinpoint the cost of such risks.
The challenge of trying to quantify the cyber risk is similar in some ways to what insurers faced in the 1990s, in that they have very little experience with this type of risk. It took 15 years to build the data sets that underlie the complex and detailed natural catastrophe models insurers rely on today, says Tom Harvey, a product manager at Risk Management Solutions, which develops catastrophic risk models for insurers. While things are moving “a lot quicker” for cyber, he says, the data that companies collect is still quite inconsistent. That makes it difficult to aggregate information and study industry trends.
There are important differences between modeling natural catastrophes and cyber catastrophes, of course, starting with the fact that skilled humans drive cyber events, not physical laws. Hackers’ motivations, tactics, techniques, and targets change quickly to overcome new defenses. The challenge is to understand an “active adversary,” says Cyence’s Parthasarathi, whose company draws on game theory and behavioral economics to model the behavior of attackers.
Understanding the geography of the Internet is also crucial to evaluating the risk of a big cyberattack. Insurers need a “map” of the locations where valuable data are stored, including information about how well the owners of those assets protect them, says Stephen Boyer, CTO and cofounder of BitSight. Boyer’s company does this kind of mapping of assets stored on the Internet and measures the security performance of the organizations that own those assets.
Insurers must avoid the cyber equivalent of covering everybody on the coast of Florida before Hurricane Andrew, says Boyer: selling too many policies to companies that all depend on the same technology or service provider, such as Amazon Web Services. “When an outage happens there, everybody has a claim,” he says.
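Boyer's point about accumulation risk can be illustrated with a toy comparison of two hypothetical portfolios that have identical expected losses. In one, all 100 policyholders depend on a single cloud provider; in the other, they are spread evenly across 10 providers. An outage at any provider (a 3 percent chance per year in this invented example) triggers a $1 million claim from every policyholder that depends on it.

```python
import random

def portfolio_year(n_policies=100, n_providers=1, p_outage=0.03,
                   claim=1.0, rng=None):
    """Aggregate claims ($M) for one year: a provider outage triggers a
    claim from every policyholder on that provider. All figures hypothetical."""
    rng = rng or random.Random()
    per_provider = n_policies // n_providers
    return sum(per_provider * claim
               for _ in range(n_providers) if rng.random() < p_outage)

def percentile(losses, q=0.99):
    """The loss exceeded in only (1 - q) of simulated years."""
    s = sorted(losses)
    return s[int(q * len(s)) - 1]

rng = random.Random(7)
concentrated = [portfolio_year(n_providers=1, rng=rng) for _ in range(20_000)]
diversified = [portfolio_year(n_providers=10, rng=rng) for _ in range(20_000)]
```

The two books pay out the same amount on average, but the concentrated book's bad year is far worse: in its 99th-percentile year every single policy claims at once, while the diversified book's tail stays a fraction of that size. Average loss alone would never reveal the difference.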