The Perils of Highly Interconnected Systems

The key to thriving in an increasingly complex world is to develop a nuanced, stable theory of interoperability.

We live in an age of unprecedented interconnectivity. Our lives are mediated by technologies that connect us to one another, to ideas, and to institutions in ways we have never previously experienced. Social network sites, Twitter, blogs, and e-mail are among the most popular applications creating networks of connectivity among us that are unprecedented in scope and configuration.

The Internet solves problems for individuals as well as for society at large. As we seek to address society’s most pressing issues—climate change, for instance—we are building increasingly complex infrastructures, including next-generation transportation systems and the smart grid. These complex systems rely heavily on digital technologies that connect systems and organize the flow of data between and among them.

Most of the time, this high level of interconnection is purposeful, and in fact helpful. Sometimes, however, we can take this interconnection too far, without thinking through its consequences. Security and privacy risks are the most common problems that flow from unchecked levels of interoperability. Worse still, the most highly interconnected systems, such as the international financial system, can give rise to catastrophic domino effects. Whether the instrument is complex derivatives gone bad or computer malware, harm can flow across highly interconnected systems and cause knock-on effects far from where the initial harm occurred.   

When we consider the costs and benefits of high degrees of interconnection in this way, what we are talking about is interoperability—the theory of highly interconnected systems—or interop, for short.

In our new book of this name, we argue that we need to get interop right if we are to address some of the biggest challenges of our era. As societies, we are today confronted with a series of unprecedented challenges of global scale that involve vastly complex interconnected systems. The financial crisis, the need for sustainable forms of energy production, the reform of health-care systems, and global disaster response systems are all problems that touch, one way or another, on interop. In order to manage challenges of this level of complexity, the smartest people, the most advanced technology, and the best institutions need to work collaboratively and ensure the flow of information across systems. The management of this vast degree of interconnection brings its own challenges. We cannot take this flow of information for granted; it has to be planned and managed.

Take the health-care problem in nearly any country, and certainly the United States, as a major example. Costs are much higher than they should be and care is less effective than it should be. On many metrics, the situation looks dire: infant mortality rates, obesity in people of all ages, and hospital readmission rates are all higher than they should be. One solution that most people agree on is to improve the nature and quality of our electronic health records as a means to strip cost out of health care and to improve the quality of care. We have not lacked for leadership on electronic health records: both President Obama and President Bush before him pledged that a system of electronic health records would be in place in America by 2014. Yet progress has been elusive.

The problem with electronic health records is interop. While the health and economic benefits of a system of electronic health records are obvious, we don’t have electronic health records that can work across systems, for a wide range of complicated reasons. One is legacy systems: hospitals and insurers have invested over the years in a hodgepodge of different systems that do not talk to one another. It’s expensive to make them work together and it’s expensive to move to new systems. Some health-care providers and insurers don’t actually want a higher level of interconnection across systems, because it might lead to new forms of transparency, as well as potential liability. It takes time to load in the data and time to analyze it. And then there’s the pesky issue of privacy: if we make these systems highly interoperable, we need to ensure that privacy and security safeguards are more robust than they are today.

It’s our view that a concerted effort by the government, in partnership with industry, can lead to high levels of interop in electronic health records. The federal government will need to keep pushing and creating incentives for compliance. The government, after all, is a major consumer of health-care information systems. Through its purchasing power alone, the government has a great deal of leverage in this area.

The government should require a particular type of interoperability, but not mandate the specific way to get there. The system of electronic health records should work like the ATM system does for banks and money. It should be just as simple, secure, and private. The technology behind the ATM system can remain the purview of the private sector, but compliance should be mandatory. Kenneth Mandl and Zak Kohane have described how this could work from a technical perspective. And incentives can help ensure that health-care providers consider putting data into the systems to be worth their while.
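
To make the ATM analogy concrete, here is a minimal sketch in Python of what it means to mandate an exchange interface without mandating an implementation. This is a hypothetical illustration, not Mandl and Kohane’s proposal; the class names and record fields are assumptions chosen only to show the pattern of a standardized boundary with proprietary systems behind it.

```python
# Hypothetical sketch: a regulator standardizes the record format and the
# exchange interface; each vendor keeps its own internal systems, much as
# ATM networks standardize the transaction protocol but not the banks' books.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class HealthRecord:
    patient_id: str            # assumed stable, privacy-protected identifier
    medications: list[str]
    allergies: list[str]


class RecordExchange(ABC):
    """The mandated interface: every certified system must implement it."""

    @abstractmethod
    def export_record(self, patient_id: str) -> HealthRecord: ...

    @abstractmethod
    def import_record(self, record: HealthRecord) -> None: ...


class VendorSystem(RecordExchange):
    """One vendor's adapter: internal storage stays proprietary;
    only the exchange boundary is standardized."""

    def __init__(self) -> None:
        self._db: dict[str, tuple[list[str], list[str]]] = {}  # stand-in store

    def export_record(self, patient_id: str) -> HealthRecord:
        meds, allergies = self._db.get(patient_id, ([], []))
        return HealthRecord(patient_id, meds, allergies)

    def import_record(self, record: HealthRecord) -> None:
        self._db[record.patient_id] = (record.medications, record.allergies)


# A record exported by one certified system can be imported by another,
# regardless of how either stores data internally.
vendor_a, vendor_b = VendorSystem(), VendorSystem()
vendor_a.import_record(HealthRecord("p-001", ["lisinopril"], ["penicillin"]))
vendor_b.import_record(vendor_a.export_record("p-001"))
```

The point of the sketch is the division of labor: the shared interface is the part that compliance would make mandatory, while everything behind it remains the purview of the private sector.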

The key to thriving in an increasingly complex world is to develop a nuanced, stable theory of interoperability. The goal is to ensure that actors in complex systems can work together, on day one and over time, in a way that maximizes the benefits of interconnection to society and minimizes the costs.

John Palfrey and Urs Gasser are the authors of a new book, Interop: The Promise and Perils of Interconnected Systems (Basic Books, 2012).
