When algorithms mess up, the nearest human gets the blame

A look at historical case studies shows us how we handle the liability of automated systems.
May 28, 2019
The aftermath of a self-driving car accident, with an Uber vehicle on its side. Tempe Police Department

Earlier this month, Bloomberg published an article about an unfolding lawsuit over investments lost by an algorithm. A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform. Without a legal framework to sue the technology, he placed the blame on the nearest human: the man who sold it to him.

It’s the first known case over automated investment losses, but not the first involving the liability of algorithms. In March of 2018, a self-driving Uber struck and killed a pedestrian in Tempe, Arizona, sending another case to court. A year later, Uber was exonerated of all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.

Both cases tackle one of the central questions we face as automated systems trickle into every aspect of society: Who or what deserves the blame when an algorithm causes harm? Who or what actually gets the blame is a different yet equally important question.

Madeleine Clare Elish, a researcher at Data & Society and a cultural anthropologist by training, has spent the last few years studying the latter question to see how it can help answer the former. To do so, she has looked back at historical case studies. While modern AI systems haven’t been around for long, the questions surrounding their liability are not new.

The self-driving Uber crash parallels the 2009 crash of Air France flight 447, for example, and a look at how we treated liability then offers clues for what we might do now. In that tragic accident, the plane crashed into the Atlantic Ocean en route from Brazil to France, killing all 228 people on board. The plane’s automated system was designed to be virtually “foolproof,” capable of handling nearly all scenarios except for the rare edge cases when it needed a human pilot to take over. In that sense, the pilots were much like today’s safety drivers for self-driving cars—meant to passively monitor the flight the vast majority of the time but leap into action during extreme scenarios.

What happened the night of the crash is, at this point, a well-known story. About an hour and a half into the flight, the plane’s airspeed sensors stopped working because of ice formation. After the autopilot system transferred control back to the pilots, confusion and miscommunication led the plane to stall. While one of the pilots attempted to reverse the stall by pointing the plane’s nose down, the other, likely in a panic, raised the nose to continue climbing. The system was designed for one pilot to be in control at all times, however, and didn’t provide any signals or haptic feedback to indicate which one was actually in control or what the other was doing. Ultimately, the plane climbed to an angle so steep that the system deemed it invalid and stopped providing feedback entirely. The pilots, flying completely blind, continued to fumble until the plane plunged into the sea.

In a recent paper, Elish examined the aftermath of the tragedy and identified an important pattern in the way the public came to understand what happened. While a federal investigation concluded that a mix of poor systems design and insufficient pilot training had caused the catastrophic failure, the public quickly latched onto a narrative that placed the sole blame on the pilots. Media portrayals, in particular, perpetuated the belief that the sophisticated autopilot system bore no fault in the matter, despite significant human-factors research demonstrating that humans have always been rather inept at leaping into emergency situations at the last minute with a level head and clear mind.

Humans act like a "liability sponge."

In other case studies, Elish found the same pattern: even in a highly automated system where humans have limited control over its behavior, they still bear most of the blame for its failures. Elish calls this phenomenon a “moral crumple zone.” “While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how small or unintentional their involvement.

This pattern offers important insight into the troubling way we speak about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at the company, but within days the narrative shifted to focus on the distraction of the safety driver.

“We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with. Yet in the current regulatory vacuum, they will continue to pay the steepest cost.

Regulators should also have more nuanced conversations about what kind of framework would help distribute liability fairly. “They need to think carefully about regulating sociotechnical systems and not just algorithmic black boxes,” Elish says. In other words, they should consider whether the system’s design works within the context it’s operating in and whether it sets up human operators along the way for failure or success. Self-driving cars, for example, should be regulated in a way that factors in whether the role safety drivers are being asked to play is reasonable.

“At stake in the concept of the moral crumple zone is not only how accountability may be distributed in any robotic or autonomous system,” she writes, “but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.”

This story originally appeared in our Webby-nominated AI newsletter, The Algorithm.

