
What Uber’s fatal accident could mean for the autonomous-car industry

The first pedestrian death leads some to ask whether the industry is moving too fast to deploy the technology.
March 19, 2018

The autonomous-car industry faces closer scrutiny and criticism after a self-driving Uber killed a pedestrian in Tempe, Arizona, on Sunday evening.

Full details of the accident are unclear, but the local police department issued a statement saying that a woman was fatally struck after walking in front of an Uber car traveling in self-driving mode. Uber says it is cooperating with a police investigation and has suspended testing of its self-driving vehicles in Phoenix, Pittsburgh, San Francisco, and Toronto.

It is the first time a self-driving vehicle has killed a pedestrian, and the event is already causing some to question the pace at which the technology is moving. Besides Uber, dozens of companies, including established carmakers and small startups, are rushing to test experimental self-driving vehicles and autonomous systems on public roads. These efforts have received the blessing of local governments because the technology seems so promising and because a driver is usually behind the wheel as a backup. A safety driver was in the front seat when the accident in Tempe occurred.

Though automated driving could ultimately save countless lives on roads, some say the technology is being deployed too quickly.

At a time when many have lauded the technology as ready for large-scale deployment, “this is clear proof that is not yet the case,” says Bryan Reimer, a research scientist at MIT who studies automated driving. “Until we understand the testing and deployment of these systems further, we need to take our time and work through the evolution of the technology,” he says.

The accident is unlikely to set a legal precedent, says Ryan Calo, who is researching the legal implications of vehicle autonomy at the University of Washington. Even if the victim is found to have been partly responsible, the company may also be liable, and it will be keen to settle in order to avoid a test case, he says.

Calo calls on those developing AI-based vehicles to think very carefully about the potential impact of their systems on human lives, and to consider the legal and ethical implications.

The ethical questions surrounding self-driving cars—and especially a conundrum known as the “trolley problem,” which requires that a car choose between two potential victims in an accident—have confused the issue, he adds: “I don’t think the trolley-problem conversation has been at all helpful.” Of the accident that killed the woman in Arizona, he says, “The sensors probably didn’t pick her up, or the algorithm didn’t understand what it’s seeing.”

Regulators will no doubt take a closer look at the technology after this latest setback. This morning both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board (NTSB) said they have launched probes.

Subbarao Kambhampati, a professor at Arizona State University who specializes in AI, says the Uber accident raises questions about the ability of safety drivers to monitor systems effectively, especially after long hours of testing. Research conducted by Reimer and others reinforces this point. Other research has shown the challenge of establishing communications between self-driving systems and pedestrians.

The accident comes amid what seemed like rapid progress on self-driving technology and a push to loosen legal restrictions. Waymo, a subsidiary of Alphabet spun out of Google, announced late last year that it was taking the safety driver out of its vehicles and said it would launch a driverless taxi service in Phoenix later this year.

Just days ago, Waymo, Uber, and others had urged Congress to pass legislation that would pave the way for self-driving cars in the US. The accident will most likely slow the passage of that bill.

There have been a handful of accidents involving self-driving vehicles, including a crash in Florida in May 2016 involving a Tesla Model S in Autopilot mode that failed to see a truck across the road ahead. The Tesla’s driver was killed. Federal investigators have found the technology to be at fault in several of these accidents, but they have so far resisted the urge to implement stricter rules or halt testing altogether.

So far, the public has shown little sign of turning against the technology, even after such incidents. “I am not really sure this is going to lead to a public worry or backlash,” says Kambhampati. “Because honestly, I thought there would be more of a backlash after the Tesla accident.”
