Fatal Tesla Autopilot Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up

A death behind the wheel with Tesla’s Autopilot on raises the question of how safe automated cars must be.
June 30, 2016

More than 30,000 people are killed by cars in the U.S. each year, and people working on autonomous-driving technology at companies such as Google and Tesla say any technology that can significantly reduce that figure deserves serious attention.

But even if automated cars can be much safer than conventional ones, they will still be involved in accidents. No software can be perfect. And as self-driving technology matures, regulators and society as a whole will have to decide just how safe these vehicles need to be. Indeed, it has been argued that in some situations autonomous vehicles must be programmed to actively choose which people to harm.

Those thorny issues became more concrete today with news that Tesla is being investigated by the U.S. National Highway Traffic Safety Administration after a fatal crash involving the company’s Autopilot automated driving feature, which on some of the company’s cars can change lanes and adjust speed during highway driving.

Tesla Motors' Model S sedan.

In Florida in May, a Tesla Model S sedan drove into a tractor-trailer crossing the road ahead while Autopilot was in control of the car. Neither Tesla’s Autopilot feature nor the driver applied the car’s brakes. In a blog post Thursday, Tesla said that Autopilot didn’t register the white side of the trailer against the bright sky.

Tesla’s Autopilot can steer the car, detect obstacles and lane markings, and use the brakes, all on its own. But it is far less capable than a human driver and lacks the sophistication and high-detail sensors seen in more mature autonomous-car projects like Google’s.

Tesla has been criticized for promoting the convenience of Autopilot—a name that suggests no human intervention is needed—while also maintaining that drivers must constantly be ready to take over from the software. The leader of Google’s autonomous-car project, Chris Urmson, has said his company’s experiments have proved that humans can’t be relied on to do that, because they quickly come to trust that the car knows what it’s doing. All the same, Tesla CEO Elon Musk has said his company’s data suggests Autopilot is twice as safe as human drivers.

We don’t yet know exactly what happened in May’s fatal accident. Tesla’s statement emphasizes that the driver knew he should always keep an eye on what Autopilot was doing. But if NHTSA finds the design of Autopilot to blame, Tesla could be forced to issue a recall, or might feel it has to dumb down the feature. That could hurt both Tesla and enthusiasm for the technology in general.

Whatever the outcome of NHTSA’s investigation, the incident is an opportunity to consider the standards to which we hold autonomous-driving software and the companies that design it. If it is to be widely used, we will have to accept its being involved in accidents—some fatal, and some caused by its own failings.

Human drivers set a low bar: about 90 percent of crashes are caused by human error, and dumb mistakes like driving while texting or drunk kill far too many people. It’s easy to see how machines could improve on that. But deciding how much better they need to be will be much more difficult.

