
Fatal Tesla Autopilot Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up

A death behind the wheel with Tesla’s Autopilot on raises the question of how safe automated cars must be.
June 30, 2016

More than 30,000 people are killed by cars in the U.S. each year, and people working on autonomous-driving technology at companies such as Google and Tesla say any technology that can significantly reduce that figure deserves serious attention.

But even if automated cars can be much safer than conventional ones, they will still be involved in accidents. No software can be perfect. And as self-driving technology matures, regulators and society as a whole will have to decide just how safe these vehicles need to be. Indeed, it has been argued that in some situations autonomous vehicles must be programmed to actively choose which people to harm.

Those thorny issues became more concrete today with news that Tesla is being investigated by the U.S. National Highway Traffic Safety Administration after a fatal crash involving the company’s Autopilot automated-driving feature, which can change lanes and adjust speed during highway driving in some of the company’s cars.

Tesla Motors' Model S sedan.

In Florida in May, a Tesla Model S sedan drove into a tractor-trailer crossing the road ahead while Autopilot was in control of the car. Neither Tesla’s Autopilot feature nor the driver applied the car’s brakes. In a blog post Thursday, Tesla said that Autopilot didn’t register the white side of the trailer against the bright sky.

Tesla’s Autopilot can steer the car, detect obstacles and lane markings, and apply the brakes, all on its own. But it is far less capable than a human driver and lacks the sophistication and high-detail sensors seen in more mature autonomous-car projects like Google’s.

Tesla has been criticized for promoting the convenience of Autopilot—a name that suggests no human intervention is needed—while also maintaining that drivers must constantly be ready to take over from the software. The leader of Google’s autonomous-car project, Chris Urmson, has said his company’s experiments have proved that humans can’t be relied on to do that, because they quickly come to trust that the car knows what it’s doing. All the same, Tesla CEO Elon Musk has said his company’s data suggests Autopilot is twice as safe as human drivers.

We don’t yet know exactly what happened in May’s fatal accident. Tesla’s statement emphasizes that the driver knew he should always keep an eye on what Autopilot was doing. But if NHTSA finds the design of Autopilot to blame, Tesla could be forced to issue a recall, or might feel it has to dumb down the feature. That could hurt both Tesla and enthusiasm for the technology in general.

Whatever the outcome of NHTSA’s investigation, the incident is an opportunity to consider the standards to which we hold autonomous-driving software and the companies that design it. If the technology is to be widely used, we will have to accept that it will be involved in accidents—some fatal, and some caused by its own failings.

Human drivers set a low bar: about 90 percent of crashes are caused by human error, and dumb mistakes like driving while texting or drunk kill far too many people. It’s easy to see how machines could improve on that. But deciding how much better they need to be will be much more difficult.
