
Fatal Tesla Autopilot Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up

A driver's death while Tesla's Autopilot was engaged raises the question of how safe automated cars must be.
June 30, 2016

More than 30,000 people are killed by cars in the U.S. each year, and people working on autonomous-driving technology at companies such as Google and Tesla say any technology that can significantly reduce that figure deserves serious attention.

But even if automated cars can be much safer than conventional ones, they will still be involved in accidents. No software can be perfect. And as self-driving technology matures, regulators and society as a whole will have to decide just how safe these vehicles need to be. Indeed, it has been argued that in some situations autonomous vehicles must be programmed to actively choose which people to harm.

Those thorny issues became more concrete today with news that Tesla is being investigated by the U.S. National Highway Traffic Safety Administration after a fatal crash involving Autopilot, an automated driving feature available on some of the company’s cars that can change lanes and adjust speed during highway driving.

Tesla Motors' Model S sedan.

In Florida in May, a Tesla Model S sedan drove into a tractor-trailer crossing the road ahead while Autopilot was in control of the car. Neither Tesla’s Autopilot feature nor the driver applied the car’s brakes. In a blog post Thursday, Tesla said that Autopilot didn’t register the white side of the trailer against the bright sky.

Tesla’s Autopilot can steer the car, detect obstacles and lane markings, and use the brakes, all on its own. But it is far less capable than a human driver and lacks the sophistication and high-detail sensors seen in more mature autonomous-car projects like Google’s.

Tesla has been criticized for promoting the convenience of Autopilot—a name that suggests no human intervention is needed—while also maintaining that drivers must constantly be ready to take over from the software. The leader of Google’s autonomous-car project, Chris Urmson, has said his company’s experiments have proved that humans can’t be relied on to do that, because they quickly come to trust that the car knows what it’s doing. All the same, Tesla CEO Elon Musk has said his company’s data suggests Autopilot is twice as safe as human drivers.

We don’t yet know exactly what happened in May’s fatal accident. Tesla’s statement emphasizes that the driver knew he should always keep an eye on what Autopilot was doing. But if NHTSA finds the design of Autopilot to blame, Tesla could be forced to issue a recall, or might feel it has to dumb down the feature. That could hurt both Tesla and enthusiasm for the technology in general.

Whatever the outcome of NHTSA’s investigation, the incident is an opportunity to consider the standards to which we hold autonomous-driving software and the companies that design it. If it is to be widely used, we will have to accept its being involved in accidents—some fatal, and some caused by its own failings.

Human drivers set a low bar: about 90 percent of crashes are caused by human error, and dumb mistakes like driving while texting or drunk kill far too many people. It’s easy to see how machines could improve on that. But deciding how much better they need to be will be much more difficult.
