Fatal Tesla Autopilot Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up
More than 30,000 people are killed by cars in the U.S. each year, and people working on autonomous-driving technology at companies such as Google and Tesla say any technology that can significantly reduce that figure deserves serious attention.
But even if automated cars can be much safer than conventional ones, they will still be involved in accidents. No software can be perfect. And as self-driving technology matures, regulators and society as a whole will have to decide just how safe these vehicles need to be. Indeed, it has been argued that in some situations autonomous vehicles must be programmed to actively choose which people to harm.
Those thorny issues became more concrete today with news that Tesla is being investigated by the U.S. National Highway Traffic Safety Administration after a fatal crash involving the company’s Autopilot automated driving feature, which can change lanes and adjust speed during highway driving on some of the company’s cars.

In Florida in May, a Tesla Model S sedan drove into a tractor-trailer crossing the road ahead while Autopilot was in control of the car. Neither Tesla’s Autopilot feature nor the driver applied the car’s brakes. In a blog post Thursday, Tesla said that Autopilot didn’t register the white side of the trailer against the bright sky.
Tesla’s Autopilot can steer the car, detect obstacles and lane markings, and apply the brakes, all on its own. But it is far less capable than a human driver and lacks the sophistication and high-detail sensors seen in more mature autonomous-car projects such as Google’s.
Tesla has been criticized for promoting the convenience of Autopilot—a name that suggests no human intervention is needed—while also maintaining that drivers must constantly be ready to take over from the software. The leader of Google’s autonomous-car project, Chris Urmson, has said his company’s experiments have proved that humans can’t be relied on to do that, because they quickly come to trust that the car knows what it’s doing. All the same, Tesla CEO Elon Musk has said his company’s data suggests Autopilot is twice as safe as human drivers.
We don’t yet know exactly what happened in May’s fatal accident. Tesla’s statement emphasizes that the driver knew he should always keep an eye on what Autopilot was doing. But if NHTSA finds the design of Autopilot to blame, Tesla could be forced to issue a recall, or might feel it has to dumb down the feature. That could hurt both Tesla and enthusiasm for the technology in general.
Whatever the outcome of NHTSA’s investigation, the incident is an opportunity to consider the standards to which we hold autonomous-driving software and the companies that design it. If the technology is to be widely used, we will have to accept that it will be involved in accidents—some fatal, and some caused by its own failings.
Human drivers set a low bar: about 90 percent of crashes are caused by human error, and dumb mistakes like driving while texting or drunk kill far too many people. It’s easy to see how machines could improve on that. But deciding how much better they need to be will be much more difficult.