
Tesla Crash Will Shape the Future of Automated Cars

A tragic accident raises questions about the reliability and the public perception of automated driving.

A fatal accident that occurred while Tesla’s Autopilot technology was engaged may have a significant bearing on the future of automated driving.

The National Highway Traffic Safety Administration said yesterday that it is investigating the accident, which occurred near Williston, Florida, last month when a Model S collided with a tractor-trailer that was making a left turn across its path.

NHTSA’s investigation does not mean the agency believes the technology contributed to the accident or is defective. But the incident will inevitably raise questions about the performance of the technology and the way drivers treat it. The auto industry will certainly watch closely to see how the incident may shape regulations and influence the public perception of automated-driving technology. The agency is expected to release new guidelines on testing automated vehicles this month.

Tesla’s Autopilot is available on versions of the Model S equipped with the necessary hardware, which includes cameras, radar, and ultrasonic sensors. Autopilot can steer a car along a highway, following bends, slowing for other vehicles, and even overtaking them. In a statement, Tesla said that when the accident occurred, Autopilot failed to distinguish the white side of the tractor-trailer against the brightly lit sky.

Technology experts have warned for some time that self-driving systems are far from perfect, and that the process of handing control from a car back to a driver can be problematic (see “Driverless Cars Are Farther Away Than You Think”).

Dan Galves, chief communications officer for Mobileye, an Israeli company that provides image-processing software for Tesla and other automakers, says his company’s technology was not designed to cope with the type of obstacle that appeared to cause the accident now being investigated. “Today’s collision avoidance technology, or automatic emergency braking (AEB), is defined as rear-end collision avoidance and is designed specifically for that,” he says. “This incident involved a laterally crossing vehicle.”

Bosch, a German company that makes automotive technologies including components for automated driving, said the benefits of automation would make progress on the technology inevitable. “We remain convinced that the gradual introduction of automated vehicles can make a significant contribution to improving road safety,” the company said in a statement. “Automated driving is coming—not overnight, but gradually.”

The result of the NHTSA investigation could have profound implications for Tesla itself. Safety issues can be extremely damaging for carmakers, and it took Audi more than a decade to repair its reputation after a problem with “sudden acceleration” in some vehicles during the 1980s. But Tesla has a reputation for aggressively exploring new technologies, and many of its customers are enthusiastic about being on the cutting edge of automotive innovation. The company also has the capacity to fix problems quickly by updating its vehicles using a cellular data link.

Tesla pointed out in its statement that this was the first fatality in more than 130 million miles of driving with Autopilot activated, compared with one fatality in every 94 million miles of regular driving. This is a coarse comparison, however, since Autopilot is only meant to be used for highway driving in certain conditions. It might also prove irrelevant if NHTSA decides Autopilot was somehow at fault.
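To see what the raw figures imply, here is a minimal sketch of the rate arithmetic, assuming exactly one observed fatality in each case (which makes the Autopilot estimate statistically fragile) and ignoring that Autopilot miles are disproportionately highway miles in favorable conditions:

    # Fatality rates implied by the figures in Tesla's statement,
    # expressed per 100 million miles driven.
    autopilot_miles = 130e6  # miles driven with Autopilot engaged, one fatality
    regular_miles = 94e6     # miles per fatality for regular driving

    def per_100m(miles_per_fatality):
        # One fatality per the given mileage, scaled to 100 million miles.
        return 1 / miles_per_fatality * 100e6

    print(f"Autopilot: {per_100m(autopilot_miles):.2f} fatalities per 100M miles")
    print(f"Regular:   {per_100m(regular_miles):.2f} fatalities per 100M miles")

The sketch yields roughly 0.77 versus 1.06 fatalities per 100 million miles, but with a single Autopilot fatality in the numerator, the gap carries little statistical weight.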

“This accident is a tragic reminder of the very significant technical and social barriers facing autonomous vehicles,” says David Keith, an assistant professor at the MIT Sloan School of Management, who studies the auto industry. “Safely controlling an automobile in all possible driving situations is an enormously complex task, and one that algorithms have not yet mastered.”

It is unclear what the driver of the Model S was doing at the time of the accident. However, as Google’s car project has shown, automation can apparently encourage drivers to engage in risky behavior (see “Lazy Humans Shaped Google’s New Autonomous Car”).

Keith adds: “NHTSA has been relatively hands-off regarding autonomous vehicles to date, and it is certainly possible that this accident will lead to greater oversight regarding what constitutes safe operation of these semi-autonomous vehicles.”

 
