
Autopilot’s Limitations Played “Major Role” in Fatal Tesla Crash, NTSB Says

September 12, 2017

The chairman of the National Transportation Safety Board said Tuesday that "system safeguards were lacking" in the Tesla Model S involved in a May 2016 Florida crash that killed the driver when the car struck a truck.

According to Reuters, the new statement from the NTSB suggests that Tesla not only fielded a car with limited autonomous capabilities—something the company has previously acknowledged—but one that was also incapable of ensuring its human driver was paying attention to the road when its Autopilot system was engaged.

“Tesla allowed the driver to use the system outside of the environment for which it was designed and the system gave far too much leeway to the driver to divert his attention,” Robert Sumwalt, the agency's chairman, said.

The news comes less than a month after the Wall Street Journal published a detailed account of the Autopilot program at Tesla, which included several accounts of its engineers expressing deep concerns about the system's safety. The system was only meant to provide partial autonomy that required continuous driver attention, but the company, and particularly CEO Elon Musk, publicly suggested that Autopilot was capable of fully autonomous driving.

At the time of the crash, Tesla vehicles were outfitted to detect whether a driver's hands were on the wheel, and to emit warning sounds if they were removed for more than a few seconds. In its statement, the NTSB said that such measures were insufficient for monitoring whether a driver was paying attention to the road.

Tesla has since tightened how its cars monitor driver engagement and has upgraded the sensor packages in new vehicles. But the NTSB's statement is a necessary reminder that we still have a long way to go in solving the human-vehicle interaction problem that lies at the heart of designing semi-autonomous cars.

Illustration by Rose Wong