Tesla Investigations Could Question Viability of Semi-Autonomous Driving

Evidence suggests that relying on people to oversee a partially autonomous car is a bad idea.
July 11, 2016

When Joshua Brown switched on the Autopilot feature of his Tesla Model S on May 7, he would have been warned not to trust it. “Always keep your hands on the wheel. Be prepared to take over at any time,” reads the standard warning presented when Autopilot is turned on. But later that day Brown was killed when his car drove itself into the side of a semi-trailer that Autopilot had not detected. Data from the vehicle shows no sign that he used any of the car’s controls immediately before the crash.

Federal investigations into the crash by the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board (NTSB) may now question whether Tesla’s design asks too much of drivers.

After the NHTSA announced it would investigate Brown’s crash, Tesla repeated a line it has used after less serious Autopilot collisions. “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert,” the company said. It described the investigation as seeking to “determine whether the system worked according to expectations.”

But John Maddox, who was head of vehicle safety research at the NHTSA between 2008 and 2012, says that in both that investigation and another into a nonfatal crash involving a Tesla Model X SUV in Pennsylvania this month, the agency will likely consider whether it is unreasonable to expect drivers to supervise Autopilot.

“That could be a contributing component to these crashes, based on media reports,” says Maddox, who is currently CEO of the American Center for Mobility, a nonprofit working to establish a test facility for automated driving technology. “There’s a concept of foreseeable misuse where no matter what you tell [drivers], they might misuse this technology, and that could be an unreasonable risk.”

Decades of NHTSA history show that merely advising drivers of a risk isn’t enough, says Maddox. For example, when floor mats were found to cause “unintended acceleration” in Toyota vehicles, the NHTSA cited the manufacturer even though owners had been advised to be careful not to replace or dislodge the mats.

John Lee, a professor at the University of Wisconsin, also says the Tesla investigation should consider whether it is reasonable to expect humans to step in when Autopilot fails. He worked on a National Academies report into whether faults in vehicle electronics contributed to Toyota’s acceleration problems. It concluded that they didn’t, but chastised the NHTSA for not being better equipped to investigate electronic systems in cars.

“The Tesla mishap is one of those cases that the report was trying to prepare the NHTSA to deal with,” says Lee. He says that considering the broader design, not just whether it works as Tesla describes, makes sense because there is strong evidence that humans cannot be trusted to reliably supervise systems like Autopilot.

“Fundamentally and physiologically people are ill-suited to monitoring systems for occasional faults,” says Lee.

There is ample evidence that people struggle to stay focused even when fully in control of a vehicle. The National Safety Council estimates that 1.6 million crashes in the U.S. each year involve drivers using cell phones.

If Autopilot mostly works without requiring their input, people will naturally feel the urge not to supervise it closely and will be unprepared to take over when it does need their help, says Lee. “That’s a very difficult situation to put people in and expect them to respond appropriately,” he says. Lee says his lab has done driving-simulator studies demonstrating that point.

The head of Google’s autonomous-car project has said that the company decided designs like Tesla’s were dangerous after lending 140 people a prototype capable of handling only some driving tasks. Despite knowing they were being recorded, drivers quickly grew to trust the technology and did things like turning around to look for items on the backseat (see “Lazy Humans Shaped Google’s New Autonomous Car”).

The Tesla investigations are the first time the NHTSA has had to wrestle with the safety issues raised by a system that takes over so much of the work of a vehicle’s operator. Those issues are less new to the NTSB, which announced over the weekend that it would also investigate the fatal Tesla crash.

The NTSB mostly works on airplane and train crashes, where data from “black boxes” and automation features are commonly at the heart of investigations. In recent years, the agency’s chairman, Christopher Hart, has complained that increasing cockpit automation is eroding the skills and professionalism of commercial pilots.

Last month Hart warned that his agency’s findings in such cases give reasons for caution in efforts to add self-driving functions to cars. “Introducing automation into complex human-centric systems can be very challenging,” he said. “We have seen that the challenges can be even more difficult in a system that still has substantial human operator involvement and is not completely automated.”
