
If a Driverless Car Goes Bad We May Never Know Why

It’s incredibly difficult to figure out why some of the AI used by self-driving cars does what it does.

Two recent accidents involving Tesla’s Autopilot system raise questions about how computer systems that learn from data should be validated, and how they should be investigated when something goes wrong.

A fatal Tesla accident in Florida last month occurred when a Model S controlled by Autopilot crashed into a truck that the automated system failed to spot. Tesla tells drivers to pay attention to the road while using Autopilot, and explains in a disclaimer that the system may struggle in bright sunlight. Today the National Highway Traffic Safety Administration said it was investigating another accident, in Pennsylvania last week, in which a Model X hit the barriers on both sides of a highway and overturned. The driver said his car was operating in Autopilot mode at the time.

Tesla hasn’t disclosed precisely how Autopilot works. But machine learning techniques are increasingly used to train automotive systems, especially to recognize visual information. Mobileye, an Israeli company that supplies technology to Tesla and other automakers, offers software that uses deep learning to recognize vehicles, lane markings, road signs, and other objects in video footage.

Machine learning provides an easier way to program computers to do things that are incredibly difficult to code by hand. For example, a deep learning neural network can be trained to recognize dogs in photographs or video footage with remarkable accuracy, provided it sees enough examples. The flip side is that it is harder to understand exactly how such a system works.
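
Tesla has not published Autopilot’s internals, but the training recipe behind deep learning recognizers is widely documented. The snippet below is a minimal, purely illustrative sketch in PyTorch, not Tesla’s or Mobileye’s code: random tensors stand in for a collection of labeled photographs, and the network simply adjusts its internal weights to make fewer labeling mistakes.

```python
# Illustrative sketch only: train a tiny convolutional network to label
# images as "dog" or "not dog". Random tensors stand in for real photos.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                    # two outputs: dog / not dog
)

images = torch.randn(64, 3, 64, 64)      # stand-in for 64 labeled photos
labels = torch.randint(0, 2, (64,))      # stand-in for human-supplied labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Nobody tells the network what a dog looks like; it only sees examples
# and nudges its weights to reduce its labeling error.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Given enough real examples, the same loop yields a remarkably accurate recognizer, but what it learns is spread across those weights rather than written down as rules anyone can read.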

A neural network can be designed to provide a measure of its own confidence in a categorization, but the complexity of the mathematical calculations involved means it’s not straightforward to take the network apart to understand how it makes its decisions. This can make unintended behavior hard to predict, and if a failure does occur, it can be difficult to explain why. If a system misrecognizes an object in a photo, for instance, it may be hard (though not impossible) to know what feature of the image led to the error. Similar challenges exist with other machine learning techniques.
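
To make the confidence point concrete, here is a hypothetical continuation of the sketch above. Passing a classifier’s raw outputs through a softmax yields a probability for each label, which serves as the network’s confidence score; the stub model below exists only so the snippet runs on its own. The score comes out easily, while the reasons behind it are buried in the learned parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stub classifier (any trained image classifier could take its place).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))

photo = torch.randn(1, 3, 64, 64)            # placeholder for one image
with torch.no_grad():
    probs = F.softmax(model(photo), dim=1)   # e.g. tensor([[0.91, 0.09]])
confidence, prediction = probs.max(dim=1)    # the network's own confidence

# The "why" behind that score lives in the parameters: large arrays of
# numbers with no individually readable meaning.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```

Researchers do have tools, such as saliency maps, for estimating which pixels mattered most to a decision, which is why explaining an error is hard rather than impossible.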

As these algorithms become more common, regulators will need to consider how they should be evaluated. Carmakers are aware that increasingly complex and automated cars may be difficult for regulators to probe. Toyota is funding a research project at MIT that will explore ways for automated vehicles to explain their actions after the fact; it is one of several projects the Japanese automaker is backing to address the challenges of self-driving cars.

Beyond recognizing objects in images, deep learning can also be used to control a car directly in response to sensor data. A team at Princeton designed an automated driving system based largely on deep learning, and researchers at the chipmaker Nvidia, which makes a range of hardware for automakers, have demonstrated an automated vehicle that relies entirely on deep learning.
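
Neither the Princeton nor the Nvidia system is reproduced here, but the end-to-end idea they illustrate can be sketched in a few lines. In the hypothetical sketch below, a single network maps a camera frame directly to a steering angle and is trained to imitate steering recorded from a human driver; the network shape and data are stand-ins, not either group’s actual system.

```python
import torch
import torch.nn as nn

# End-to-end sketch: camera frame in, steering angle out.
driver_net = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(48, 100), nn.ReLU(),
    nn.Linear(100, 1),                       # a single steering command
)

frames = torch.randn(32, 3, 66, 200)         # stand-in camera frames
angles = torch.randn(32, 1)                  # stand-in recorded steering

optimizer = torch.optim.Adam(driver_net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Trained to reproduce the human steering recorded for each frame.
for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(driver_net(frames), angles)
    loss.backward()
    optimizer.step()

# Note there is no separate "truck detector" or "lane keeper" to audit:
# the entire mapping from pixels to steering lives in the learned weights.
```

That design is exactly what makes such a system hard to interrogate after an accident.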

Karl Iagnemma, a principal research scientist at MIT and the founder of nuTonomy, a startup working on automated taxis, says an end-to-end deep learning system would be difficult to interrogate. “You’re developing a black-box algorithm that’s being trained on examples of safe driving but whose output is a fairly inscrutable function,” he says.

Silvio Savarese, an assistant professor at Stanford who specializes in machine vision, says one drawback of conventional machine learning is that it lacks a human’s ability to draw conclusions from many forms of information at once. Even if another vehicle is temporarily obscured from view, for example, a human driver can infer that it may become an obstacle based on its trajectory. “We use a lot of contextual information,” he says. “The current learning mechanisms don’t do this well.”

The Tesla investigation is being watched closely by those developing automated driving technology. Whatever its conclusions, there is concern about how the incident will shape public perception of the technology and its safety. Iagnemma does not want to see a knee-jerk reaction to the accident.

"We’re at a moment where this could put the brakes on progress,” he says. “If the collective wisdom becomes that a single accident means that the developers were reckless, that’s a very high bar to set.”
