
Intelligent Machines

If a Driverless Car Goes Bad, We May Never Know Why

It’s incredibly difficult to figure out why some of the AI used by self-driving cars does what it does.

Two recent accidents involving Tesla’s Autopilot system raise questions about how computer systems that learn from data should be validated, and how they should be investigated when something goes wrong.

A fatal Tesla accident in Florida last month occurred when a Model S controlled by Autopilot crashed into a truck that the automated system failed to spot. Tesla tells drivers to pay attention to the road while using Autopilot, and explains in a disclaimer that the system may struggle in bright sunlight. Today the National Highway Traffic Safety Administration said it was investigating another accident in Pennsylvania last week where a Model X hit the barriers on both sides of a highway and overturned. The driver said his car was operating in Autopilot mode at the time.

Tesla hasn’t disclosed precisely how Autopilot works. But machine learning techniques are increasingly used to train automotive systems, especially to recognize visual information. Mobileye, an Israeli company that supplies technology to Tesla and other automakers, offers software that uses deep learning to recognize vehicles, lane markings, road signs, and other objects in video footage.

A technician examines a Tesla using a laptop computer.

Machine learning can provide an easier way to program computers to do things that are incredibly difficult to code by hand. For example, a deep learning neural network can be trained to recognize dogs in photographs or video footage with remarkable accuracy provided it sees enough examples. The flip side is that it can be more complicated to understand how these systems work.
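The contrast with hand-written rules can be made concrete. The toy sketch below (purely illustrative, with made-up synthetic data, and nothing like the scale of a real vision system) trains a tiny two-layer neural network to separate two classes of points from examples alone. No rule describing the boundary is ever written down; it emerges in the numeric weights, which is precisely what makes the result hard to inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: class 0 clustered near (-1, -1), class 1 near (1, 1).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Network parameters: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)              # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output in (0, 1)
    return h, p.ravel()

# Plain gradient descent on the cross-entropy loss.
lr = 0.5
for _ in range(500):
    h, p = forward(X)
    err = (p - y)[:, None] / len(X)       # gradient of loss w.r.t. output logit
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)        # backpropagate through tanh
    gW1, gb1 = X.T @ dh, dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
```

After training, the network classifies the points almost perfectly, yet the "explanation" of any single decision is spread across 33 learned numbers; in a production vision network that figure runs to millions.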

A neural network can be designed to provide a measure of its own confidence in a categorization, but the complexity of the mathematical calculations involved means it’s not straightforward to take the network apart to understand how it makes its decisions. This can make unintended behavior hard to predict, and if a failure does occur, it can be difficult to explain why. If a system misrecognizes an object in a photo, for instance, it may be hard (though not impossible) to know what feature of the image led to the error. Similar challenges exist with other machine learning techniques.
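The standard way such a confidence measure is produced is the softmax function, which converts a network’s raw class scores into probabilities. The sketch below uses invented scores for hypothetical classes (these numbers and labels are illustrative, not from any real system): the network reports high confidence in its top pick, but that single number says nothing about which features of the input drove the decision.

```python
import numpy as np

# Hypothetical raw scores (logits) a vision network might emit for one
# frame, over the illustrative classes below. Invented numbers.
classes = ["car", "truck", "sky"]
logits = np.array([2.1, 0.3, 4.0])

def softmax(z):
    z = z - z.max()        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)            # probabilities summing to 1
top_class = classes[probs.argmax()]
confidence = probs.max()           # the network's confidence in its top class
```

Here the network would be roughly 85 percent confident in its answer, but the probability is a summary of the output, not an explanation of the reasoning that produced it.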

As these algorithms become more common, regulators will need to consider how they should be evaluated. Carmakers are aware that increasingly complex and automated cars may be difficult for regulators to probe. Toyota is funding a research project at MIT that will explore ways for automated vehicles to explain their actions after the fact, one of a number of projects the Japanese automaker is backing on the challenges of self-driving cars.

Deep learning can be used to control a car in response to sensor data, beyond just recognizing objects in images. A team at Princeton designed an automated driving system based largely on deep learning. And researchers at the chipmaker Nvidia, which makes a range of hardware for automakers, have demonstrated an automated vehicle that relies entirely on deep learning.
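The "end-to-end" idea can be reduced to its simplest form: learn a direct mapping from raw sensor input to a control output, with no hand-coded lane-keeping logic in between. The miniature sketch below (a sketch only, standing in for the deep networks such systems actually use, with fabricated data) fits a linear map from flattened fake camera frames straight to a steering value using least squares. Even at this trivial scale, the resulting controller is just a vector of learned weights, which is why interrogating such systems is hard.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake demonstration data: 500 tiny 4x4 grayscale "frames" (flattened to
# 16 numbers) paired with the steering value a driver chose. The true
# relationship here is a fixed linear map plus a little noise.
frames = rng.normal(size=(500, 16))
true_map = rng.normal(size=16)
steering = frames @ true_map + rng.normal(0, 0.01, 500)

# "Training," end to end: least-squares fit from pixels to steering.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# The entire controller is now this opaque parameter vector; asking *why*
# it steers a given way means interpreting 16 raw numbers (millions, in a
# real deep network).
predicted = frames @ weights
mse = np.mean((predicted - steering) ** 2)
```

The fit is nearly exact on this toy data, yet nothing in the learned weights resembles a rule a safety investigator could read off after an accident.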

Karl Iagnemma, a principal research scientist at MIT and the founder of nuTonomy, a startup working on automated taxis, says an end-to-end deep learning system would be difficult to interrogate. “You’re developing a black-box algorithm that’s being trained on examples of safe driving but whose output is a fairly inscrutable function,” he says.

Silvio Savarese, an assistant professor at Stanford who specializes in machine vision, says one drawback with conventional machine learning is that it lacks a human’s ability to draw conclusions from various forms of information. Even if a vehicle is temporarily obstructed, for example, a person may surmise that it could become an obstacle based on its trajectory. “We use a lot of contextual information,” he says. “The current learning mechanisms don’t do this well.”

The Tesla investigation is being watched closely by those developing automated driving technology. Whatever the conclusions, there is concern about the public perception of the technology and its safety. Iagnemma does not want to see a knee-jerk reaction to the accident.

“We’re at a moment where this could put the brakes on progress,” he says. “If the collective wisdom becomes that a single accident means that the developers were reckless, that’s a very high bar to set.”

