
Nvidia Lets You Peer Inside the Black Box of Its Self-Driving AI

In a step toward making AI more accountable, Nvidia has developed a neural network for autonomous driving that highlights what it’s focusing on.

Nvidia has developed a self-driving AI that shows you how it works.

As we explained in our latest cover story, “The Dark Secret at the Heart of AI,” some of the most powerful machine-learning techniques available result in software that is almost completely opaque, even to the engineers who build it. Approaches that provide some clues as to how an AI works will therefore be hugely important for building trust in a technology that looks set to revolutionize everything from medicine to manufacturing.

Nvidia provides chips that are ideal for deep learning, an especially powerful machine-learning technique (see “10 Breakthrough Technologies 2013: Deep Learning”).

Nvidia's neural network software highlights the areas it's focusing on as it makes driving decisions.

The chip maker has also been developing systems that demonstrate how an automaker might apply deep learning to autonomous driving. This includes a car that is controlled entirely by a deep-learning algorithm. Amazingly, the vehicle’s computer isn’t given any rules to follow—it simply matches input from several video cameras to the behavior of a human driver, and figures out for itself how it should drive. The only catch is that the system is so complex that it’s difficult to untangle how it actually works.
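To make the “no rules, just imitation” idea concrete, here is a minimal sketch of end-to-end learning in PyTorch: a small convolutional network maps a camera frame straight to a steering angle and is trained only to match recorded human driving. The network shape, frame size, and training details are illustrative assumptions, not Nvidia’s published system.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Illustrative end-to-end driving network: camera frame in, steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),  # collapse to a fixed-size feature vector
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 100), nn.ReLU(), nn.Linear(100, 1)
        )

    def forward(self, frame):
        # One camera frame in, one predicted steering angle out.
        return self.head(self.features(frame))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a batch of (camera frame, human steering angle) pairs.
frames = torch.randn(8, 3, 66, 200)   # stand-in for real camera frames
human_angles = torch.randn(8, 1)      # stand-in for recorded human steering
loss = loss_fn(model(frames), human_angles)  # learn to imitate the human driver
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

No traffic rules appear anywhere in the code: the only training signal is the difference between the network’s steering output and what the human did in the same situation.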

But Nvidia is working to open this black box. It has developed a way to visually highlight what the system is paying attention to. As explained in a recently published paper, the architecture Nvidia’s researchers developed is designed to highlight the areas of a video frame that contribute most strongly to the network’s driving decisions. Remarkably, the results show that the network is focusing on the edges of roads, lane markings, and parked cars—just the sort of things that a good human driver would want to pay attention to.
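Nvidia’s own visualization method is described in its paper; as a rough illustration of the general idea, the sketch below produces a comparable “where is the network looking” map by taking the gradient of the steering output with respect to the input pixels. This is a generic saliency technique applied to the hypothetical SteeringNet above, not the method Nvidia used.

```python
import torch

def saliency_map(model, frame):
    """Per-pixel importance map for a single camera frame (generic gradient saliency)."""
    model.eval()
    frame = frame.clone().requires_grad_(True)    # track gradients w.r.t. the pixels
    steering = model(frame.unsqueeze(0)).squeeze()
    steering.backward()                           # d(steering) / d(pixels)
    # Collapse the color channels and keep gradient magnitudes.
    return frame.grad.abs().max(dim=0).values

# Usage with the SteeringNet sketch above (frame shape: 3 x 66 x 200).
frame = torch.randn(3, 66, 200)
heatmap = saliency_map(model, frame)  # high values mark pixels that sway the steering most
```

Overlaying such a heatmap on the camera image is what lets researchers see whether the network is attending to road edges and lane markings or to something irrelevant.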

“What’s revolutionary about this is that we never directly told the network to care about these things,” Urs Muller, Nvidia’s chief architect for self-driving cars, wrote in a blog post.

It isn’t a complete explanation of how the neural network reasons, but it’s a good start. As Muller says: “I can’t explain everything I need the car to do, but I can show it, and now it can show me what it learned.”

This sort of approach could become increasingly important as deep learning is applied to just about any problem involving large quantities of data, including critical areas like medicine, finance, and military intelligence.

A handful of academic researchers are exploring the issue as well. For example, Jeff Clune at the University of Wyoming and Carlos Guestrin at the University of Washington (and Apple) have found ways of highlighting the parts of images that classification systems are picking up on. And Tommi Jaakola and Regina Barzilay at MIT are developing ways to provide snippets of text that help explain a conclusion drawn from large quantities of written data.

The Defense Advanced Research Projects Agency (DARPA), which does long-term research for the U.S. military, is funding several similar research efforts through a program it calls Explainable Artificial Intelligence (XAI).

Beyond the technical specifics, though, it’s fascinating to consider how this compares to human intelligence. We do all sorts of things we can’t explain fully, and the explanations we concoct are often only approximations, or “stories” about what’s going on. Given the opacity of today’s increasingly complex machine-learning methods, we may someday be forced to accept such explanations from AI, too.

(Sources: Nvidia, “The Dark Secret at the Heart of AI”, “The U.S. Military Wants Its Autonomous Machines to Explain Themselves”)

