Intelligent Machines

The U.S. Military Wants Its Autonomous Machines to Explain Themselves

The latest machine-learning techniques are essentially black boxes. DARPA is funding a number of efforts to open them up.

Intelligence agents and military operatives may come to rely heavily on machine learning to parse huge quantities of data, and to control a growing arsenal of autonomous systems. But the U.S. military wants to make sure that this doesn’t lead to blindly trusting any algorithm.

The Defense Advanced Research Projects Agency (DARPA), a division of the Defense Department that explores new technologies, is funding several projects that aim to make artificial intelligence explain itself. The approaches range from adding further machine-learning systems geared toward providing an explanation, to developing new machine-learning techniques that incorporate an explanation by design.

“We now have this real explosion of AI,” says David Gunning, the DARPA program manager who is funding an effort to develop AI techniques that include some explanation of their reasoning. “The reason for that is mainly machine learning, and deep learning in particular.”

Deep learning and other machine-learning techniques have taken Silicon Valley by storm, significantly improving voice recognition and image classification, and they are being used in ever more contexts, including areas like law enforcement and medicine, where the consequences of a mistake can be serious. But while deep learning is incredibly good at finding patterns in data, it can be impossible to understand how it reaches a conclusion. The learning process is mathematically very complex, and there is often no way to translate it into something a person would understand.

And while deep learning is particularly hard to interpret, other machine-learning techniques can also be challenging. “These models are very opaque and difficult for people to interpret, especially if they’re not an expert in AI,” Gunning says.

Deep learning is especially cryptic because of its incredible complexity. It is roughly inspired by the process by which neurons in a brain learn in response to input. Many layers of simulated neurons and synapses are fed labeled data, and their behavior is tuned until they learn to recognize, say, a cat in a photograph. But the model the system learns is encoded in the weights of many millions of neurons, and is therefore very challenging to examine. When a deep-learning network recognizes a cat, for instance, it isn’t clear whether it is focusing on the whiskers, the ears, or even the cat’s blanket in an image.
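To see why those weights reveal so little on their own, researchers often probe a trained network from the outside instead. Here is a minimal Python sketch of one common probe, a gradient-based saliency map, which asks which pixels the classification score is most sensitive to. The pretrained model, the file name "cat.jpg", and the preprocessing are illustrative assumptions, not any particular DARPA system.

    import torch
    import torchvision.models as models
    import torchvision.transforms as transforms
    from PIL import Image

    # A standard pretrained classifier, standing in for any deep network.
    model = models.resnet18(pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # "cat.jpg" is a hypothetical input image.
    image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
    image.requires_grad_(True)

    # Gradient of the winning class's score with respect to the input pixels.
    scores = model(image)
    scores[0, scores.argmax()].backward()

    # Pixels with large gradients are the ones the decision hinges on:
    # ideally the whiskers and ears, but possibly the blanket.
    saliency = image.grad.abs().max(dim=1)[0]

Rendered as a heat map over the photo, the large-gradient pixels give a rough, after-the-fact picture of what the network attended to.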

Often, it might not matter that much if a machine-learning model is opaque, but this isn’t true for an intelligence officer trying to identify a potential target. “There are some critical applications where you need the explanation,” Gunning says.

Gunning adds that the military is developing countless autonomous systems that will undoubtedly rely heavily on machine-learning techniques like deep learning. Self-driving vehicles and aerial drones will see growing use in coming years, he says, and they will become ever more capable.

Explainability isn’t just important for justifying decisions; it can also help prevent things from going wrong. An image classifier that has learned to focus purely on texture when identifying cats might be fooled by a furry rug. So offering an explanation could help researchers make their systems more robust, and help prevent those who rely on them from making mistakes.

DARPA is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.

One team selected for funding comes from Charles River Analytics, a company that develops high-tech tools for various customers, including the U.S. military. This team is exploring new deep-learning systems that incorporate an explanation, such as ones that highlight areas of an image that seem most relevant to a classification. The researchers are also experimenting with computer interfaces that make the workings of machine-learning systems more explicit with data, visualizations, and even natural language explanations.
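One common way to produce that kind of highlight, and a plausible baseline for such work, is occlusion sensitivity: slide a blank patch across the image and watch how the classification score changes. The Python sketch below assumes a PyTorch-style classifier; the function and its parameters are hypothetical stand-ins, not Charles River Analytics’ published method.

    import torch

    def occlusion_map(model, image, target_class, patch=16, stride=16):
        """Slide a gray patch over the image and record how much the
        target-class score drops at each position; the biggest drops
        mark the regions most relevant to the classification."""
        model.eval()
        _, _, h, w = image.shape
        with torch.no_grad():
            base = model(image)[0, target_class].item()
            heat = torch.zeros(h // stride, w // stride)
            for i in range(0, h - patch + 1, stride):
                for j in range(0, w - patch + 1, stride):
                    occluded = image.clone()
                    occluded[:, :, i:i + patch, j:j + patch] = 0.5  # gray square
                    score = model(occluded)[0, target_class].item()
                    heat[i // stride, j // stride] = base - score
        return heat  # upsample and overlay on the image to visualize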

Xia Hu, a professor at Texas A&M University who leads another of the teams chosen for funding, says the problem is also important in other areas where machine learning is being adopted, such as medicine, law, and education. Without some sort of explanation or reasoning, “domain experts are not going to trust the results,” Hu says. “That’s the main reason why many domain experts refuse to adopt machine learning or deep learning.”
