A new AI system can explain itself—twice

Neural networks can answer a question about a photo and point to the evidence for their answer by annotating the image.

How it works: To test the Pointing and Justification Explanation (PJ-X) model, researchers gathered data sets made up of pairs of photographs showing similar scenes, like different types of lunches. Then they came up with a question that had a distinct answer for each photo in the pair (“Is this a healthy meal?”).
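For illustration, one such complementary training pair might be represented like the Python sketch below. The structure and field names are assumptions made for explanation, not the researchers' actual data format.

from dataclasses import dataclass

@dataclass
class ComplementaryPair:
    question: str   # shared question, e.g. "Is this a healthy meal?"
    image_a: str    # first photo (e.g. a salad)
    answer_a: str   # answer grounded in image_a, e.g. "yes"
    image_b: str    # similar scene (e.g. a hot dog)
    answer_b: str   # contrasting answer grounded in image_b, e.g. "no"

pair = ComplementaryPair(
    question="Is this a healthy meal?",
    image_a="salad.jpg", answer_a="yes",
    image_b="hot_dog.jpg", answer_b="no",
)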

What it does: After being trained on enough data, PJ-X could both answer the question in text (“No, it’s a hot dog with lots of toppings”) and put a heat map over the photo to highlight the evidence behind the answer (the hot dog and its many toppings).
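Conceptually, the model couples a text answer with an attention map over image regions; reshaped to the image grid, the attention weights serve as the heat map. The toy Python sketch below illustrates that general idea with random features. The function, shapes, and names are illustrative assumptions, not the PJ-X implementation.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def answer_with_heatmap(region_feats, question_vec, answer_weights, answers):
    # Toy VQA-with-pointing sketch (not the PJ-X code).
    # region_feats:   (H*W, D) image-region features
    # question_vec:   (D,) question embedding
    # answer_weights: (D, num_answers) classifier weights
    # answers:        candidate answer strings
    scores = region_feats @ question_vec   # relevance of each region
    attn = softmax(scores)                 # attention weights, sum to 1
    pooled = attn @ region_feats           # attention-weighted image summary
    logits = pooled @ answer_weights       # score each candidate answer
    answer = answers[int(np.argmax(logits))]
    # The attention weights, reshaped to the grid, are the heat map.
    return answer, attn

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 16))          # 7x7 grid of regions
q = rng.normal(size=16)
W = rng.normal(size=(16, 2))
ans, heat = answer_with_heatmap(feats, q, W, ["yes", "no"])
heatmap = heat.reshape(7, 7)               # overlayable on the photo
print(ans, heatmap.shape)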

Why it matters: Typical AIs are black boxes—good at identifying things, but with algorithmic logic that is opaque to humans. For a lot of AI uses, however—a system that diagnoses disease, for instance—understanding how the technology came to its decision could be critical.
