Neural networks can answer a question about a photo and point to the evidence for their answer by annotating the image.
How it works: To test the Pointing and Justification Explanation (PJ-X) model, researchers gathered data sets made up of pairs of photographs showing similar scenes, like different types of lunches. They then wrote questions that have distinct answers for each photo in a pair (“Is this a healthy meal?”).
What it does: After being trained on enough data, PJ-X could both answer the question in text (“No, it’s a hot dog with lots of toppings”) and put a heat map over the photo to highlight the evidence behind the answer (the hot dog and its many toppings).
Why it matters: Typical AIs are black boxes—good at identifying things, but with algorithmic logic that is opaque to humans. For a lot of AI uses, however—a system that diagnoses disease, for instance—understanding how the technology came to its decision could be critical.
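The heat-map step described above is, in broad strokes, a matter of upsampling a coarse grid of attention weights (the model's internal measure of which image regions mattered) to the photo's full resolution. The sketch below is illustrative only, not PJ-X's actual code: the 7×7 grid size and the nearest-neighbor upsampling are assumptions for the example.

```python
import numpy as np

def attention_to_heatmap(attn, image_shape):
    """Upsample a coarse attention grid to a per-pixel heatmap.

    attn: 2D array of attention weights over image regions
          (a 7x7 grid here -- an assumption, not PJ-X's spec).
    image_shape: (height, width) of the original photo.
    Returns a heatmap in [0, 1] with the image's shape.
    """
    h, w = image_shape
    gh, gw = attn.shape
    # Nearest-neighbor upsampling: map each pixel back to its grid cell.
    rows = np.arange(h) * gh // h
    cols = np.arange(w) * gw // w
    heatmap = attn[np.ix_(rows, cols)].astype(float)
    # Normalize so the strongest evidence region is exactly 1.0.
    rng = np.ptp(heatmap)
    if rng > 0:
        heatmap = (heatmap - heatmap.min()) / rng
    return heatmap

# Toy example: all attention on one region (say, the hot dog).
attn = np.zeros((7, 7))
attn[3, 4] = 1.0
hm = attention_to_heatmap(attn, (224, 224))
```

In a real system the resulting heatmap would be alpha-blended over the photo; bilinear upsampling would give smoother overlays than the nearest-neighbor version shown here.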
Why Meta’s latest large language model survived only three days online
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing
Online videos are a vast and untapped source of training data—and OpenAI says it has a new way to use it.
Responsible AI has a burnout problem
Companies say they want ethical AI. But those working in the field say that ambition comes at their expense.
Biotech labs are using AI inspired by DALL-E to invent new drugs
Two groups have announced powerful new generative models that can design, on demand, proteins not seen in nature.