If we want AI to explain itself, here’s how it should tell us
Explainable AI systems aim to make decisions that are easily understood by humans—a laudable goal, but what makes a good explanation?
Testing the best: There's only one way to figure that out: ask some users. So that's what researchers from Harvard and Google Brain did in a series of studies. Test subjects looked at different combinations of inputs, outputs, and explanations for a machine-learning algorithm designed to infer the dietary habits or medical conditions of an alien (yes, seriously: alien life was chosen to keep the test subjects' own biases from creeping in). Users then scored the different combinations.
Keep it short: Users found longer explanations harder to parse than shorter ones, though breaking the same amount of text into many short lines worked better than presenting a few longer lines. As you can tell, the tests examined some pretty basic elements of how to deliver information, but at least it's a start.