If we want AI to explain itself, here’s how it should tell us
Explainable AI systems aim to make decisions that are easily understood by humans—a laudable goal, but what makes a good explanation?
Testing the best: There's only one way to figure that out: ask some users. So that's what researchers from Harvard and Google Brain did, in a series of studies. Test subjects looked at different combinations of inputs, outputs, and explanations for a machine-learning algorithm designed to learn the dietary habits or medical conditions of an alien (yes, seriously: alien life was chosen to keep the test subjects' own biases from creeping in). Users then scored the different combinations.
Keep it short: Longer explanations were harder to parse than shorter ones, though breaking the same amount of text into many short lines was somehow better than making people read a few longer lines. As you can tell, the tests examined some pretty basic elements of how to deliver information, but at least it's a start.