Even more impressive, it can explain its choices.
Explain yourself: The black box problem has long been a challenge in artificial intelligence. It refers to the tendency of algorithms to spit out results without explaining how they arrived at them—which can make weeding out bias difficult.
The news: In a paper published yesterday in Nature Medicine, DeepMind researchers described an AI system that can identify more than 50 diseases from medical scans, refer patients to a specialist, and, most important, indicate which portion of a scan prompted the diagnosis.
Why it matters: Explainability is crucial if AI is to see increased use in medicine. “Doctors and patients don’t want just a black box answer, they want to know why,” Ramesh Raskar, an associate professor at MIT, told Stat. “There is a standard of care, and if the AI technique doesn’t follow that standard of care, people are going to be uncomfortable with it.”