AI can spot your eye disease with 94.5 percent accuracy
Even more impressive, it can explain its choices.
Explain yourself: The "black box" problem has long been a challenge in artificial intelligence. Algorithms tend to spit out results without explaining how they arrived at them—and that opacity can make weeding out bias difficult.
The news: In a paper published in Nature Medicine yesterday, DeepMind researchers described an AI system that can identify more than 50 eye diseases, recommend a referral to a specialist, and, most important, indicate which portion of a medical scan prompted its diagnosis.
Why it matters: Explainability is crucial if AI is to see increased use in medicine. “Doctors and patients don’t want just a black box answer, they want to know why,” Ramesh Raskar, an associate professor at MIT, told Stat. “There is a standard of care, and if the AI technique doesn’t follow that standard of care, people are going to be uncomfortable with it.”