Even more impressive, it can explain its choices.
Explain yourself: The "black box" problem has long been a challenge in artificial intelligence. It refers to the tendency of algorithms to spit out results without explaining how they arrived at them—which can make weeding out bias difficult.
The news: In a paper published in Nature Medicine yesterday, DeepMind researchers described an AI system that can identify more than 50 diseases, recommend referring patients to a specialist, and, most important, indicate which portion of a medical scan prompted the diagnosis.
Why it matters: Explainability is crucial if AI is to see increased use in medicine. “Doctors and patients don’t want just a black box answer, they want to know why,” Ramesh Raskar, an associate professor at MIT, told Stat. “There is a standard of care, and if the AI technique doesn’t follow that standard of care, people are going to be uncomfortable with it.”