Even more impressive, it can explain its choices.
Explain yourself: The "black box" problem has long been a challenge in artificial intelligence. Algorithms tend to spit out results without explaining how they arrived at them—which can make weeding out bias difficult.
The news: In a paper published in Nature Medicine yesterday, DeepMind researchers described an AI system that can identify more than 50 diseases, recommend patients for referral to a specialist, and, most important, indicate which portion of a medical scan prompted the diagnosis.
Why it matters: Explainability is crucial if AI is to see increased use in medicine. “Doctors and patients don’t want just a black box answer, they want to know why,” Ramesh Raskar, an associate professor at MIT, told Stat. “There is a standard of care, and if the AI technique doesn’t follow that standard of care, people are going to be uncomfortable with it.”
This artist is dominating AI-generated art. And he’s not happy about it.
Greg Rutkowski is a more popular prompt than Picasso.
What does GPT-3 “know” about me?
Large language models are trained on troves of personal data hoovered from the internet. So I wanted to know: What does it have on me?
An AI that can design new proteins could help unlock new cures and materials
The machine-learning tool could help researchers discover entirely new proteins not yet known to science.
DeepMind’s new chatbot uses Google searches plus humans to give better answers
The lab trained a chatbot to learn from human feedback and search the internet for information to support its claims.