After crunching through thousands of chest x-rays and the clinical reports that accompany them, an AI has learned to spot diseases in those scans as accurately as a human radiologist.
The majority of current diagnostic AI models are trained on scans labeled by humans, but that labeling is a time-consuming process. The new model, called CheXzero, can instead “learn” on its own from existing medical reports that specialists have written in natural language.
The findings suggest that manually labeling x-rays isn’t necessary to train AI models to interpret medical images, which could save both time and money.
A team of researchers from Harvard Medical School trained the CheXzero model on a publicly available data set of more than 377,000 chest x-rays and more than 227,000 corresponding clinical reports. This taught it to associate certain types of images with their existing notes, rather than learning from structured data that had been manually labeled for the task.
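Learning to associate images with free-text reports in this way is typically done with a contrastive objective, as popularized by CLIP (on which CheXzero builds): in each batch, the embedding of an x-ray should score higher against its own report than against every other report, and vice versa. Below is a minimal numpy sketch of that idea, not the authors' code; the function name and embedding shapes are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (CLIP-style) loss over a batch of
    image/report embedding pairs. Matched pairs sit on the diagonal
    of the similarity matrix and should outscore every mismatch."""
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # shape (batch, batch)

    def xent(l):
        # cross-entropy with the diagonal (true pair) as the target
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the image->report and report->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

When image and report embeddings line up pair-for-pair, the loss is near zero; shuffling the reports against the images drives it up, which is the training signal that replaces manual labels.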
CheXzero’s performance was then tested on separate data sets from two different institutions, one in another country, to check that it was capable of matching images with the corresponding notes even when the reports contained differing terminology.
The research, described in Nature Biomedical Engineering, found that the model was more effective at identifying issues such as pneumonia, collapsed lungs, and lesions than other self-supervised AI models. In fact, it was similar in accuracy to human radiologists.
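Because the model was never given class labels, identifying a condition such as pneumonia at test time is usually done zero-shot: embed a positive prompt (e.g. "pneumonia") and a negative prompt (e.g. "no pneumonia"), and see which one the x-ray's embedding sits closer to. The sketch below illustrates that prompt-comparison step under assumed embedding inputs; it is a simplified illustration, not CheXzero's implementation.

```python
import numpy as np

def zero_shot_probability(image_emb, pos_text_emb, neg_text_emb):
    """Estimate the probability a pathology is present by comparing the
    image embedding's cosine similarity to a positive-prompt embedding
    versus a negative-prompt embedding, then taking a 2-way softmax."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    s_pos = cos(image_emb, pos_text_emb)
    s_neg = cos(image_emb, neg_text_emb)
    # softmax over the two similarities (shifted for stability)
    e = np.exp(np.array([s_pos, s_neg]) - max(s_pos, s_neg))
    return float(e[0] / e.sum())
```

The same trick extends to any pathology a clinician can phrase as a pair of prompts, which is why a single model can score many diseases without ever being trained on labels for them.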
Others have tried to use unstructured medical data this way, but this is the first time a team’s AI model has learned from unstructured text and matched radiologists’ performance, predicting multiple diseases from a given x-ray with a high degree of accuracy, says Ekin Tiu, an undergraduate student at Stanford and a visiting researcher who coauthored the report.
“We are the first to do that and demonstrate that effectively in this field,” he says.
The model’s code has been made publicly available to other researchers in the hope it could be applied to CT scans, MRIs, and echocardiograms to help detect a wider range of diseases in other parts of the body, says Pranav Rajpurkar, an assistant professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School, who led the project.
“Our hope is that people are able to apply this out of the box to other chest x-ray data sets and diseases that they care about,” he says.
Rajpurkar is also optimistic that diagnostic AI models requiring minimal supervision could help increase access to health care in countries and communities where specialists are scarce.
“It makes a lot of sense to use the richer training signal from reports,” says Christian Leibig, director of machine learning at German startup Vara, which uses AI to detect breast cancer. “It’s quite an achievement to get to that level of performance.”