Artificial intelligence

An AI used medical notes to teach itself to spot disease on chest x-rays

The model can diagnose problems as well as a human specialist, and doesn't need lots of labor-intensive training data.

September 15, 2022
[Image: close-up of a chest x-ray. Getty Images]

After crunching through thousands of chest x-rays and the clinical reports that accompany them, an AI has learned to spot diseases in those scans as accurately as a human radiologist.

The majority of current diagnostic AI models are trained on scans labeled by humans, but that labeling is a time-consuming process. The new model, called CheXzero, can instead “learn” on its own from existing medical reports that specialists have written in natural language. 

The findings suggest that labeling x-rays for the purpose of training AI models to interpret medical images isn’t necessary, which could save both time and money. 

A team of researchers from Harvard Medical School trained the CheXzero model on a publicly available data set of more than 377,000 chest x-rays and more than 227,000 corresponding clinical reports. This taught it to associate certain types of images with their existing notes, rather than learning from structured data that had been manually labeled for the task. 
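Pairing each scan with its written report is a form of contrastive image–text training. The sketch below is a minimal, illustrative version of that idea, not the authors' actual code: it assumes we already have numeric embeddings for a batch of x-rays and their reports, and scores the model on whether each scan's embedding is most similar to its own report rather than anyone else's.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of (x-ray, report) pairs.

    Each image embedding should score highest against its own report
    (the diagonal of the similarity matrix) and low against the rest.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (batch, batch) similarity scores
    labels = np.arange(len(logits))      # image i belongs with report i

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-report and report-to-image directions
    return 0.5 * (cross_entropy(logits, labels) +
                  cross_entropy(logits.T, labels))
```

Minimizing a loss like this pushes matching scan–report pairs together in the shared embedding space without anyone hand-labeling a single disease.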

CheXzero’s performance was then tested on separate data sets from two different institutions, one in another country, to check that it was capable of matching images with the corresponding notes even when the reports contained differing terminology. 
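Once images and text share an embedding space, a new scan can be classified without any task-specific training by comparing it against short text prompts. This is a hypothetical sketch of that zero-shot step, with made-up prompt wording, assuming embeddings like those above:

```python
import numpy as np

def zero_shot_probability(img_emb, positive_prompt_emb, negative_prompt_emb):
    """Probability that a finding is present, via softmax over cosine
    similarity to a positive prompt (e.g. "pneumonia") versus a
    negative one (e.g. "no pneumonia")."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = np.array([cosine(img_emb, positive_prompt_emb),
                       cosine(img_emb, negative_prompt_emb)])
    exp = np.exp(scores - scores.max())  # stable softmax
    return exp[0] / exp.sum()
```

Because the prompts are just text, the same model can be queried for diseases, and with report phrasings, it never saw grouped under one label during training.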

The research, described in Nature Biomedical Engineering, found that the model was more effective at identifying issues such as pneumonia, collapsed lungs, and lesions than other self-supervised AI models. In fact, it was similar in accuracy to human radiologists.

Others have tried to use unstructured medical data in this way, but this is the first time an AI model has learned from unstructured text and matched radiologists’ performance, says Ekin Tiu, an undergraduate student at Stanford and a visiting researcher who coauthored the report. The model can also predict multiple diseases from a given x-ray with a high degree of accuracy.

“We are the first to do that and demonstrate that effectively in this field,” he says.

The model’s code has been made publicly available to other researchers in the hope it could be applied to CT scans, MRIs, and echocardiograms to help detect a wider range of diseases in other parts of the body, says Pranav Rajpurkar, an assistant professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School, who led the project.

“Our hope is that people are able to apply this out of the box to other chest x-ray data sets and diseases that they care about,” he says. 

Rajpurkar is also optimistic that diagnostic AI models requiring minimal supervision could help increase access to health care in countries and communities where specialists are scarce.

“It makes a lot of sense to use the richer training signal from reports,” says Christian Leibig, director of machine learning at German startup Vara, which uses AI to detect breast cancer. “It’s quite an achievement to get to that level of performance.”
