
Google shows how AI might detect lung cancer faster and more reliably

A visualization shows a lung CT scan with signs of cancer (highlighted). Credit: Nature

New research from Google shows how machine learning could one day be used to detect signs of lung cancer earlier than it is typically caught today.

Early warning: Daniel Tse, a researcher at Google, developed an algorithm that beat a number of trained radiologists in testing. Tse and colleagues trained a deep-learning algorithm to detect malignant lung nodules in more than 42,000 CT scans. The resulting algorithm turned up 11% fewer false positives and 5% fewer false negatives than its human counterparts. The work is described in a paper published in the journal Nature Medicine today.
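For readers curious what this kind of system looks like under the hood, here is a minimal, hypothetical sketch of a 3D convolutional network that takes a CT volume and outputs a malignancy probability. It is not the Google team's actual architecture; every layer size, name, and shape below is an illustrative assumption.

```python
# Minimal, illustrative sketch (not Google's actual model): a small 3D CNN
# that maps a CT volume to a malignancy probability. All shapes, layer sizes,
# and names here are assumptions for demonstration only.
import torch
import torch.nn as nn

class TinyNoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # single-channel CT volume in
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over the whole volume
        )
        self.classifier = nn.Linear(32, 1)                # one logit: malignant vs. benign

    def forward(self, volume):
        x = self.features(volume)
        x = x.flatten(1)
        return torch.sigmoid(self.classifier(x))          # malignancy probability

# Example: one synthetic 64x64x64-voxel scan (batch of 1, 1 channel).
model = TinyNoduleClassifier()
fake_scan = torch.randn(1, 1, 64, 64, 64)
print(model(fake_scan))  # e.g. tensor([[0.52]]): estimated probability of malignancy
```

In a real screening pipeline, a model like this would be trained on tens of thousands of labeled scans and evaluated against radiologists' false-positive and false-negative rates, as the paper describes.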

Killer problem: Lung cancer killed more than 160,000 people in the United States in 2018, making it the leading cause of cancer death. And while computed tomography (CT) scans can be a life-saving part of cancer screening, they are also often unreliable.

Big promise: Tse and colleagues argue that AI could help make lung cancer screening more reliable across the world, although they acknowledge that the work needs to be validated on larger patient populations. Indeed, there is growing interest in using AI to catch many types of cancer. Researchers have shown how machine learning can be used to spot both breast cancer and skin cancer, for instance.  

Small steps: These studies are exciting but should be treated as small advances. Using AI in health care remains challenging, both because of privacy concerns and because real-world data sets are rarely as clean as those used in research studies.

It’s also worth noting that treating cancer involves a lot more than just detecting the disease in the first place. Determining the right course of treatment, for instance, can depend on a range of factors that vary greatly from patient to patient, making that part of the process far harder to automate. 
