
A Startup Hopes to Teach Computers to Spot Tumors in Medical Scans

Enlitic wants to make medicine smarter and faster with machine learning.
August 22, 2014

Machines are taking on more and more of the work typically done by humans, and detecting disease may be next: a new company called Enlitic is taking aim at the examination room, using computers to make diagnoses from medical images.

Enlitic cofounder and CEO Jeremy Howard—formerly the president and lead scientist at data-crunching startup Kaggle—says the idea is to teach computers to recognize injuries, diseases, and disorders by showing them hundreds of X-rays, MRIs, CT scans, and other films. Howard believes that with enough experience, a computer can start to spot trouble and immediately flag suspect images for a physician to investigate. That could save physicians from having to comb through stacks of films.

Use of machine learning has exploded in recent years as computers have grown more powerful and algorithms have gotten better at teaching them to recognize patterns. Most recently, some machine-learning efforts have sought to mimic the physical workings of the human brain, either in software or in hardware (see “Thinking in Silicon”)—an approach often referred to as “deep learning.” Show a computer enough images of a yellow taxi driving down the street, for instance, and it can learn to recognize yellow taxis whether they’re on a street or somewhere else. That is the strategy Enlitic is employing.
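To make the taxi analogy concrete, here is a minimal sketch of the kind of convolutional image classifier deep learning produces. Everything in it is illustrative: the choice of the PyTorch library, the tiny two-layer network, the 128-pixel input, and the two-class “tumor” versus “no tumor” setup are assumptions for demonstration, not Enlitic’s actual system.

```python
# Illustrative sketch only: a tiny convolutional classifier of the kind
# described above. The architecture, image size, and two-class setup
# ("no tumor" / "tumor") are hypothetical, not Enlitic's system.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scan in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 -> 32
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        x = self.features(x)               # pixels -> learned features
        return self.classifier(x.flatten(1))  # features -> class scores

model = TinyScanClassifier()
scan = torch.randn(1, 1, 128, 128)   # stand-in for one 128x128 scan
logits = model(scan)                  # scores for "no tumor" / "tumor"
print(logits.softmax(dim=1))          # probabilities over the two classes
```

A real system of the sort Howard describes would train a far deeper network on thousands of labeled scans; the point here is only the shape of the approach: pixels in, learned features in the middle, a diagnostic score out.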

Yet while the use of machine learning for computer vision has come a long way, Howard says its application in medicine is still lagging.

For Enlitic, the idea is that if you show a computer enough anonymized images of diseases, such as brain tumors, it’ll be able to start flagging them for physicians automatically.

Howard points out that images of medical conditions tend to look fairly consistent, which should aid machine learning. A yellow taxi can appear in all sorts of environments, but the angle, positioning, and colors of a chest x-ray tend to look roughly the same. That makes it simpler to isolate the critical differences between the images—say, by noticing that one includes a tumor.

Since making a complete diagnosis is about more than just knowing what to look for in an image, Howard says doctors might use Enlitic to scan a giant, constantly updating database for all images of, say, livers similar to that of a particular patient. “I don’t mean similar pixels, but based on a deep-learning algorithm they have similar expected outcomes and similar useful interventions,” he says.
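In code, the search Howard describes amounts to nearest-neighbor lookup over learned feature vectors rather than raw pixels. Here is a minimal sketch under that assumption; the embed() function, the patient database, and the cosine-similarity metric are all hypothetical placeholders, not Enlitic’s implementation.

```python
# Hypothetical sketch: rank stored patient scans by similarity of their
# learned embeddings, standing in for "similar expected outcomes" rather
# than similar pixels. embed() is a placeholder for a deep network's
# final-layer feature vector.
import numpy as np

def embed(scan: np.ndarray) -> np.ndarray:
    """Stand-in for a deep network's learned feature vector."""
    rng = np.random.default_rng(abs(hash(scan.tobytes())) % (2**32))
    return rng.standard_normal(64)

def most_similar(query: np.ndarray, database: dict, k: int = 3):
    """Return the k patients whose scans are closest to the query."""
    q = embed(query)
    q /= np.linalg.norm(q)
    scores = []
    for patient_id, scan in database.items():
        v = embed(scan)
        scores.append((patient_id, float(q @ (v / np.linalg.norm(v)))))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# Toy usage: three stored "liver scans" and one query scan.
db = {f"patient_{i}": np.random.rand(128, 128) for i in range(3)}
print(most_similar(np.random.rand(128, 128), db))
```

In practice such a lookup would run against the giant, constantly updating database Howard describes, with the embedding trained so that nearby vectors really do correspond to similar outcomes and interventions.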

And recent advances in machine learning suggest that, in theory, computers could also glean useful information from patterns in patient behavior: how a patient’s voice sounds when describing a pain, or how badly the person winces when a certain amount of pressure is applied to an injury. Howard thinks this kind of data could eventually be combined with Enlitic’s computer-vision work to make diagnoses even faster and more accurate.

Enlitic is entering territory that isn’t completely uncharted: in 2011, researchers at Stanford reported that they had trained a computer to analyze microscopic images of breast cancer more accurately than humans.

Additionally, some computing powerhouses are already dedicating serious resources to organizing the crush of medical information. IBM’s Watson computing system, for instance, is helping doctors at the University of Texas’s MD Anderson Cancer Center to spot patterns in the medical charts and histories of more than 100,000 patients. And Microsoft has launched its InnerEye computing program, which is aimed at analyzing medical images and identifying disease progression.

For now, all those machines will still need human operators—though Enlitic hopes the ones it works with, at least, will get a lot speedier at spotting diseases.

“We’re not looking to replace radiologists,” Howard says. “We’re looking to give them the information they need to do what they do 10 times faster.”
