Artificial intelligence

A biased medical algorithm favored white people for health-care programs

October 25, 2019
A medical professional checks a patient's back with a stethoscope. Getty Images

A study has highlighted the risks inherent in using historical data to train machine-learning algorithms to make predictions.

The news: An algorithm that many US health providers use to predict which patients will most need extra medical care privileged white patients over black patients, according to researchers at UC Berkeley, whose study was published in Science. Effectively, it bumped whites up the queue for special treatments for complex conditions like kidney problems or diabetes.

The study: The researchers dug through almost 50,000 records from a large, undisclosed academic hospital. They found that white patients were given higher risk scores than equally sick black patients, and were therefore more likely to be selected for extra care (like more nursing or dedicated appointments). The researchers calculated that the bias cut the proportion of black patients who got extra help by more than half.

What software was this? The researchers didn’t say, but the Washington Post identifies it as Optum, owned by insurer UnitedHealth. Optum says its product is used to “manage more than 70 million lives.” Though the researchers focused on only one tool, they identified the same flaw in the 10 most widely used algorithms in the industry. Each year, these tools are collectively applied to an estimated 150 to 200 million people in the US.

How the bias crept in: Race wasn’t a factor in the algorithm’s decision-making (that would be illegal); it used patients’ medical histories to predict how much they were likely to cost the health-care system. But cost is not a race-blind metric: for socioeconomic and other reasons, black patients have historically incurred lower health-care costs than white patients with the same conditions. As a result, the algorithm gave white patients the same scores as black patients who were significantly sicker.
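The proxy problem can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual algorithm the study examined: the 0.7 cost factor and the 1–10 illness scale are invented purely to show how a score trained to predict cost under-ranks equally sick black patients.

```python
# Hypothetical illustration of proxy bias (NOT the real algorithm):
# `illness` is a true severity score from 1 to 10. Historically, black
# patients with the same illness incurred lower costs (modeled here by
# an assumed 0.7 factor), so a score that predicts cost inherits the gap
# even though race is never an input feature at prediction time.

def predicted_cost(illness, group):
    """Risk score = expected future health-care cost, the proxy metric."""
    historical_factor = 0.7 if group == "black" else 1.0  # assumed disparity
    return illness * 1000 * historical_factor

# A considerably sicker black patient (illness 9) scores below a healthier
# white patient (illness 7), so a cost-ranked queue for extra care would
# put the white patient first.
print(predicted_cost(9, "black") < predicted_cost(7, "white"))
```

Note that `group` appears in the code only to simulate the historical cost gap in the data; the real-world models never see race directly, which is exactly why the bias is easy to miss.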

A small saving grace: The researchers worked with Optum to correct the issue. They reduced the disparity by more than 80% by creating a version that predicts both a patient’s future costs and the number of times a chronic condition might flare up over the coming year. So algorithmic bias can be corrected, if—and sadly, it is a big if—you can catch it.
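The shape of the fix can be sketched the same way. Again, this is a toy version with invented numbers, not Optum's corrected model: the idea is to blend the cost prediction with a direct health signal, here an assumed flare-up count that depends only on how sick the patient is.

```python
# Hypothetical sketch of the corrected scoring (NOT Optum's real model):
# blend predicted cost with a direct health signal — expected chronic-
# condition flare-ups — which tracks sickness rather than spending.

def flare_ups(illness):
    """Assumed: expected flare-ups next year grow with illness severity."""
    return illness // 2

def blended_score(illness, group, w_cost=0.5, w_health=0.5):
    # Cost term carries the historical race-correlated gap (assumed 0.7).
    cost = illness * 1000 * (0.7 if group == "black" else 1.0)
    # Scale both signals to a roughly comparable 0-10 range before blending.
    return w_cost * (cost / 1000) + w_health * 2 * flare_ups(illness)

# With the health term included, the sicker black patient (illness 9)
# now outranks the healthier white patient (illness 7).
print(blended_score(9, "black") > blended_score(7, "white"))
```

Set `w_health` to zero and the ranking collapses back to pure cost, reproducing the original bias; weighting in the health signal restores the sickness ordering, which is a toy analogue of the more-than-80% disparity reduction the researchers report.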

Why it matters: The study is the latest to show the pitfalls of allocating important resources according to the recommendation of algorithms. These kinds of challenges are playing out not just in health care, but also in hiring, credit scoring, insurance, and criminal justice.

Read next: our interactive explainer on how AI bias affects the criminal legal system and why it’s so hard to eliminate.

