Artificial intelligence

A biased medical algorithm favored white people for health-care programs

October 25, 2019
A medical professional checks a patient's back with a stethoscope. Getty Images

A study has highlighted the risks inherent in using historical data to train machine-learning algorithms to make predictions.

The news: An algorithm that many US health providers use to predict which patients will most need extra medical care privileged white patients over black patients, according to researchers at UC Berkeley, whose study was published in Science. Effectively, it bumped whites up the queue for special treatments for complex conditions like kidney problems or diabetes.

The study: The researchers dug through almost 50,000 records from a large, undisclosed academic hospital. They found that white patients were given higher risk scores than equally sick black patients, and as a result were more likely to be selected for extra care (such as more nursing or dedicated appointments). The researchers calculated that the bias cut the proportion of black patients who got extra help by more than half.

What software was this? The researchers didn’t say, but the Washington Post identified it as Optum, owned by the insurer UnitedHealth. Optum says its product is used to “manage more than 70 million lives.” Though the researchers focused on one particular tool, they say the same flaw exists in the 10 most widely used algorithms in the industry. Each year, these tools are collectively applied to an estimated 150 million to 200 million people in the US.

How the bias crept in: Race wasn’t a factor in the algorithm’s decision-making (that would be illegal); it used patients’ medical histories to predict how much they were likely to cost the health-care system. But cost is not a race-blind metric: for socioeconomic and other reasons, black patients have historically incurred lower health-care costs than white patients with the same conditions. As a result, the algorithm gave white patients the same scores as black patients who were significantly sicker.
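
The mechanism in code: Here’s a minimal, hypothetical sketch of how training on cost can encode racial bias even when race is excluded from the features. Everything in it is an illustrative assumption, not a figure from the study: the variables (sickness, past_visits, conditions, future_cost), the 0.6 utilization gap, and the top-10% selection cutoff.

```python
# Toy simulation: a cost-trained "risk" model inherits a group gap in spending.
# All names and numbers are illustrative assumptions, not from the study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

# True underlying health need (e.g., count of active chronic conditions).
sickness = rng.poisson(lam=3.0, size=n).astype(float)

# Group membership: group 1 stands in for patients who, for socioeconomic
# and access reasons, incur lower costs at the same level of sickness.
group = rng.integers(0, 2, size=n)

# Two features the model can see (race itself is excluded, as in the
# deployed tool):
#   past_visits - utilization history; systematically lower for group 1
#   conditions  - recorded diagnoses; a noisy but unbiased sickness signal
past_visits = sickness * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.5, n)
conditions = sickness + rng.normal(0, 0.5, n)
X = np.column_stack([past_visits, conditions])

# Label: next year's cost, driven by utilization, so it carries the same gap.
future_cost = past_visits * 1_000 + rng.normal(0, 500, n)

cost_model = LinearRegression().fit(X, future_cost)
risk_score = cost_model.predict(X)

# Flag the top 10% of risk scores for extra care, as such programs do.
selected = risk_score >= np.quantile(risk_score, 0.9)

for g in (0, 1):
    sel = selected & (group == g)
    print(f"group {g}: {sel.sum() / (group == g).sum():.1%} selected, "
          f"mean sickness of selected = {sickness[sel].mean():.2f}")
```

Run it and group 1’s selection rate comes out far below group 0’s, even though the two groups are equally sick by construction: the score tracks spending, and spending understates group 1’s need.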

A small saving grace: The researchers worked with Optum to correct the issue. They reduced the disparity by more than 80% by creating a version that predicts both a patient’s future costs and the number of times a chronic condition might flare up over the coming year. So algorithmic bias can be corrected, if—and sadly, it is a big if—you can catch it.
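
The fix in code: Continuing the toy simulation above (it reuses rng, X, group, sickness, and future_cost), here is a rough sketch of that relabeling. The flare_ups variable and the 50/50 blend weight are hypothetical stand-ins for the researchers’ combined label of future costs and chronic-condition flare-ups; their actual weighting isn’t given here.

```python
# Relabel the same model: blend predicted cost with a direct health measure.
# flare_ups and the 0.5/0.5 weights are illustrative assumptions.
flare_ups = rng.poisson(lam=sickness * 0.5)  # tracks sickness, not spending

def zscore(v):
    return (v - v.mean()) / v.std()

blended_label = 0.5 * zscore(future_cost) + 0.5 * zscore(flare_ups)
health_model = LinearRegression().fit(X, blended_label)
new_score = health_model.predict(X)

selected = new_score >= np.quantile(new_score, 0.9)
for g in (0, 1):
    sel = selected & (group == g)
    print(f"group {g}: {sel.sum() / (group == g).sum():.1%} selected")
```

With the blended label, the fitted score leans on the diagnosis count rather than utilization alone, so group 1’s selection rate recovers, mirroring in miniature how relabeling shrank the real disparity.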

Why it matters: The study is the latest to show the pitfalls of allocating important resources according to the recommendation of algorithms. These kinds of challenges are playing out not just in health care, but also in hiring, credit scoring, insurance, and criminal justice.

Read next: our interactive explainer on how AI bias affects the criminal legal system and why it’s so hard to eliminate.
