A study has highlighted the risks inherent in using historical data to train machine-learning algorithms to make predictions.
The news: An algorithm that many US health providers use to predict which patients will most need extra medical care privileged white patients over black patients, according to researchers at UC Berkeley, whose study was published in Science. Effectively, it bumped whites up the queue for special treatments for complex conditions like kidney problems or diabetes.
The study: The researchers dug through almost 50,000 records from a large, undisclosed academic hospital. They found that white patients were given higher risk scores, and were therefore more likely than equally sick black patients to be selected for extra care (such as more nursing or dedicated appointments). The researchers calculated that the bias cut the proportion of black patients who got extra help by more than half.
What software was this? The researchers didn’t say, but the Washington Post identified it as Optum, owned by insurer UnitedHealth, which says the product is used to “manage more than 70 million lives.” Though the researchers focused on one particular tool, they identified the same flaw in the 10 most widely used algorithms in the industry. Each year, these tools are collectively applied to an estimated 150 to 200 million people in the US.
How the bias crept in: Race wasn’t a factor in the algorithm’s decision-making (that would be illegal); it used patients’ medical histories to predict how much they were likely to cost the health-care system. But cost is not a race-blind metric: for socioeconomic and other reasons, black patients have historically incurred lower health-care costs than white patients with the same conditions. As a result, the algorithm gave white patients the same scores as black patients who were significantly sicker.
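To make that mechanism concrete, here is a minimal simulation of proxy-label bias. Everything in it is an assumption for illustration (the group labels, the ~30% cost gap, the 97th-percentile cutoff, the linear model); it is not the study’s data or Optum’s algorithm, only a sketch of how a race-blind model trained on a cost label can still rank equally sick patients differently.

```python
# Purely illustrative simulation of proxy-label bias. All numbers, feature
# names, and group effects are assumptions for this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# True health need is drawn identically for both groups.
group = rng.integers(0, 2, size=n)                  # 0 or 1, illustrative labels
sickness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumed disparity: for the same sickness, group 1 historically incurs
# ~30% lower costs (less access, fewer billed services), in both the
# feature (past cost) and the label (future cost).
access = np.where(group == 1, 0.7, 1.0)
past_cost = sickness * access + rng.normal(0, 0.2, n)
future_cost = sickness * access + rng.normal(0, 0.2, n)

# A "race-blind" regression: race is never a feature, only past cost is.
model = LinearRegression().fit(past_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(past_cost.reshape(-1, 1))

# Select the top 3% of scores for "extra care" and compare the groups.
cutoff = np.quantile(risk_score, 0.97)
selected = risk_score >= cutoff
for g in (0, 1):
    mask = selected & (group == g)
    print(f"group {g}: {mask.sum():5d} selected, "
          f"mean sickness of selected = {sickness[mask].mean():.2f}")
# Group 1 is selected less often, and its selected patients are sicker on
# average -- the same pattern the study reports.
```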
A small saving grace: The researchers worked with Optum to correct the issue. They reduced the disparity by more than 80% by creating a version that predicts both a patient’s future costs and the number of times a chronic condition might flare up over the coming year. So algorithmic bias can be corrected, if—and sadly, it is a big if—you can catch it.
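One hedged way to picture that kind of correction, continuing the simulation above: redefine the training label as a blend of future cost and a direct health signal, such as chronic-condition flare-up counts. The 50/50 blend and the Poisson flare-up model below are assumptions for the sketch, not the researchers’ actual formulation.

```python
# Continuing the simulation above. Flare-up counts track sickness directly,
# so they are not skewed by the access gap that distorts cost.
past_flares = rng.poisson(lam=sickness)
future_flares = rng.poisson(lam=sickness)

def zscore(x):
    return (x - x.mean()) / x.std()

# Assumed fix: the label becomes half future cost, half future flare-ups.
label = 0.5 * zscore(future_cost) + 0.5 * zscore(future_flares)

X = np.column_stack([past_cost, past_flares])
fixed_score = LinearRegression().fit(X, label).predict(X)

cutoff = np.quantile(fixed_score, 0.97)
selected = fixed_score >= cutoff
for g in (0, 1):
    mask = selected & (group == g)
    print(f"group {g}: {mask.sum():5d} selected, "
          f"mean sickness of selected = {sickness[mask].mean():.2f}")
# The selection gap between the groups shrinks, because half of the label
# now measures health directly rather than through spending.
```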
Why it matters: The study is the latest to show the pitfalls of allocating important resources according to the recommendation of algorithms. These kinds of challenges are playing out not just in health care, but also in hiring, credit scoring, insurance, and criminal justice.
Read next: our interactive explainer on how AI bias affects the criminal legal system and why it’s so hard to eliminate.