Artificial intelligence

New Research Aims to Solve the Problem of AI Bias in “Black Box” Algorithms

As we automate more and more decisions, being able to understand how an AI thinks is increasingly important.
November 7, 2017
Siobhan Gallagher

From picking stocks to examining X-rays, artificial intelligence is increasingly being used to make decisions that were formerly up to humans. But AI is only as good as the data it’s trained on, and in many cases we end up baking our all-too-human biases into algorithms that have the potential to make a huge impact on people’s lives.

In a new paper published on the arXiv, researchers say they may have figured out a way to mitigate the problem for algorithms that are difficult for outsiders to examine—so-called “black box” systems.

A particularly troubling area for bias to show up is in risk assessment modeling, which can decide, for example, a person’s chances of being granted bail or approved for a loan. It is typically illegal to consider factors like race in such cases, but algorithms can learn to recognize and exploit the fact that a person’s education level or home address may correlate with other demographic information, which can effectively imbue them with racial and other biases.
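To make the proxy problem concrete, here is a minimal synthetic sketch (entirely made-up data, not drawn from the paper or any real lending or court system): a model trained without any protected attribute still produces systematically different scores for two groups, because a correlated stand-in feature leaks the group information.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Made-up proxy feature (think of a neighborhood index) that correlates
# strongly with the protected attribute.
neighborhood = group + rng.normal(0, 0.3, size=n)

# A legitimate feature, plus an outcome whose historical labels are skewed
# against group 1 (simulated bias in the training data).
income = rng.normal(50, 10, size=n)
label = (income + rng.normal(0, 5, size=n) > 50).astype(int)
label[(group == 1) & (rng.random(n) < 0.2)] = 0

# Train using only income and the proxy -- the protected attribute is excluded.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, label)

scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", round(scores[group == 0].mean(), 3))
print("mean score, group 1:", round(scores[group == 1].mean(), 3))
# The gap between the two means is the proxy leaking group information.
```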

What makes this problem even trickier is that many of the AIs used to make those choices are black boxes: either they’re too complicated to easily understand, or they’re proprietary algorithms that companies refuse to explain. Researchers have been working on tools to get a look at what’s going on under the hood, but the issue is widespread and growing (see “Biased Algorithms Are Everywhere, and No One Seems to Care”).

In the paper, Sarah Tan (who worked at Microsoft at the time) and colleagues tried their method on two black-box risk assessment models: one from the peer-to-peer lending company LendingClub, which scores loans for default risk, and one from Northpointe, a company whose algorithm-based services help courts around the country predict recidivism risk for defendants.

The researchers used a two-pronged approach to shed light on how these potentially biased algorithms work. First, they created a model that mimics the black-box algorithm being examined, producing a risk score from an initial set of data just as the LendingClub and Northpointe models would. Then they built a second model trained on real-world outcomes, and used it to determine which variables in the initial data set actually mattered to the final outcomes.
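The rough sketch below shows the shape of that comparison. It is not the paper’s implementation: it substitutes generic gradient-boosted trees for whatever models the researchers actually used, and it assumes hypothetical inputs `X` (applicant features as a numeric pandas DataFrame), `blackbox_scores` (the scores the opaque system produced), and `outcomes` (observed real-world results such as default or rearrest).

```python
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# X, blackbox_scores, and outcomes are hypothetical inputs described above.
X_train, X_test, s_train, s_test, y_train, y_test = train_test_split(
    X, blackbox_scores, outcomes, test_size=0.2, random_state=0
)

# 1) Mimic model: learns to reproduce the black box's risk scores.
mimic = GradientBoostingRegressor().fit(X_train, s_train)

# 2) Outcome model: learns the same prediction task from real outcomes.
outcome_model = GradientBoostingClassifier().fit(X_train, y_train)

# Compare which features each model leans on. A feature that predicts real
# outcomes but is ignored by the mimic (or the reverse) is worth scrutiny:
# it may be a risk factor the black box drops, or a proxy it exploits.
for name, m_imp, o_imp in zip(
    X.columns, mimic.feature_importances_, outcome_model.feature_importances_
):
    print(f"{name:>25s}  mimic={m_imp:.3f}  outcome={o_imp:.3f}")
```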

In the case of LendingClub, the researchers analyzed data on matured loans issued between 2007 and 2011. LendingClub’s database contained many different fields, but the researchers found that the company’s lending model probably ignored both the applicant’s annual income and the purpose of the loan. Income might make sense to ignore, since it’s self-reported and can be faked. But the purpose of the loan is highly correlated with risk: loans for small businesses are much riskier than those used to pay for weddings, for example. So LendingClub appeared to be ignoring an important variable.

Northpointe, meanwhile, says its COMPAS algorithm does not include race as a variable when making recommendations on sentencing. However, in an investigation by ProPublica, journalists collected racial information on defendants who were sentenced with help from COMPAS and found evidence of racial bias. In their mimic model, the researchers used the data gathered by ProPublica as well as information on the defendants’ age, sex, charge degree, number of prior convictions, and length of any previous prison stay. The method agreed with ProPublica’s findings, suggesting that COMPAS was likely biased against certain age and racial groups.

Critics may point out that these aren’t exact replicas—out of necessity, the researchers were making a lot of educated guesses. But if the company behind an algorithm isn’t willing to release information on how its system works, approximation models like the ones from this research are a reasonable way to get insight, says Brendan O’Connor, an assistant professor at the University of Massachusetts, Amherst, who has published a paper on bias in natural-language processing.

“We need to be aware this is happening, and not close our eyes to it and act like it’s not happening,” O’Connor says.
