
New Research Aims to Solve the Problem of AI Bias in “Black Box” Algorithms

As we automate more and more decisions, being able to understand how an AI thinks is increasingly important.
November 7, 2017
Siobhan Gallagher

From picking stocks to examining X-rays, artificial intelligence is increasingly being used to make decisions that were formerly up to humans. But AI is only as good as the data it’s trained on, and in many cases we end up baking our all-too-human biases into algorithms that have the potential to make a huge impact on people’s lives.

In a new paper published on the arXiv, researchers say they may have figured out a way to mitigate the problem for algorithms that are difficult for outsiders to examine—so-called “black box” systems.

A particularly troubling area for bias to show up is in risk assessment modeling, which can decide, for example, a person’s chances of being granted bail or approved for a loan. It is typically illegal to consider factors like race in such cases, but algorithms can learn to recognize and exploit the fact that a person’s education level or home address may correlate with other demographic information, which can effectively imbue them with racial and other biases.
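To see how that can happen, here is a toy sketch in Python with entirely synthetic data; the feature names ("proxy," "income"), the simulated group labels, and the model choice are illustrative assumptions, not anything drawn from the research described here. Even though the protected attribute is never given to the model, a correlated proxy lets the model reproduce the historical disparity.

```python
# Toy illustration (synthetic data): a model that never sees the protected
# attribute can still produce disparate decisions via a correlated proxy,
# such as a home-address or school code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute, never used as a feature
proxy = group + rng.normal(0, 0.5, n)      # proxy feature correlated with the group
income = rng.normal(0.5, 0.1, n)           # an unrelated, legitimate feature

# Historical outcomes that were themselves biased against group 1.
outcome = ((income - 0.4 * group + rng.normal(0, 0.2, n)) > 0.2).astype(int)

X = np.column_stack([proxy, income])       # note: group is NOT a column of X
model = LogisticRegression(max_iter=1000).fit(X, outcome)

# Approval rates diverge by group even though the model never saw the label.
pred = model.predict(X)
print("group 0 approval rate:", pred[group == 0].mean())
print("group 1 approval rate:", pred[group == 1].mean())
```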

What makes this problem even trickier is that many of the AIs used to make those decisions are black boxes—either they’re too complicated to easily understand, or they’re proprietary algorithms that companies refuse to explain. Researchers have been working on tools to get a look at what’s going on under the hood, but the issue is widespread and growing (see “Biased Algorithms Are Everywhere, and No One Seems to Care”).

In the paper, Sarah Tan (who worked at Microsoft at the time) and colleagues tried their method on two black-box risk assessment models: one covering loan risks and default rates from the peer-to-peer lender LendingClub, and one from Northpointe, a company that provides algorithm-based services to courts around the country, which predicts defendants’ risk of recidivism.

The researchers used a two-pronged approach to shed light on how these potentially biased algorithms work. First, they created a model that mimics the black-box algorithm being examined, producing a risk score from an initial set of data just as LendingClub’s and Northpointe’s models would. Then they built a second model, trained on real-world outcomes, and used it to determine which variables in the initial data set actually mattered to the final outcomes.
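To make the two-model idea concrete, here is a minimal sketch using scikit-learn and synthetic data. The feature names, the simulated black-box score, the outcome formula, and the choice of gradient-boosted models are all assumptions for illustration; they are not the paper’s actual data or implementation.

```python
# Sketch of the two-model approach: one model mimics the black box's scores,
# a second model is trained on real outcomes, and their feature importances
# are compared. All data below is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features (stand-ins for fields like income,
# loan purpose, prior convictions, and so on).
X = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "loan_purpose_risk": rng.uniform(0, 1, n),
    "priors": rng.poisson(1.5, n),
})

# Pretend these arrive from outside: an opaque model's risk score, and the
# real-world outcome that was eventually observed.
black_box_score = 0.8 * X["priors"] + 0.1 * rng.normal(size=n)
true_outcome = (0.5 * X["priors"] + 0.6 * X["loan_purpose_risk"]
                + rng.normal(size=n) > 1.5).astype(int)

# Model 1: a "mimic" model trained to reproduce the black box's scores.
mimic = GradientBoostingRegressor().fit(X, black_box_score)

# Model 2: an outcome model trained on what actually happened.
outcome_model = GradientBoostingClassifier().fit(X, true_outcome)

# Compare which features each model relies on. A feature that matters for
# real outcomes but not for the mimic (or vice versa) flags a possible
# blind spot or bias in the black box.
for name, model, y in [("mimic", mimic, black_box_score),
                       ("outcome", outcome_model, true_outcome)]:
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    print(name, dict(zip(X.columns, imp.importances_mean.round(3))))
```

In this toy version, "loan_purpose_risk" drives real outcomes but not the mimicked score, which is the kind of mismatch the researchers used to argue that LendingClub’s model ignored an important variable.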

In the case of LendingClub, the researchers analyzed data on a number of matured loans from 2007 to 2011. LendingClub’s database contained numerous different fields, but the researchers found that the company’s lending model probably ignored both the applicant’s annual income and the purpose of the loan. Income might make sense to ignore, since it’s self-reported and can be faked. But the purpose of the loan is highly correlated with risk—loans for small businesses are much riskier than those used to pay for weddings, for example. So LendingClub appeared to be ignoring an important variable.

Northpointe, meanwhile, says its COMPAS algorithm does not include race as a variable when making recommendations on sentencing. However, in an investigation by ProPublica, journalists collected racial information on defendants who were sentenced with help from COMPAS and found evidence of racial bias. In their mimic model, the researchers used the data gathered by ProPublica as well as information on the defendants’ age, sex, charge degree, number of prior convictions, and length of any previous prison stay. The method agreed with ProPublica’s findings, suggesting that COMPAS was likely biased for some age and racial groups.

Critics may point out that these aren’t exact replicas—out of necessity, the researchers were making a lot of educated guesses. But if the company behind an algorithm isn’t willing to release information on how its system works, approximation models like the ones from this research are a reasonable way to get insight, says Brendan O’Connor, an assistant professor at the University of Massachusetts, Amherst, who has published a paper on bias in natural-language processing.

“We need to be aware this is happening, and not close our eyes to it and act like it’s not happening,” O’Connor says.
