
Why We Should Expect Algorithms to Be Biased

We seem to be idolizing algorithms, imagining they are more objective than their creators.

Technologies driven by algorithms and artificial intelligence are increasingly present in our lives, and we are now regularly bumping up against a thorny question: can these programs be neutral actors? Or will they always reflect some degree of human bias?

The dustup over Facebook’s “trending topics” list and its possible liberal bias hit such a nerve that the U.S. Senate called on the company to come up with an official explanation, and this week COO Sheryl Sandberg said the company will begin training employees to identify and control their political leanings.

This is just one result, however, of a broader trend that Fred Benenson, Kickstarter’s former data chief, calls “mathwashing”: our tendency to idolize programs like Facebook’s as entirely objective because they have mathematics at their core.

One potential source of bias is the makeup of the field itself: so many of the people building these systems, especially machine-learning experts, are male. In a recent Bloomberg article, Margaret Mitchell, a researcher at Microsoft, is quoted lamenting the dangers of a “sea of dudes” asking the questions central to creating these programs.

Concern has been building over this issue for some time, as studies found evidence of bias in online advertising, recruiting, and pricing strategies driven by presumably neutral algorithms.

In one study, Harvard professor Latanya Sweeney looked at the Google AdSense ads that came up during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches.
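To make that kind of finding concrete, here is a minimal sketch of how a researcher might check whether such a gap in ad-serving rates could plausibly be due to chance, using a standard chi-squared test. The counts below are hypothetical stand-ins loosely matching the percentages above; they are not Sweeney’s actual data.

```python
# Sketch: testing whether "arrest" ads appear at different rates for two
# name groups. Counts are hypothetical, not Sweeney's raw data.
from scipy.stats import chi2_contingency

# Rows: name group; columns: [ads containing "arrest", other ads]
observed = [
    [81, 19],  # searches on black-identifying names (hypothetical)
    [24, 76],  # searches on white-identifying names (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.1f}, p = {p_value:.2e}")
# A very small p-value suggests the difference in rates is unlikely
# to be explained by chance alone.
```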

Sweeney worries that the ways Google’s advertising technology perpetuates racial bias could undermine a black person’s chances in a competition, whether it’s for an award, a date, or a job.

Companies in areas such as lending and credit, where human discrimination has long been well documented, must be especially careful.

ZestFinance, an online lender founded on the idea that machine-learning programs can expand the number of people deemed creditworthy by looking at tens of thousands of data points, maintains that it is well attuned to the dangers of discriminatory lending. To guard against discrimination, ZestFinance has built tools to test its own results.
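The article doesn’t describe how ZestFinance’s tools work, but one common self-audit a lender might run is the disparate impact ratio (the “four-fifths rule” from U.S. employment guidelines): compare approval rates across groups and flag a ratio below 0.8. A minimal sketch, using hypothetical data and labels:

```python
# Sketch of a disparate impact check on lending decisions.
# All data and group labels here are hypothetical; this is not
# ZestFinance's actual tooling.

def approval_rate(approved, group, g):
    """Fraction of applicants in group g whose loans were approved."""
    members = [a for a, grp in zip(approved, group) if grp == g]
    return sum(members) / len(members)

# Hypothetical lending decisions (1 = approved) and group labels.
approved = [1, 0, 1, 1, 0, 1, 1, 1, 1, 0]
group = ["protected"] * 5 + ["reference"] * 5

ratio = (approval_rate(approved, group, "protected")
         / approval_rate(approved, group, "reference"))
print(f"disparate impact ratio = {ratio:.2f}")  # 0.75 here
# A ratio below 0.8 is a conventional red flag for disparate impact.
```

A check like this only catches bias in outcomes, which is why auditing results, rather than just inspecting the code, matters.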

But the danger remains that unrecognized bias, not just in an algorithm’s design but in the data flowing into it, could inadvertently turn any program into a discriminator. And because consumers cannot unpack the complexities of these programs, it is hard for them to know whether they have been treated fairly.

“Algorithm and data-driven products will always reflect the design choices of the humans who built them,” Benenson explained in a recent Q&A with Technical.ly Brooklyn, “and it’s irresponsible to assume otherwise.”

(Read more: Wall Street Journal, Technical.ly Brooklyn, Bloomberg, “AI Takes Off” [PDF])
