
Biased Algorithms Are Everywhere, and No One Seems to Care

The big companies developing them show no interest in fixing the problem.
July 12, 2017
Kate Crawford, speaking at the AI Now conference at MIT this week. Photo: John Maeda (@johnmaeda)

Opaque and potentially biased mathematical models are remaking our lives—and neither the companies responsible for developing them nor the government is interested in addressing the problem.

This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. The AI Now Initiative was announced at an event held at MIT to discuss what many experts see as a growing challenge.

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.

“It’s still early days for understanding algorithmic bias,” Crawford and Whittaker said in an e-mail. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated.”

Examples of algorithmic bias that have come to light lately, they say, include flawed and misrepresentative systems used to rank teachers, and gender-biased models for natural language processing.

Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, says people are often too willing to trust mathematical models because they believe doing so removes human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.”

A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren’t transparent about how they operate. O’Neil says, for example, she is concerned about how the algorithms behind Google’s new job search service work.

O’Neil previously worked as a professor at Barnard College in New York and a quantitative analyst at the company D. E. Shaw. She is now the head of Online Risk Consulting & Algorithmic Auditing, a company set up to help businesses identify and correct the biases in the algorithms they use. But O’Neil says even those who know their algorithms are at risk of bias are more interested in the bottom line than in rooting out bias. “I’ll be honest with you,” she says. “I have no clients right now.”

O’Neil, Crawford, and Whittaker all warn that the Trump administration’s lack of interest in AI—and in science generally—means there is no regulatory movement to address the problem (see “The Gaping, Dangerous Hole in the Trump Administration”).

“The Office of Science and Technology Policy is no longer actively engaged in AI policy—or much of anything according to their website,” Crawford and Whittaker write. “Policy work now must be done elsewhere.”

