Facebook says it will look for racial bias in its algorithms

Photo: NeONBRAND / Unsplash

The news: Facebook says it is setting up new internal teams to look for racial bias in the algorithms that drive its main social network and Instagram, according to the Wall Street Journal. In particular, the investigations will examine how machine-learning systems, which can absorb and amplify racial biases present in their training data, adversely affect Black, Hispanic, and other minority groups.

Why it matters: In the last few years, a growing number of researchers and activists have highlighted the problem of bias in AI and its disproportionate impact on minorities. Facebook, which uses machine learning to curate the daily experience of its 2.5 billion users, is well overdue for an internal assessment of this kind. There is already evidence, for example, that Facebook’s ad-serving algorithms discriminate by race and let advertisers prevent specific racial groups from seeing their ads.

Under pressure: Facebook has a history of dodging accusations of bias in its systems, and it has taken several years of bad press and sustained pressure from civil rights groups to get to this point. The new teams come after a month-long advertising boycott organized by civil rights groups, including the Anti-Defamation League, Color of Change, and the NAACP, that led big spenders like Coca-Cola, Disney, McDonald’s, and Starbucks to suspend their campaigns.

No easy fix: The move is welcome. But launching an investigation is a far cry from actually fixing the problem of racial bias, especially when nobody really knows how to fix it. In most cases, the bias originates in the training data, and there are no agreed-on ways to remove it. Adjusting that data, a form of algorithmic affirmative action, is itself controversial. Machine-learning bias is also just one of social media’s problems around race. If Facebook is going to examine its algorithms, that should be part of a wider overhaul that also grapples with policies giving platforms to racist politicians, white-supremacist groups, and Holocaust deniers.

"We will continue to work closely with Facebook’s Responsible AI team to ensure we are looking at potential biases across our respective platforms," says Stephanie Otway, a spokesperson for Instagram. "It’s early days and we plan to share more details on this work in the coming months."


MIT Technology Review