Facebook’s ad-serving algorithm discriminates by gender and race

Even if an advertiser is well-intentioned, the algorithm still prefers certain groups of people over others.
April 5, 2019

Algorithms are biased—and Facebook’s is no exception.

Just last week, the tech giant was sued by the US Department of Housing and Urban Development over the way it let advertisers purposely target their ads by race, gender, and religion—all protected classes under US law. The company announced that it would stop allowing this.

But new evidence shows that Facebook’s algorithm, which automatically decides who is shown an ad, carries out the same discrimination anyway, serving up ads to over two billion users on the basis of their demographic information.

A team led by Muhammad Ali and Piotr Sapiezynski at Northeastern University ran a series of otherwise identical ads with slight variations in available budget, headline, text, or image. They found that those subtle tweaks had significant impacts on the audience reached by each ad—most notably when the ads were for jobs or real estate. Postings for preschool teachers and secretaries, for example, were shown to a higher fraction of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities. Ads about homes for sale were also shown to more white users, while ads for rentals were shown to more minorities.

“We’ve made important changes to our ad-targeting tools and know that this is only a first step,” a Facebook spokesperson said in a statement in response to the findings. “We’ve been looking at our ad-delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic—and we’re exploring more changes.”

In some ways, this shouldn’t be surprising—bias in recommendation algorithms has been a known issue for many years. In 2013, for example, Latanya Sweeney, a professor of government and technology at Harvard, published a paper that showed the implicit racial discrimination of Google’s ad-serving algorithm. The issue goes back to how these algorithms fundamentally work. All of them are based on machine learning, which finds patterns in massive amounts of data and reapplies them to make decisions. There are many ways that bias can trickle in during this process, but the two most apparent in Facebook’s case relate to issues during problem framing and data collection.

Bias occurs during problem framing when the objective of a machine-learning model is misaligned with the need to avoid discrimination. Facebook’s advertising tool allows advertisers to select from three optimization objectives: the number of views an ad gets, the number of clicks and amount of engagement it receives, and the quantity of sales it generates. But those business goals have nothing to do with, say, maintaining equal access to housing. As a result, if the algorithm discovered that it could earn more engagement by showing more white users homes for purchase, it would end up discriminating against black users.
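
To make that concrete, here is a minimal, hypothetical sketch in Python. It is not Facebook's system; the group names and click rates are invented. It only shows how an objective that maximizes expected engagement, and says nothing about equal access, will route a housing ad to the same group every time.

```python
# Hypothetical sketch, not Facebook's delivery system.
# Invented historical click rates for a home-sale ad, broken out by group.
historical_click_rate = {
    ("home_sale", "group_a"): 0.031,
    ("home_sale", "group_b"): 0.018,
}

def serve_ad(ad_type, candidate_groups):
    # Objective: pick the audience segment with the highest expected engagement.
    # Nothing in this objective mentions equal access to housing ads.
    return max(candidate_groups,
               key=lambda group: historical_click_rate[(ad_type, group)])

print(serve_ad("home_sale", ["group_a", "group_b"]))  # always "group_a"
```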

Bias occurs during data collection when the training data reflects existing prejudices. Facebook’s advertising tool bases its optimization decisions on the historical preferences that people have demonstrated. If more minorities engaged with ads for rentals in the past, the machine-learning model will identify that pattern and reapply it in perpetuity. Once again, it will blindly plod down the road of employment and housing discrimination—without being explicitly told to do so.
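
Here, too, a hypothetical sketch (invented data, not the real pipeline) shows the mechanism: a model that simply estimates click probabilities from a skewed log of past rental-ad engagement will reproduce that skew in every future delivery decision.

```python
from collections import Counter

# Invented log: 70% of past rental-ad clicks came from group_b.
past_rental_clicks = ["group_b"] * 70 + ["group_a"] * 30

# "Training" here is just estimating click shares from the historical log.
counts = Counter(past_rental_clicks)
total = sum(counts.values())
learned_preference = {group: counts[group] / total for group in counts}

print(learned_preference)  # {'group_b': 0.7, 'group_a': 0.3}
# A delivery system that optimizes against these learned rates will keep
# steering roughly 70% of future rental ads to group_b, indefinitely.
```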

While these behaviors in machine learning have been studied for quite some time, the new study offers a more direct look at the sheer scope of their impact on people’s access to housing and employment opportunities. “These findings are explosive!” Christian Sandvig, the director of the Center for Ethics, Society, and Computing at the University of Michigan, told The Economist. “The paper is telling us that [...] big data, used in this way, can never give us a better world. In fact, it is likely these systems are making the world worse by accelerating the problems in the world that make things unjust.”

The good news is there might be ways to address this problem, but it won’t be easy. Many AI researchers are now pursuing technical fixes for machine-learning bias that could create fairer models of online advertising. A recent paper out of Yale University and the Indian Institute of Technology, for example, suggests that it may be possible to constrain algorithms to minimize discriminatory behavior, albeit at a small cost to ad revenue. But policymakers will need to play a greater role if platforms are to start investing in such fixes—especially if it might affect their bottom line.
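
To illustrate that trade-off in the roughest possible terms (this is not the Yale/IIT method, and every number is invented): capping how far apart the delivery shares for two groups can drift costs a little expected engagement compared with sending everything to the highest-clicking group.

```python
# Hypothetical fairness constraint: delivery shares for the two groups
# may differ by at most max_gap. Click rates are invented.
click_rate = {"group_a": 0.031, "group_b": 0.018}

def delivery_shares(max_gap):
    # Unconstrained, the engagement-maximizing choice is all-group_a.
    # With a parity cap, group_a's share can exceed 0.5 by at most max_gap / 2.
    share_a = min(1.0, 0.5 + max_gap / 2)
    return {"group_a": share_a, "group_b": 1.0 - share_a}

for gap in (1.0, 0.2):  # 1.0 = effectively unconstrained, 0.2 = tight cap
    shares = delivery_shares(gap)
    engagement = sum(shares[g] * click_rate[g] for g in shares)
    print(gap, shares, round(engagement, 4))
# Tightening the cap from 1.0 to 0.2 lowers expected engagement from 0.031
# to about 0.0258, the kind of modest revenue cost such constraints entail.
```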

This originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your in-box, sign up here for free.

