Facebook is withholding certain job ads from women because of their gender, according to the latest audit of its ad service.
The audit, conducted by independent researchers at the University of Southern California (USC), reveals that Facebook’s ad-delivery system shows different job ads to women and men even though the jobs require the same qualifications. This is considered sex-based discrimination under US equal employment opportunity law, which bans ad targeting based on protected characteristics. The findings come despite years of advocacy and lawsuits, and after promises from Facebook to overhaul how it delivers ads.
The researchers registered as an advertiser on Facebook and bought pairs of ads for jobs with identical qualifications but different real-world demographics. They advertised for two delivery driver jobs, for example: one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). There are currently more men than women who drive for Domino’s, and vice versa for Instacart.
The researchers specified no target audience on the basis of demographic information (Facebook disabled that targeting option for housing, credit, and job ads in March 2019 after settling several lawsuits), yet the platform's algorithms still showed the ads to statistically distinct demographic groups. The Domino's ad was shown to more men than women, and the Instacart ad was shown to more women than men.
The researchers found the same pattern with ads for two other pairs of jobs: software engineers for Nvidia (skewed male) and Netflix (skewed female), and sales associates for cars (skewed male) and jewelry (skewed female).
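The claim that the two audiences are "statistically distinct" can be illustrated with a standard two-proportion z-test, which compares the male share of one ad's audience against the other's. The sketch below is purely illustrative: the impression counts are hypothetical, and the actual study used its own data and analysis procedure, which Facebook's opacity makes impossible to reproduce exactly.

```python
import math

def two_proportion_z(men_a, total_a, men_b, total_b):
    """z-statistic for the difference between the male share of
    ad A's audience and the male share of ad B's audience."""
    p_a = men_a / total_a
    p_b = men_b / total_b
    pooled = (men_a + men_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: men reached out of total impressions per ad.
z = two_proportion_z(600, 1000, 450, 1000)  # e.g. Domino's vs. Instacart ad
significant = abs(z) > 1.96  # 5% two-sided significance threshold
```

With these made-up numbers the skew is far outside what chance alone would produce, which is the kind of result that lets the researchers attribute the difference to the delivery algorithm rather than to sampling noise.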
The findings suggest that Facebook’s algorithms are somehow picking up on the current demographic distributions of these jobs, which often differ for historical reasons. (The researchers weren’t able to discern why that is, because Facebook won’t say how its ad-delivery system works.) “Facebook reproduces those skews when it delivers ads even though there’s no qualification justification,” says Aleksandra Korolova, an assistant professor at USC, who coauthored the study with her colleague John Heidemann and their PhD advisee Basileal Imana.
The study supplies the latest evidence that Facebook has not resolved its ad discrimination problems since ProPublica first brought the issue to light in October 2016. At the time, ProPublica revealed that the platform allowed advertisers of job and housing opportunities to exclude certain audiences characterized by traits like gender and race. Such groups receive special protection under US law, making this practice illegal. It took two and a half years and several legal skirmishes for Facebook to finally remove that feature.
But a few months later, the US Department of Housing and Urban Development (HUD) filed a new lawsuit, alleging that Facebook’s ad-delivery algorithms were still excluding audiences for housing ads without the advertiser specifying the exclusion. A team of independent researchers including Korolova, led by Northeastern University’s Muhammad Ali and Piotr Sapieżyński, corroborated those allegations a week later. They found, for example, that houses for sale were being shown more often to white users and houses for rent were being shown more often to minority users.
Korolova wanted to revisit the issue with her latest audit because the burden of proof for job discrimination is higher than for housing discrimination. While any skew in the display of ads based on protected characteristics is illegal in the case of housing, US employment law deems it justifiable if the skew is due to legitimate qualification differences. The new methodology controls for this factor.
“The design of the experiment is very clean,” says Sapieżyński, who was not involved in the latest study. While some could argue that car and jewelry sales associates do indeed have different qualifications, he says, the differences between delivering pizza and delivering groceries are negligible. “These gender differences cannot be explained away by gender differences in qualifications or a lack of qualifications,” he adds. “Facebook can no longer say [this is] defensible by law.”
The release of this audit comes amid heightened scrutiny of Facebook’s AI bias work. In March, MIT Technology Review published the results of a nine-month investigation into the company’s Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues like algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias. The company published a blog post shortly after, emphasizing the importance of that work and saying in particular that Facebook seeks “to better understand potential errors that may affect our ads system, as part of our ongoing and broader work to study algorithmic fairness in ads.”
“We’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today,” said Facebook spokesperson Joe Osborn in a statement. “Our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report… We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”
Despite these claims, however, Korolova says she found no noticeable change between the 2019 audit and this one in the way Facebook’s ad-delivery algorithms work. “From that perspective, it’s actually really disappointing, because we brought this to their attention two years ago,” she says. She’s also offered to work with Facebook on addressing these issues, she says. “We haven’t heard back. At least to me, they haven’t reached out.”
In previous interviews, the company said it was unable to discuss the details of how it was working to mitigate algorithmic discrimination in its ad service because of ongoing litigation. The ads team said its progress has been limited by technical challenges.
Sapieżyński, who has now conducted three audits of the platform, says this has nothing to do with the issue. “Facebook still has yet to acknowledge that there is a problem,” he says. While the team works out the technical kinks, he adds, there’s also an easy interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and lending ads without affecting the rest of its service. It’s really just an issue of political will, he says.
Christo Wilson, another researcher at Northeastern who studies algorithmic bias but didn’t participate in Korolova’s or Sapieżyński’s research, agrees: “How many times do researchers and journalists need to find these problems before we just accept that the whole ad-targeting system is bankrupt?”