Facebook Still Lets People Target Ads by Race and Ethnicity
Though the company promised a fix months ago, Facebook’s ad system still allows advertisers to target people in ways that could run afoul of antidiscrimination laws.
The investigative journalism shop ProPublica has been on the case for over a year. In its initial investigation, reporters who bought ads found that the platform let them block anyone with an “affinity” for African-American, Asian-American, or Hispanic people from seeing those ads. That may have put Facebook in violation of the Fair Housing Act, which makes housing discrimination against certain protected groups illegal. In response, Facebook announced an antidiscrimination initiative in February that included an automated system to spot problematic ads.
A new story from ProPublica out this week suggests things haven’t changed much. Reporters were still able to block housing ads from being shown to “African Americans, mothers of high school kids, people interested in wheelchair ramps, Jews, expats from Argentina and Spanish speakers.” Each of these categories corresponds to a class the Fair Housing Act protects, such as race, religion, familial status, disability, or national origin.
This latest finding adds to a growing litany of problems with Facebook’s ad-targeting system. As we well know by now, Russian accounts bought political ads that were shown to millions of Americans as part of an effort to sway the 2016 presidential election. And yet another ProPublica investigation recently showed that people could buy ads targeting “Jew haters.”
Facebook hasn’t described in detail how its automated antidiscrimination system is supposed to work, beyond saying that it involves a machine-learning algorithm meant to improve over time. If an ad is not approved, advertisers can request a manual review. But the algorithm let all of ProPublica’s ads through, so whatever technique Facebook is using, it still isn’t up to the task of policing how people use (or misuse) its ad platform.