
AI’s white guy problem isn’t going away

A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.
April 17, 2019

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling at a time when the industry's influence has grown dramatically, affecting everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women's résumés, perpetuating employment and housing discrimination, and entrenching racist practices in policing and sentencing.

These consequences will only worsen without a different approach to fixing the problem, says a new report out this week from the research institute AI Now.

“The problem of a lack of diversity in tech [...] has reached a new and urgent inflection point,” said Meredith Whittaker, the institute’s cofounder, on a press call accompanying the report. “Millions of people are feeling the effects of these tools and are affected by any AI bias that gets baked into them.”

The AI Now team identifies two main reasons why efforts to address the lack of diversity have failed. First, there is a heavy emphasis on increasing the number of "women in tech" and far less on improving diversity across race and other dimensions. Second, there is a disproportionate focus on "fixing the pipeline"—the idea of increasing the number of candidates from underrepresented groups who flow from schools into industry. This focus underestimates the systemic disadvantages that prevent women and minorities from staying in the field, such as harassment, unfair compensation, and imbalances of power.

The researchers offer several recommendations for improving workplace diversity in a more comprehensive way. These include measures aimed at closing the pay and opportunity gap, increasing diversity at the leadership levels across departments, and changing the incentive structures for company executives to hire and retain workers from underrepresented groups.

But the problem also runs deeper than hiring and compensation practices, says Jessie Daniels, a researcher at Data & Society, who studies the intersection of racism and technology and was not involved in the report. The tech industry was fundamentally built on the ethos that technology exists independently of society, she says: “In the early ’90s, there was this idea that the internet was going to release us from things like race and gender and infirmity; the idea that we were going to go to this place called ‘cyberspace’ where we wouldn’t have to think about embodiment or identity anymore.” 

That idea has stayed with the industry to this day and is the root of both the repeated failures to increase employee diversity and the repeated scandals around AI bias. Tech companies are built—and tech products are designed—with a “fantasy belief” that they exist independently of the sexism, racism, and societal context around them.

“That’s not a bug,” Daniels says. “It’s a feature.”
