
Female voice assistants fuel damaging gender stereotypes, says a UN study

Google Home | Associated Press

Products like Amazon Echo and Apple’s Siri are set to sound female by default, and people usually refer to the software as “her.”

Embedding bias: Most AI voice assistants are gendered as young women, and they are mostly used to answer questions or carry out tasks like checking the weather, playing music, or setting reminders. This sends a signal that women are docile, eager-to-please helpers without agency, always on hand to serve their masters, the United Nations report says, reinforcing harmful stereotypes. The report calls on companies to stop making digital assistants female by default and to explore ways to make them sound “genderless.”

Who’s blushing: The report is titled “I’d blush if I could,” after a response Siri gives when someone says, “Hey Siri, you’re a bi***.” It features an entire section on the assistants’ responses to abusive and gendered language. If you say “You’re pretty” to an Amazon Echo, its Alexa software replies, “That’s really nice, thanks!” Google Assistant responds to the same remark with “Thank you, this plastic looks great, doesn’t it?” The assistants almost never give negative responses or label a user’s speech as inappropriate, regardless of its cruelty, the study found.

The report: Its aim is to expose the gender biases being hard-coded into technology products that play an increasingly big role in our everyday lives. It also suggests ways to close a gender skills gap that is wide, and growing, in most parts of the world. The report found that women are 25% less likely than men to have basic digital skills, and only a fourth as likely to know how to program computers. “These gaps should make policy-makers, educators and everyday citizens ‘blush’ in alarm,” it says.


