
AI Learns Sexism Just by Studying Photographs

August 21, 2017

“Spoon” is to “woman” as “tennis racket” is to “man.” At least, that’s according to AI algorithms trained on two of the more common collections of thousands of images that are usually used by researchers to help machines understand the real world.

Wired reports that a team of researchers from the University of Washington studied gender predictions made by computer vision algorithms. What’s particularly interesting is that biases present in the image sets—such as the fact that women were 33 percent more likely than men to appear in a photograph related to cooking in one of the data sets studied—are amplified by the connections the AI’s neural network learns. Trained on that data set, the resulting model was 68 percent more likely to predict that a woman was cooking, and did so even when an image clearly showed a balding man in a kitchen.
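The core measurement behind claims like this is a simple one: compare how skewed a label’s gender co-occurrence is in the training data with how skewed it is in the model’s predictions. A minimal sketch of that comparison, using invented counts purely for illustration (the paper’s actual metric and numbers differ):

```python
def bias_score(count_woman: int, count_man: int) -> float:
    """Fraction of a label's occurrences that co-occur with 'woman'.

    0.5 means balanced; values above 0.5 skew toward women.
    """
    return count_woman / (count_woman + count_man)

# Hypothetical counts for the label "cooking", chosen for illustration only.
data_bias = bias_score(count_woman=66, count_man=34)   # skew in the training set
pred_bias = bias_score(count_woman=84, count_man=16)   # skew in model predictions

# Positive amplification means the model is MORE skewed than its training data.
amplification = pred_bias - data_bias
print(f"data bias: {data_bias:.2f}, prediction bias: {pred_bias:.2f}, "
      f"amplification: {amplification:+.2f}")
```

With these toy numbers the model doesn’t just inherit the 66/34 skew in the data; its predictions are more lopsided still, which is the amplification effect the researchers describe.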

It’s by no means the first time that an AI has been observed to pick up gender biases from training data. Last year, we reported that researchers from Boston University and Microsoft Research found that an AI trained on archives of text learned to associate the word “programmer” with the word “man,” and “homemaker” with the word “woman.”
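Associations like “programmer is to man as homemaker is to woman” surface in word embeddings through simple vector arithmetic: subtract one word’s vector from another, add a third, and look for the nearest word. The sketch below demonstrates the mechanics with toy 2-D vectors invented for illustration; real embeddings (e.g. word2vec) have hundreds of dimensions and are learned from large text corpora, which is exactly where the bias comes from:

```python
import numpy as np

# Toy 2-D "embeddings" constructed by hand for illustration only.
vecs = {
    "man":        np.array([ 1.0,  0.0]),
    "woman":      np.array([-1.0,  0.0]),
    "programmer": np.array([ 1.0,  1.0]),
    "homemaker":  np.array([-1.0,  1.0]),
    "dog":        np.array([ 0.0, -1.0]),
}

def analogy(a: str, b: str, c: str, candidates: list[str]) -> str:
    """Solve 'a is to b as c is to ?' by vector arithmetic."""
    query = vecs[b] - vecs[a] + vecs[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Return the candidate whose vector is most similar to the query.
    return max(candidates, key=lambda w: cos(vecs[w], query))

result = analogy("man", "programmer", "woman",
                 candidates=["programmer", "homemaker", "dog"])
# With these hand-built vectors, the nearest candidate is "homemaker".
```

The arithmetic itself is neutral; the troubling part is that embeddings trained on real text end up with geometry that encodes exactly these stereotyped associations.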

But it’s especially troubling that biases inherent in data sets may end up being amplified, rather than merely replicated, by the AIs that are trained on them. Sometimes it might cause offense, if, for instance, an AI is being used to target advertising based on images you upload to a social network. But in other applications—such as, say, the controversial practice of predicting criminality from a person’s face—baked-in prejudice could be downright harmful.

Currently, many of the companies developing AI don’t seem too bothered about the problem of bias in their neural networks. This finding is another piece of evidence to support those who argue that this needs to change.
