
AI Learns Sexism Just by Studying Photographs

“Spoon” is to “woman” as “tennis racket” is to “man.” At least, that’s according to AI algorithms trained on two widely used collections of thousands of labeled images that researchers rely on to help machines understand the real world.

Wired reports that a team of researchers from the University of Washington studied gender predictions made by computer-vision algorithms. What’s particularly interesting is that biases present in the image sets are amplified in the connections that the AI’s neural network makes. In one data set the team studied, for example, women were 33 percent more likely than men to appear in photographs related to cooking; an AI trained on that data set was 68 percent more likely to predict that the person cooking was a woman, and did so even when an image clearly showed a balding man in a kitchen.
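
The finding boils down to comparing two numbers: how skewed the gender labels are in the training images, and how skewed the trained model’s predictions are. Here is a minimal sketch of that comparison, with made-up counts chosen only to match the 33 percent and 68 percent figures above; the function and the numbers are illustrative, not the study’s actual data or its exact metric.

```python
# Hedged sketch of the dataset-bias vs. prediction-bias comparison
# described above. All counts are hypothetical.

def bias_toward_women(woman_count: int, man_count: int) -> float:
    """Fraction of cooking images whose labeled (or predicted) agent is a woman."""
    return woman_count / (woman_count + man_count)

# Hypothetical training-set labels for "cooking" images:
# women appear 33 percent more often than men (400 vs. 300).
train_bias = bias_toward_women(woman_count=400, man_count=300)

# Hypothetical model predictions on cooking images:
# "woman" is predicted 68 percent more often than "man" (420 vs. 250).
pred_bias = bias_toward_women(woman_count=420, man_count=250)

print(f"training-set bias: {train_bias:.2f}")               # ~0.57
print(f"prediction bias:   {pred_bias:.2f}")                # ~0.63
print(f"amplification:     {pred_bias - train_bias:+.2f}")  # positive = amplified
```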

It’s by no means the first time that an AI has been observed to pick up gender biases from training data. Last year, we reported that researchers from Boston University and Microsoft Research found that an AI trained on archives of text learned to associate the word “programmer” with the word “man,” and “homemaker” with the word “woman.”
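
That text-based result came from probing word embeddings with vector arithmetic, a technique that is straightforward to reproduce. Below is a minimal sketch using gensim’s downloadable Google News vectors; it illustrates the general embedding-analogy probe, not the researchers’ exact code, and the neighbors returned depend on which pretrained model you load.

```python
# Hedged sketch: probing pretrained word embeddings for the gendered
# association described above. Requires `pip install gensim`.
import gensim.downloader as api

# Word2vec vectors trained on Google News text (a large download).
model = api.load("word2vec-google-news-300")

# Analogy query: "man" is to "programmer" as "woman" is to ... ?
# Computed as the nearest neighbors of programmer - man + woman.
results = model.most_similar(positive=["woman", "programmer"],
                             negative=["man"], topn=5)
for word, score in results:
    print(f"{word}\t{score:.3f}")
# On these news-trained vectors, "homemaker" tends to appear near the top,
# which is the association the Boston University / Microsoft study reported.
```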

But it’s especially troubling that biases inherent in data sets may end up being amplified, rather than merely replicated, by the AIs trained on them. Sometimes that might simply cause offense, if, for instance, an AI is used to target advertising based on images you upload to a social network. But in other applications, such as the controversial practice of predicting criminality from a person’s face, baked-in prejudice could be downright harmful.

Currently, many of the companies developing AI don’t seem too bothered by the problem of bias in their neural networks. This finding is another piece of evidence for those who argue that attitude needs to change.
