
AI has a culturally biased worldview that Google has a plan to change

December 2, 2018

Google has launched the Inclusive Images Competition, an effort to expand the cultural fluency of image-recognition software. The task for entrants: reduce the bias in a computer-vision system trained on a culturally biased image data set.

The context: Machines need to be trained on massive amounts of image data in order to recognize objects. Recent leaps in image recognition have coincided with the release of large, publicly available data sets, including ImageNet and Open Images.

The problem: The most popular data sets, however, are US- and Western-centric, simply because Western images dominated the internet when the data sets were compiled. As a result, systems trained on them often fail to accurately describe scenes from other cultures and locales. Take wedding photos. A standard image-recognition system trained on open-source data sets can recognize a bride in a white dress, reflecting the classic Western tradition, but it will fail to recognize a bride in a sari at an Indian ceremony.

The challenge: One way to mitigate this issue is to build more diverse and representative image data sets. While Google is pursuing that approach, the company is also backing a complementary one: tweaking the machine-learning algorithms themselves so they learn more inclusively from imperfect data.
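As a rough illustration of what that kind of algorithmic tweak can look like (not a description of Google's actual method or of any competition entry), one common way to keep a model from leaning on an over-represented group is to up-weight the training loss on examples from under-represented groups. The sketch below assumes a PyTorch setup; the names group_counts, logits, labels, and groups are hypothetical.

# A minimal sketch, assuming a PyTorch training loop; not Google's method.
# Examples from under-represented groups are up-weighted so the skewed
# data set does not dominate the loss.

import torch
import torch.nn.functional as F

def group_weights(group_counts):
    # Weight each group inversely to its share of the training data.
    counts = torch.tensor(group_counts, dtype=torch.float)
    return counts.sum() / (len(counts) * counts)

def reweighted_loss(logits, labels, groups, weights):
    # Standard cross-entropy, scaled per example by its group's weight.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (per_example * weights[groups]).mean()

# Example: 9,000 "Western wedding" images vs. 1,000 "other wedding" images
# gives group weights of roughly 0.56 and 5.0 respectively.
weights = group_weights([9000, 1000])

Competition entrants were free to use very different techniques; the point of the sketch is only that the fix lives in the training objective rather than in the data itself.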

The results: Hosted in partnership with the Neural Information Processing Systems (NeurIPS) conference, one of the largest annual gatherings for AI research, the competition drew submissions from more than 100 participants. In a talk at the conference on Sunday, December 2, Google Brain researcher Pallavi Baljekar said the first-year winners had made only small steps toward more inclusive systems: just one of the top five approaches successfully recognized an Indian bride. Clearly, more work needs to be done.
