Google’s New AI Smile Detector Shows How Embracing Race and Gender Can Reduce Bias

December 4, 2017

Computer vision is getting increasingly good at recognizing facial expressions, but algorithms can still underperform for groups that aren’t adequately represented in training data sets, such as racial minorities or women with androgynous features.

A new paper posted to arXiv by Google researchers improves on state-of-the-art smile detection by training racial and gender classifiers alongside the smile detector in the same model. The classifiers were trained on four race subgroups and two gender subgroups (the researchers didn’t name the racial groups, but the images appear to consist of Asian, black, Hispanic, and white people).
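The article doesn’t reproduce the paper’s architecture, but the basic idea of training subgroup classifiers alongside the main task can be sketched as a multi-task model with a shared backbone and separate output heads. The sketch below is a minimal illustration only; the layer sizes, loss weights, and names are hypothetical and are not taken from the paper.

```python
# Minimal multi-task sketch (hypothetical; not the paper's actual architecture).
# A shared feature extractor feeds three heads: smile (binary), race (4 subgroups), gender (2).
import torch
import torch.nn as nn

class MultiTaskFaceModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Task-specific heads share the same features.
        self.smile_head = nn.Linear(64, 2)   # smiling vs. not smiling
        self.race_head = nn.Linear(64, 4)    # four (unnamed) race subgroups
        self.gender_head = nn.Linear(64, 2)  # "Gender 1" / "Gender 2"

    def forward(self, x):
        features = self.backbone(x)
        return (self.smile_head(features),
                self.race_head(features),
                self.gender_head(features))

def joint_loss(outputs, smile_y, race_y, gender_y):
    # The subgroup heads act as auxiliary tasks during training;
    # the 0.5 loss weights are arbitrary placeholders.
    ce = nn.CrossEntropyLoss()
    smile_logits, race_logits, gender_logits = outputs
    return (ce(smile_logits, smile_y)
            + 0.5 * ce(race_logits, race_y)
            + 0.5 * ce(gender_logits, gender_y))
```

The intuition is that forcing the shared features to carry subgroup information keeps the model from optimizing only for the majority groups in the training data.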

Their method achieved nearly 91 percent accuracy at detecting smiles in the Faces of the World (FotW) data set, a collection of 13,000 face images gathered from the Web that is sometimes used as a benchmark for such algorithms. That is an improvement of a little over 1.5 percent on the previous best result. Accuracy improved across the board, suggesting that paying attention to race and gender can yield better results than trying to build an algorithm that is “color blind.”
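Claiming that accuracy improved “across the board” amounts to measuring accuracy separately for each demographic subgroup rather than only in aggregate. A minimal way to compute that, using hypothetical arrays of predictions, labels, and subgroup tags (not the paper’s evaluation code), is:

```python
# Hypothetical per-subgroup accuracy check; data below is made up for illustration.
from collections import defaultdict

def per_subgroup_accuracy(predictions, labels, subgroups):
    """Return overall accuracy plus accuracy broken out by subgroup tag."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, subgroups):
        total[group] += 1
        correct[group] += int(pred == label)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {g: correct[g] / total[g] for g in total}

# A model can look fine overall while lagging badly on one subgroup.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
truth  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
overall, by_group = per_subgroup_accuracy(preds, truth, groups)
print(overall, by_group)  # 0.75 overall, but 1.0 for group A vs. 0.5 for group B
```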

Many researchers hesitate to include classifiers like these, on the assumption that it’s easier to be guilty of bias (or at least to be accused of it) when a system has explicit racial or gender categories. The Google team’s results suggest that the effort put into training racial and gender classifiers can actually reduce bias. The researchers also used neutral labels such as “Gender 1” and “Gender 2” to avoid introducing unconscious societal bias wherever possible.

Even with the promising results and the care taken to account for bias in all its forms, the researchers included a section in their paper called “Ethical Considerations,” in which they take pains to note that their work is not intended to “motivate race and gender identification as an end-goal.” They also point out that there is no “gold standard” for breaking down racial categories, and that future work might treat gender as a spectrum rather than a binary.
