MIT Technology Review

Google’s New AI Smile Detector Shows How Embracing Race and Gender Can Reduce Bias

Computer vision is becoming increasingly good at recognizing different facial expressions, but for certain groups that aren’t adequately represented in training data sets, like racial minorities or women with androgynous features, algorithms can still underperform.

A new paper published on arXiv by Google researchers improves upon state-of-the-art smile detection algorithms by including and training racial and gender classifiers in the model. The racial classifier was trained on four race subgroups and the gender classifier on two (the researchers didn’t name the racial groups, but the images appear to include Asian, black, Hispanic, and white people).
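The article doesn’t spell out the architecture, but the general idea of training demographic classifiers alongside the main task can be sketched as a multi-task model: a shared image backbone feeding separate heads for smile, race, and gender, trained jointly. The PyTorch sketch below is a hypothetical illustration under those assumptions, not the researchers’ actual implementation; the class names, backbone choice, and loss weighting are invented for clarity.

```python
# Hypothetical sketch (not the paper's architecture): a shared CNN backbone
# with three heads -- smile, race (four unnamed subgroups), and gender
# ("Gender 1"/"Gender 2") -- trained jointly so demographic signals shape
# the shared features instead of being averaged away.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskSmileNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any image backbone would do
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep only the feature vector
        self.backbone = backbone
        self.smile_head = nn.Linear(feat_dim, 2)   # smiling / not smiling
        self.race_head = nn.Linear(feat_dim, 4)    # four unnamed race subgroups
        self.gender_head = nn.Linear(feat_dim, 2)  # "Gender 1" / "Gender 2"

    def forward(self, x):
        feats = self.backbone(x)
        return self.smile_head(feats), self.race_head(feats), self.gender_head(feats)

def multitask_loss(outputs, smile_y, race_y, gender_y, aux_weight=0.5):
    # Auxiliary demographic losses (weight chosen arbitrarily here) encourage
    # the shared features to stay accurate for every subgroup, not just the
    # majority of the training data.
    smile_out, race_out, gender_out = outputs
    ce = nn.CrossEntropyLoss()
    return ce(smile_out, smile_y) + aux_weight * (ce(race_out, race_y) + ce(gender_out, gender_y))
```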


Their method reached nearly 91 percent accuracy at detecting smiles in the Faces of the World (FotW) data set, a collection of 13,000 images of faces gathered from the Web that is sometimes used as a benchmark for such algorithms. That represents an improvement of a little over 1.5 percent on the previous mark. Accuracy improved across the board, suggesting that paying attention to race and gender can yield better results than trying to build an algorithm that is “color blind.”

Many researchers are hesitant to include classifiers like these, on the assumption that a system with explicit racial or gender categories is more likely to be guilty of bias (or at least to be accused of it). The Google team’s results suggest that the effort put into training racial and gender classifiers can actually reduce the bias problem. The researchers also used neutral labels such as “Gender 1” and “Gender 2” to avoid introducing unconscious societal bias wherever possible.

Despite the promising results and the care taken to account for bias in its many forms, the researchers included a section in their paper called “Ethical Considerations,” in which they take pains to note that their work is not intended to “motivate race and gender identification as an end-goal.” They also point out that there is no “gold standard” for breaking down racial categories, and that future work might treat gender as a spectrum rather than a binary.
