
Google’s New AI Smile Detector Shows How Embracing Race and Gender Can Reduce Bias

December 4, 2017

Computer vision is becoming increasingly good at recognizing different facial expressions, but for certain groups that aren’t adequately represented in training data sets, like racial minorities or women with androgynous features, algorithms can still underperform.

A new paper posted to arXiv by Google researchers improves on state-of-the-art smile detection algorithms by training racial and gender classifiers alongside the model. The classifiers were trained on four race subgroups and two gender subgroups (the researchers didn’t name the racial groups, but the images appear to consist of Asian, black, Hispanic, and white people).
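A minimal sketch of that idea is a multi-task network: a shared feature extractor feeds a smile classifier plus auxiliary race and gender heads, so subgroup-relevant features are learned explicitly. The architecture, layer sizes, class counts, and loss weighting below are illustrative assumptions, not the authors’ exact model.

```python
# Illustrative sketch (assumed architecture, not the paper's): a shared CNN
# backbone with a smile head and auxiliary race/gender heads trained jointly.
import torch
import torch.nn as nn

class SubgroupAwareSmileNet(nn.Module):
    def __init__(self, num_races=4, num_genders=2):
        super().__init__()
        # Shared feature extractor (sizes chosen for illustration only).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.smile_head = nn.Linear(64, 2)          # smiling vs. not smiling
        self.race_head = nn.Linear(64, num_races)   # auxiliary subgroup head
        self.gender_head = nn.Linear(64, num_genders)

    def forward(self, x):
        feats = self.backbone(x)
        return self.smile_head(feats), self.race_head(feats), self.gender_head(feats)

def multitask_loss(outputs, labels, aux_weight=0.5):
    # Joint objective: smile loss plus down-weighted auxiliary subgroup losses.
    smile_out, race_out, gender_out = outputs
    smile_y, race_y, gender_y = labels
    ce = nn.CrossEntropyLoss()
    return ce(smile_out, smile_y) + aux_weight * (ce(race_out, race_y) + ce(gender_out, gender_y))
```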

Their method achieved nearly 91 percent accuracy at detecting smiles in the Faces of the World (FotW) data set, a collection of 13,000 face images gathered from the Web that is sometimes used as a benchmark for such algorithms. That represents an improvement of a little over 1.5 percent on the previous mark. Accuracy improved across every subgroup, suggesting that paying attention to race and gender can yield better results than trying to build an algorithm that is “color blind.”

Many researchers are hesitant to include classifiers like these, on the assumption that it’s easier to be guilty of bias (or at least to be accused of it) when a system has explicit racial or gender categories. The Google team’s results suggest that the effort put into training racial and gender classifiers can actually reduce bias. The researchers also used neutral labels like “Gender 1” and “Gender 2” to avoid introducing unconscious societal bias wherever possible.

Despite the promising results and the care taken to account for bias in all its forms, the researchers included a section in their paper called “Ethical Considerations,” in which they take pains to note that their work is not intended to “motivate race and gender identification as an end-goal.” They also point out that there is no “gold standard” for breaking down racial categories, and that future work should perhaps treat gender as a spectrum rather than a binary.
