Computer vision is becoming increasingly good at recognizing facial expressions, but algorithms can still underperform for groups that aren’t adequately represented in training data sets, such as racial minorities or women with androgynous features.
A new paper posted on arXiv by Google researchers improves on state-of-the-art smile detection by training racial and gender classifiers alongside the smile detector in the same model. The racial classifier was trained on four race subgroups and the gender classifier on two (the researchers didn’t name the racial groups, but the images appear to include Asian, black, Hispanic, and white people).
Their method reached nearly 91 percent accuracy at detecting smiles on the Faces of the World (FotW) data set, a collection of 13,000 face images gathered from the Web that is sometimes used as a benchmark for such algorithms — an improvement of a little over 1.5 percent on the previous mark. Accuracy improved across the board, suggesting that paying attention to race and gender can yield better results than trying to build an algorithm that is “color blind.”
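The idea of training demographic classifiers alongside the main task is a form of multi-task learning: a shared feature extractor feeds separate prediction heads, and the training loss combines each head’s error. The sketch below illustrates that structure in plain NumPy; the layer sizes, head names, and simple sum-of-losses are illustrative assumptions, not the authors’ actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of each example's true class.
    n = labels.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

# Toy batch: 8 "images" flattened to 32-dimensional vectors.
x = rng.standard_normal((8, 32))

# Shared representation (a single linear layer + tanh in this sketch).
w_shared = rng.standard_normal((32, 16)) * 0.1
features = np.tanh(x @ w_shared)

# Three task heads: smile (2 classes), race (4 subgroups), gender (2).
heads = {
    "smile": rng.standard_normal((16, 2)) * 0.1,
    "race": rng.standard_normal((16, 4)) * 0.1,
    "gender": rng.standard_normal((16, 2)) * 0.1,
}
labels = {
    "smile": rng.integers(0, 2, size=8),
    "race": rng.integers(0, 4, size=8),
    "gender": rng.integers(0, 2, size=8),
}

# Combined multi-task loss: the sum of each head's cross-entropy.
# Training on this joint objective pushes the shared features to
# encode demographic cues the smile head can also exploit.
total_loss = sum(
    cross_entropy(softmax(features @ w), labels[task])
    for task, w in heads.items()
)
```

In a real system each head would get its own gradient updates through the shared backbone; the point here is only that the auxiliary race and gender objectives shape the shared features rather than being ignored.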