Researchers at the Georgia Institute of Technology found that state-of-the-art object recognition systems are less accurate at detecting pedestrians with darker skin tones.
Crash-testing: The researchers tested eight image-recognition systems (each trained on a standard data set) against a large pool of pedestrian images. They divided the pedestrians into two groups for lighter and darker skin tones according to the Fitzpatrick skin type scale, a way of classifying human skin color.
Color coded: On average, the systems' detection accuracy was five percentage points lower for the group with darker skin. This held true even after controlling for time of day and obstructed views.
Under the hood: Through further analysis, the researchers determined that the bias was probably caused by two things: too few examples of dark-skinned pedestrians in the training data, and too little emphasis on learning from the examples that were there. They say the bias could be mitigated by adjusting both the data and the algorithm.
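The second fix, giving under-represented examples more emphasis during training, can be sketched as inverse-frequency loss weighting, a standard technique for imbalanced data. The function names and exact weighting scheme below are illustrative, not taken from the paper:

```python
# Illustrative sketch: scale each training example's loss by the inverse
# frequency of its group, so rare groups contribute more to the average.
# Group labels and the weighting scheme here are hypothetical examples.

def group_weights(labels):
    """Inverse-frequency weights: rarer groups get larger weights."""
    counts = {}
    for g in labels:
        counts[g] = counts.get(g, 0) + 1
    total = len(labels)
    # Normalized so that the weighted group counts sum back to `total`.
    return {g: total / (len(counts) * c) for g, c in counts.items()}

def weighted_loss(per_example_losses, labels):
    """Mean loss with each example scaled by its group's weight."""
    w = group_weights(labels)
    return sum(loss * w[g] for loss, g in zip(per_example_losses, labels)) / len(labels)

# With a 3:1 imbalance, the minority group's weight is 3x the majority's,
# so identical per-example losses still average to the same overall loss,
# but errors on the minority group now move the gradient three times as much.
labels = ["lighter", "lighter", "lighter", "darker"]
print(group_weights(labels))  # darker examples weighted 2.0, lighter 0.667
```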
An earlier version of this story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.