Researchers at the Georgia Institute of Technology found that state-of-the-art object recognition systems are less accurate at detecting pedestrians with darker skin tones.
Crash-testing: The researchers tested eight image-recognition systems (each trained on a standard data set) against a large pool of pedestrian images. They divided the pedestrians into two groups for lighter and darker skin tones according to the Fitzpatrick skin type scale, a way of classifying human skin color.
Color coded: The detection accuracy of the systems was found to be lower by an average of five percentage points for the group with darker skin. This held true even when controlling for time of day and obstructed view.
Under the hood: Through further analysis, the researchers determined that the bias probably had two causes: too few examples of dark-skinned pedestrians in the training data, and too little emphasis on learning from those examples. They say the bias could be mitigated by adjusting both the data and the algorithm.
An earlier version of this story appeared in our AI newsletter, The Algorithm. To have it delivered directly to your inbox, sign up here for free.