A new report out from nonprofit Upturn analyzed some of the most prominent hiring algorithms on the market and found that by default, such algorithms are prone to bias.
The hiring steps: Algorithms have been built to automate four primary stages of the hiring process: sourcing, screening, interviewing, and selection. The analysis found that while predictive tools were rarely deployed to make the final choice about whom to hire, they were commonly used throughout these stages to reject people.
When bias creeps in: When software is trained on past hiring data, any bias or unrepresentativeness in that data carries over to the software. Simply removing data about gender and race won't keep bias out, either: other pieces of information, like distance from the office, can correlate strongly with more sensitive attributes. Amazon ran into this problem with a hiring algorithm it tried to develop. Hiring managers can also be prone to giving too much credence to the recommendations made by hiring algorithms.
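The proxy-variable problem described above can be seen in a minimal synthetic simulation. Everything here is a made-up illustration, not the report's data: we invent two applicant groups, assume one group tends to live farther from the office, and apply a screening rule that never looks at group membership at all, only at commute distance. The disparity in pass rates survives anyway.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants. "group" stands in for a sensitive
# attribute (e.g., a demographic category) that the screening rule never sees.
# Assumption for illustration only: group-B applicants tend to live farther away.
applicants = []
for _ in range(10_000):
    group_b = random.random() < 0.5
    distance_km = random.gauss(25 if group_b else 10, 5)
    applicants.append({"group_b": group_b, "distance_km": distance_km})

def screen(applicant):
    # The rule uses only the proxy variable: reject long commutes.
    return applicant["distance_km"] < 15

pass_a = sum(screen(a) for a in applicants if not a["group_b"])
total_a = sum(1 for a in applicants if not a["group_b"])
pass_b = sum(screen(a) for a in applicants if a["group_b"])
total_b = sum(1 for a in applicants if a["group_b"])

rate_a = pass_a / total_a
rate_b = pass_b / total_b
print(f"pass rate, group A: {rate_a:.2f}")
print(f"pass rate, group B: {rate_b:.2f}")
```

Under these assumptions, group A passes screening far more often than group B even though the sensitive attribute was "removed" from the inputs, which is the pattern the report warns about.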
Regulating AI: The study also found that current laws aren't equipped to deal with the problem. Laws designed to regulate human hiring decisions don't easily allow for investigation and enforcement when the discrimination is machine-based.
The positives: On the other hand, predictive tools can help uncover past human biases and assumptions that had previously been overlooked. This would allow companies to make positive changes in their practices. “Several vendors are taking steps not only to audit their products for bias, but building equity-promoting features into their products,” says one of the report’s authors, Miranda Bogen. “We hope more vendors will embrace these types of interventions, but there’s still a long way to go before we can be confident that predictive tools aren’t causing more harm than benefit.”
Fixing the problem: To make the use of hiring AIs more fair, the report recommends:
—allowing independent auditing of employer and vendor software
—having governments update their regulations to cover predictive hiring software
—scrutinizing ad and job platforms in more detail to analyze their growing influence on hiring
“Because there are so many different points in that process where biases can emerge, employers should definitely proceed with caution,” says Bogen. “They should be transparent about what predictive tools they are using and take whatever steps they can to proactively detect and address biases that arise—and if they can’t confidently do that, they should pull the plug.”
This article was originally published in our future of work newsletter, Clocking In.