Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities.
LinkedIn discovered the problem and built another AI program to counteract the bias in the results of the first. Meanwhile, some of the world’s largest job search sites—including CareerBuilder, ZipRecruiter, and Monster—are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review’s podcast “In Machines We Trust.” Since these platforms don’t disclose exactly how their systems work, though, it’s hard for job seekers to know how effective any of these measures are at actually preventing discrimination.
If you were to start looking for a new job today, artificial intelligence would very likely influence your search. AI can determine what postings you see on job search platforms and decide whether to pass your résumé on to a company’s recruiters. Some companies may ask you to play AI-powered video games that measure your personality traits and gauge whether you’d be a good fit for specific roles.
More and more companies are using AI to recruit and hire new employees, and AI can factor into almost any stage in the hiring process. Covid-19 fueled new demand for these technologies. Both Curious Thing and HireVue, companies specializing in AI-powered interviews, reported a surge in business during the pandemic.
Most job hunts, though, start with a simple search. Job seekers turn to platforms like LinkedIn, Monster, or ZipRecruiter, where they can upload their résumés, browse job postings, and apply to openings.
The goal of these websites is to match qualified candidates with available positions. To organize all these openings and candidates, many platforms employ AI-powered recommendation algorithms. The algorithms, sometimes referred to as matching engines, process information from both the job seeker and the employer to curate a list of recommendations for each.
“You typically hear the anecdote that a recruiter spends six seconds looking at your résumé, right?” says Derek Kan, vice president of product management at Monster. “When we look at the recommendation engine we’ve built, you can reduce that time down to milliseconds.”
Most matching engines are optimized to generate applications, says John Jersin, the former vice president of product management at LinkedIn. These systems base their recommendations on three categories of data: information the user provides directly to the platform; data assigned to the user based on others with similar skill sets, experiences, and interests; and behavioral data, like how often a user responds to messages or interacts with job postings.
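The three data categories Jersin describes can be pictured as inputs to a single relevance score. The sketch below is purely illustrative — the class, the skill-overlap scoring, and the weights are assumptions for demonstration, not any platform's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    # 1. Information the user provides directly (résumé, stated skills)
    declared_skills: set[str] = field(default_factory=set)
    # 2. Data inferred from users with similar skills and experience
    inferred_skills: set[str] = field(default_factory=set)
    # 3. Behavioral data, e.g. how often the user responds to messages
    response_rate: float = 0.0

def match_score(profile: CandidateProfile, required_skills: set[str]) -> float:
    """Combine the three signal categories into one relevance score.

    The weights here are invented for illustration: direct information
    counts most, inferred data less, and behavior is mixed in on top.
    """
    declared = len(profile.declared_skills & required_skills)
    inferred = len(profile.inferred_skills & required_skills)
    return 1.0 * declared + 0.5 * inferred + 2.0 * profile.response_rate
```

Note how the behavioral term rewards responsiveness directly — which is exactly the kind of signal that, as the article goes on to explain, can encode group-level behavioral differences into the rankings.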
In LinkedIn’s case, these algorithms exclude a person’s name, age, gender, and race, because including these characteristics can contribute to bias in automated processes. But Jersin’s team found that even so, the service’s algorithms could still detect behavioral patterns exhibited by groups with particular gender identities.
For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to apply only for jobs whose requirements match their qualifications. The algorithm picks up on this difference in behavior and adjusts its recommendations in a way that inadvertently disadvantages women.
“You might be recommending, for example, more senior jobs to one group of people than another, even if they’re qualified at the same level,” Jersin says. “Those people might not get exposed to the same opportunities. And that’s really the impact that we’re talking about here.”
Men also list more skills on their résumés than women do, even ones in which they have a lower degree of proficiency, and they often engage more aggressively with recruiters on the platform.
To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that, before surfacing the matches curated by the original engine, the recommendation system includes a representative distribution of users across genders.
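One common way to layer a representation guarantee on top of an existing relevance ranking is greedy re-ranking: walk down the result slots and, at each one, take the top remaining candidate from whichever group is currently furthest below its target share. The function below is a minimal sketch of that general technique — the article does not disclose LinkedIn's actual algorithm, so the names and the greedy strategy here are assumptions.

```python
from collections import defaultdict, deque

def rerank_representative(ranked, target_share):
    """Re-rank so every prefix roughly matches a target group distribution.

    ranked       -- relevance-ordered list of (candidate, group) pairs
    target_share -- maps group -> desired fraction of results, e.g. the
                    group's share among all qualified candidates
    """
    # Split into per-group queues, preserving relevance order within each.
    queues = defaultdict(deque)
    for cand, group in ranked:
        queues[group].append(cand)

    output, counts = [], defaultdict(int)
    for slot in range(1, len(ranked) + 1):
        # Deficit = how far each group falls short of its target share
        # in the first `slot` results; pick from the most underserved group.
        deficits = {g: target_share[g] * slot - counts[g]
                    for g in target_share if queues[g]}
        group = max(deficits, key=deficits.get)
        output.append(queues[group].popleft())
        counts[group] += 1
    return output
```

For instance, if the original engine ranks three men ahead of three women but the qualified pool is split evenly, this re-ranker interleaves the two groups while keeping each group's internal relevance order intact — which mirrors the distinction the correction note below draws between a *representative* distribution (proportional to the qualified pool) and an *even* one.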
Kan says Monster, which lists 5 to 6 million jobs at any given time, also incorporates behavioral data into its recommendations but doesn’t correct for bias in the same way that LinkedIn does. Instead, the marketing team focuses on getting users from diverse backgrounds signed up for the service, and the company then relies on employers to report back and tell Monster whether or not it passed on a representative set of candidates.
Irina Novoselsky, CEO at CareerBuilder, says she’s focused on using data the service collects to teach employers how to eliminate bias from their job postings. For example, “When a candidate reads a job description with the word ‘rockstar,’ there is materially a lower percent of women that apply,” she says.
Ian Siegel, CEO and cofounder of ZipRecruiter, says the company’s algorithms don’t take certain identifying characteristics such as names into account when ranking candidates; instead they classify people on the basis of 64 other types of information, including geographical data. He says the company doesn’t discuss the details of its algorithms, citing intellectual-property concerns, but adds: “I believe we are as close to a merit-based assessment of people as can currently be done.”
With automation at each step of the hiring process, job seekers must now learn how to stand out to both the algorithm and the hiring managers. But without clear information on what these algorithms do, candidates face significant challenges.
“I think people underestimate the impact algorithms and recommendation engines have on jobs,” Kan says. “The way you present yourself is most likely read by thousands of machines and servers first, before it even gets to a human eye.”
This article was updated on 6/25/21 to reflect that LinkedIn’s new AI ensures a representative distribution of users (not an even distribution) across genders are recommended for jobs.