Artificial intelligence

An AI hiring firm says it can predict job hopping based on your interviews

The idea of “bias-free” hiring, already highly misleading, is also being used by companies to deflect scrutiny of their tools’ labor impacts beyond discrimination.
July 24, 2020
A screenshot of PredictiveHire. (Image: PredictiveHire)

Since the onset of the pandemic, a growing number of companies have turned to AI to assist with their hiring. The most common systems use face-scanning algorithms, games, questions, or other evaluations to help determine which candidates to interview.

While activists and scholars warn that these screening tools can perpetuate discrimination, the makers themselves argue that algorithmic hiring helps correct for human biases. Algorithms can be tested and tweaked, whereas human biases are much harder to correct—or so the thinking goes. In a December 2019 paper, researchers at Cornell reviewed the landscape of algorithmic screening companies to analyze their claims and practices. Of the 18 they identified with English-language websites, the majority marketed themselves as a fairer alternative to human-based hiring, suggesting that they were latching onto the heightened concern around these issues to tout their tools’ benefits and get more customers.

But discrimination isn’t the only concern with algorithmic hiring, and some scholars worry that marketing language that focuses on bias lets companies off the hook on other issues, such as workers’ rights. A new preprint from one of these firms now serves as an important reminder: “We should not let the attention that people have begun to pay to bias and discrimination issues actually crowd out the fact that there are a bunch of other issues,” says Solon Barocas, an assistant professor at Cornell University and principal researcher at Microsoft Research, who studies algorithmic fairness and accountability. 

The firm in question is Australia-based PredictiveHire, founded in October 2013. It offers a chatbot that asks candidates a series of open-ended questions. It then analyzes their responses to assess job-related personality traits like “drive,” “initiative,” and “resilience.” According to the firm’s CEO, Barbara Hyman, its clients are employers that must manage large numbers of applications, such as those in retail, sales, call centers, and health care. As the Cornell study found, it also actively uses promises of fairer hiring in its marketing language. On its home page, it boldly advertises: “Meet Phai. Your co-pilot in hiring. Making interviews SUPER FAST. INCLUSIVE, AT LAST. FINALLY, WITHOUT BIAS.”

As we’ve written before, the idea of “bias-free” algorithms is highly misleading. But PredictiveHire’s latest research is troubling for a different reason. It is focused on building a new machine-learning model that seeks to predict a candidate’s likelihood of job hopping, the practice of changing jobs more frequently than an employer desires. The work follows the company’s recent peer-reviewed research that looked at how open-ended interview questions correlate with personality (in and of itself a highly contested practice). Because organizational psychologists have already shown a link between personality and job hopping, Hyman says, the company wanted to test whether they could use their existing data for the prediction. “Employee retention is a huge focus for many companies that we work with given the costs of high employee churn, estimated at 16% of the cost of each employee’s salary,” she adds.

The study used the free-text responses from 45,899 candidates who had used PredictiveHire’s chatbot. Applicants had originally been asked five to seven open-ended questions and self-rating questions about their past experience and situational judgment. These included questions meant to tease out traits that studies have previously shown to correlate strongly with job-hopping tendencies, such as being more open to experience, less practical, and less down to earth. The company researchers claim the model was able to predict job hopping with statistical significance. PredictiveHire’s website is already advertising this work as a “flight risk” assessment that is “coming soon.”
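To make concrete what a free-text-to-outcome prediction of this kind could look like in principle, here is a minimal, purely illustrative sketch using off-the-shelf tools (scikit-learn’s TfidfVectorizer and LogisticRegression) on synthetic data. It is an assumption for illustration only and does not reflect PredictiveHire’s actual features, model, labels, or data.

```python
# Purely illustrative sketch: predicting a binary "job hopping" label from
# free-text interview answers with a bag-of-words classifier.
# This is NOT PredictiveHire's model; the data and labels here are synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical dataset: one concatenated free-text response per candidate,
# labeled 1 if the candidate later changed jobs frequently, else 0.
texts = [
    "I love trying new roles and learning quickly in different teams",
    "I have stayed with my employer for six years and value stability",
    "I get bored easily and look for the next challenge",
    "I enjoy building deep expertise in one role over many years",
] * 50  # repeated only so the toy example has enough rows to split
labels = [1, 0, 1, 0] * 50

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)

# Turn each response into TF-IDF features, then fit a linear classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Check how well the model separates the two classes on held-out candidates.
probs = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```

Any real system would also need careful validation that the text features predict future behavior rather than demographic proxies, which is precisely the kind of scrutiny critics say such tools rarely receive.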

PredictiveHire’s new work is a prime example of what Nathan Newman argues is one of the biggest adverse impacts of big data on labor. Newman, an adjunct associate professor at the John Jay College of Criminal Justice, wrote in a 2017 law paper that beyond the concerns about employment discrimination, big-data analysis had also been used in myriad ways to drive down workers’ wages.

Machine-learning-based personality tests, for example, are increasingly being used in hiring to screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization. Employers are increasingly monitoring employees’ emails, chats, and other data to assess which might leave and calculate the minimum pay increase needed to make them stay. And algorithmic management systems like Uber’s are decentralizing workers away from offices and digital convening spaces that allow them to coordinate with one another and collectively demand better treatment and pay.

None of these examples should be surprising, Newman argued. They are simply a modern manifestation of what employers have historically done to suppress wages by targeting and breaking up union activities. The use of personality assessments in hiring, which dates back to the 1930s in the US, in fact began as a mechanism to weed out people most likely to become labor organizers. The tests became particularly popular in the 1960s and ’70s once organizational psychologists had refined them to assess workers for their union sympathies.

In this context, PredictiveHire’s flight-risk assessment is just another example of this trend. “Job hopping, or the threat of job hopping,” points out Barocas, “is one of the main ways that workers are able to increase their income.” The company even built its assessment on personality screenings designed by organizational psychologists.

Barocas doesn’t necessarily advocate tossing out the tools altogether. He believes the goal of making hiring work better for everyone is a noble one and could be achieved if regulators mandate greater transparency. Currently none of these tools has received rigorous, peer-reviewed evaluation, he says. But if firms were more forthcoming about their practices and submitted their tools for such validation, it could help hold them accountable. It could also help scholars engage more readily with firms to study the tools’ impacts on both labor and discrimination.

“Despite all my own work for the past couple of years expressing concerns about this stuff,” he says, “I actually believe that a lot of these tools could significantly improve the current state of affairs.”
