Tech policy

The AI hiring industry is under scrutiny—but it’ll be hard to fix

November 7, 2019

The Electronic Privacy Information Center (EPIC) has asked the Federal Trade Commission to investigate HireVue, an AI tool that helps companies figure out which workers to hire. 

What’s HireVue? HireVue is one of a growing number of artificial-intelligence tools that companies use to assess job applicants. Its algorithm analyzes video interviews, using everything from word choice to facial movements to compute an “employability score” that is compared against those of other applicants. More than 100 companies have already used it on over a million applicants, according to the Washington Post.

What’s the problem? It’s hard to predict which workers will be successful from signals like facial expressions. Worse, critics worry that the algorithm is trained on limited data and so will be more likely to mark “traditional” applicants (white, male) as more employable. As a result, applicants who deviate from that norm—including people who don’t speak English as a native language or who are disabled—are likely to get lower scores, experts say. Plus, it encourages applicants to game the system by interviewing in a way that they know HireVue will like. 

What’s next? AI hiring tools are not well regulated, and addressing the problem will be hard for a few reasons. 

—Most companies won’t release their data or explain how their algorithms work, so it’s very difficult to prove any bias. That’s part of the reason there have been no major lawsuits so far. The EPIC complaint, which argues that HireVue’s practices violate the FTC’s rules against “unfair and deceptive” practices, is a start. But it’s not clear whether anything will come of it: the FTC has received the complaint but has not said whether it will pursue it. 

—Other attempts to prevent bias are well-meaning but limited. Earlier this year, Illinois lawmakers passed a law that requires employers to at least tell job seekers that they’ll be using these algorithms, and to get their consent. But that’s not very useful: many people are likely to consent simply because they don’t want to lose the opportunity.

—Finally, just like AI in health or AI in the courtroom, artificial intelligence in hiring will re-create society’s biases, which is a complicated problem. Regulators will need to figure out how much responsibility companies should be expected to shoulder in avoiding the mistakes of a prejudiced society. 

Illustration by Rose Wong