The Electronic Privacy Information Center (EPIC) has asked the Federal Trade Commission to investigate HireVue, an AI tool that helps companies figure out which workers to hire.
What’s HireVue? HireVue is one of a growing number of artificial intelligence tools that companies use to assess job applicants. The algorithm analyzes video interviews, using everything from word choice to facial movements to figure out an “employability score” that is compared against that of other applicants. More than 100 companies have already used it on over a million applicants, according to the Washington Post.
What’s the problem? It’s hard to predict which workers will be successful from signals like facial expressions. Worse, critics worry that the algorithm is trained on limited data and so will be more likely to mark “traditional” applicants (white, male) as more employable. As a result, applicants who deviate from that norm—including people who don’t speak English as a native language or who are disabled—are likely to get lower scores, experts say. Plus, it encourages applicants to game the system by interviewing in a way they know HireVue will favor.
What’s next? AI hiring tools are not well regulated, and addressing the problem will be hard for a few reasons.
—Most companies won’t release their data or explain how their algorithms work, so it’s very difficult to prove any bias. That’s part of the reason there have been no major lawsuits so far. The EPIC complaint, which argues that HireVue’s practices violate the FTC’s rules against “unfair and deceptive” practices, is a start. But it’s not clear anything will happen: the FTC has received the complaint but hasn’t said whether it will pursue it.
—Other attempts to prevent bias are well-meaning but limited. Earlier this year, Illinois lawmakers passed a law that requires employers to at least tell job seekers that they’ll be using these algorithms, and to get their consent. But that’s not very useful. Many people are likely to consent simply because they don’t want to lose the opportunity.
—Finally, just as with AI in health care or in the courtroom, artificial intelligence in hiring will re-create society’s biases—a far harder problem to fix. Regulators will need to figure out how much responsibility companies should shoulder for avoiding the mistakes of a prejudiced society.