Artificial intelligence

Baidu is testing neural networks that can match job seekers to jobs

October 2, 2018

The Chinese company’s technology identifies candidates whose skills meet employers’ needs.

The news: In a recent paper, Baidu researchers showed how their neural network can work out which skills job seekers possess from their résumés and spot which skills particular job postings are asking for. The software then pairs up the best matches.

How it works: The model, called the Person-Job Fit Neural Network, learns which words and phrases in job listings correspond to particular skills. For example, “product development procedure” and “documenting” are terms that often signal a need for program management experience. The model then assesses whether a candidate’s job history indicates the relevant experience for the role, and candidates who score well are flagged as potential matches for the vacancy.
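The matching idea can be sketched in miniature with a plain bag-of-words cosine similarity between a job posting and each résumé. This is only an illustration of the general skill-matching concept, not Baidu's model, which uses learned neural representations; all names and the sample texts here are hypothetical.

```python
from collections import Counter
import math

def vectorize(text):
    # Simple bag-of-words: lowercase the text and count word occurrences.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_candidates(job_posting, resumes):
    # Score each résumé against the posting and sort best-first.
    job_vec = vectorize(job_posting)
    scored = [(cosine_similarity(job_vec, vectorize(r)), r) for r in resumes]
    return sorted(scored, reverse=True)

job = "product development procedure documenting program management"
resumes = [
    "five years of program management and product development",
    "experienced barista and latte art specialist",
]
best_score, best_resume = rank_candidates(job, resumes)[0]
print(best_resume)  # the résumé with the most overlapping skill terms
```

A neural approach replaces the raw word counts with learned embeddings, so that related phrases (say, “documenting” and “program management”) score as similar even without exact word overlap.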

Where it falls short: In a test, the system had a hard time matching educational job requirements, because its sample overwhelmingly had the same requirement of “bachelor’s degree or higher.” This lack of data-set diversity made it difficult for the algorithm to identify which candidates had a better educational background for each position.

What’s in it for them? Since Baidu owns the world’s second largest search engine, it’s likely the company could use this technology to help better target job ads.

Use with caution: If bias exists in previous hires, it can creep into systems like this, disadvantaging certain groups who may not be shown the same job opportunities. As communications professor Safiya Umoja Noble told us earlier this year, bias already exists in search engine results, and it’s only going to get worse.

This article first appeared in our future of work newsletter, Clocking In.
