Glassdoor, the job-search and company-ratings site, has built a collaborative working environment for its human and AI content reviewers.
The AI’s job: Machine-learning algorithms scan the site for fraud and profanity. Cara Barry, who leads the Glassdoor fraud team, told the New Yorker that the software searches for, among other things, users who leave multiple five- or one-star reviews to manipulate a company’s rating. It also uses text analysis to find inappropriate posts.
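The multiple-extreme-review pattern Barry describes could be caught with a simple aggregation. Here is a minimal sketch in Python, assuming a hypothetical `(user, company, stars)` record format; Glassdoor's actual system is not public, and this illustrates only the general idea:

```python
from collections import defaultdict

def flag_rating_bursts(reviews, threshold=3):
    """Flag (user, company) pairs where one user has left several
    extreme (one- or five-star) reviews for the same company.

    `reviews` is a list of (user_id, company_id, stars) tuples --
    a hypothetical schema, not Glassdoor's actual data model.
    """
    counts = defaultdict(int)
    for user, company, stars in reviews:
        if stars in (1, 5):  # only extreme ratings suggest manipulation
            counts[(user, company)] += 1
    # pairs at or above the threshold go to a human moderator
    return {pair for pair, n in counts.items() if n >= threshold}

reviews = [
    ("u1", "acme", 5), ("u1", "acme", 5), ("u1", "acme", 5),
    ("u2", "acme", 3), ("u3", "globex", 1),
]
print(flag_rating_bursts(reviews))  # → {('u1', 'acme')}
```

A real system would also weigh signals like account age, IP address, and posting cadence, but the core pattern is this kind of per-user aggregation feeding a human review queue.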
The human job: Human moderators review posts flagged by the machines and by Glassdoor users. They look for violations ranging from criticism of employees below the C-suite (which the platform prohibits) to racist comments about coworkers.
The partnership: While glaring holes persist in the AI systems that review content for companies like YouTube and Facebook, Glassdoor has used the technology to great effect. Instead of manually reviewing every post, as Glassdoor employees once did, human moderators now examine only about half of them, significantly lightening their load while keeping would-be trolls at bay.