A Startup That Scores Job Seekers, Whether They Know It or Not

To help recruiters, a startup called Gild has created a database of four million software developers and rated their work. Could other fields be next?

Winning over recruiters and potential bosses can be hard enough. Now there’s something else job seekers have to woo: an algorithm.

A San Francisco startup called Gild has created a program that evaluates and scores software developers on the work they have publicly released. Tech recruiters can use this “Gild score” to see through the top-tier degrees, vague descriptions of skill sets, or polished testimonials of well-connected programmers whose coding skills may be below par. Less-obvious candidates, such as a junior in college who has been building great apps since she was 16, might rise into view instead.

For now, Gild is evaluating only software developers, whose work can often be freely found in repositories for open-source software, coder Q&A forums, and other online developer hangouts. But CEO Sheeroy Desai says that Gild hopes to bring its “talent acquisition technology” beyond the realm of software programmers, especially as more work products start to appear online.

He says it’s too early to detail what those possibilities could be. But one could imagine some future algorithm evaluating a teacher’s online courses, a journalist’s articles, or a scientist’s open-access data. (A company called Klout already scores how influential people are in social media.) “This is massively useful beyond just tech recruiting,” says Bryan Power, director of talent at Square, a payment technology company that has used Gild’s software to help vet job candidates for the last three months. “There’s so much more that will be online in the next couple of years,” he says.

Since launching in beta last March, Gild has profiled four million software developers and has 70 customers, from high-profile Silicon Valley startups such as Palantir Technologies and Box to large IT providers such as Salesforce and EMC.

Its technology stitches together profiles of individual coders from their activity in open-source forums and on public websites. It can “scrape” information from popular developer hangouts even when those sites don’t offer formal APIs (application programming interfaces) to facilitate the transfer of data. Gild also uses image recognition to match profile pictures across different sites. It then computes two scores: one for work quality, one for influence.
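Gild's actual pipeline isn't public, but the stitching step the article describes can be illustrated with a toy sketch. Everything here, from the site names to the idea of linking records by a hashed public email address, is an assumption for illustration, not Gild's method.

```python
# Toy sketch of stitching scraped per-site records into unified
# developer profiles. Linking on a hashed public email address is an
# illustrative assumption; a real system would combine many signals
# (usernames, image matching, and so on).
from collections import defaultdict
import hashlib


def key_for(record):
    """Derive a linking key from a record's public email address."""
    return hashlib.sha1(record["email"].lower().encode()).hexdigest()


def stitch(records):
    """Merge scraped records from different sites into one profile each."""
    profiles = defaultdict(lambda: {"sites": [], "repos": 0})
    for rec in records:
        profile = profiles[key_for(rec)]
        profile["sites"].append(rec["site"])
        profile["repos"] += rec.get("repos", 0)
    return dict(profiles)


scraped = [
    {"site": "github", "email": "dev@example.com", "repos": 12},
    {"site": "stackoverflow", "email": "dev@example.com"},
    {"site": "github", "email": "other@example.com", "repos": 3},
]
profiles = stitch(scraped)  # two distinct developers recovered
```

The sketch collapses three scraped records into two profiles, one per underlying person, which is the essence of the cross-site matching the article describes.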

One of Gild’s biggest data sources is GitHub, a software developer collaboration site that hosts more open-source code than any other. GitHub profiles are already replacing programmers’ résumés in many cases.

Desai, the former chief operating officer of the IT company Sapient, cofounded Gild with Luca Bonmassar, who had been leading a software team at Vodafone, because they were tired of the time they had wasted in hiring software developers. “You’d bring them in, throw code at them, and realize they didn’t know what they were doing,” says Desai.

As with any grading scheme, Gild’s algorithm, which scores developers from 1 to 100, makes judgments about what a good developer looks like. The software grades the quality of someone’s code by checking for basic errors and gauging its complexity. It also looks at how widely a programmer’s open-source code has been adopted by other projects. The algorithm tends to reward developers who know a small number of programming languages really well and dabble in several others, Desai says.
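To make the judgment calls concrete, here is a hypothetical scoring function loosely modeled on the signals the article names (basic errors, complexity, adoption by other projects). The weights and the formula are invented for illustration; Gild has not published its model.

```python
# Hypothetical developer score combining three normalized signals,
# each in [0, 1]. The 50/30/20 weighting is an invented assumption.

def gild_like_score(error_rate, complexity_fit, adoption):
    """Return a score on the 1-100 scale the article mentions.

    error_rate:     fraction of checked code with basic errors (lower is better)
    complexity_fit: how well the code's complexity suits the task (higher is better)
    adoption:       how widely the code is reused by other projects
    """
    quality = 0.5 * (1 - error_rate) + 0.3 * complexity_fit + 0.2 * adoption
    # Clamp to [0, 1], then map onto 1..100.
    return round(1 + 99 * min(max(quality, 0.0), 1.0))
```

Under this toy weighting, error-free, well-structured, widely adopted code scores 100, while code that fails every check bottoms out at 1; any real system would tune such weights against hiring outcomes rather than fix them by hand.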

This approach has several limitations. Power, at Square, points out that not every company would make the same judgments about candidates that Gild’s algorithm does. It can be tricky to untangle the contributions that individual programmers have made to a group project on GitHub or a similar social forum. And not every developer has worked on a large number of open-source projects.

But Desai says that by accumulating so much data, learning patterns, and making predictions, Gild is lowering the threshold of information it needs to score a given candidate.

