
An AI-Fueled Credit Formula Might Help You Get a Loan

Startup ZestFinance says it has built a machine-learning system that’s smart enough to find new borrowers and keep bias out of its credit analysis.
February 14, 2017

Credit ratings have long been the key measure of how likely a U.S. consumer is to repay any loan, from mortgages to credit cards. But the factors that FICO and other companies that create credit scores rely on—things like credit history and credit card balances—often depend on having credit already.  

In recent years, a crop of startup companies has launched on the premise that borrowers without such histories might still be quite likely to repay, and that their likelihood of doing so could be determined by analyzing large amounts of data, especially data that has traditionally not been part of the credit evaluation. These companies use algorithms and machine learning to find meaningful patterns in the data: alternative signs that a borrower is a good or bad credit risk.

These companies are still young, and to date there is no clear evidence that their approaches have greatly expanded the credit available; lenders using them often charge high interest rates, according to a report by the National Consumer Law Center, a consumer advocacy group. Consumer advocates worry that some of these new data sources—such as information about how consumers behave online, or financial data not traditionally included in the credit analysis—could unwittingly bake bias into the results, causing certain borrowers to be unfairly judged. In the U.S., lenders are prohibited by law from considering race, gender, and religion in lending decisions.

Los Angeles-based ZestFinance, founded by former Google CIO Douglas Merrill, claims to have solved this problem with a new credit-scoring platform, called ZAML. The company sells the machine-learning software to lenders and also offers consulting services. Zest does not lend money itself.

The platform was fine-tuned based on the experience Zest had working with the search engine Baidu in China, where only 20 percent of the population has any known credit history. Studying 21 different factors, such as how people search and the way they traverse between Web pages, Zest discovered patterns in Baidu’s data that could be used to decide whether to make small loans to those customers for purchases like clothing. Among the things Zest evaluated was how well a person’s self-reported income matched up against their “modeled income”—what Zest calculates that person actually earned based on other behavior. Just as important as the size of the discrepancy between reported and modeled income, Merrill says, is when a borrower reports inflated income (that is, income higher than what the model implies they actually earn) and by how much they inflated it.
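The reported-versus-modeled income signal described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Zest's actual feature engineering; the function name, feature names, and the simple ratio are all assumptions.

```python
# Illustrative sketch of the income-discrepancy signal: compare what an
# applicant claims to earn against a model's estimate, and derive features
# capturing both the direction and the size of any inflation.

def income_discrepancy_features(reported_income: float, modeled_income: float) -> dict:
    """Derive simple features from the gap between self-reported income
    and model-estimated income. Names and form are hypothetical."""
    gap = reported_income - modeled_income
    # Relative inflation: positive when the applicant claims more income
    # than the model implies they actually make.
    inflation_ratio = gap / modeled_income if modeled_income > 0 else 0.0
    return {
        "income_gap": gap,
        "inflation_ratio": inflation_ratio,
        "inflated": inflation_ratio > 0,
    }
```

In a real system such features would be just a few of many thousands of inputs to the underwriting model, rather than a decision rule on their own.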

In two months, Baidu, which has a small lending business, was approving 150 percent more borrowers with no increased losses on their loans, and the company has made hundreds of thousands of loans since, Merrill says.

Andrew Ng, Baidu’s chief scientist, credits Zest’s technology with helping his company accelerate its entry into consumer financial services by improving the "predictiveness" of its credit models using data from borrowers' online search behavior, mobile wallets, and other sources. With Zest, Baidu found that borrowers who engage in risky behavior online—like gambling, or visiting risky websites such as those that sell illicit goods or market thrill-seeking events—have a higher statistical likelihood of defaulting on a loan.

“While perhaps 'obvious' in hindsight, cues like these can have a significant effect on underwriting performance,” Ng wrote by e-mail.

Some data is out of bounds. Zest does not use social media data in its analysis, something Merrill has called “creepy,” and which the company says is not very useful in these kinds of analyses.

Zest has worked with two credit card issuers and a car lender as well. Among credit card holders, one important signal turned out to be calls to the help desk, something the lender was not connecting to creditworthiness prior to Zest’s work. As it turns out, someone who calls in to extend a payment period for a balance, though delaying a payment, is likely to actually be a reliable customer. “Intuition is sometimes wrong,” says Merrill.

One protection against bias, according to the company, is the fact that for each borrower, the system assesses 100,000 different data points, and no one point plays a determining role. To test for bias, Zest relies again on machine learning, which the system uses to test its own results. It applies an algorithm the Consumer Financial Protection Bureau uses to check for discrimination, and also does other testing to find any unexpected correlations with factors that lenders are prohibited from considering. 
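One simple form of the correlation testing described above is to check whether model scores track a protected attribute. The sketch below computes a point-biserial correlation between continuous scores and a binary group flag; it is an illustrative stand-in, not the CFPB's actual methodology or Zest's implementation.

```python
# Illustrative fairness check: measure how strongly model scores correlate
# with membership in a group that lenders are prohibited from considering.
# A correlation near zero suggests scores don't track that attribute.

from statistics import mean

def point_biserial(scores: list, group_flags: list) -> float:
    """Point-biserial correlation between continuous scores and a
    binary group indicator (True = in group)."""
    n = len(scores)
    in_group = [s for s, g in zip(scores, group_flags) if g]
    out_group = [s for s, g in zip(scores, group_flags) if not g]
    if not in_group or not out_group:
        return 0.0  # only one group present; correlation undefined
    p = len(in_group) / n
    # Population standard deviation of the scores.
    sd = (sum((s - mean(scores)) ** 2 for s in scores) / n) ** 0.5
    if sd == 0:
        return 0.0  # constant scores carry no group signal
    return (mean(in_group) - mean(out_group)) * ((p * (1 - p)) ** 0.5) / sd
```

A pipeline like the one the article describes would run checks of this kind against every prohibited factor, flagging any feature or score that shows an unexpectedly strong association.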

Baidu's Ng endorsed Zest’s technology for its ability to explain what he called "black-box machine learning underwriting models” and focus on detecting and correcting both explicit and hidden biases.

Explaining credit decisions to borrowers and regulators will be pivotal, says Chi Chi Wu, an attorney at the NCLC, especially explaining whether the data patterns being relied on are really predictive, and not just correlated. “Alternative data is not the be-all and end-all,” she says.  
