Artificial intelligence

Apple Card is being investigated over claims it gives women lower credit limits

November 11, 2019
Apple Card (AP)

The algorithm that determines the credit limit for users of Apple’s new credit card, which launched in the US in August, is facing an investigation because it appears to give men higher limits than women.

The news: On November 7, web entrepreneur David Heinemeier Hansson posted a now-viral tweet saying that the Apple Card had given him 20 times the credit limit of his wife. This was despite the fact that they filed joint tax returns and that, upon investigation, his wife had a better credit score than he did. Apple cofounder Steve Wozniak replied to the tweet and said that he, too, had been granted 10 times the credit limit of his wife, even though they have no separate assets or bank accounts.

Upshot: Now New York’s Department of Financial Services is launching an investigation into Goldman Sachs, which manages the card. Its superintendent, Linda Lacewell, said in a blog post that the watchdog would “examine whether the algorithm used to make these credit limit decisions violates state laws that prohibit discrimination on the basis of sex.” The regulator also recently opened a separate investigation into reports that an algorithm caused black patients to receive less comprehensive care than white patients.

Wider problem: Goldman Sachs posted a statement on Twitter over the weekend saying that gender is not taken into account when determining creditworthiness. But the unexplained disparity in the card’s credit limits is yet another example of how algorithmic bias can arise unintentionally. Algorithms of the sort used to assess creditworthiness are trained on years of historical data, and bias can slip into the process in several ways.
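One common route is a proxy variable: a model that is never shown gender can still produce gendered outcomes if some feature in its training data correlates with gender. The toy sketch below illustrates the mechanism with entirely synthetic data; the feature names, values, and the trivial “model” are invented for illustration and have nothing to do with Goldman Sachs’s actual system.

```python
# Hypothetical illustration of proxy bias: the training data has no gender
# column, but a correlated feature (here, a synthetic "credit history length"
# bucket that historically skews by gender, e.g. because household accounts
# were opened in one spouse's name) carries the bias into the model anyway.

def fit_limit_rule(records):
    """Fit a trivial one-feature model: the average limit per proxy bucket."""
    buckets = {}
    for r in records:
        buckets.setdefault(r["proxy"], []).append(r["limit"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def predict_limit(rule, proxy):
    """Assign a new applicant the historical average for their bucket."""
    return rule[proxy]

# Synthetic historical decisions -- gender never appears as a field.
history = (
    [{"proxy": "long", "limit": 20_000}] * 8
    + [{"proxy": "short", "limit": 2_000}] * 8
)

rule = fit_limit_rule(history)

# Two applicants with identical finances but different proxy values
# receive very different limits -- no gender field required.
limit_a = predict_limit(rule, "long")   # 20000.0
limit_b = predict_limit(rule, "short")  # 2000.0
```

The point of the sketch is that “we don’t use gender as an input” does not by itself rule out discriminatory output: the historical data can smuggle the pattern in through whatever features correlate with it.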

