
Algorithms are making American inequality worse

In a new book, political scientist Virginia Eubanks says using computers to decide who gets social services hurts the poor.
January 26, 2018
Sadaf Rassoul Cameron

William Gibson wrote that the future is here, just not evenly distributed. The phrase is usually invoked to point out that the rich have more access to technology. But what happens when the poor are disproportionately subjected to it?

In Automating Inequality, author Virginia Eubanks argues that the poor are the testing ground for new technology that increases inequality. The book, out this week, starts with a history of American poorhouses, which dotted the landscape starting in the 1660s and persisted into the 20th century. From there, Eubanks traces how the poor have been treated over the past hundred years before arriving at today's system of social services, which increasingly relies on algorithms.

Eubanks leaves no doubt about her position on whether such automation is a good thing. Her thesis is that the punitive, moralistic view of poverty that built the poorhouses never left us; it has simply been wrapped into today's automated and predictive decision-making tools. These algorithms can make it harder for people to get services while forcing them through an invasive process of personal data collection. As examples, she profiles three programs: a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.

Eubanks spoke to MIT Technology Review about when social services first became automated, her own experience with predictive algorithms, and why these flawed tools give her hope: they put inequality into such stark relief, she argues, that we will finally have to address how we treat our poor.

What are the parallels between the poorhouses of the past and what you call today’s digital poorhouses?

These high-tech tools we're seeing, what I call "the regime of data analytics," are actually more evolution than revolution. They fit pretty well within the history of poverty policy in the United States.

When I originally started this work, I thought the moment that we’d see these digital tools really arrive in public assistance and public services might be in the 1980s, when there was a widespread uptake of personal computers, or in the 1990s when welfare reform passed. But in fact, they arose in the late 1960s and early 1970s, just as a national welfare rights movement was opening up access to public assistance.

At the same time, there was a backlash against the civil rights movement going on, and a recession. So elected officials, bureaucrats, and administrators were in this position where the middle-class public was pushing back against the expansion of public assistance, but they could no longer use their go-to strategy of excluding people from the rolls for largely discriminatory reasons. That's the moment we see these technologies arrive. What you see is an incredibly rapid decline in the welfare rolls right after the tools are integrated into the systems, and that collapse has continued basically until today.

Some of the algorithms we have right now will eventually be replaced by machine-learning tools. In your research, did you come across any issues that will arise once there is more AI in these systems?

I don’t know that I have a direct response to that. But one thing I will say is that the Pittsburgh child services system often gets written about as if it’s AI or machine learning, when in fact it’s just a simple statistical regression model.

I do think it’s really interesting, the way we tend to "math-wash" these systems, our tendency to think they're more complicated and harder to understand than they actually are. I suspect there's a little bit of technological hocus-pocus that happens when these systems come online, and people often feel they don't understand them well enough to comment on them. But it’s just not true. Far more people than are currently talking about these issues are able to understand them, should feel confident weighing in, and deserve to be at the table when we talk about them.
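Her point about regression models is easy to make concrete. A risk score from a simple statistical regression is just a weighted sum of a few inputs passed through a squashing function. The Python sketch below is an editorial illustration only; the feature names, weights, and bias are invented for the example and do not come from the actual Pittsburgh model.

```python
# A minimal sketch of a regression-based risk score, with invented inputs.
# This is NOT the Pittsburgh model; the features and weights are hypothetical.
import math

def risk_score(features, weights, bias):
    """Logistic regression: a weighted sum of inputs, squashed to the 0-1 range."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical case features (counts and 0/1 flags) and hand-picked weights.
weights = {"prior_referrals": 0.8, "parent_age_under_21": 0.5, "on_public_assistance": 0.3}
features = {"prior_referrals": 2, "parent_age_under_21": 1, "on_public_assistance": 1}

print(f"risk: {risk_score(features, weights, bias=-2.0):.2f}")  # prints "risk: 0.60"
```

Seen this way, the score is a handful of multiplications and an addition, which is what makes the "math-washing" Eubanks describes so unnecessary.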

You have a great quote from a woman on food stamps who tells you her caseworker looks at her purchase history. You appear surprised, so she says, "You should pay attention to what happens to us. You're next." Do you have examples of technologies that the general population deals with that work like this?

I start the book by talking about a case where my partner was attacked and very badly beaten. After he had major surgery, I went to the pharmacy to pick up his pain meds and was told that we no longer had health insurance. In a panic, I called my insurance company, and they told me basically that we were missing a start date for our coverage.

I said, “You know, well, that’s odd, because you paid claims that we made a couple of weeks ago, so we must have had a start date at that point.” And they said, “Oh, it must have just been a technical error. Somebody must have accidentally erased your start date or something.”

I was really suspicious that what was actually going on was that they had suspended our coverage while they investigated us for fraud. (I had been working on these kinds of fraud-detection tools for a long time by then.) And we had some of the most common indicators of insurance fraud: we had had our insurance for only a couple of days before the attack, we are not married, and he had received controlled substances to help him manage his pain.

I will never know whether we were being investigated, but either way, they were telling us that we owed upward of $60,000 in medical bills for claims that had been denied because we weren’t covered when they went through. It caused extraordinary stress.

So these systems are actually already at work, sort of invisibly, in many of the services we interact with on a day-to-day basis, whether we are poor, working class, professional middle class, or economic elites. But they don’t affect us all equally. My partner and I were able to endure that experience because we had the resources to get through it, and also because it only happened to us once. It wasn’t coming from every direction. It wasn’t an overwhelming force where we were hearing from child protective services, and also Medicaid, and also food stamps, and also the police.

I think it can be a lot harder for folks who are dealing with many of these systems at the same time.

Is there anything good happening because of these tools?

One of the reasons I’m optimistic is that these systems are also really incredible diagnostics. They make the inequities in our country concrete and evident. Wherever one of these systems goes spiraling out of control, that’s a place where we have a deep inequality that needs to be addressed. And so I believe that the combination of the movement work that’s already happening and increased attention to systems like these can create incredible pressure for a more just social system overall.
