Q&A

Society wants you to feel ashamed of yourself

Algorithm expert Cathy O'Neil has written a new book that shows how the tech world and society at large feed off the idea of shame.

Cathy O'Neil (photo: Christopher Churchill)

Working in finance at the beginning of the 2008 financial crisis, Cathy O’Neil got a firsthand look at how much people trusted algorithms—and how much destruction they were causing. Disheartened, she jumped to tech, where she found the same blind faith in everything from targeted advertising to risk-assessment models for mortgage-backed securities. So she left. “I didn’t think what we were doing was trustworthy,” she says.

The feeling of being “a co-conspirator, an unwitting tool in the industry” lit the fire under her to write Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Published in 2016, the book dismantled the idea that algorithms are objective, revealing instead—in example after example—how they can and do perpetuate inequality. 

Before her book came out, says O’Neil, “people didn’t really understand that the algorithms weren’t predicting but classifying … and that this wasn’t a math problem but a political problem. A trust problem.”

O’Neil showed how every algorithm is optimized for a particular notion of success and is trained on historical data to recognize patterns: e.g., “People like you were successful in the past, so it’s fair to guess you will be successful in the future.” Or “People like you were failures in the past, so it’s fair to guess you will be a failure in the future.”
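
To make that pattern-matching concrete, here is a minimal sketch under invented assumptions (the tiny dataset, the zip-code feature, and the hiring framing are illustrative, not an example from O'Neil's work): a model fit to historical outcomes simply favors whoever resembles past "successes," even if those past outcomes were themselves biased.

```python
# Hypothetical illustration: a classifier trained on historical outcomes
# reproduces whatever patterns those outcomes contain.
from sklearn.linear_model import LogisticRegression

# Each row is a person: [years_of_experience, lives_in_zip_A (1/0)].
# Labels are *historical* outcomes ("was hired"), which here already encode
# a bias toward zip code A rather than anything about ability.
X_history = [[5, 1], [6, 1], [4, 1], [5, 0], [6, 0], [4, 0]]
y_history = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_history, y_history)

# Two equally experienced applicants, differing only by zip code, get
# opposite predictions, because that is the pattern in the past data.
print(model.predict([[5, 1], [5, 0]]))  # -> [1 0]
```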


This might seem like a sensible approach. But O’Neil’s book revealed how it breaks down in notable, and damaging, ways. Algorithms designed to predict the chance of rearrest, for example, can unfairly burden people, typically people of color, who are poor, live in the wrong neighborhood, or have untreated mental-health problems or addictions. “We are not really ever defining success for the prison system,” O’Neil says. “We are simply predicting that we will continue to profile such people in the future because that’s what we’ve done in the past. It’s very sad and, unfortunately, speaks to the fact that we have a history of shifting responsibilities of society’s scourges to the victims of those scourges.”

Gradually, O’Neil came to recognize another factor that was reinforcing these inequities: shame. “Are we shaming someone for a behavior that they can actually choose not to do? You can’t actually choose not to be fat, though every diet company will claim otherwise. Can you choose not to be an addict? Much harder than you think. Have you been given the opportunity to explain yourself? We’ve been shaming people for things they have no choice or voice in.”

I spoke with O’Neil by phone and email about her new book, The Shame Machine: Who Profits in the New Age of Humiliation, which delves into the many ways shame is being weaponized in our culture and how we might fight back.

The trajectory from algorithms to shame isn’t immediately apparent. How did you connect these two strands?

I investigated the power behind weaponized algorithms. Often, it’s based on the idea that you aren’t enough of an expert to question this scientific, mathematical formula, which is a form of shaming. And it was even more obvious to me, I think, because, as a math PhD holder, I found it didn’t work on me at all and in fact baffled me.

The power of bad algorithms is a violation of trust, but it’s also shame: you do not know enough to ask questions. For example, when I interviewed a friend of mine, a principal whose teachers were being evaluated by the Value Added Model for Teachers in New York City, I asked her to get her hands on the formula her teachers were being targeted by. It took her many layers of requests, and each time she asked she was told, “It’s math—you won’t understand it.”

In The Shame Machine, you argue that shame is a massive structural problem in society. Can you expand on that?

Shame is a potent mechanism to turn a systemic injustice against the targets of the injustice. Someone might say, “This is your fault” (for poor people or people with addictions), or “This is beyond you” (for algorithms), and that label of unworthiness often is sufficient to get the people targeted with that shame to stop asking questions. As just one example, I talked to Duane Townes, who was put into a reentry program from prison that was essentially a no-end, below-poverty-level manual-labor job done under the eye of armed men who would call his parole officer if he complained or took a bathroom break for longer than five minutes. It was humiliating, and he felt that he was treated as less than a man. This was by intentional design of the program, though, and was meant to train people to be “good workers.”

It’s tantamount to a taser to one’s sense of self. It causes momentary helplessness and the inability to defend one’s rights. 

Did covid-19 exacerbate the issues you highlight in your new book?

Well, it introduced more fast-changing norms, around masking, distancing, and vaccinations, so in that sense the shaming became pervasive. It was also obvious that the tribes that manifested on social media and inside politics took on these norms very differently, which caused huge shame wars online and in person. The way shame works is to move people who somewhat disagree further away from each other. In other words, shame backfires when there is no community trust. The more each side lobbed outrage and shame at the other, the further apart people grew.

In 2021, California became the first state to offer free lunch to all students, not just the economically disadvantaged, which has really helped to remove a long-held stigma. What are some other ways we design systems to be less about shame? Are there ways we can harness shame for social reform?

That’s a great example! Another one that I suggest is to make it a lot easier to qualify for welfare [or] have a universal basic income, and to relieve student debt burdens. The systematic shaming of poor people in this country has meant there’s little solidarity among poor people. That’s almost entirely due to successful shaming campaigns. Poor people would advocate for debt relief and UBI themselves if we didn’t have such a successful shame machine at work.

The chapter on “networked shame” explores how the algorithms of Facebook, Google, and others are continually optimized to spur conflict among us. How does this benefit them? What can be done to counteract it?

It’s their bread and butter! If we didn’t get outraged and spun out on defending our sense of worthiness and getting the likes and retweets based on performative and often destructive shaming, they’d make way less money. I want us to start seeing the manipulation by the big tech companies as a bid for us to work for them for free. We shouldn’t do it. We should aim higher, and that means at them.

At an individual level, that means we refuse to punch down on social media if possible, or even boycott platforms that encourage that. At a systematic level, we insist that the designs of the platforms, including the algorithms, be audited and monitored for toxicity. That’s not a straightforward suggestion, but we know that, for example, Facebook tried doing this [in 2018] and found it to be possible but less profitable, so they rejected it. 

After Weapons was published, you started ORCAA, an algorithmic auditing company. What does the company’s work entail?

Algorithmic auditing, at least at my company, is where we ask the question “For whom does this algorithmic system fail?” That could be older applicants in the context of a hiring algorithm, or obese folks when it comes to life insurance policies, or Black borrowers in the context of student loans. We have to define the outcomes that we’re concerned about, the stakeholders that might be harmed, and the notion of what it means to be fair. [We also need to define] the thresholds that determine when an algorithm has crossed the line.
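
As a rough illustration of that framing, here is a minimal sketch under invented assumptions (the group names, figures, and threshold below are made up and are not ORCAA's actual methodology): an audit can boil down to measuring an outcome of interest per stakeholder group and flagging any group whose rate falls too far below a reference group, given an agreed threshold.

```python
# Hypothetical audit sketch: compare a hiring algorithm's selection rate
# across applicant groups and flag disparities beyond a chosen threshold.
# All figures are invented for illustration.

selections = {            # group -> (number selected, number of applicants)
    "under_40": (120, 400),
    "over_40": (45, 300),
}
DISPARITY_THRESHOLD = 0.8  # e.g., the "four-fifths rule" used in US hiring law

rates = {group: sel / total for group, (sel, total) in selections.items()}
reference = max(rates.values())  # best-treated group as the baseline

for group, rate in rates.items():
    ratio = rate / reference
    status = "FLAG" if ratio < DISPARITY_THRESHOLD else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

In this toy run the over-40 group is selected at half the rate of the under-40 group, so it falls below the threshold and gets flagged; the real work, as O'Neil notes, is choosing which outcomes, groups, and thresholds matter in the first place.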

So can there ever be a “good” algorithm?

It depends on the context. For hiring, I’m optimistic, but if we don’t do a good job defining the outcomes of interest, the stakeholders who might be harmed, and—most crucially—the notion of fairness as well as the thresholds, then we could end up with really meaningless and gameable rules that produce very problematic algorithmic hiring systems. In the context of, say, the justice system, the messiness of crime data is just too big a problem to overcome—not to mention the complete lack of agreement on what constitutes a “successful” prison stay.

This interview has been edited for length and clarity.
