Tech policy

AI is sending people to jail—and getting it wrong

Using historical data to train risk assessment tools could mean that machines are copying the mistakes of the past.
January 21, 2019

AI might not seem to have a huge personal impact if your most frequent brush with machine-learning algorithms is through Facebook’s news feed or Google’s search rankings. But at the Data for Black Lives conference last weekend, technologists, legal experts, and community activists snapped things into perspective with a discussion of America’s criminal justice system. There, an algorithm can determine the trajectory of your life.

The US imprisons more people than any other country in the world. At the end of 2016, nearly 2.2 million adults were being held in prisons or jails, and an additional 4.5 million were in other correctional facilities. Put another way, 1 in 38 adult Americans was under some form of correctional supervision. The nightmarishness of this situation is one of the few issues that unite politicians on both sides of the aisle.

Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the US have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible. This is where the AI part of our story begins.

Police departments use predictive algorithms to strategize about where to send their ranks. Law enforcement agencies use face recognition systems to help identify suspects. These practices have garnered well-deserved scrutiny over whether they actually improve safety or simply perpetuate existing inequities. Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals—even mistaking members of Congress for convicted criminals.

But the most controversial tool by far comes after police have made an arrest. Say hello to criminal risk assessment algorithms.

Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will reoffend. A judge then factors that score into a myriad of decisions that can determine what type of rehabilitation services particular defendants should receive, whether they should be held in jail before trial, and how severe their sentences should be. A low score paves the way for a kinder fate. A high score does precisely the opposite.
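To make that input and output concrete, here is a minimal Python sketch. Every field name, weight, and cutoff below is invented for illustration; the real tools keep their features and weights proprietary, so none of this reflects how any actual product computes its scores.

```python
# A toy sketch of the input and output described above. Every field name,
# weight, and cutoff is invented for illustration; real tools keep their
# features and weights proprietary.
from dataclasses import dataclass

@dataclass
class DefendantProfile:
    age: int
    prior_arrests: int
    employed: bool

def recidivism_score(p: DefendantProfile) -> float:
    """Toy weighted checklist returning a single score between 0 and 1."""
    score = 0.1
    score += 0.05 * min(p.prior_arrests, 10)   # each prior arrest raises the score
    score += 0.20 if p.age < 25 else 0.0       # youth counted as a risk factor
    score -= 0.10 if p.employed else 0.0       # employment counted as protective
    return max(0.0, min(1.0, score))

def risk_band(score: float) -> str:
    """Judges typically see the number translated into a band."""
    return "high" if score >= 0.6 else "medium" if score >= 0.3 else "low"

profile = DefendantProfile(age=22, prior_arrests=4, employed=False)
s = recidivism_score(profile)
print(s, risk_band(s))   # 0.5 medium for this invented profile
```

The point is the shape, not the arithmetic: a handful of profile fields go in, a single number comes out, and that number travels into bail, sentencing, and rehabilitation decisions.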

The logic for using such algorithmic tools is that if you can accurately predict criminal behavior, you can allocate resources accordingly, whether for rehabilitation or for prison sentences. In theory, it also reduces any bias influencing the process, because judges are making decisions on the basis of data-driven recommendations and not their gut.

You may have already spotted the problem. Modern-day risk assessment tools are often driven by algorithms trained on historical crime data.

As we’ve covered before, machine-learning algorithms use statistics to find patterns in data. So if you feed an algorithm historical crime data, it will pick out the patterns associated with crime. But those patterns are statistical correlations—nowhere near the same thing as causation. If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime. But this is precisely what risk assessment tools do: they turn correlative insights into causal scoring mechanisms.
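Here is a minimal sketch of that pattern-finding step, using synthetic data and a single invented feature; nothing below reflects any real tool or dataset. A classifier trained on records in which low-income defendants were re-arrested more often simply reproduces that correlation as a higher score, with no notion of whether income caused anything.

```python
# A minimal sketch, using synthetic data and one invented feature, of how a
# correlation in historical records becomes a scoring rule. Nothing here
# reflects any real tool's features or weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: a single 0/1 flag for low income.
low_income = rng.integers(0, 2, size=n)

# Suppose re-arrest was recorded twice as often for low-income defendants,
# for instance because their neighborhoods were policed more heavily,
# regardless of any difference in underlying behavior.
rearrested = rng.random(n) < np.where(low_income == 1, 0.40, 0.20)

# Train a classifier on those records, as a risk assessment tool would.
model = LogisticRegression().fit(low_income.reshape(-1, 1), rearrested)

# The model converts the correlation into a higher score for any new
# low-income defendant, with no notion of cause.
print(model.predict_proba([[1]])[0, 1])  # roughly 0.40
print(model.predict_proba([[0]])[0, 1])  # roughly 0.20
```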

Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle. Because most risk assessment algorithms are proprietary, it’s also impossible to interrogate their decisions or hold them accountable.
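A toy simulation makes that cycle visible. All of the numbers below are invented: both groups offend at the same underlying rate, but one starts out over-represented in arrest records because it was policed more heavily, so it gets the higher score, attracts the extra enforcement, and generates ever more records.

```python
# A toy simulation of the feedback loop; every number is invented. Both groups
# offend at the same true rate, but group A starts with more recorded arrests
# because it was policed more heavily in the past.
import numpy as np

true_rate = 0.2                        # identical underlying behavior in both groups
population = np.array([100.0, 100.0])
recorded = np.array([30.0, 20.0])      # group A over-represented in the records

for year in range(1, 6):
    scores = recorded / recorded.sum()               # "risk" as learned from the records
    extra_patrols = scores > 0.5                     # extra enforcement follows the scores
    detection = np.where(extra_patrols, 0.9, 0.3)    # more police means more offenses recorded
    recorded += true_rate * population * detection   # the new, even more skewed training data
    print(year, scores.round(2))                     # group A's share climbs year after year
```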

The debate over these tools is still raging. Last July, more than 100 civil rights and community-based organizations, including the ACLU and the NAACP, signed a statement urging against the use of risk assessment tools. At the same time, more and more jurisdictions and states, including California, have turned to them in a Hail Mary effort to fix their overburdened jails and prisons.

Data-driven risk assessment is a way to sanitize and legitimize oppressive systems, Marbre Stahly-Butts, executive director of Law for Black Lives, said onstage at the conference, which was hosted at the MIT Media Lab. It is a way to draw attention away from the actual problems affecting low-income and minority communities, like defunded schools and inadequate access to health care.

“We are not risks,” she said. “We are needs.”

This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, subscribe here for free.

