
Police across the US are training crime-predicting AIs on falsified data

A new report shows how supposedly objective systems can perpetuate corrupt policing practices.
February 13, 2019

In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.

It used excessive force, and disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were “serious, wide-ranging, systemic and deeply rooted within the culture of the department.”

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.

Predictive policing algorithms are becoming common practice in cities across the US. Though lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions. 

But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions they studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study. “If the data itself is incorrect, it will cause more police resources to be focused on the same over-surveilled and often racially targeted communities. So what you’ve done is actually a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality.”
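To make the feedback loop Crawford describes concrete, consider a minimal sketch, not any vendor’s actual algorithm: two hypothetical neighborhoods with identical true crime rates, where one starts with inflated historical incident counts because it was over-policed. A system that allocates patrols in proportion to recorded incidents keeps sending officers back to the skewed neighborhood, which generates more recorded incidents there, preserving the original distortion. All names and numbers below are invented for illustration.

    import random

    random.seed(0)

    # Hypothetical true crime rate is the same in both neighborhoods...
    true_rate = {"A": 0.10, "B": 0.10}
    # ...but historical over-policing of A means far more incidents were
    # recorded there ("dirty data"), while B's were under-recorded.
    recorded = {"A": 120, "B": 40}

    def allocate_patrols(counts, total_patrols=100):
        """Allocate patrols proportionally to recorded incident counts."""
        total = sum(counts.values())
        return {hood: round(total_patrols * c / total) for hood, c in counts.items()}

    for step in range(5):
        patrols = allocate_patrols(recorded)
        # New recorded incidents scale with patrol presence, not with the
        # true rate: more officers in a neighborhood means more incidents
        # are observed and logged there.
        for hood, n_patrols in patrols.items():
            observed = sum(random.random() < true_rate[hood] for _ in range(n_patrols))
            recorded[hood] += observed
        print(f"step {step}: patrols={patrols} recorded={recorded}")

Running the sketch, neighborhood A keeps receiving roughly three times as many patrols as B at every step, even though the underlying rates are equal: the historical skew in the data becomes a self-confirming prediction.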

The researchers examined 13 jurisdictions, focusing on those that have used predictive policing systems and been subject to a government-commissioned investigation. The latter requirement ensured that the policing practices had legally verifiable documentation. In nine of the jurisdictions, they found strong evidence that the systems had been trained on “dirty data.”

The problem wasn’t just data skewed by disproportionate targeting of minorities, as in New Orleans. In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates. In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints. Some police officers even planted drugs on innocent people to meet their quotas for arrests. In modern-day predictive policing systems, which rely on machine learning to forecast crime, those corrupted data points become legitimate predictors.

The paper’s findings call the validity of predictive policing systems into question. Vendors of such software often argue that the biased outcomes of their tools are easily fixable, says Rashida Richardson, the director of policy research at AI Now and lead author on the study. “But in all of these instances, there is some type of systemic problem that is reflected in the data,” she says. The remedy, therefore, would require far more than simply removing one or two instances of bad behavior. It’s not so easy to “segregate out good data from bad data or good cops from bad cops,” adds Jason Schultz, the institute’s research lead for law and policy, another author on the study. 

Vendors also argue that they avoid data more likely to reflect biases, such as drug-related arrests, and opt instead for training inputs like 911 calls. But the researchers found just as much bias in the supposedly more neutral data. Furthermore, they found that vendors never independently audit the data fed into their systems.

The paper sheds light on another debate raging in the US over the use of criminal risk assessment tools, which also use machine learning to help determine anything from defendants’ fate during pretrial proceedings to the severity of their sentences. “The data we discuss in this paper is not just isolated to policing,” says Richardson. “It’s used throughout the criminal justice system.”

Currently, much of the debate has focused on the mechanics of the system itself—whether it can be designed to produce mathematically fair results. But the researchers emphasize that this is the wrong question. “To separate out the algorithm question from the social system it’s connected to and embedded within doesn’t get you very far,” says Schultz. “We really have to acknowledge the limits of those kinds of mathematical, calculation-based attempts to address bias.”

Moving forward, the researchers hope their work will help reframe the debate to focus on the broader system rather than the tool itself. They also hope it will prompt governments to create mechanisms, like the algorithmic impact assessment framework the institute released last year, to bring more transparency, accountability, and oversight to the use of automated decision-making tools.

If the social and political mechanisms that generate dirty data aren’t reformed, such tools will only do more harm than good, they say. Once people recognize that, then maybe the debate will finally shift to “ways we can use machine learning and other technological advances to actually stop the root cause of [crime],” says Richardson. “Maybe we can solve poverty and unemployment and housing issues using government data in a more beneficial way.”
