
Police Will Soon Be Watched by Algorithms That Try to Predict Misconduct. Is That a Good Thing?

Cops will soon toil under the eye of algorithms that try to predict their actions—but making them accurate enough will be difficult.
March 9, 2016

Police in Charlotte, North Carolina, are set to become guinea pigs for a new high-tech approach to improving relations between cops and citizens. The Charlotte-Mecklenburg police department is working with University of Chicago researchers to create software that tries to predict when an officer is likely to have a bad interaction with someone. The claim is that it will be able to forewarn against everything from impolite traffic stops to fatal shootings.

FiveThirtyEight’s look at the program points out that previous efforts to use algorithms to nudge police to do their jobs better haven’t worked out. Chicago’s police department gave up on a system introduced in the 1990s after resistance from cops who didn’t like working under its algorithmic eye.

Beyond concerns about trust and retaliation, that earlier system also suffered from poor accuracy. Predictive algorithms have improved in the years since, but accuracy will still be a major challenge for the Charlotte effort.

Training software to make accurate predictions requires a lot of data. Computing companies such as Google and Facebook have data points by the billion lying around, and large data sets have been a crucial part of recent advances in artificial intelligence.

But the data needed to create software that can accurately guess at an individual cop’s future actions is surely much scarcer. And accuracy is very important here. If Amazon recommends two movies and you only like one of them, you probably won’t feel slighted and misunderstood. If an algorithm starts offering tips on how to do your job—and your job involves navigating potentially life-threatening scenarios—it had better have good advice.
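To make the data-scarcity point concrete, here is a minimal, purely hypothetical sketch in Python. It is not the University of Chicago system; the prediction task, the features, and the sample sizes are all invented for illustration. It simply trains a toy classifier on synthetic data and shows how test accuracy tends to degrade as the number of available training records shrinks, which is the core difficulty facing any per-officer prediction tool.

# Illustrative sketch only: a toy binary classifier on synthetic data,
# showing how accuracy suffers when training examples are scarce.
# The task, features, and sample sizes are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "adverse interaction" records: 20 made-up features per record.
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=5, random_state=0)
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Vary how much training data is available and compare held-out accuracy.
for n in (200, 2_000, 20_000):
    clf = LogisticRegression(max_iter=1000).fit(X_train_full[:n], y_train_full[:n])
    print(f"trained on {n:>6} records -> test accuracy {clf.score(X_test, y_test):.3f}")

On a run like this, the model trained on a few hundred records is noticeably less reliable than the one trained on tens of thousands, and real policing data would be messier and rarer than this synthetic set.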

(Read more: FiveThirtyEight)
