
Algorithms might be better than us at predicting whether people will reoffend

February 17, 2020

The news: A new study has found that risk-assessment algorithms are sometimes better than people at predicting whether a criminal will be rearrested within two years of leaving jail. But neither is very good.

The research: Researchers at Stanford University and the University of California, Berkeley, tried to recreate a 2018 experiment that found untrained humans were just as accurate as a widely used risk-assessment program called COMPAS. Risk-assessment algorithms are trained on historical defendant data and are meant to help judges decide whether a defendant should be kept in jail or released while awaiting trial. (We went into some of the problems with them in this interactive piece last year.)
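To make the idea concrete, here is a minimal, purely illustrative sketch of how such a tool might be trained: a logistic-regression classifier fitted on historical defendant records. The feature names and data below are hypothetical; COMPAS's actual inputs and model are proprietary, and nothing here reflects its internals.

    # Toy risk-assessment model trained on synthetic "historical" records.
    # Illustrative only: features, labels, and model choice are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 7000  # roughly the size of the dataset used in the studies

    # Hypothetical features: age, number of prior arrests, age at first arrest
    X = np.column_stack([
        rng.integers(18, 70, n),   # age
        rng.poisson(2, n),         # prior arrests
        rng.integers(14, 40, n),   # age at first arrest
    ])
    # Synthetic label: rearrested within two years (1) or not (0)
    y = rng.integers(0, 2, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # The "risk score" a judge might see is the predicted probability
    risk_scores = model.predict_proba(X_test)[:, 1]
    print(f"Mean predicted risk: {risk_scores.mean():.2f}")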

Test the humans: The team took a dataset of COMPAS risk assessments covering about 7,000 real defendants and used it to create a profile for each one. The profiles were shown to 400 laypeople recruited through Amazon Mechanical Turk, who were asked to predict whether each person would commit another crime. The 2018 study found COMPAS was accurate about 65% of the time, while the humans were closer to 67%. The new study closely replicated those results.
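"Accuracy" here is simply the share of predictions that match the recorded outcome. A minimal sketch of that head-to-head comparison, with made-up prediction arrays standing in for the real COMPAS scores and crowdworker answers:

    # Illustrative only: synthetic stand-ins for real predictions and outcomes.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000

    # 1 = rearrested within two years, 0 = not (synthetic ground truth)
    actual = rng.integers(0, 2, n)
    # Hypothetical predictions, constructed to match the reported hit rates:
    # correct with probability 0.65 (algorithm) or 0.67 (humans)
    compas_pred = np.where(rng.random(n) < 0.65, actual, 1 - actual)
    human_pred = np.where(rng.random(n) < 0.67, actual, 1 - actual)

    def accuracy(pred, truth):
        """Fraction of predictions that match the recorded outcome."""
        return (pred == truth).mean()

    print(f"Algorithm accuracy: {accuracy(compas_pred, actual):.0%}")
    print(f"Human accuracy:     {accuracy(human_pred, actual):.0%}")

With real outcome labels and predictions in place of the synthetic arrays, the same comparison yields the kind of numbers the studies report.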

New tweaks: This time, though, the team changed and extended the original experiment. For example, it tested whether revealing more information about defendants, giving or withholding feedback after each round, or looking only at violent crimes made a difference. The results showed that when the humans got no feedback on their prediction accuracy, or when they were given lots of extra information about each defendant, the algorithm was more accurate every time. The authors note that in real life humans rarely get immediate feedback on their decisions about defendants, so this may be the more realistic comparison. The research is described in a paper in Science Advances.

Bias abounds: One thing seems clear: inaccuracy and bias creep into predictions whether they're made by humans or by algorithms. The difference is one of accountability. While people can appeal judges' decisions, it's much harder to contest decisions made by algorithms, which are gradually spreading into areas of official decision-making beyond justice, like housing and schooling.

The solution?: That's part of the reason there's so much work under way to make algorithms explainable, although explainability isn't necessarily the panacea people think it is, and it can actually make things worse by leading us to over-trust statistical models. Campaigners in Europe have managed to rein in "government by unaccountable algorithm" in a few high-profile cases, but in the US, which lacks comparable data privacy laws, it's even harder to push back against these sorts of programs.

Read next: Can you make AI fairer than a judge? Play our courtroom algorithm game.

