
London police’s face recognition system gets it wrong 81% of the time

A man puts up a poster describing London's Metropolitan Police's face recognition system trial. Associated Press

The first independent evaluation of the Metropolitan Police's use of face recognition warned it is "highly possible" the system would be ruled unlawful if challenged in court.

The news: London's police force has run 10 trials of face recognition since 2016, using Japanese company NEC's Neoface technology. It commissioned academics from the University of Essex to independently assess the scheme, and they concluded that 81% of its matches were inaccurate: of 42 people the system flagged for police, only eight were confirmed to actually be on a wanted list, Sky News reports.

Police pushback: The Met Police insists its technology makes an error only once in every 1,000 instances, but it hasn't shared the methodology behind that figure.
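The two figures can coexist if they use different denominators: the Essex researchers scored errors against the matches the system actually flagged, while a per-face error rate divides the same mistakes by everyone whose face was scanned. The sketch below illustrates the arithmetic; the 42 flags and eight confirmed matches come from the trial data, but the total number of faces scanned is a hypothetical placeholder, since the Met has not published its methodology.

```python
# Two ways of scoring the same face recognition trials.
flagged_matches = 42      # people the system flagged as being on a watch list (from the trials)
confirmed_matches = 8     # flags later confirmed to be correct (from the trials)

# Researchers' metric: share of flags that turned out to be wrong.
false_flag_rate = (flagged_matches - confirmed_matches) / flagged_matches
print(f"Share of incorrect flags: {false_flag_rate:.0%}")        # ~81%

# Per-face metric (assumed, Met-style): the same mistakes divided by every face processed.
faces_scanned = 42_000    # hypothetical figure, for illustration only
per_face_error_rate = (flagged_matches - confirmed_matches) / faces_scanned
print(f"Errors per face scanned: {per_face_error_rate:.2%}")     # ~0.08%, i.e. roughly 1 in 1,000
```

With a large enough pool of scanned faces, the same 34 mistakes shrink to a rate in the "one in 1,000" range, which is why the two sides can quote such different numbers without either being arithmetically wrong.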

Rising fears: As face recognition becomes more ubiquitous, there's growing concern about the gender and racial bias embedded in many systems. With that (and other concerns) in mind, San Francisco banned its use by public agencies last month. That does nothing to stop the technology from spreading in the private sector, but it at least means it can't be wielded by authorities with the power to arrest you.

