
Police are testing face recognition tech on London’s streets this week

December 18, 2018

London’s Metropolitan Police Service is conducting a two-day trial of face recognition this week around several locations in the city.

What’s happening: This is the force’s seventh public test, with three more set to take place before the end of 2018. The Met has yet to disclose details of the upcoming pilots but says it is considering football matches, music festivals, and transport hubs as settings.

Why? The police say the technology is being used to identify wanted criminals from a “watch list” database. According to the Met, the system retains only faces that match the list, keeps those for 30 days, and deletes all other data. The NeoFace technology it’s using is made by the Japanese IT multinational NEC.

Controversial: The tech has already become quietly pervasive in the US, but it’s still a relative novelty in the UK, and not a particularly welcome one in certain quarters. Privacy watchdog Big Brother Watch has filed a legal challenge against police use of face recognition, warning that it’s being used without legal backing or sufficient public knowledge.

False positives: It’s not clear whether the tech even works: 98% of the face recognition matches in a previous Met Police trial turned out to be incorrect. It’s even less accurate for people who aren’t white or male.

