New York’s mass face recognition trial on drivers has been a spectacular failure

April 8, 2019

Face recognition proved totally unable to identify the faces of drivers in the city, according to the Wall Street Journal.

The trial: An internal e-mail from the Metropolitan Transportation Authority seen by the WSJ included the details of the trial at the Robert F. Kennedy Bridge last year. Cameras attached to the bridge were supposed to capture and identify the faces of drivers through their windshields as they passed, matching them against government databases.

The results: But the document, from November of last year, says that the “initial period for the proof of concept testing at the RFK for facial recognition has been completed and failed with no faces (0%) being detected within acceptable parameters.” That’s no faces accurately identified. Oops. Despite the failure, more cameras are going to be positioned on other bridges and tunnels, according to a spokesperson.

Controversy: Face recognition is rightly controversial. As well as its potential as a tool for mass surveillance, it has been shown to misidentify non-white faces and women. Last week a letter by prominent AI researchers called on Amazon to stop selling its face recognition software to law enforcement. Lee Rowland, policy director at the New York Civil Liberties Union, told the WSJ that face recognition coupled with the gathering of drivers’ license plate data “represents a sea change in our government’s ability to track us.”

Beyond the civil-liberties concerns, the failure of the New York trial so far suggests that the technology is not quite as game-ready as some of its more bullish advocates claim.


