There is a crisis of face recognition and policing in the US

The deeply flawed technology is in wide use, largely out of the public eye

When news broke that a mistaken match from a face recognition system had led Detroit police to arrest Robert Williams for a crime he didn’t commit, it was late June, and the country was already in upheaval over the death of George Floyd a month earlier. Soon after, it emerged that another Black man, Michael Oliver, had been arrested under circumstances similar to Williams’s. While much of the US continues to cry out for racial justice, a quieter conversation is taking shape about face recognition technology and the police. We would do well to listen.

When Jennifer Strong and I started reporting on the use of face recognition technology by police for our new podcast, “In Machines We Trust,” we knew these AI-powered systems were being adopted by cops all over the US and in other countries. But we had no idea how much was going on out of the public eye. 

For starters, we don’t know how often police departments in the US use face recognition, for the simple reason that in most jurisdictions they don’t have to report when they use it to identify a suspect in a crime. The most recent numbers, from 2016, are estimates, but they suggest that at the time at least half of Americans had photos in a face recognition system. One county in Florida ran 8,000 searches each month.

We also don’t know which police departments have face recognition technology, because it’s common for police to obscure the procurement process. There is evidence, for example, that many departments buy the technology using federal grants or nonprofit gifts, which are exempt from certain disclosure laws. In other cases, companies offer police trial periods that let officers use the software without any official approval or oversight. This allows the companies that make face recognition systems to claim their products are in wide use, giving the outward impression that they’re both popular and reliable crime-solving tools.

Protected algorithms that don’t deliver

But if facial recognition is known for anything, it’s how unreliable it is. As we report in the show, in January London’s Metropolitan Police debuted a live facial recognition system that in tests had an accuracy rate of less than 20%. In New York City, the Metropolitan Transportation Authority trialed a system on major thoroughfares with a reported accuracy rate of 0%. The systems are often racially biased as well: one study found that in some commercial systems, error rates in identifying darker-skinned women were around 35%, even in lab conditions. While reporting for the show, we also found that it’s not uncommon for police to alter photos to improve their chances of finding a match. Some officers even defended the practice as critical to good police work.

Two of the most controversial and advanced companies in the field, Clearview AI and NtechLab, claim to have solved the “bias problem” and reached near-perfect accuracy. Clearview AI asserts that its tool is used by around 600 police departments in the US (some experts we spoke to were skeptical of that figure). NtechLab, based in Russia, has signed on to provide live video facial recognition throughout the city of Moscow.

But there is almost no way to independently verify their claims. Both companies’ algorithms sit on databases of billions of public photos. The National Institute of Standards and Technology (NIST), meanwhile, offers one of the few independent audits of face recognition technology, through its Face Recognition Vendor Test. That test uses a much smaller dataset, which, along with the quality and diversity of its images, limits its power as an auditing tool. Clearview AI has not taken NIST’s most recent test. NtechLab has taken the static-image test and performed well, but there is currently no test for live video facial recognition, and no independent test specifically for bias.

Recognition in the streets

The recent wave of Black Lives Matter protests, sparked by Floyd’s death, has called into question much of what we’ve accepted about modern policing, including its use of technology. The dark irony is that when people take to the streets to protest racism in policing, some police have turned cutting-edge tools with a known racial bias on those assembled. We know, for example, that the Baltimore police department used face recognition on protesters after the death of Freddie Gray in 2015. And we know that a handful of departments have put out public calls for footage of this year’s protests. It’s been documented that police in Minneapolis have access to a range of such tools, including Clearview AI’s services. According to Jameson Spivack of the Center on Privacy and Technology at Georgetown University, whom we interview in the show, if face recognition is used on BLM protests, it’s “targeting and discouraging Black political speech specifically.”

After years of struggle for regulation by mostly Black- and brown-led organizations, there has never been a better moment for real change. Microsoft, Amazon, and IBM have all announced discontinuations of, or moratoriums on, their face recognition products. In the past several months, a handful of major US cities have announced bans or moratoriums on the technology. On the other hand, the technology is moving rapidly. The systems’ capabilities, along with their potential for misuse and abuse, will continue to grow by leaps and bounds. We’re already starting to see police departments and technology providers move beyond static, retrospective face recognition to live video analytics integrated with other data streams, such as audio gunshot-detection systems.

Some of the police officers we spoke to said they shouldn’t be left with archaic tools to fight crime in the 21st century. And it’s true that in some cases, technology can make policing less violent and less prone to human biases. 

But after months of reporting out our audio miniseries, I was left with a feeling of foreboding. The stakes are growing by the day, and so far the public has been left far behind in its understanding of what’s going on. It’s not clear how that will change unless people on all sides of this issue can agree that everyone has a right to be informed.
