
Who owns your face?

The debate about regulating facial recognition has reached a critical juncture in the US.

August 12, 2020

Police have a history of using facial recognition to arrest protestors—something not forgotten by activists since the death of George Floyd. In the last of a four-part series on facial recognition, host Jennifer Strong explores the way forward for the technology and examines what policy might look like. 

We meet:

  • Artem Kuharenko, NTechLab
  • Deborah Raji, AI Now Institute
  • Toussaint Morrison, Musician, actor, and Black Lives Matter organizer
  • Jameson Spivack, Center on Privacy & Technology 

Credits

This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We had help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.  

Full episode transcript

Toussaint Morrison: This place was eerily quiet the first weekend of the curfew. There were very quiet police cars driving very slowly around here as well.

Jennifer Strong: That’s Toussaint Morrison. He’s showing my producer, Tate Ryan-Mosley, around a park in Minneapolis, Minnesota, a two-and-a-half-mile drive from where George Floyd was killed on May 25th, sparking what’s likely the largest protest movement in American history.

Toussaint Morrison: About half a mile east was where an attempt was made to burn down the Fifth Precinct police station.

Jennifer Strong: Morrison is a musician, actor, filmmaker, and in the past few months he’s also become an organizer of the Black Lives Matter movement in Minneapolis.

Toussaint Morrison: Your anger is perfectly justified. Your skin is beautiful and you are not the criminal that they make you out to be.

Jennifer Strong: He’s well-known here. Activists know who he is. Government officials know who he is. And he thinks the police probably do too. 

Tate Ryan-Mosley: Have you had any conversations with people who are worried about being identified by police officers who might have photos of them at the protests being part of the protests?

Toussaint Morrison: There's definitely a fear. I learned about that a little bit too late. My government name and face are out there. So, I mean, yeah, that has definitely been a concern. And I've heard it from Black folks to white folks to everything-in-between folks.

Jennifer Strong: Although he says protesters are more concerned about their physical safety right now—such as being hit by a car. 

Toussaint Morrison: Identity outing has not been something I've heard as much as being fearful for one's life at an actual march.

Jennifer Strong: We know from an investigation by BuzzFeed that police here have access to lots of surveillance tech, including Clearview’s facial recognition software—which you’ll remember from episode two—plus they have stingrays that act as cell towers to grab mobile data, an audio surveillance system called ShotSpotter, and a camera system with video analytics layered on top.

Federal agents also flew a Predator drone over the protests. And Morrison believes these tools are targeted differently at communities of color.

Toussaint Morrison: Now they don't need a reason to be suspicious. Oh, I saw that your face was at this rally. And I see that it lines up with something else. Well, what if the technology is wrong? They're not going to think that it possibly could be, you know, because they're trusting the computer. So that technology is heightening a danger that's already dangerous. And it's already pointed at people of color, disabled people, trans folks. We're the brunt of that technology. So when you create that technology, who does it affect the most?

Jennifer Strong: I’m Jennifer Strong and this is part four of our series on police and facial recognition, where we’ll explore the way forward and what regulation might look like. 

Toussaint Morrison: Moving forward, I don't know what's going to happen. But what I do know is that there will be no going back, you know, there's not gonna be any going back.

Jameson Spivack: We don't know if police are using face recognition on the current wave of protests, but we do know two things. One, many of them have the ability to do so, and two, it's happened in the past.

Jennifer Strong: Jameson Spivack is a policy associate at the Center on Privacy & Technology...

Jameson Spivack: ...which is an independent think tank that's based at Georgetown Law School.

Jennifer Strong: Back in 2015 police in Baltimore used social media tracking on people protesting the death of Freddie Gray.  

Facial recognition helped police identify protestors with outstanding warrants, and they arrested them directly from the crowds.

Spivack is worried about what this means for free speech if it continues.

Jameson Spivack: This is really troubling because it discourages political speech and participation, which is protected by the First Amendment. So if people think they're being identified or arrested for a crime completely unrelated to the protests, they're not going to attend. So this is targeting and discouraging Black political speech specifically.

Jennifer Strong: And, more broadly...

Jameson Spivack: It shifts the balance of power significantly towards governments. It gives them the ability to identify and track many people… from a distance and in secret. And the government has never had the ability to surveil the public like this. This is essentially a workaround that allows police to run warrantless searches.

Jennifer Strong: This is how he sees regulation starting to take shape.

Jameson Spivack: One option, which has already been put into place, is to just ban police use of this technology overall. It's flawed... it facilitates unprecedented levels of government surveillance, and police have been shown to misuse it. Similarly, another option is to place a moratorium on police use of face recognition. What this does is give the public and elected officials time to get up to speed on what this technology is, how it works, and how police are using it. Then another option is to pass regulation that allows police to use it, but places certain restrictions on their ability to use it.

Jennifer Strong: Reporting this series, we’ve spoken to several people who believe reforming the use of face ID just isn’t possible. The ACLU says it should be nationally outlawed. And so, I’m curious what type of regulation he thinks would really make a difference.

Jameson Spivack: Things like requiring a probable cause-backed search warrant for any face recognition search, restricting its use to violent felonies, and prohibiting the use of face recognition for immigration enforcement. Narrow bans on the use of face recognition in conjunction with things like drones or police-worn body cameras, or for ongoing surveillance, because face recognition should not be used in life-or-death situations. Another thing is to have mandatory disclosure to defendants that police used face recognition to identify and then eventually arrest them.

Jennifer Strong: But even if those regulations don't come to pass, he says we need...

Jameson Spivack: Testing to make sure it's accurate and it's not biased, and having reports about how it's used, and transparency… Those are all good and they are all needed… But they're not enough. We really need these deeper reforms.

Jennifer Strong: He says it can’t just be up to the companies making the technology to be responsible for the rules that govern it.  

Jameson Spivack: We need to be very vigilant and ask ourselves: are the things that these companies are supporting in terms of legislation really going to protect people? Or are they just a way for the companies to have clarity about how the technology is regulated, without it being regulated in a way that's strong enough to actually protect people and actually affect the companies' ability to produce it? I don't think that they are going to voluntarily give up selling this technology. So it’s really on lawmakers to step in.

Jameson Spivack: Most of the major companies that are developing face recognition for police and for the government are smaller, more specialized companies that most people have not heard of.

Jennifer Strong: One of those companies you’ve likely never heard of is NTechLab,  even though it first made waves about five years ago when, as a brand new startup, it beat Google and won an international competition scoring 95-percent accuracy in one of the categories.

Since then, the Russian company has repeatedly won biometrics competitions held by companies such as Amazon, by US government agencies, and by universities.

Jennifer Strong: And the founder of the company is this man… 

Artem Kuharenko: Artem Kuharenko

Jennifer Strong: NTechLab is best known for its app called FindFace, which let people search social media profiles with photos on their phones.  

Is it meant for a certain group of people, or do you want it to be available to anybody on social media?

Artem Kuharenko: It was available for anybody on the internet.

Jennifer Strong: This is how John Oliver described the app during a recent episode of HBO’s Last Week Tonight.

John Oliver: If you want a sense for just how terrifying this technology could be if it becomes part of everyday life, just watch as a Russian TV presenter demonstrates an app called FindFace… [newsreel] If you find yourself in a cafe with an attractive girl and you don’t have the guts to approach her, no problem. All you need is a smart phone and the application FindFace.

Jennifer Strong: The man in this video uses the app to take a photo of a woman at a different table. It instantly pulls up her profile on Russia’s version of Facebook.

John Oliver: Just imagine that from a woman’s perspective... don’t worry, I already know where you live.  

Jennifer Strong: The app was a viral hit. But these days NTechLab’s attention is on live facial recognition—meaning the algorithm works in video, in real time.

A system they installed in the city of Moscow is believed to be among the largest of this type in the world. 

Artem Kuharenko: Right now more than 100,000 video cameras are connected to the system, and the system has proved to be very helpful and useful to the city.

Jennifer Strong: So, 100,000 video cameras, capturing a billion faces per month. And he claims the system is very, very accurate.

Artem Kuharenko: So it's only one false accept per 10 billion comparisons. That's a one followed by 10 zeros.

Jennifer Strong: That kind of accuracy is unheard of. But we can’t say it’s impossible either—and I’ll get to why in a moment. What we do know is that it’s much harder to achieve accuracy on live video than on photos.

Earlier in this series we talked about trials of live facial recognition by London police that produced an accuracy rate of about 20 percent, and another in New York City that didn’t produce even one correct match during its testing period.
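To put the one-in-10-billion claim in context, here is a rough back-of-envelope sketch. The billion face captures per month comes from the interview above; the 10,000-face watchlist is purely our hypothetical assumption:

\[
10^{9}\ \text{captures/month} \times 10^{4}\ \text{watchlist faces} = 10^{13}\ \text{comparisons/month}
\]
\[
10^{13}\ \text{comparisons/month} \times \frac{1\ \text{false accept}}{10^{10}\ \text{comparisons}} \approx 1{,}000\ \text{false accepts/month}
\]

In other words, even if the claimed rate holds, a system running at Moscow’s scale could still produce dozens of mistaken matches a day. The volume of comparisons matters as much as the rate itself.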

But in Moscow, Kuharenko says his system is being used to solve crimes in real time, including at the world’s largest soccer competition:

Artem Kuharenko: During the FIFA World Cup in 2018 in Moscow, more than 100 criminals were caught due to the system.

Jennifer Strong: NTechLab works with more than a hundred clients in 20 countries, including US chipmaker Nvidia and the Chinese telecom Huawei. It also has smart city projects in Dubai, fintech projects in Europe, and retail partnerships in North and South America.

The company submits some algorithms for testing by the US government, but he says they can’t do that for their most advanced work. 

Because NIST—the National Institute of Standards and Technology—tests facial recognition algorithms on photos, and his latest systems use video.

Artem Kuharenko: Their tests are quite far from real life scenarios.

Jennifer Strong: And if government bodies don’t catch up with the tech, companies are more or less left to audit themselves.

Artem Kuharenko: Leading companies in the field have their own tests. In our company, we have a lot of different tests before we send anything to production. But the problem is that there is no independent test that is open, where anyone could see and test all the algorithms.

Jennifer Strong: This summer, NTechLab added silhouette detection to its video platform. It’s used to identify people in profile.

They’ve also taken on a new role with the global pandemic… 

Artem Kuharenko: ...measure distance between people and find areas where a lot of people are standing close to each other, so that the city could improve the processes which are happening in these areas. It also helped to stop the spread of coronavirus in Moscow.

Jennifer Strong: Now, as we’re in the middle of this global pandemic, how well does the technology work when someone's wearing a mask?

Artem Kuharenko: It works with almost the same accuracy as without a mask. And we also have a special algorithm which can tell whether there is a mask on a person and whether it's worn correctly or not.

Jennifer Strong: Wearing a mask has typically caused the accuracy of these systems to drop, including in a pre-pandemic test of NTechLab by NIST. But the pandemic has created something of an arms race among companies trying to build systems that can read masked faces, and the agency says the company’s algorithms are often among the more accurate it tests. So, does his system really work just as well on masked faces? We simply don’t know.

But masked or not, face recognition isn’t the only thing happening on those video feeds.

Artem Kuharenko: Car detection... license plate recognition… and we combine all this video analytics together so that it can work as a whole system and extract as much information from the video stream as possible. Ideally, the system will be able to extract as much information as a human can see in the video. But the algorithm can do it with much better speed. And if a human can process only one video stream at a time, the system could process hundreds of thousands of videos in real time.

Jennifer Strong: Do you ever worry that somebody might take all of your hard work and use it to build a world you don't really want to live in?

Artem Kuharenko: I don't actually believe in this scenario, because it's a good scenario for a movie, but it's a very unlikely scenario in real life.

Artem Kuharenko: As a technology company, we always try to tell people, so that people understand what's happening and make the decision whether they want it or not.

Jennifer Strong: Like the founder of Clearview, he says it's up to us—everyday people and citizens all over the world—to decide whether and how to live with this technology. But considering the many issues around transparency and accountability, it would seem it’s not quite that easy. There are people trying hard to shoulder that responsibility, though.

And you'll meet one in just a moment. 

Deb Raji: I guess the journey to where we are today has arisen from first sort of cracking the rose-colored lenses of “this is a technology that works,” and demonstrating that it doesn't work for very specific people... and then later on opening up this conversation around what it actually means for facial recognition to work.

I'm Deborah Raji, and I'm a tech fellow at the AI Now Institute.

Jennifer Strong: It’s based at NYU, and works to understand the social impact of facial recognition and other AI technologies.

Deb Raji: You know, how can we actually begin to have conversations around its restriction, around the disclosure of its use, and how does that play out in terms of policy restrictions?

Jennifer Strong: As an AI researcher, she has a superpower most of us don’t: she can audit the algorithms that make face ID products work... so long as companies provide access.

And her efforts are forcing change. The spark that sent her down this path came from what she describes as a horrible realization during her college internship at a machine-learning startup. 

Deb Raji: Wait a second, facial recognition doesn't actually work for everybody. 

Jennifer Strong: She was working on a computer vision model that would help clients flag inappropriate images as “not safe for work.” The trouble was, it flagged photos of people of color at a much higher rate.

So, she looked for the problem, and she found it. The model was learning to recognize unsafe imagery from pornography and safe imagery from stock photos. It turns out porn is much more racially diverse than stock photography, and that imbalance caused the model to automatically associate dark skin with salacious content.

The startup refused to do anything about it. So, she went to work on these issues with a woman we met earlier in this series—Joy Buolamwini—who as a grad student made a more diverse and balanced data set. They used it to audit algorithms in face ID products already on the market.
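To see what such an audit measures, here is a minimal sketch in Python of a disaggregated error-rate check, the kind of group-by-group breakdown this auditing work relies on. The function, toy data, and group labels are our hypothetical illustration, not code from the startup or from these audits:

```python
from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """Compute the false positive rate separately for each demographic group.

    y_true: 1 if an image really is "not safe for work", else 0
    y_pred: 1 if the model flagged the image, else 0
    groups: a group label for each image (e.g., perceived skin type)
    """
    flagged = defaultdict(int)  # safe images wrongly flagged, per group
    safe = defaultdict(int)     # total safe images, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:
            safe[group] += 1
            flagged[group] += pred
    return {g: flagged[g] / safe[g] for g in safe}

# Hypothetical toy data: the model wrongly flags safe images of one group.
y_true = [0, 0, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 1]
groups = ["darker", "lighter", "lighter", "lighter",
          "darker", "lighter", "darker", "darker"]
print(false_positive_rate_by_group(y_true, y_pred, groups))
# {'darker': 1.0, 'lighter': 0.0}
```

An aggregate error rate can look acceptable while one group’s false positive rate is far higher; surfacing that gap, group by group, is the core of this kind of audit.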

This work has a whole lot to do with today’s understanding of how these products fail women and people of color.

But it came at a cost. 

Deb Raji: The computer vision community at that time wasn't having these conversations around ethics and society and fairness. Like, now we're a lot more comfortable with this work, but there was a time when even the research community was very hostile and questioning, sort of like: What is this? What's your point here? What’s the significance of this?

Jennifer Strong: That changed over time, and she says she’s found support within the companies they audit too.

Deb Raji: And even though their institutional-level or corporate-level stance was defensive, these individuals within these companies fought really hard to change the position of their companies and to push for some of these positions that we see today.

Jennifer Strong: Amazon and Microsoft recently put a pause on selling their face ID systems to law enforcement. IBM stopped working on the technology altogether.

Deb Raji: There's this kind of additional acknowledgement with these moratoriums to say: wait, actually, while this nuanced conversation is happening with respect to establishing the policy and the regulation that we desperately need, we're not going to sell the technology at the same time. And I think that realization, and that gap, is an important step forward in the conversation.

Jennifer Strong: So, there is urgency to this moment, which she calls a pause. And during this pause, there’s a lot we need to sort out.

But if we rely on tech companies to go “all the way” with regulation? 

Deb Raji: …they’re always gonna fall short.

Jennifer Strong: And she warns face ID is just the tip of the iceberg.

Deb Raji: It's much easier to have this conversation about faces than it is to have it about insurance data, or medical data, or, you know, even some of these social security systems, even though this exact situation of disproportionate performance also applies to those cases.

Jennifer Strong: So, going forward, she wants disclosure and transparency. She’s also calling for a proper evaluation system.

Ultimately, though:

Deb Raji: A lot of the power is in the hands of the policy makers because big tech companies should definitely not be controlling the conversation.

Jennifer Strong: We may be at an inflection point in our relationship with facial recognition, and with how it gets used.

And yet, it seems safe to say the adoption of this tech is likely to continue at a breakneck pace, leaving our understanding of its power and impact in the dust—unless we really do stop and take a breath, and set some rules for who gets access to images of our faces, and what they can do with them.

Next episode… we go swabbing for coronavirus on the New York City subway, as we explore AI’s role in getting the world’s public transit systems back up and moving.

This episode was reported and produced by me, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We had help from Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski.

We’ll see you back here in a couple weeks.

Thanks for listening, I’m Jennifer Strong. 
