
Eight case studies on regulating biometric technology show us a path forward

A new report from the AI Now Institute reveals how different regulatory approaches work or fall short in protecting communities from surveillance.
September 4, 2020
Amba Kak, director of global strategy and programs at the New York–based AI Now Institute. Courtesy of AI Now

Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn’t long, Kak remembers, before stories about its devastating consequences began to spread. “We were suddenly hearing reports of how manual laborers who work with their hands—how their fingerprints were failing the system, and they were then being denied access to basic necessities,” she says. “We actually had starvation deaths in India that were being linked to the barriers that these biometric ID systems were creating. So it was a really crucial issue.”

Those instances provoked her to research biometric systems and the ways the law could hold them accountable. On September 2, Kak, who is now the director of global strategy and programs at the New York–based AI Now Institute, released a new report detailing eight case studies of how biometric systems are regulated around the world. They span city, state, national, and global efforts, as well as some from nonprofit organizations. The goal is to develop a deeper understanding of how different approaches work or fall short. I spoke to Kak about what she learned and how we should move forward.

This interview has been edited and condensed for clarity.

What motivated this project?

Biometric technology is proliferating and becoming normalized, not only in government domains but also in our private lives. The monitoring of protests using facial recognition happened this year alone in Hong Kong, in Delhi, in Detroit, and in Baltimore. Biometric ID systems, which are less talked about, use biometrics as a condition of access to welfare services; these have also proliferated across low- and middle-income countries in Asia, Africa, and Latin America.

But the interesting thing is that the pushback against these systems is also at its peak. The advocacy around it is getting more attention than ever before. So then the question is: Where do law and policy figure in? That’s where this compendium comes in. This report tries to pull out what we can learn from these experiences at a moment when it seems like there is a lot of appetite from governments and from advocacy groups for more regulation.

What is the current state of play for biometric regulation globally? How mature are the legal frameworks for handling this emerging technology?

There are about 130 countries in the world that have data protection laws, and almost all of them cover biometric data. So if we’re asking simply whether laws exist to regulate biometric data, the answer is that in most countries, they do.

But dig a little deeper and ask: what are the limitations of a data protection law? A data protection law at its best can help you regulate when biometric data is used and make sure it isn’t used for purposes for which consent was not given. But issues like accuracy and discrimination have still received very little legal attention.

On the other hand, what about completely banning the technology? We’ve seen that concentrated in the US at the city and state level. I think people sometimes forget that most of this legislative activity has focused on public use and, more specifically, on police use.

So we have a mix of data protection law that provides some safeguards but is inherently limited. And then we have a concentration of these complete moratoriums at the local city and state level in the US.

What were some common themes that emerged from these case studies? 

To me, the clearest theme came out of the chapter on India by Nayantara Ranganathan and the chapter on the Australian facial recognition database by Monique Mann and Jake Goldenfein. Both describe massive centralized state architectures where the whole point is to remove the technical silos between different state and other kinds of databases, and to make sure that these databases are centrally linked. So you’re creating this monster centralized, centrally linked biometric data architecture. Then, as a Band-Aid on this huge problem, you say, “Okay, we have a data protection law, which says that data should never be used for a purpose that was not imagined or anticipated.” But meanwhile, you’re changing the expectation of what can be anticipated. A database that was used in a criminal justice context is now being used in an immigration context.

For example, [in the US] ICE is now using or trying to use DMV databases in different states for immigration enforcement. These are databases created in a civilian context, and they’re being turned to immigration purposes. Similarly, in Australia, you have this giant database, which includes driver’s license data, that is now going to be used for limitless criminal justice purposes, with the Department of Home Affairs in complete control. And in India, they created a law, but the law basically put most of the discretion in the hands of the authority that created the database. So from these three examples, what becomes clear to me is that you have to read the law in the context of the broader political movements that are happening. If I had to summarize the broader trend, it’s the securitization of every aspect of governance, from criminal justice to immigration to welfare, and it’s coinciding with the push for biometrics. That’s one.

The second, and this is a lesson we keep repeating, is that consent as a legal tool is very much broken, and it’s definitely broken in the context of biometric data. But that doesn’t mean it’s useless. Woody Hartzog’s chapter on Illinois’s BIPA [Biometric Information Privacy Act] says: Look, it’s great that we’ve had several successful lawsuits against companies under BIPA, most recently against Clearview AI. But we can’t keep expecting “the consent model” to bring about structural change. Our solution can’t be that the user knows best, that the user will tell Facebook they don’t want their face data collected. Maybe the user will not do that, and the burden shouldn’t be on the individual to make these decisions. This is something the privacy community has learned the hard way, which is why laws like the GDPR don’t rely on consent alone. There are also hard rules that say: if you’ve collected data for one reason, you cannot use it for another purpose, and you cannot collect more data than is absolutely necessary.

Was there any country or state that you thought demonstrated particular promise in its approach to the regulation of biometrics?

Yeah, unsurprisingly, it’s not a country or a state. It’s actually the International Committee of the Red Cross [ICRC]. In the volume, Ben Hayes and Massimo Marelli, both representatives of the ICRC, wrote a reflective piece on how they decided there was a legitimate interest for them to use biometrics in the context of distributing humanitarian aid. But they also recognized that many governments would pressure them for access to that data in order to persecute the very communities they serve.

So they had a very real conundrum, and they resolved it by saying: We want to create a biometrics policy that minimizes the actual retention of people’s biometric data. So what we’ll do is have a card on which someone’s biometric data is securely stored. They can use that card to get access to the humanitarian assistance being provided. But if they decide to throw that card away, the data will not be stored anywhere else. The policy, in effect, decided against establishing a biometric database holding the data of refugees and others in need of humanitarian aid.
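To make the shape of that design concrete, here is a minimal, illustrative sketch in Python. It is not the ICRC’s actual system; every name in it (AidCard, enroll, verify_at_distribution) is hypothetical, and a real deployment would use encrypted card storage and fuzzy biometric matching rather than the exact byte comparison shown.

```python
from dataclasses import dataclass

@dataclass
class AidCard:
    """A card the recipient carries; it holds the only copy of their template."""
    holder_id: str
    template: bytes  # enrolled biometric template, stored on the card alone

def enroll(holder_id: str, scan: bytes) -> AidCard:
    # Write the template to the card and deliberately keep no central copy.
    return AidCard(holder_id=holder_id, template=scan)

def verify_at_distribution(card: AidCard, live_scan: bytes) -> bool:
    # Compare the live scan against the card's template locally, at the
    # distribution point. Real biometric matching is fuzzy; exact equality
    # stands in for it here.
    return card.template == live_scan

# If the holder discards the card, the template is gone for good: there is
# no central honeypot for a government to demand access to.
card = enroll("recipient-001", b"example-template")
assert verify_at_distribution(card, b"example-template")
```

The point of the design is visible in what the sketch omits: there is no server-side store to breach, subpoena, or repurpose, so discarding the card destroys the only copy of the data.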

To me, the broader lesson from that is recognizing what the issue is. The issue in that case was that a central database would have created a honeypot and a real risk. So they came up with both a technical solution and a way for people to withdraw or delete their biometric data with complete agency.

What are the major gaps you see in approaches to biometric regulation across the board?

A good example is how the law is dealing with this whole issue of bias and accuracy. In the last few years we’ve seen so much foundational research from people like Joy Buolamwini, Timnit Gebru, and Deb Raji that poses existential challenges: Do these systems work? Who do they work against? And even when they pass these so-called accuracy tests, how do they actually perform in a real-life context?

Data protection law doesn’t concern itself with these types of issues. So what we’ve seen now, mostly in legislative efforts in the US, is bills that mandate accuracy and nondiscrimination audits for facial-recognition systems. Some of them say: We’re pausing facial-recognition use, but one condition for lifting this moratorium is that you pass this accuracy and nondiscrimination test. And the tests they often refer to are technical standards like NIST’s Face Recognition Vendor Test.

But as I argue in the first chapter, these tests are evolving; they have been shown to underperform in real-life contexts; and most important, they are limited in their ability to address the broader discriminatory impact of these systems when they’re applied in practice. So I’m really worried about these technical standards becoming a kind of checkbox to be ticked, one that then ignores or obfuscates the other harms these technologies cause when they’re deployed.

How did this compendium change the way you think about biometric regulation?

The most important thing it did for me was to stop me thinking of regulation just as a tool for limiting these systems. It can be a tool to push back against them, but it can equally be a tool to normalize or legitimize them. It’s only when we look at examples like the ones in India and Australia that we start to see law as a multifaceted instrument that can be used in different ways. At a moment when we’re really pushing to ask, “Do these technologies need to exist at all?” the law, and especially weak regulation, can really be weaponized. That was a good reminder for me. We need to guard against that.

This conversation has definitely been revelatory for me because as someone who covers the way that tech is weaponized, I’m often asked, “What’s the solution?” and I always say, “Regulation.” But now you’re saying, “Regulation can be weaponized too.”

That’s so true! It makes me think of groups that used to work on domestic violence in India. They said that after decades of fighting for the rights of survivors, the government finally passed a law, and then nothing changed. I remember thinking even then: we sometimes glorify the idea of passing laws, but what happens after that?

And this is a good segue. As Clare Garvie and Jameson Spivack point out in their chapter on bans and moratoriums, most of these bans apply only to government use. There’s still this massive multibillion-dollar private industry. So the technology is still going to be used at the Taylor Swift concert in ways very similar to how cops would use it: to keep people out, to discriminate against people. It doesn’t stop the machine. A legal intervention on that scale would take unprecedented advocacy. I don’t think a so-called complete ban is impossible, but we’re not there yet. So yes, we need to be more circumspect and critical about the way we understand the role of law.

What about the compendium made you hopeful about the future?

That’s always such a hard question, but it shouldn’t be. It was probably Rashida Richardson and Stephanie Coyle’s chapter. It reads almost like an ethnography of a group of parents in New York who felt really strongly that they didn’t want their kids to be surveilled. And they were like, “We’re going to go to every single meeting, even though they don’t expect us to. And we’re going to say we have a problem with this.”

It was just really reassuring to learn about a story where a parents’ group completely shifted the discourse. They said: Let’s not talk about whether biometrics or surveillance is necessary. Let’s talk about the real harms to our kids and whether this is the best use of money. A senator then picked this up and introduced a bill, and just this August the New York state senate passed it. I celebrated with Rashida because I was like, “Yay! Stories like this happen!” It’s a story deeply connected to advocacy.
