America’s first confirmed wrongful arrest by facial recognition technology happened in January 2020. Robert Williams, a Black man, was arrested in his driveway just outside Detroit, with his wife and young daughter watching. He spent the night in jail. The next day in the questioning room, a detective slid a picture across the table to Williams of a different Black man who had been caught on video stealing watches from the boutique Shinola.
“Is this you?” he asked.
“No, that’s not me,” Williams replied.
The detective passed over another picture. “I guess this is not you either?”
Williams held the picture next to his face. It clearly wasn’t him. “This is not me,” he said. “I hope you don’t think all Black people look alike.”
“The computer says it’s you,” replied the detective.
The novel thing about the arrest of Robert Williams was not that it occurred, or that it was a mistake. Facial recognition is known to be less accurate for darker-skinned people. And the technology is widely used by police departments in the United States, although there isn’t good data on how pervasive it is. The unusual part of Williams’s story is that police admitted to using facial recognition in his arrest.
The news of the case went public in early August, and—after a summer of protest focused on the way Black communities are policed in America—it was met with nationwide outrage. A couple of weeks later, another wrongful arrest of a Black man in Detroit because of facial recognition technology came to light.
Even before this, activists had been demanding an end to Project Greenlight, the citywide public-private initiative that uses facial recognition in an effort to reduce crime. And yet not only is the project still running: at the end of September, the city council voted to extend the contract between the Detroit Police Department and its facial recognition provider, DataWorks Plus.
A year of contradictions
The events in Detroit exemplify our complicated relationship with facial recognition right now. Its use is growing, and in some fields the technology has become integral. In others, such as retail, facial recognition is starting to be rolled out with high hopes for the future. Many technology providers are betting that the public will get increasingly comfortable with the use of biometrics, and soon it will be an organic part of digital life: Apple has bet heavily on it, and now millions of people use its Face ID system to unlock their iPhones every day.
But the public also has a new consciousness of the dangers facial recognition poses, especially in criminal justice. There’s significantly more awareness, more concern, and more conversation now than ever before, and this year has seen more legislation on facial recognition than all previous years combined. Six cities across the US enacted bans or moratoria in 2019, and as many again have done so this year.
Reconciling these laws with the growth of the industry will be hard. But the events of 2020 give some clues as to how these compromises might play out over the coming year.
Small players, big industry
In January, the New York Times published an investigation of ClearviewAI, a small facial recognition company that ran its algorithm on a database of billions of pictures grabbed from social media. Police departments using ClearviewAI’s system were effectively accessing your Facebook photos to match often blurry or incomplete police images during investigations.
The company was heavily criticized, and subsequent reporting by BuzzFeed News showed that the system was being used by as many as 2,200 law enforcement agencies in the US, as well as by Immigration and Customs Enforcement, the Department of Justice, and retailers including Macy’s and Walmart.
“The Clearview story really freaked a lot of people out—as it should,” says Jameson Spivack, a policy associate at Georgetown University’s Center on Privacy and Technology. Many of the concerns focus on how fragmented the field is. While major companies like IBM and Microsoft are significant forces, there are also lots of smaller private companies, like ClearviewAI and NtechLab, that operate with little public oversight. The reporting also exposed how little the public knew about the widespread government use of the technology.
The catalyst: Race protests
These stories raised awareness of the problems, but Spivack says the Black Lives Matter protests following the murder of George Floyd were the “single biggest catalyst” for legislation restricting use of facial recognition in the United States. Americans suddenly started reexamining policing and its tools, policies, and culture.
Concern had begun growing after researchers Joy Buolamwini and Timnit Gebru discovered and documented racial bias in commercial facial recognition products in 2018, leading several cities and states to pass laws that prevented the police from using facial recognition in concert with body cameras.
But during the largest protest movement in American history, activists were worried that police surveillance technologies would be used for retaliation. It has since been confirmed that at least the New York, Miami, and Washington, DC, police departments did use facial recognition to surveil protesters.
On June 1 in Washington, DC, police used pepper balls and tear gas to push back protesters in Lafayette Square so that President Trump could score a photo opportunity at a nearby church. Amid the chaos, a protester punched a police officer. Days later, officers found a picture of the man on Twitter and ran it through their facial recognition system, got a match, and made an arrest. Similarly in Miami, a woman accused of throwing rocks at police during a protest was arrested on the basis of a facial recognition match.
Spivack saw grassroots activists against facial recognition work closely with police reform groups throughout the summer and fall, supported by advocacy groups like the American Civil Liberties Union. In Portland, Oregon, one protester even created a facial recognition system to identify anonymous police officers.
As 2020 went on, legislation to limit police use of such technology was proposed at the municipal, state, and even federal levels. In June, Democratic lawmakers introduced a bill that would ban the use of facial recognition by federal law enforcement. In Vermont, an executive order from the governor created a statewide ban on government use of the technology. In Massachusetts, the cities of Cambridge and Boston passed bans on the technology this summer, and the state government approved a ban of facial recognition for public agencies, which includes law enforcement, in December; Governor Charlie Baker is currently refusing to sign the bill.
California started its own debate on statewide legislation in May, and the cities of San Francisco and Oakland already have banned use of facial recognition by law enforcement. In July, New York City instituted a moratorium on face recognition in schools until 2022. In Portland, Oregon, a new citywide ban forbids the use of the technology by any public or private group.
But this shift is not happening everywhere, as the recommitment to surveillance in Detroit shows. Spivack speculates that racial power dynamics might be influencing the political fight around police surveillance. “If you look at a lot of the cities that were some of the first to ban face recognition, they were typically—not always, but typically—wealthier, whiter, very progressive, maybe with more political capital and ability to impact lawmakers, more so than more marginalized communities,” he says.
A national prospect?
Not all the reaction has taken the form of legislation, however. In early June, IBM announced that it had stopped selling any of its facial recognition products. Amazon and Microsoft followed suit by temporarily discontinuing their contracts with police departments. And in July, the ACLU filed a lawsuit against ClearviewAI for failing to comply with the Illinois Biometric Information Privacy Act—the first full legal challenge to the company.
Microsoft, Amazon, IBM, and industry groups like the Security Industry Association are preparing for a fight. They dramatically increased the amount of lobbying on facial recognition from 2018 to 2019, and it’s expected that 2020 will show an even greater increase. Many are in favor of increased regulation, but not bans. Amazon’s moratorium will end in June, and Microsoft’s is contingent on the institution of a federal law.
Meanwhile, the ACLU continues to draft legislation that seeks to ban the technology. A statement on its website reads that the organization “is taking to the courts, streets, legislatures, city councils, and even corporate boardrooms to defend our rights against the growing dangers of this unregulated surveillance technology.”
The priorities of the new administration will also shape regulation in 2021 and beyond. As a presidential candidate, Kamala Harris cited regulation of facial recognition in law enforcement as part of her police reform plan. If the administration does push for federal legislation, it's more likely to become a national issue, with the result that fewer resources will be directed to more local oversight campaigns. But if not, the fight will likely continue to play out on the state and city level.