The new lawsuit that shows facial recognition is officially a civil rights issue

Robert Williams, who was wrongfully arrested because of a faulty facial recognition match, is asking for the technology to be banned. 

On January 9, 2020, Detroit police drove to the suburb of Farmington Hills and arrested Robert Williams in his driveway while his wife and young daughters looked on. Williams, a Black man, was accused of stealing watches from Shinola, a luxury store. He was held overnight in jail.

During questioning, an officer showed Williams a picture of a suspect. His response, as he told the ACLU, was to reject the claim. “This is not me,” he told the officer. “I hope y’all don’t think all black people look alike.” He says the officer replied: “The computer says it’s you.”

Williams’s wrongful arrest, which was first reported by the New York Times in June 2020, was based on a bad match from the facial recognition system used by the Detroit Police Department (DPD). Two more cases of false arrest linked to facial recognition have since come to light. Both of those men are also Black, and both have taken legal action.

Now Williams is following in their path and going further—not only by suing the department for his wrongful arrest, but by trying to get the technology banned. 

On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that the arrest violated both his Fourth Amendment rights and Michigan’s civil rights law.

The suit requests compensation, greater transparency about the use of facial recognition, and an end to the Detroit Police Department’s use of facial recognition technology, whether direct or indirect.

What the lawsuit says

The documents filed on Tuesday lay out the case. In March 2019, the DPD had run a grainy photo of a Black man with a red cap, taken from Shinola’s surveillance video, through its facial recognition system, made by a company called DataWorks Plus. The system returned a match with an old driver’s license photo of Williams. Investigating officers then included Williams’s license photo in a photo lineup, and a Shinola security contractor (who wasn’t actually present at the time of the theft) identified Williams as the thief. The officers obtained a warrant, which requires multiple sign-offs from department leadership, and Williams was arrested.

The complaint argues that the false arrest of Williams was a direct result of the facial recognition system, and that “this wrongful arrest and imprisonment case exemplifies the grave harm caused by the misuse of, and reliance upon, facial recognition technology.”

The case contains four counts: three focus on the lack of probable cause for the arrest, while the fourth focuses on the racial disparities in the impact of facial recognition. “By employing technology that is empirically proven to misidentify Black people at rates far higher than other groups of people,” it states, “the DPD denied Mr. Williams the full and equal enjoyment of the Detroit Police Department’s services, privileges, and advantages because of his race or color.”

Facial recognition technology’s difficulties in identifying darker-skinned people are well documented. After the killing of George Floyd in Minneapolis in 2020, some cities and states announced bans and moratoriums on the police use of facial recognition. But many others, including Detroit, continued to use it despite growing concerns. 

“Relying on subpar images”

When MIT Technology Review spoke with Williams’s ACLU lawyer, Phil Mayor, last year, he stressed that problems of racism within American law enforcement made the use of facial recognition even more concerning.

“This isn’t a one-bad-actor situation,” Mayor said. “This is a situation in which we have a criminal legal system that is extremely quick to charge, and extremely slow to protect people’s rights, especially when we’re talking about people of color.”

Eric Williams, a senior staff attorney at the Economic Equity Practice in Detroit, says cameras have many technological limitations, not least that they are hard-coded with color ranges for recognizing skin tone and often simply cannot process darker skin.

"I think every Black person in the country has had the experience of being in a photo and the picture turns up either way lighter or way darker."

“I think every Black person in the country has had the experience of being in a photo and the picture turns up either way lighter or way darker,” says Williams, who is a member of the ACLU of Michigan’s lawyers committee but is not working on the Robert Williams case. “Lighting is one of the primary factors when it comes to the quality of an image. So the fact that law enforcement is relying, to some degree … on really subpar images is problematic.” 

There have already been legal challenges to biased algorithms and artificial-intelligence technologies on the basis of race. Facebook, for example, underwent a massive civil rights audit after its targeted advertising algorithms were found to serve ads on the basis of race, gender, and religion. YouTube was sued in a class-action lawsuit by Black creators who alleged that its AI systems profiled users and censored or discriminated against content on the basis of race. YouTube was also sued by LGBTQ+ creators who said that its content moderation systems flagged the words “gay” and “lesbian.”

Some experts say it was only a matter of time until the use of biased technology by a major institution like the police was met with legal challenges. 

“Government use of face recognition plainly has a disparate impact against people of color,” says Adam Schwartz, senior staff lawyer at the Electronic Frontier Foundation. “Study after study shows that this dangerous technology has far higher rates of false positives for people of color compared to white people. Thus, government use of this technology violates laws that prohibit government from adopting practices that cause disparate impact.”

But Mayor, Williams’s lawyer, has been expecting a tough fight. He told MIT Technology Review last year that he expected the Detroit Police Department to continue to argue that facial recognition is a great “investigative tool.” 

“The Williams case proves it is not. It is not at all,” he said. “And in fact, it can harm people when you use it as an investigative tool.”

Under the microscope 

In a statement, Lawrence Garcia, the counsel for the City of Detroit, said that the city aimed to “achieve resolution” in the case but that facial recognition was not to blame for the situation.

“As the police chief has explained, the arrest was the result of shoddy investigation, not faulty technology,” said Garcia. “The Detroit Police Department has conducted an internal investigation and has sustained misconduct charges relative to several members of the department. New protocols have been put in place by DPD to prevent similar issues from occurring.”

But the Williams suit comes at a critical time for race and policing in the US. It was filed as defense lawyers began arguments in the trial of Derek Chauvin, the officer charged with murdering George Floyd in Minneapolis last May—and on the third day of protests in response to the shooting of Daunte Wright in nearby Brooklyn Center, Minnesota. Wright, a 20-year-old Black man, was pulled over for a traffic stop and arrested on an outstanding warrant before Officer Kim Potter shot and killed him, allegedly after mistaking her handgun for her Taser.

Eric Williams says it’s essential to understand facial recognition in this wider context of policing failures:

“When DPD decided to purchase the technology ... it was known that facial recognition technology was prone to misidentify darker-skinned people before Mr. Williams was taken into custody, right? Despite that fact, in a city that is over 80% Black, they chose to use this technology.

“You’re clearly placing less value on the lives and livelihoods and on the civil liberties of Black people than you are on white people. That’s just too common in the current United States.”

This story has been updated to include a statement from the City of Detroit. Jennifer Strong contributed reporting to this story.
