Why San Francisco’s ban on face recognition is only the start of a long fight

The city government can’t use the technology, but private companies still can, and regulating those uses is a thornier problem.

San Francisco has become the first US city to ban the use of facial recognition by its government. But while privacy advocates are celebrating, the ordinance doesn’t stop private companies from using facial ID in ways that many people find creepy.

It might, however, be a first step.

The use of face recognition technology has become increasingly common, despite evidence that it frequently misidentifies people of color. Activists warn that it could lead to false arrests, or be used to track people’s whereabouts and target dissenters who have done nothing wrong.

In recent years, San Francisco officials have had reason to second-guess their use of high-tech surveillance tools. In 2009, police pulled over a driver, Denise Green, and held her at gunpoint while they searched her car, all because a license plate reader had wrongly flagged the car as stolen. Green sued, and the city ended up paying her $495,000. Such episodes undoubtedly contributed to the pressure for a ban, even though San Francisco’s police don’t currently use face recognition technology.

In many ways, though, it’s unsurprising that the tech-obsessed city is the first to restrict the technology. “[It’s like how] Silicon Valley parents are the most likely to ban screen time for their kids,” says Laura Noren, a data ethicist and vice president of privacy and trust at Obsidian Security. Other tech-savvy cities are likely to follow San Francisco’s lead, Noren says; indeed, nearby Oakland and Somerville, Massachusetts, have already proposed similar bans. Still, she thinks a federal ban is unlikely under the Trump administration.

Public vs. private

Most people’s experience with facial analysis and recognition, however, won’t come from police monitoring. It will come from non-government uses, such as school security cameras or stores that show consumers targeted ads. These uses carry the same risks of misidentification and discrimination, but bans like San Francisco’s won’t prohibit them.

In one example of the problems that could arise, an 18-year-old sued Apple last month because, he alleged, a face recognition system in one of its stores falsely linked him to thefts. (According to Apple, it doesn’t use such systems in its stores.) In another case, tenants at Atlantic Plaza Towers, an apartment building in New York City, are fighting their landlord’s plan to replace a key-fob entry system with a face recognition system. The technology is discriminatory, the tenants say, because the tower residents are mostly people of color. “Why did [the landlord] choose our building to try this system out? Versus 11 other buildings that have a different makeup?” asks Icemae Downes, one of the residents. 

A full ban in the private sector is unlikely, though. Many consumers are already using Face ID to unlock their iPhones or buying video doorbells, like Google’s Nest Hello, that identify familiar people. “When your narrative is ‘government surveillance,’ that tends to have powerful resonance,” says Joseph Jerome, policy counsel for the Privacy & Data Project at the Center for Democracy and Technology. “When you’re dealing with the private sector, we start having debates about what constitutes beneficial innovation.”

Such debates get complicated fast. If companies use face recognition technology, how should they notify customers? What rights should people have to opt out, and how easy should that process be? Should the data ever be given or sold to third parties? These were some of the questions that came up during discussion of a Washington state privacy bill that failed earlier this year, according to Jevan Hutson, a technology policy researcher at the University of Washington. The two sides were unable to agree on how strong the privacy restrictions should be.

Still, restrictions on commercial uses have begun to appear, says Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. For example, Illinois’s Biometric Information Privacy Act requires companies to get consent before collecting any kind of biometric data. A bipartisan bill with a narrower requirement, the Commercial Facial Recognition Privacy Act, is currently in committee hearings in Congress.

For her part, Noren believes companies will pursue an “accuracy threshold” requirement—in effect, proposing that face recognition be allowed so long as they can prove it doesn’t make too many mistakes.

Ultimately, says Jerome, it’s too early to tell how much the San Francisco ordinance will influence commercial regulation. “I think it will juice the debate that states and the federal government are having around facial recognition, but whether that leads to action is unclear,” he says. There was a similar public/private split over drones a few years back, Jerome adds: many cities restricted their use by law enforcement but did little to regulate them for commercial purposes.
