In San Francisco, a cop can’t use facial recognition technology on a person who’s been arrested. But a landlord can use it on a tenant, and a school district can use it on students.
This is where we find ourselves, smack in the middle of an era when cameras on the corner can automatically recognize passersby, whether they like it or not. The question of who should be able to use this technology, and who shouldn’t, remains largely unanswered in the US. So far, American backlash against facial recognition has been directed mainly at law enforcement. San Francisco and Oakland, as well as Somerville, Massachusetts, have all banned police from using the technology in the past year because the algorithms aren’t accurate for people of color and women. Presidential candidate Bernie Sanders has even called for a moratorium on police use.
Private companies and property owners have faced no such restrictions, and facial recognition is increasingly cropping up in apartment buildings, hotels, and more. Privacy advocates worry that constant surveillance will lead to discrimination and have a chilling effect on free speech—and the American public isn’t very comfortable with it either. According to a recent survey by Pew Research, people in the US actually feel better about cops using facial recognition than they do about private businesses using it.
Anyone waiting for a quick federal ban to take shape, either for law enforcement or private industry, is likely to be disappointed, says AI policy expert Mutale Nkonde, a fellow at Harvard’s Berkman Klein Center. “From a federal perspective, anything that seems to undermine business or innovation is not going to be favored,” she says. In theory, bans in cities that have so far been aimed at cops could widen to include private interests. States could then take them up, which might finally spur action in Washington. But it’s going to take a while, if it happens at all.
In the meantime, there is growing momentum toward curtailing private surveillance, using an array of tactics. From going toe to toe with big corporate interests to leaning on legal theory about what constitutes civil rights in America, here are three main approaches currently in play that could one day drastically change how facial recognition is used in our lives.
The corporate pressure method
The first tactic is “old-school corporate pressure,” says Evan Greer, deputy director of digital rights group Fight for the Future. The organization has created a website listing the airlines that use facial recognition, to encourage consumers to choose other options. More recently, Fight for the Future launched a campaign pressuring concert venues and festivals not to use the technology, partly inspired by Ticketmaster’s statement that it might replace tickets with facial ID. Musicians including singer-songwriter Amanda Palmer, rapper Atmosphere, and Tom Morello of Rage Against the Machine have all supported the effort.
Big-name music festivals like Governors Ball, Austin City Limits, Bonnaroo, and Pitchfork have now promised not to use facial surveillance. “There’s value in getting commitments,” Greer says. “We don’t need to wait until an industry is already widely using the technology and weaving it into its business model.”
The legislative method
Another model follows the city-by-city progression of cop bans. The city of Portland, Oregon, is considering two separate ordinances: one that would ban cops from using the technology, and one that would bar private businesses from using it too. The private ban wouldn’t affect, say, Apple’s FaceID or Facebook’s use of facial recognition in its tagging feature. City officials are more concerned about the prospect of stores and other establishments requiring facial recognition for entry, something that Jacksons, a local convenience store, started doing on a limited basis more than a year ago. The council will discuss the proposal again at a meeting in November.
Meanwhile, US Congresswomen Yvette Clarke, Ayanna Pressley, and Rashida Tlaib are focusing not on geographical regions, but on certain groups. They just introduced a federal bill that would ban facial recognition in federally funded public housing.
Landlords’ use of facial recognition is quickly becoming a hot-button issue. According to the recent Pew report, only 36% of Americans think it’s okay to require facial recognition to enter the place they live. The issue is an even bigger concern in federal or low-income housing. Not only is the technology invasive, residents say, but it’s discriminatory, because many tenants are people of color. In New York, residents of a low-income building have been fighting their landlord’s plan to replace a key-fob entry system with a facial recognition system. “Why did [the landlord] choose our building to try this system out? Versus 11 other buildings that have a different makeup?” asked Icemae Downes, one of the residents.
No need to reinvent the wheel
Existing law can also be updated to cover facial recognition, says Jevan Hutson, a law student and technology policy researcher at the University of Washington. States already have civil rights laws that prevent discrimination in public venues like restaurants, hotels, schools, hospitals, parks, convention centers, and more. Given the technology’s track record of being unable to treat people fairly, Hutson says it’s possible to build a legal argument that facial recognition violates civil rights. If such a change passed, the law would effectively prevent the technology from being deployed in a slew of public spaces.
Another route would be to update a state’s consumer protection laws. Many companies claim that their technology can detect emotion, but studies have shown that their methods are deeply flawed. It’s possible to argue, then, that these algorithms are violating laws against unfair or deceptive practices.
Such a move forces lobbyists to engage with the language of civil rights. “It’s like, okay, we’re updating civil rights law. You care about principles of civil rights,” Hutson says. “If you don’t want us to do it, how can we expect any of your [suggested] safeguards to matter?” He’s working with lawmakers and hopes to introduce a bill during the next legislative session in Washington state, which begins in January.
Separation? Not really
In practice, the distinction between government and private facial recognition is a false one. Normalizing one normalizes the other, says Evan Selinger, a philosopher at the Rochester Institute of Technology. Once everyone is used to using Facebook’s facial recognition system, he says, “it becomes a lot harder to say that law enforcement, who is looking out for the good, should have less freedom than you do.” When facial recognition is taken for granted, “you ultimately provide the private sector with information that it can share with law enforcement.”
That private sector is powerful and will want to have a say in regulation. Amazon CEO Jeff Bezos recently said the company is creating its own draft facial recognition guidelines to present to lawmakers. Earlier this year, Microsoft supported privacy legislation in Washington state that would have put some restrictions on facial ID. But the bill also said it was okay to use facial recognition for profiling as long as someone reviewed the results. It failed after six privacy groups argued that it was far too weak.
That’s part of the reason activists like Greer insist that a multi-pronged strategy looking at legislative and economic approaches will be necessary. “We need all of the above,” she says. “Members of the public should absolutely be holding corporations accountable. Lawmakers should absolutely be addressing this. If there’s one thing we know, it’s that we can’t trust industries to regulate themselves.”