That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.
It's not yet clear which uses will face a ban: while some cities have banned use by police departments, Portland's focus is on restricting use by the private sector. And the debate is not confined to the US. In the UK, there is growing concern over the use of live facial recognition after it emerged that a property developer had been collecting images of people's faces in an area of London for two years without informing them. We still don't know how that data was used, Daragh Murray, a human rights lawyer at the University of Essex, said on stage.
“There will be legal challenges, and there will eventually be regulation,” he predicted.
Nkonde agreed this will happen in the US, too. “A constitutional right we have is innocent until proven guilty. Facial recognition could flip that idea around,” she said.
Explaining how support for a ban could spread in the US, Nkonde pointed to the example of a campaign in New York by tenants to stop a plan for using facial recognition instead of keys to access their apartments. This deployment mostly affected poor, black, and brown women. However, the tenants involved human rights lawyers, and more affluent groups started to take notice and ally with them.
“The marginalization of minority groups by facial recognition is step one [toward a ban],” she said. When it is used to target groups with more power, it will be outlawed, she said.
But the “proper use” of facial recognition by government is still supported by 83% of Chinese people and 80% of Americans, said Yi Zeng, head of AI ethics and safety at the Chinese Academy of Sciences. Without specific examples of what proper use is or is not, though, it’s hard to be sure of public opinion—and it’s still quite early in the technology’s development.
“This wouldn’t be the first time a society has looked at a new technology, and decided not to use it,” Nkonde said. For now, she thinks an immediate moratorium is warranted.