
AI researchers ask Amazon to stop selling face recognition to law enforcement

April 3, 2019

AI researchers from industry and academia have signed an open letter calling on Amazon to stop selling its face recognition technology to law enforcement after apparent biases were discovered.

Face palm: This January, researchers Deborah Raji and MIT's Joy Buolamwini published a study showing that Amazon's Rekognition product misidentifies women and people with darker skin more often than other subjects. Buolamwini had previously highlighted racial bias in other face recognition systems, and Microsoft addressed the problems that earlier work identified in its own technology.
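The kind of audit behind such findings comes down to comparing error rates across demographic subgroups. Here is a minimal sketch of that calculation; the function and group names are illustrative, not taken from the study or from Amazon's API:

```python
# Minimal sketch of a demographic error-rate audit (illustrative only).
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Return the misclassification rate for each demographic subgroup."""
    errors, totals = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Comparing, say, rates for "darker-skinned women" against "lighter-skinned men"
# is how disparities like the ones reported for Rekognition become visible.
```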

Change afoot: Face recognition has become a banner issue for those concerned about irresponsible uses of AI, and it seems increasingly likely that some form of regulation will arrive. But the technology is spreading rapidly, and companies are struggling to adjust their positions. Microsoft has said it will continue to work with law enforcement but has also backed legislation that would require signs showing where face recognition is being used. Google has said it won’t supply face recognition until it can come up with an appropriate policy.

War of words: The letter also counters the criticism that two Amazon executives, Matthew Wood and Michael Punke, leveled at the January study. Amazon's rebuttals claimed that the original research misrepresented Rekognition's capabilities and limitations. They also noted that Amazon requires the technology to be used in accordance with the law, and said the company would endorse greater transparency from law enforcement about how it is used.

Pioneering vision: Those who signed the letter include prominent voices in AI and ethics as well as Yoshua Bengio, a computer scientist who recently shared the $1 million Turing Award with two colleagues for his role in developing the deep-learning techniques that underpin modern AI, including face recognition. Bengio has recently emerged as a key voice on the risks of AI.
