Artificial intelligence

Lawmakers will look more closely at facial-recognition software after being mistaken for criminals

July 26, 2018

The American Civil Liberties Union (ACLU) came up with a clever way to draw lawmakers’ attention to the limitations of facial-recognition technology: it used the technology to (falsely) identify 28 of them as criminals.

Lawmakers or breakers?: The ACLU used Amazon’s Rekognition platform to compare federal lawmakers’ faces against 25,000 publicly available mugshots. The software incorrectly matched 28 members of Congress with known criminals. Rekognition is currently being used by a number of US police departments. In its defense, Amazon says law enforcement should only use the technology to “narrow the field.”
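For a sense of what such a comparison involves, here is a minimal, hypothetical sketch of a single face-comparison call to Rekognition using Amazon’s boto3 SDK. This is not the ACLU’s code; the bucket name, image keys, and the similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch of one face comparison with Amazon Rekognition.
# Bucket name, object keys, and the threshold are assumptions for
# illustration, not details from the ACLU's test.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "lawmaker.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "mugshot.jpg"}},
    SimilarityThreshold=80.0,  # matches scoring below this similarity are not returned
)

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")
```

The output is a set of similarity scores rather than identifications, and how many candidate “matches” come back depends on where that threshold is set.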

Err, Jeff? The incident may encourage lawmakers to take a closer look at a technology with significant privacy implications. Three of the US lawmakers erroneously identified have written a letter asking Amazon to explain how its system works. Two more have asked for a meeting with Amazon’s CEO, Jeff Bezos, to discuss the issue.

Troubling bias: Amazon’s software disproportionately misidentified African-American and Latino lawmakers. Racial bias has been found in other commercial facial-recognition systems.
