Lawmakers will look more closely at facial-recognition software after being mistaken for criminals
The American Civil Liberties Union (ACLU) came up with a clever way to draw lawmakers’ attention to the limitations of facial-recognition technology: it used the technology to (falsely) identify 28 of them as criminals.
Lawmakers or breakers?: The ACLU used Amazon’s Rekognition platform to compare federal lawmakers’ faces with 25,000 publicly available mugshots. The test incorrectly matched 28 members of Congress with known criminals. Rekognition is currently being used by a number of US police departments. In its defense, Amazon says law enforcement should only use the technology to “narrow the field.”
Err, Jeff? The incident may encourage lawmakers to take a closer look at technology that has significant privacy implications. Three of the US lawmakers erroneously identified have written a letter asking Amazon to explain how its system works. Two more have asked for a meeting with Amazon’s CEO, Jeff Bezos, to discuss the issue.
Troubling bias: Amazon’s software disproportionately misidentified African-American and Latino lawmakers. Racial bias has been found in other commercial facial-recognition systems as well.