AI that identifies people in crowds is already pervasive in China—and now it’s augmenting police officers’ eyes, too.
Smart specs: The Wall Street Journal says the hardware, made by LLVision, sends data from its camera to a handheld device, where AI software crunches through an offline database of 10,000 pictures of suspects in about 100 milliseconds to help officers spot criminals. It’s unclear how accurate it is.
How they’re used: The glasses will be used to monitor busy crowds in China as citizens travel for next week’s Lunar New Year. But the People’s Daily newspaper says they’ve already been tested at Zhengzhou railway station, where they caught seven wanted criminals and 26 people traveling with fake IDs.
Why it matters: LLVision says a big benefit of the specs is that they put facial recognition wherever a police officer looks, rather than limiting it to fixed CCTV cameras. You can decide if that’s good or bad.