A camera trained on your face can reveal a great deal about you. We’ve written recently about how facial recognition software has improved dramatically in the last couple of years. So much so, in fact, that the technology is now robust enough to be used across China to authorize payments and catch trains.
But some recent technologically impressive advances are causing more of a stir. Earlier this week, Michal Kosinski and Yilun Wang from Stanford University posted a paper on the PsyArXiv preprint server, soon to be published in the Journal of Personality and Social Psychology, which shows that facial recognition AI is more accurate than humans at detecting sexual orientation from pictures of people.
Trained on 130,741 images taken from dating sites where people state their sexual orientation, the pair’s neural networks can distinguish between gay and heterosexual men in 81 percent of cases from just a single photo. That compares with an accuracy of 61 percent for a human. Given five images, the AI’s figure rises to 91 percent. (The same numbers for women are 71, 54, and 83, respectively.)
The approach is reminiscent of work described last year by researchers from Shanghai Jiao Tong University in China, who trained neural networks using photographs of known criminals and non-criminals. They were then able to correctly identify criminals from new images with an accuracy of 89.5 percent.
The Economist suggests that the same approach could be used to try to identify other qualities, such as IQ or political leaning. The phenomenon is, clearly, troubling to those who hold privacy dear—especially if the technology is used by authoritarian regimes where even a suggestion of homosexuality or criminal intent may be viewed harshly.
A natural conclusion: maybe we should obscure our faces while out on the streets?
Sadly, another new study may render that idea rather pointless. Late last week, Amarjot Singh from Cambridge University and his colleagues in India published a study on the arXiv showing that in many cases it is possible to identify a person using facial recognition even when their face is obscured. When people wore a hat, scarf, and glasses to cover their faces, the researchers’ algorithms were able to identify them with around 55 percent accuracy. The figure rose to 69 percent when just the glasses were removed.
Zeynep Tufekci from the Berkman Center for Internet and Society at Harvard University commented on this final finding on Twitter, describing the research as part of an “ever-increasing new capability that will serve authoritarians well.” She likely has a point.