Emotion recognition technology should be banned, says an AI research institute

There’s little scientific basis for emotion recognition technology, so it should be banned from use in decisions that affect people’s lives, says research institute AI Now in its annual report.
A booming market: Despite the lack of evidence that machines can work out how we’re feeling, emotion recognition is estimated to be at least a $20 billion market, and it’s growing rapidly. The technology is currently being used to assess job applicants and people suspected of crimes, and it’s being tested for further applications, such as in VR headsets to deduce gamers’ emotional states.
Further problems: There’s also evidence that emotion recognition can amplify race and gender disparities. Regulators should step in to heavily restrict its use, and until then, AI companies should stop deploying it, AI Now said. Specifically, it cited a recent study by the Association for Psychological Science, which spent two years reviewing more than 1,000 papers on emotion detection and concluded that facial expressions alone are an unreliable guide to how someone is actually feeling.
Other concerns: In its report, AI Now called for governments and businesses to stop using facial recognition technology for sensitive applications until the risks have been studied properly, and attacked the AI industry for its “systemic racism, misogyny, and lack of diversity.” It also called for mandatory disclosure of the AI industry’s environmental impact.