IBM says it is no longer working on face recognition because it’s used for racial profiling

The news: IBM has said it will stop developing and selling facial recognition software, citing concerns that the technology is being used to promote racism. In a letter to Congress, IBM’s CEO Arvind Krishna said the tech giant opposes any technology used “for mass surveillance, racial profiling, violations of basic human rights and freedoms.”

He called for a “national dialogue” on whether and how it is appropriate for facial recognition technology to be used by domestic law enforcement agencies. The letter also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color.

Not a new concern: Activists and experts have been pointing out for years that facial recognition systems are biased, and flagging concerns about the technology’s potential for abuse. Their concerns are legitimate: a landmark study by the US National Institute of Standards and Technology last year confirmed that the majority of facial recognition algorithms perform worse on non-white faces.

How it’s been received: IBM is the first big tech company to withdraw from developing the technology altogether. Although the news has broadly been received positively by tech workers and, in particular, campaigners worried about the use of facial recognition, critics have pointed out that it’s hardly a great sacrifice for IBM to quit a market in which it barely had a foothold to begin with.

Despite that, it’s still a big tech company taking an unusually strong moral stance over one of the most controversial topics of the day. It’s particularly timely given the ongoing protests over police violence and racism in the US and around the world. But it’s unclear whether it will remain a one-off change by IBM or a move that helps to nudge other tech companies into action.
