
Facial recognition has to be regulated to protect the public, says AI report

The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?
December 6, 2018

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

The speed at which facial recognition has grown comes down to the rapid development of a type of machine learning known as deep learning. Deep learning uses large tangles of computations—very roughly analogous to the wiring in a biological brain—to recognize patterns in data. It is now able to carry out pattern recognition with jaw-dropping accuracy. 

The tasks that deep learning excels at include identifying objects, or indeed individual faces, in even poor-quality images and video. Companies have rushed to adopt such tools.
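To make that concrete, here is a minimal sketch of the kind of matching pipeline such tools use, written with the open-source Python library face_recognition, which wraps dlib's deep-learning face encoder. The image filenames are hypothetical placeholders, and the 0.6 distance threshold is simply the library's default, not a claim about any deployed system.

```python
# A minimal sketch of deep-learning face matching using the open-source
# face_recognition library (a wrapper around dlib's CNN face encoder).
# The image paths below are hypothetical placeholders.
import face_recognition

# The deep network encodes each face as a 128-dimensional vector.
known_image = face_recognition.load_image_file("employee_photo.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Two faces "match" if their encodings are closer than a distance threshold.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding,
                                           tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={match}, distance={distance:.3f}")
```

A few lines like these, pointed at a photo database and a video feed, are essentially all it takes to turn commodity hardware into a surveillance tool, which is why the technology has spread so fast.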

The report calls for the US government to take general steps to improve the regulation of this rapidly moving technology amid much debate over the privacy implications. “The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes,” it says.

The report suggests, for instance, extending the power of existing government bodies in order to regulate AI issues, including use of facial recognition: “Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards.”

It also calls for stronger consumer protections against misleading claims regarding AI; urges companies to waive trade-secret claims when the accountability of AI systems is at stake (when algorithms are being used to make critical decisions, for example); and asks that they govern themselves more responsibly when it comes to the use of AI.

And the document suggests that the public should be warned when facial-recognition systems are being used to track them, and that they should have the right to reject the use of such technology.

Implementing such recommendations could prove challenging, however: the toothpaste is already out of the tube. Facial recognition is being adopted and deployed incredibly quickly. It’s used to unlock Apple’s latest iPhones and enable payments, while Facebook scans millions of photos every day to identify specific users. And just this week, Delta Air Lines announced a new face-scanning check-in system at Atlanta’s airport. The US Secret Service is also developing a facial-recognition security system for the White House, according to a document highlighted by the ACLU. “The role of AI in widespread surveillance has expanded immensely in the U.S., China, and many other countries worldwide,” the report says.

In fact, the technology has been adopted on an even grander scale in China. This often involves collaborations between private AI companies and government agencies. Police forces have used AI to identify criminals, and numerous reports suggest it is being used to track dissidents.

Even when it is not being used in ethically dubious ways, the technology comes with some built-in issues. For example, some facial-recognition systems have been shown to encode bias. Researchers at the ACLU demonstrated that Rekognition, a tool offered through Amazon’s cloud platform, is more likely to misidentify minorities as criminals: in one test, it falsely matched 28 members of Congress, disproportionately people of color, against a database of mug shots.
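To see what "encoding bias" means in measurable terms, here is a toy sketch of the kind of audit the ACLU ran: comparing false-match rates across demographic groups. The numbers and group labels below are entirely made up for illustration and do not reflect any real system's performance.

```python
# A toy illustration (with fabricated numbers) of how auditors compare
# false-match rates across demographic groups; the data is hypothetical.
from collections import defaultdict

# Each record: (demographic group, whether the system falsely matched them).
audit_results = [
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
false_matches = defaultdict(int)
for group, falsely_matched in audit_results:
    totals[group] += 1
    false_matches[group] += falsely_matched  # bools count as 0 or 1

# A large gap between groups is evidence the system encodes bias.
for group in totals:
    rate = false_matches[group] / totals[group]
    print(f"{group}: false-match rate {rate:.0%}")
```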

The report also warns about the use of emotion tracking in face-scanning and voice detection systems. Tracking emotion this way is relatively unproven, yet it is being used in potentially discriminatory ways—for example, to track the attention of students.

“It’s time to regulate facial recognition and affect recognition,” says Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “Claiming to ‘see’ into people’s interior states is neither scientific nor ethical.”
