Face Recognition

A camera and algorithm know it’s you.

With airports tightening security, biometric technologies such as face recognition may soon come online. A few techniques exist to match known facial profiles against those of strangers in a crowd, or to verify a person’s claimed identity, as at an ATM. But two stand out: local feature analysis, developed by Joseph Atick, who founded Jersey City, NJ-based Visionics; and eigenface, first demonstrated at Helsinki University of Technology, later developed at MIT, and currently marketed by Viisage Technology of Littleton, MA.

A system based on local feature analysis uses a camera and computer to identify a person in a crowd. First it scans a field of view for shapes that could be faces, then searches those shapes for facial features like the ones already stored in its memory. To be sure the eyes, nose and mouth belong to a living being rather than a mannequin, the program looks for eye blinks or other telltale facial movements.

The system then analyzes the pixels that make up the face image. It compares the darkness of each pixel to that of its neighbors, looking for areas where abrupt differences in value radiate outward from a single pixel. These changes can occur between the eyebrows and skin, between the eyes and eyelids, or on features that protrude, such as the cheekbones and nose. The system plots the location of each such pixel, known as an “anchor point,” then connects the dots, forming a mesh of triangles. It measures the angles of each triangle and produces a number made of 672 ones and zeroes that identifies the face. The program then tries to match that number to a similar one in its database. Because no match is ever perfect, the program ranks how confident it is in the identification. And since the program plots the anchor points by bone structure, disguises such as beards, makeup and eyeglasses won’t fool it.
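The anchor-point-and-triangle pipeline described above can be sketched in a few dozen lines of Python. This is a toy illustration rather than Visionics’ actual algorithm: the neighbor-difference test, the one-bit-per-angle encoding, and the Hamming-similarity confidence score are simplified stand-ins, and the 672-bit code length is borrowed from the article only for flavor.

```python
import numpy as np
from itertools import combinations

def find_anchor_points(gray, threshold=60, max_points=30):
    """Flag pixels whose value differs sharply from all four neighbors:
    a stand-in for the 'abrupt differences radiating outward' test."""
    h, w = gray.shape
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = int(gray[y, x])
            neighbors = (gray[y - 1, x], gray[y + 1, x], gray[y, x - 1], gray[y, x + 1])
            if all(abs(center - int(n)) > threshold for n in neighbors):
                points.append((y, x))
    return points[:max_points]

def encode_face(points, n_bits=672):
    """Quantize the angles of triangles formed by anchor points into one bit
    each, producing a fixed-length binary code (672 bits, as in the article)."""
    bits = []
    for a, b, c in combinations(points, 3):
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            v1 = np.subtract(q, p)
            v2 = np.subtract(r, p)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
            bits.append(1 if angle > 60 else 0)   # one coarse bit per angle
        if len(bits) >= n_bits:
            break
    bits += [0] * max(0, n_bits - len(bits))      # pad codes from sparse faces
    return np.array(bits[:n_bits], dtype=np.uint8)

def match_confidence(code, database):
    """Rank enrolled faces by Hamming similarity; since no match is ever
    perfect, return the best identity together with a confidence score."""
    scores = {name: 1.0 - float(np.mean(code ^ stored)) for name, stored in database.items()}
    return max(scores.items(), key=lambda kv: kv[1])

# Usage with synthetic data: enroll one "face", then re-identify a noisy copy of it.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
database = {"alice": encode_face(find_anchor_points(face))}
noisy = np.clip(face.astype(int) + rng.integers(-5, 6, size=face.shape), 0, 255).astype(np.uint8)
print(match_confidence(encode_face(find_anchor_points(noisy)), database))
```

The confidence score, not a yes/no answer, is the point: even a re-enrolled photo of the same person yields a slightly different code, so the system can only report its best-ranked match.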

Like local feature analysis, the eigenface method also reduces a face to a number. But instead of examining a collection of facial features locally, it considers the face as a whole. First it averages a database of head shots to produce one composite face. Then it compares the face being identified to that composite. An algorithm measures how much the target face deviates from the composite and generates a 128-digit personal identification number based on that deviation.

Both systems offer security, but at the price of constant surveillance. Whether society is willing to pay that cost remains to be seen.
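Here is a minimal sketch of the eigenface steps described above, assuming a small gallery of same-sized grayscale images held as NumPy arrays: the composite face is the pixel-wise average, the “eigenfaces” are the principal directions in which real faces deviate from it, and each face’s code is its projection onto those directions. The 128-component default echoes the article’s 128-digit number; the function names and toy data are illustrative, not Viisage’s product.

```python
import numpy as np

def build_eigenfaces(gallery, n_components=128):
    """Average the gallery into a composite face, then find the principal
    directions in which the gallery faces deviate from that composite."""
    X = np.stack([img.ravel().astype(float) for img in gallery])
    composite = X.mean(axis=0)                    # the averaged "composite face"
    deviations = X - composite
    # Right singular vectors of the deviation matrix are the eigenfaces.
    _, _, vt = np.linalg.svd(deviations, full_matrices=False)
    return composite, vt[:n_components]

def face_code(img, composite, eigenfaces):
    """Project a face's deviation from the composite onto the eigenfaces,
    yielding a short vector of numbers that identifies the face."""
    return eigenfaces @ (img.ravel().astype(float) - composite)

def closest_match(code, enrolled):
    """Return the enrolled identity whose code is nearest in Euclidean distance."""
    return min(enrolled.items(), key=lambda kv: np.linalg.norm(code - kv[1]))[0]

# Usage with synthetic 32x32 "faces": enroll two people, then re-identify a
# slightly noisy photo of the first. With only 20 gallery images, far fewer
# than 128 components are available, so the demo keeps 10.
rng = np.random.default_rng(1)
gallery = [rng.integers(0, 256, size=(32, 32)) for _ in range(20)]
composite, eigenfaces = build_eigenfaces(gallery, n_components=10)
enrolled = {
    "alice": face_code(gallery[0], composite, eigenfaces),
    "bob": face_code(gallery[1], composite, eigenfaces),
}
probe = gallery[0] + rng.integers(-3, 4, size=(32, 32))
print(closest_match(face_code(probe, composite, eigenfaces), enrolled))
```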

Got a new technology you’d like to see explained in Visualize? Send your ideas to visualize@technologyreview.com.
