
Today there are more low-quality video cameras than ever before: surveillance and traffic cameras, cell-phone cameras, and webcams. But modern search engines can’t identify objects reliably even in clear, static pictures, much less in grainy YouTube clips. A new software approach from researchers at Carnegie Mellon University could make it easier to identify a person’s face in low-resolution video. The researchers say that the software could be used to identify criminals or missing persons, or it could be integrated into next-generation video search engines.

Today’s face-recognition systems actually work quite well, says Pablo Hennings-Yeomans, a researcher at Carnegie Mellon who developed the system, provided that researchers can control the lighting, the angle of the face, and the type of camera used. “The new science of face recognition is dealing with unconstrained environments,” he says. “Our work, in particular, focuses on the problem of resolution.”

In order for a face-recognition system to identify a person, explains Hennings-Yeomans, it must first be trained on a database of faces. For each face, the system uses a so-called feature-extraction algorithm to discern patterns in the arrangement of image pixels; as it’s trained, it learns to associate some of those patterns with physical traits: eyes that slant down, for instance, or a prominent chin.
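For illustration, here is a minimal sketch of the kind of pipeline described above, not the CMU system itself: PCA ("eigenfaces") stands in for the feature-extraction algorithm, a nearest-neighbor matcher stands in for the recognizer, and the gallery images, identity labels, and 64x64 image size are all hypothetical.

```python
# Minimal sketch of training a face recognizer on a gallery of faces.
# PCA is a stand-in feature extractor; the data here is random placeholder pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training gallery: each row is a flattened 64x64 grayscale face.
rng = np.random.default_rng(0)
gallery = rng.random((200, 64 * 64))      # stand-in for real face images
labels = rng.integers(0, 20, size=200)    # identity of each gallery face

# Feature extraction: learn pixel patterns from the gallery and project onto them.
extractor = PCA(n_components=50).fit(gallery)
features = extractor.transform(gallery)

# Recognition: match a probe face to the closest identity in feature space.
matcher = KNeighborsClassifier(n_neighbors=1).fit(features, labels)

probe = rng.random((1, 64 * 64))          # a new face at the training resolution
print("predicted identity:", matcher.predict(extractor.transform(probe))[0])
```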

The problem, says Hennings-Yeomans, is that existing face-recognition systems can identify faces only in pictures with the same resolution as those with which the systems were trained. This gives researchers two choices if they want to identify low-resolution pictures: they can either train their systems using low-resolution images, which yields poor results in the long run, or they can add pixels, or resolution, to the images to be identified.
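A small illustration of that mismatch, continuing the sketch above (the specific resolutions are assumptions, not values from the CMU work): a feature extractor trained on high-resolution faces expects a fixed number of pixels, so a low-resolution frame cannot be fed to it directly.

```python
# The extractor above was trained on 64x64 faces (4096 pixel values per image);
# a face cropped from grainy video might be only 16x16 (256 values).
import numpy as np

trained_size = (64, 64)   # resolution of the training gallery
probe_size = (16, 16)     # resolution of a face from low-quality video

print(np.prod(trained_size))  # 4096 features expected by the trained extractor
print(np.prod(probe_size))    # 256 pixels available -- incompatible without resizing
```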

The latter approach, which is achieved by using so-called super-resolution algorithms, is common, but its results are mixed, says Hennings-Yeomans. A super-resolution algorithm makes assumptions about the shape of objects in an image and uses them to sharpen object boundaries. While the results may look impressive to the human eye, they don’t accord well with the types of patterns that face-recognition systems are trained to look for. “Super-resolution will give you an interpolated image that looks better,” says Hennings-Yeomans, “but it will have distortions like noise or artificial [features].”
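As a rough sketch of that upsampling route: real super-resolution algorithms are far more sophisticated, so plain cubic-spline interpolation stands in for them here, and the probe image and the 16-to-64-pixel sizes are assumptions for illustration only.

```python
# Upsample a low-resolution probe to the resolution the recognizer was trained on.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)
low_res_probe = rng.random((16, 16))               # grainy video frame

# Interpolate from 16x16 up to 64x64 (zoom factor 4, cubic spline).
upsampled_probe = zoom(low_res_probe, 4, order=3)
assert upsampled_probe.shape == (64, 64)

# The upsampled face can now be passed to the extractor/matcher from the earlier
# sketch, but interpolation invents pixel values, so the extracted features can
# drift from the patterns the system learned on genuine high-resolution faces.
```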


Credits: Pablo Hennings-Yeomans

Tagged: Computing, Communications, search, video, facial recognition, computer vision
