
The tubular crest that runs over the top of your ear is known as the helix. It’s quite distinctive, even if it doesn’t possess the pointy bit that proves you’re descended from a monkey. Best of all, it doesn’t change as you age, unlike the iris, which, along with the face, is among the most popular features by which machines recognize humans.

The problem with using ears (or any other feature, such as your fingerprint or even the way you walk) for biometric security is that a computer must first find and isolate the feature to be identified.

That sounds like a simple problem only because humans do it so easily. Feature recognition is one of the biggest challenges of computer vision.

Fortunately, researchers in the School of Electronics and Computer Science at the University of Southampton have come up with a means of identifying ears with a success rate of 99.6% (pdf). That doesn’t mean it can identify who owns what ear at that rate, just that it can successfully complete the first step of any biometric identification exercise, known as enrollment. (Recognition is, of course, the second step.)

If you’re into algorithms, the way they got such consistent results is no less interesting than the potential applications of their work. (Think Minority Report, but instead of keeping around his old eyes, Tom Cruise has to cart around his old ears.)

The researchers followed a burgeoning trend in image analysis in which the algorithm used to highlight a feature is based on some actual physical process. A classic example of this is the use of an algorithm in which every pixel is assumed to act on every other pixel with a gravitational or magnetic pull proportional to its intensity. Add up all those forces, and you get a vector field that uniquely represents the image.
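Here’s a minimal sketch of what such a force-field transform might look like. The function name and the exact force law (intensity over squared distance, summed over all other pixels) are assumptions for illustration, not the formulation used in any particular paper:

```python
import numpy as np

def force_field(image):
    """Sketch of a force-field transform: every pixel pulls on every
    other pixel with a force proportional to its intensity and inversely
    proportional to squared distance (a gravitational analogy).  Returns
    a complex array whose real and imaginary parts are the x and y
    components of the summed force at each pixel."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    positions = (xs + 1j * ys).ravel()        # pixel positions as complex numbers
    intensities = image.astype(float).ravel()
    field = np.zeros(h * w, dtype=complex)
    for i, p in enumerate(positions):
        d = positions - p                     # displacement to every other pixel
        r = np.abs(d)
        r[i] = np.inf                         # a pixel exerts no force on itself
        # direction d/|d| times magnitude I/|d|^2  ->  I * d / |d|^3
        field[i] = np.sum(intensities * d / r ** 3)
    return field.reshape(h, w)

# A 2x2 image with one bright pixel at (x=1, y=0): every other pixel's
# force vector points toward it.
small = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
F = force_field(small)
```

The resulting vector field is what uniquely represents the image; feature extraction then works on the field rather than the raw pixels.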

In this experiment, the researchers used the analogy of rays of light passing through the pixels to help them trace the helix of the ear. Depending on the intensity of the pixel, a hypothetical ray of light is either refracted by some angle or even reflected.
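The light-ray idea can be sketched in a few lines. Here intensity is mapped to a refractive index and the ray bends according to Snell’s law; the mapping `n = 1 + intensity` and all names are illustrative assumptions, not the researchers’ actual model:

```python
import math

def trace_ray(intensities, theta0, n0=1.0):
    """Sketch of the light-ray analogy: map each pixel's intensity to a
    refractive index (n = 1 + intensity) and propagate a ray through the
    pixels with Snell's law.  At each pixel boundary the ray is refracted
    by some angle; if Snell's law has no solution, it is reflected
    (total internal reflection).  Returns a list of (event, angle) pairs."""
    steps = []
    theta, n_prev = theta0, n0
    for intensity in intensities:
        n = 1.0 + intensity
        s = n_prev * math.sin(theta) / n   # Snell: n1 sin(t1) = n2 sin(t2)
        if abs(s) > 1.0:
            theta = math.pi - theta        # no transmitted ray: reflect it
            steps.append(("reflected", theta))
        else:
            theta = math.asin(s)
            steps.append(("refracted", theta))
            n_prev = n                     # ray now travels in the new medium
    return steps

# A ray entering a bright pixel bends toward the normal, then bends back
# when it exits into a dark pixel; Snell's law is reversible.
steps = trace_ray([1.0, 0.0], theta0=0.5)
```

Tracing many such rays through an ear image makes them pile up along the high-contrast curve of the helix, which is what lets the algorithm localize it.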

The advantage of using physical analogies to define vision algorithms is that they make intuitive sense and can be grasped by our puny human minds, allowing engineers to guess what the results of adjusting relevant parameters will be.

Follow Mims on Twitter or contact him via email.


