
Ever found a product in a store and wondered if you could get it cheaper somewhere else? Soon a visual search tool will be able to help. Take a snapshot of the product with your phone and it will automatically pull up online pricing information.

The technology, developed by Cortexica, a startup spun out of research conducted at Imperial College London, has already been used to create a wine comparison app called WINEfindr. Last week, the company launched an application-programming interface (API) for the technology, which will allow others to build similar apps.

“It’s a bit like the bar-code scanning apps that link a physical object in the real world to online content,” says Anil Bharath, a researcher at Imperial and cofounder of Cortexica. “But rather than having to create a QR code, it recognizes the object itself,” he says.

Cortexica’s VisualSearch platform uses techniques inspired by the human vision system to compensate for different lighting conditions. It identifies key features of an object irrespective of their orientation, size, or how dark or light they appear in the image. This makes it possible to identify products at a distance or even while they are moving. Cortexica’s technology can also spot logos and objects in videos.
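Cortexica has not published the details of its algorithms, but the general idea of matching local image features that tolerate changes in scale, orientation, and lighting can be sketched with off-the-shelf tools. The snippet below is only an illustration using OpenCV's ORB features and hypothetical image file names; it is not Cortexica's implementation.

```python
# Illustrative sketch only: Cortexica has not disclosed its method.
# This matches a phone snapshot against a reference catalog image using
# local features that are robust to changes in scale, rotation, and lighting.
import cv2

# Hypothetical file names for a phone snapshot and a catalog image.
query = cv2.imread("snapshot.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("catalog_item.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)            # feature detector/descriptor
kp_q, des_q = orb.detectAndCompute(query, None)
kp_r, des_r = orb.detectAndCompute(reference, None)

# Brute-force matching with cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_q, des_r)

# A crude score: the more strong matches, the more likely the two images
# show the same product.
good = [m for m in matches if m.distance < 50]
print(f"{len(good)} strong feature matches")
```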

“The technology is interesting, but they aren’t giving away much,” says James Ferryman of the computer-vision group at the University of Reading, in the U.K.

Ferryman notes that other visual search tools already exist, such as Google Goggles, which recognizes many objects, labels, and landmarks and automatically searches the Web for information about them; and TinEye, a service that lets users upload an image and find the Web pages on which the pictured item appears.

Another of Cortexica’s cofounders, Jeffrey Ng, says his company’s technology is more accurate and scalable than any other now available.

The human vision system compares different points of an image with their neighbors, a process known as “edge extraction,” in order to identify features under a range of different conditions. “We have basically copied that architecture,” says Bharath. Cortexica uses graphics processing units (GPUs) to handle the parallel processing.
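To make the neighbor-comparison idea concrete, the sketch below computes simple image gradients with Sobel filters in OpenCV. It illustrates the general principle of edge extraction only; the file name is a placeholder, and this is not Cortexica's GPU pipeline.

```python
# Minimal sketch of edge extraction: each pixel is compared with its
# neighbors via horizontal and vertical gradients (Sobel filters).
import cv2
import numpy as np

# Hypothetical input image, loaded as grayscale.
img = cv2.imread("snapshot.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)  # horizontal neighbor differences
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)  # vertical neighbor differences

# Gradient magnitude is large where intensity changes sharply, i.e. at edges.
edges = np.sqrt(gx ** 2 + gy ** 2)

# Edge strength depends on local contrast rather than absolute brightness,
# which is one reason edge-based features cope with varying lighting.
print("mean edge strength:", edges.mean())
```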

Coping with variations and resolving them is a major issue in computational vision, says Ferryman. “It’s crucial. If you can’t have this invariance, then you can’t do reliable matching,” he says.


Credit: Cortexica

Tagged: Computing, apps, cell phones, augmented reality
