A Biologically Inspired Visual Search Engine

A startup’s new technology will let smartphones recognize objects by mimicking the human visual system.

Ever found a product in a store and wondered if you could get it cheaper somewhere else? Soon a visual search tool will be able to help. Take a snapshot of the product with your phone and it will automatically pull up online pricing information.

Wine time: An app called WINEfindr uses technology developed by Cortexica to identify bottles, and then find comparative pricing information online.

The technology, developed by Cortexica, a startup spun out of research conducted at Imperial College London, has already been used to create a wine comparison app called WINEfindr. Last week, the company launched an application-programming interface (API) for the technology, which will allow others to build similar apps.

“It’s a bit like the bar-code scanning apps that link a physical object in the real world to online content,” says Anil Bharath, a researcher at Imperial and cofounder of Cortexica. “But rather than having to create a QR code, it recognizes the object itself,” he says.

Cortexica’s VisualSearch platform uses techniques inspired by the human vision system to compensate for different lighting conditions. It identifies key features of an object irrespective of their orientation, size, or how dark or light they appear in the image. This makes it possible to identify products at a distance or even while they are moving. Cortexica’s technology can also spot logos and objects in videos.
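
Cortexica has not published the details of its algorithm, but the general idea of recognizing an object from local features that survive changes in scale, rotation, and lighting can be sketched with off-the-shelf tools. The snippet below is an illustrative example only, using OpenCV’s ORB detector and a brute-force matcher rather than Cortexica’s pipeline; the image file names are placeholders.

```python
# Illustrative sketch only -- not Cortexica's algorithm. It shows the general
# idea of matching an object via local features that are largely invariant
# to rotation, scale, and moderate lighting changes.
import cv2

# Placeholder file names: a reference product photo and a phone snapshot.
reference = cv2.imread("catalog_bottle.jpg", cv2.IMREAD_GRAYSCALE)
snapshot = cv2.imread("phone_snapshot.jpg", cv2.IMREAD_GRAYSCALE)

# ORB finds keypoints and computes binary descriptors for each image.
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, desc_ref = orb.detectAndCompute(reference, None)
kp_snap, desc_snap = orb.detectAndCompute(snapshot, None)

# Match descriptors by Hamming distance and sort by quality.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_ref, desc_snap), key=lambda m: m.distance)

# Many consistent matches suggest the snapshot contains the reference
# object, even at a different size, angle, or brightness.
print(f"{len(matches)} candidate matches; best distance {matches[0].distance}")
```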

“The technology is interesting, but they aren’t giving away much,” says James Ferryman of the computer-vision group at the University of Reading, in the U.K.

Ferryman notes that other visual search tools already exist, such as Google’s Goggles, which recognizes many objects, labels, and landmarks and automatically searches the Web for information about them, and TinEye, a service that lets users upload an image and search the Web for pages where that image appears.

Another of Cortexica’s cofounders, Jeffrey Ng, says his company’s technology is more accurate and scalable than any other now available.

The human vision system compares different points of an image with their neighbors—a process known as “edge extraction”—in order to identify features in a range of different conditions. “We have basically copied that architecture,” says Bharath. Cortexica uses graphics processing units (GPUs) to handle the parallel processing this requires.
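
In image-processing terms, comparing each point with its neighbors amounts to convolving the image with small difference filters, and because every output pixel can be computed independently, the work maps naturally onto a GPU. The following is a minimal CPU sketch of edge extraction, again an illustration of the principle rather than Cortexica’s implementation.

```python
# Minimal edge-extraction sketch: each output pixel is derived from the
# differences between a pixel and its neighbours. The per-pixel independence
# is what makes this kind of operation easy to parallelise on a GPU.
import cv2
import numpy as np

image = cv2.imread("phone_snapshot.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

# Sobel filters compare each pixel with its horizontal and vertical neighbours.
gx = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)

# Edge strength is the magnitude of the local intensity gradient.
edges = np.sqrt(gx ** 2 + gy ** 2)
cv2.imwrite("edges.png", np.clip(edges, 0, 255).astype(np.uint8))
```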

Coping with such variations is a major issue in computer vision, says Ferryman. “It’s crucial. If you can’t have this invariance, then you can’t do reliable matching,” he says.
