
Last week at SIGGRAPH, an international conference on computer graphics, a group of researchers presented an innovative system designed to analyze images of the sky. Most commercial image-search systems figure out what’s in an image by analyzing the associated text, such as the words surrounding a picture on a Web page or the tags provided by humans. But ideally, the software would analyze the content of the image itself. Much research has been done in this area, but so far no single system has solved the problem. The new system, called SkyFinder, could offer important insight into how to make an intuitive, automatic, scalable search tool for all images.

Jian Sun, who worked on SkyFinder and is the lead researcher for the Visual Computing Group at Microsoft Research, says that the traditional approach to image search sometimes leads to nonsensical results when a computer misinterprets the surrounding text. Typically, engines that analyze the content of images rather than text need a picture to guide the search: something submitted by the user that looks a lot like the intended result. Unfortunately, such an image may not be easy for the user to find. Sun says SkyFinder, in contrast, provides good results while also letting the user interact intuitively with the search engine.

To search for a specific kind of sky image, the user simply enters a request in fairly natural language, such as “a sky covered with black clouds, with the horizon at the very bottom.” SkyFinder will offer suggested images matching that description.
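Microsoft has not published SkyFinder’s parser, but the step from a free-form description to structured search attributes can be imagined as a simple phrase lookup. In the Python sketch below, the phrase table, field names, and `parse_query` function are hypothetical illustrations, not the system’s actual vocabulary:

```python
# Hypothetical phrase-to-attribute table; SkyFinder's real vocabulary
# and parsing rules are not publicly documented.
PHRASE_MAP = {
    "blue sky": ("category", "blue sky"),
    "black clouds": ("category", "dark clouds"),
    "sunset": ("category", "sunset"),
    "horizon at the very bottom": ("horizon", "low"),
    "horizon in the middle": ("horizon", "middle"),
}

def parse_query(text: str) -> dict:
    """Scan the query for known phrases and collect structured attributes."""
    attrs = {}
    lowered = text.lower()
    for phrase, (field, value) in PHRASE_MAP.items():
        if phrase in lowered:
            attrs[field] = value
    return attrs

print(parse_query("a sky covered with black clouds, with the horizon at the very bottom"))
# -> {'category': 'dark clouds', 'horizon': 'low'}
```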

Each image is processed as it is added to the database. Using a popular method called “bag of words,” Sun explains, the image is broken into small patches, each of which is analyzed and assigned a codeword describing it visually. By analyzing the patterns of the codewords, the system classifies the image into categories such as “blue sky” or “sunset,” and determines the position of the sun and horizon. By doing this work offline, Sun says, the system can easily be scaled to search very large image databases. (The SkyFinder database currently contains half a million images.)
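The presentation describes the pipeline rather than publishing code, but the bag-of-words step (patches, codewords, histograms) can be sketched in a few lines of Python. The patch size, the vocabulary size, and the use of scikit-learn’s k-means clustering here are assumptions for illustration, not SkyFinder’s actual parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

PATCH = 8  # patch size in pixels (an assumption; the real system's may differ)

def extract_patches(image):
    """Split a grayscale image (2-D array) into flattened PATCH x PATCH tiles."""
    h, w = image.shape
    tiles = [
        image[y:y + PATCH, x:x + PATCH].ravel()
        for y in range(0, h - PATCH + 1, PATCH)
        for x in range(0, w - PATCH + 1, PATCH)
    ]
    return np.array(tiles, dtype=float)

def build_codebook(training_images, n_codewords=256):
    """Learn a visual vocabulary by clustering patches from training images."""
    all_patches = np.vstack([extract_patches(img) for img in training_images])
    return KMeans(n_clusters=n_codewords, n_init=10).fit(all_patches)

def bag_of_words(image, codebook):
    """Describe one image as a normalized histogram of codeword occurrences."""
    words = codebook.predict(extract_patches(image))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```

Because each image’s histogram is computed once, at ingest time, a query reduces to comparing precomputed vectors, which is what lets the approach scale to large collections.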

It’s also possible to fine-tune search terms using a visual interface. The system offers a screen, for example, where the user can adjust icons to show the desired positions of the sun and horizon. Those coordinates are added to the search.
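The article doesn’t say exactly how those icon positions become constraints; one plausible reading is a geometric filter over each image’s precomputed sun and horizon coordinates. The record fields and tolerance in this sketch are hypothetical:

```python
def match_layout(records, sun_xy=None, horizon_y=None, tol=0.15):
    """Keep database records whose precomputed sun/horizon positions fall
    within `tol` (in normalized 0-1 image coordinates) of the request.
    Field names like 'sun_x' are illustrative, not SkyFinder's schema."""
    hits = []
    for rec in records:
        if sun_xy is not None:
            dx, dy = rec["sun_x"] - sun_xy[0], rec["sun_y"] - sun_xy[1]
            if (dx * dx + dy * dy) ** 0.5 > tol:
                continue  # sun too far from the requested spot
        if horizon_y is not None and abs(rec["horizon_y"] - horizon_y) > tol:
            continue  # horizon at the wrong height
        hits.append(rec)
    return hits
```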

SkyFinder arranges images logically on the screen, for example from blue sky to cloudy sky, or from daytime to sunset. Once the user has found an image she likes, she can use it to guide a more targeted search to find similar images.
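A common way to support this kind of query-by-example search, consistent with the bag-of-words representation sketched above, is to rank the database by similarity between codeword histograms. The histogram-intersection measure below is one standard choice, assumed here for illustration rather than confirmed as SkyFinder’s metric:

```python
import numpy as np

def rank_by_similarity(query_hist, db_hists):
    """Order database images by histogram-intersection similarity to the
    example image. `db_hists` is an (N, K) array of per-image histograms;
    a larger intersection means more similar. Returns indices, best first."""
    sims = np.minimum(db_hists, query_hist).sum(axis=1)
    return np.argsort(-sims)
```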


Credit: Hamed Saber, CoreBurn, Beyond the Lens, pfly (CC2.0 license)

Tagged: Computing, Web, machine learning, computer vision, image search, SIGGRAPH

