
Chandratillake believes that his company has figured out a better way to search for videos across the Web and for TV shows in particular. The Blinkx search engine uses speech-recognition technology in addition to standard metadata and surrounding-text searches. For each video that the Blinkx engine encounters, it extracts audio information (strings of phonemes) that it uses to create a searchable index of words. The recognition system assembles these phonemes into words by taking into account which words typically appear in which contexts; "sail" might appear with "boat," for instance. The system also uses all other information, from metadata to surrounding text, that provides clues as to how the phonemes fit together. (See "Millions of Videos, and Now a Way to Search Inside Them.")
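
To make the idea concrete, here is a minimal, purely illustrative sketch of how phoneme chunks might be resolved into words using context clues from metadata or surrounding text. The lexicon, co-occurrence scores, and phoneme strings are invented for the example; this is not Blinkx's actual data or algorithm.

# Illustrative sketch only: a toy phoneme-to-word decoder in the spirit of the
# approach described above. All data below is made up for the example.

# Hypothetical pronunciation lexicon mapping phoneme sequences to candidate words.
LEXICON = {
    ("S", "EY", "L"): ["sail", "sale"],
    ("B", "OW", "T"): ["boat"],
}

# Hypothetical co-occurrence scores: how strongly word pairs tend to appear together.
CONTEXT_SCORES = {
    ("sail", "boat"): 0.9,
    ("sale", "boat"): 0.1,
}

def decode(phoneme_chunks, metadata_words):
    """Pick the word for each phoneme chunk that best fits the surrounding context."""
    decoded = []
    for chunk in phoneme_chunks:
        candidates = LEXICON.get(tuple(chunk), [])
        if not candidates:
            continue
        # Score each candidate by its co-occurrence with words drawn from the
        # metadata and surrounding text, mirroring the "sail goes with boat" idea.
        best = max(
            candidates,
            key=lambda w: sum(
                CONTEXT_SCORES.get((w, ctx), 0.0) for ctx in metadata_words
            ),
        )
        decoded.append(best)
    return decoded

# "sail" wins over "sale" because the page's metadata mentions boats.
print(decode([["S", "EY", "L"]], metadata_words=["boat", "harbor"]))  # ['sail']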

Blinkx Remote adds a few new tricks to the company's standard speech-recognition system. Chandratillake explains that Blinkx has developed software that can automatically match a searched television show to other types of information from resources beyond the one that supplied the video. This requires being able to identify and assemble disparate pieces of information from around the Web (video, text, and links from numerous different sources, such as ABC.com and IMDB.com) to automatically create a concise result for a single show. In this way, he says, "it's sort of like the Semantic Web approach," in which information from a number of different sources is combined to produce a high-level concept. In the case of Blinkx Remote, that high-level concept is a rich, multimedia set of data about a given television episode.
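
A rough sketch of what that aggregation step could look like, assuming hypothetical per-source records about a single episode (the source names, URLs, and field names are invented, not Blinkx's data model): later sources fill in fields the first source lacks, and link lists are merged.

# Illustrative sketch only: merging partial records about one TV episode from
# several hypothetical sources into a single consolidated result.

def merge_episode_records(records):
    """Combine per-source dictionaries into one consolidated episode record.

    Later sources fill in fields that earlier ones lack; list-valued fields
    (such as links) are concatenated and de-duplicated.
    """
    merged = {}
    for record in records:
        for key, value in record.items():
            if isinstance(value, list):
                merged.setdefault(key, [])
                merged[key].extend(v for v in value if v not in merged[key])
            else:
                merged.setdefault(key, value)
    return merged

records = [
    {"title": "Pilot", "video_url": "http://example-network.test/pilot"},     # video host
    {"title": "Pilot", "summary": "The series opener.", "cast": ["A", "B"]},  # episode guide
    {"links": ["http://example-reviews.test/pilot"]},                         # reviews site
]

print(merge_episode_records(records))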

The new tool "should be helpful" for finding television shows, says Horacio Franco, chief scientist in the speech-technology and research laboratory at SRI International, a research company based in Menlo Park, CA. Franco is working on systems that can recognize speech in video with high accuracy by matching audio to large vocabulary databases. Recognizing speech in video is a tough problem, though, because there is often background noise, or multiple people are talking, he says. Franco suspects that eventually the most accurate video-search engines will also incorporate optical character-recognition software that can read words appearing in videos, such as names on storefronts, episode credits, and news tickers.
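
A toy sketch of the combined approach Franco describes, assuming a simple inverted index in which words recovered by speech recognition and words read off the screen by OCR are pooled into one searchable index; the field names and sample clips are invented for illustration.

# Illustrative sketch only: a toy inverted index that treats speech-recognition
# transcript words and OCR'd on-screen words as one searchable pool.

from collections import defaultdict

def build_index(videos):
    """Map each word to the set of videos whose transcript or on-screen text contains it."""
    index = defaultdict(set)
    for video_id, fields in videos.items():
        words = fields.get("transcript", []) + fields.get("ocr_text", [])
        for word in words:
            index[word.lower()].add(video_id)
    return index

videos = {
    "clip-1": {"transcript": ["breaking", "news"], "ocr_text": ["Downtown", "Bakery"]},
    "clip-2": {"transcript": ["weather", "update"], "ocr_text": ["news", "ticker"]},
}

index = build_index(videos)
print(sorted(index["news"]))  # ['clip-1', 'clip-2']: hits from both speech and on-screen text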

Right now, Chandratillake says, Blinkx has found about 300 online shows that can be accessed using Blinkx Remote. In total, Blinkx has indexed more than seven million hours of video and audio content. And while the debate over copyrighted material rages around YouTube, Blinkx avoids these issues because users don’t upload videos. Instead, Blinkx indexes videos that are hosted by other sources (including YouTube). It has partnerships with more than 100 content providers, indexing video from sources ranging from A&E Television Networks to Rollingstone.com.




