
A new kind of visual-search engine has been developed to automatically scour sports footage for clips showing specific types of action and events. According to its creators, borrowing a few tricks from the field of machine translation seems to make all the difference in improving the accuracy of video search.

Despite recent advances in visual-search engines, accurate video search remains a challenge, particularly when dealing with sports footage, says Michael Fleischman, a computer scientist at MIT. “The difference between a home run and a foul ball is often hard for a human novice to notice, and nearly impossible for a machine to recognize.”

To cope with growing video repositories, cutting-edge systems are now emerging that use automatic speech recognition (ASR) to try to improve the search accuracy by generating text transcripts. (See “More-Accurate Video Search.”)
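The core of that transcript-based approach is simply indexing the recognizer's time-stamped output. The sketch below assumes the ASR engine returns a list of (timestamp, word) pairs; the data structure and names are illustrative, not drawn from any particular system.

```python
from collections import defaultdict

def build_transcript_index(asr_output):
    """Map each spoken word to the timestamps at which ASR heard it.

    asr_output: list of (timestamp_seconds, word) pairs, e.g. from any
    speech-recognition engine. Purely illustrative structure.
    """
    index = defaultdict(list)
    for timestamp, word in asr_output:
        index[word.lower()].append(timestamp)
    return index

def search(index, query):
    """Return the timestamps at which the query term was spoken."""
    return index.get(query.lower(), [])

# Example: commentary mentioning "home" twice.
asr = [(12.4, "home"), (12.7, "run"), (95.1, "strike"), (301.2, "home"), (301.5, "run")]
idx = build_transcript_index(asr)
print(search(idx, "home"))  # [12.4, 301.2]
```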

The trouble is, search terms are often repeated out of context, says Fleischman. This is particularly the case in sports footage, such as baseball, in which commentators frequently talk about home runs and other events regardless of what is actually happening on the field.

To address this issue, Fleischman and Deb Roy, director of MIT’s Cognitive Machines Group, developed a system that provides a way to associate search terms with aspects of the video, and not just with what is being said as the video plays. “We collect hundreds of hours of baseball games and automatically encode all the video based on features, such as how much grass is visible and whether there is cheering in the background,” says Fleischman.
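A minimal sketch of that kind of low-level encoding, assuming each video segment is summarized by a handful of hand-built features such as the fraction of grass-colored pixels and the loudness of the crowd; the color rule, the loudness proxy, and the feature names below are illustrative placeholders, not the group's actual detectors.

```python
import numpy as np

def grass_fraction(frame_rgb):
    """Rough fraction of 'grass-like' pixels: green channel dominant.

    frame_rgb: H x W x 3 uint8 array. The color rule is a crude stand-in
    for whatever visual detectors the real system used.
    """
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    grass_mask = (g > r + 20) & (g > b + 20)
    return float(grass_mask.mean())

def crowd_noise_level(audio_samples):
    """Crude loudness proxy: RMS energy of the audio segment."""
    return float(np.sqrt(np.mean(np.square(audio_samples.astype(float)))))

def encode_segment(frame_rgb, audio_samples):
    """Summarize one video segment as a small feature vector."""
    return {
        "grass": grass_fraction(frame_rgb),
        "cheering": crowd_noise_level(audio_samples),
    }

# Example: a mostly green frame with quiet audio.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 1] = 200
audio = np.array([0.1, -0.2, 0.15, -0.05])
print(encode_segment(frame, audio))  # high "grass", low "cheering"
```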

Using machine-learning algorithms, the researchers analyze these video clips to identify discrete temporal “events” by extracting patterns in the different types of shots and the order in which they occur. For example, a fly ball could be described as a sequence involving the camera panning up and then down, one that occurs during a field scene and before a pitching scene.
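One simple way to surface such recurring patterns is to label each shot with a coarse type and count the short shot-type sequences (n-grams) that repeat across many games. The labels and the plain n-gram counting below are a simplification of the pattern mining described in the research, shown only to illustrate the idea.

```python
from collections import Counter

def frequent_event_patterns(shot_sequences, n=3, min_count=2):
    """Find shot-type n-grams that recur across games.

    shot_sequences: list of games, each a list of coarse shot-type labels
    such as "pitch", "field", "pan_up", "pan_down". Labels are illustrative.
    """
    counts = Counter()
    for game in shot_sequences:
        for i in range(len(game) - n + 1):
            counts[tuple(game[i:i + n])] += 1
    return [(pattern, c) for pattern, c in counts.most_common() if c >= min_count]

games = [
    ["pitch", "pan_up", "pan_down", "field", "pitch"],
    ["pitch", "pan_up", "pan_down", "field", "crowd"],
]
print(frequent_event_patterns(games))
# A recurring ("pitch", "pan_up", "pan_down") pattern might stand in for a fly ball.
```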

The search system then tries to map these events to words that appear in the transcript text by looking at their probabilistic distribution. According to Fleischman, this technique is commonly used in automatic machine translation, in which words from one language are automatically mapped onto words from another, even though they may appear in completely different orders or at different frequencies. In this case, it’s a matter of translating video into words, Fleischman says. The system tries to find the best “translation” of the events in the video into the words uttered by the announcer.
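In the same spirit as the word-alignment models used in statistical machine translation, a toy version of this video-to-words mapping can be written as expectation-maximization over co-occurring (event, word) pairs, in the style of IBM Model 1. The sketch below assumes each clip comes with a set of detected events and its transcript words; it is an assumed formulation for illustration, not the actual MIT implementation.

```python
from collections import defaultdict

def align_events_to_words(clips, iterations=10):
    """Toy IBM-Model-1-style alignment between video events and words.

    clips: list of (events, words) pairs, e.g.
        (["fly_ball"], ["deep", "fly", "ball"])
    Returns p(word | event), so a query word can be traced back to the
    video events most likely to "generate" it.
    """
    # Uniform initialization of translation probabilities p(word | event).
    prob = defaultdict(lambda: 1e-3)
    for _ in range(iterations):
        counts = defaultdict(float)
        totals = defaultdict(float)
        for events, words in clips:
            for w in words:
                # Expectation: spread each word's count over the clip's events.
                norm = sum(prob[(w, e)] for e in events) or 1e-12
                for e in events:
                    frac = prob[(w, e)] / norm
                    counts[(w, e)] += frac
                    totals[e] += frac
        # Maximization: renormalize p(word | event) from the fractional counts.
        for (w, e), c in counts.items():
            prob[(w, e)] = c / totals[e]
    return prob

clips = [
    (["fly_ball"], ["deep", "fly", "ball"]),
    (["home_run"], ["home", "run", "gone"]),
    (["fly_ball", "home_run"], ["fly", "ball", "home", "run"]),
]
p = align_events_to_words(clips)
print(round(p[("run", "home_run")], 2))  # "run" aligns mostly with the home_run event
```

A query for “home run” can then be answered by returning clips containing the events that best explain those words, rather than every clip in which the announcer happens to say them.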


Credit: Michael Fleischman, MIT, and Major League Baseball

