
Once a new video clip is encoded using such patterns, the system looks for co-occurrences between the matched patterns and phrases. “In this way, the system is able to find correlations with events in the game, without requiring a human to explicitly design representations for any specific events,” says Fleischman.
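The co-occurrence idea Fleischman describes can be sketched as a simple statistical association measure between visual pattern labels and spoken phrases. The sketch below uses pointwise mutual information as one plausible way to score such correlations; the function name, input format, and choice of PMI are illustrative assumptions, not the researchers' actual method.

```python
from collections import Counter
from math import log2

def pmi_scores(clips):
    """Score how strongly each (visual pattern, phrase) pair co-occurs
    across a collection of clips, using pointwise mutual information.
    `clips` is a list of (patterns, phrases) tuples, where each element
    is a set of labels observed in one clip. Positive PMI means the
    pair occurs together more often than chance would predict.
    (Hypothetical sketch; not the actual MIT system.)
    """
    n = len(clips)
    pat_counts, phr_counts, joint = Counter(), Counter(), Counter()
    for patterns, phrases in clips:
        pat_counts.update(patterns)
        phr_counts.update(phrases)
        joint.update((p, w) for p in patterns for w in phrases)
    return {
        (p, w): log2((c / n) / ((pat_counts[p] / n) * (phr_counts[w] / n)))
        for (p, w), c in joint.items()
    }
```

A pattern that reliably appears alongside the announcer's phrase "home run" would receive a high score, letting the system associate the two without any hand-built event model.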

Giving precise figures on the system's accuracy is difficult because there is no standard benchmark for judging it. Even so, trials carried out by Fleischman and Roy, searching six baseball games for occurrences of home runs, showed promise. Visual search alone yielded poor results, as did speech alone. "However, when you combine the two sources of information, we have seen results that nearly double the performance of either one on their own," says Fleischman.
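The benefit of combining the two cues can be illustrated with a simple fusion rule: rank candidate clips by a weighted sum of a visual score and a speech score, so that clips where both sources agree rise to the top. The clip names, scores, and equal weighting below are invented for illustration; the article does not describe the actual fusion method.

```python
def rank_clips(clips, alpha=0.5):
    """Rank candidate clips for a query event (e.g. 'home run') by a
    weighted combination of a visual-pattern score and a speech score.
    Either cue alone can be fooled (a replay looks like the event; an
    excited announcer may be discussing an earlier play), but a clip
    scoring well on both is a stronger match. Hypothetical sketch.

    `clips` is a list of (clip_id, visual_score, speech_score) tuples;
    `alpha` weights the visual cue (0.5 treats both cues equally).
    """
    scored = [(clip_id, alpha * v + (1 - alpha) * s)
              for clip_id, v, s in clips]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

With equal weights, a clip with strong visual evidence but no supporting commentary (or vice versa) is outranked by one where both sources fire, which matches the pattern in Fleischman's trials.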

The researchers are now looking to extend the system to other sports-video archives, such as basketball. But it shouldn't just benefit sports fans, says Fleischman.

In theory, the system could help with other video-search processes, such as security-video analysis, says David Hogg, a professor of computer science and head of the Vision Group at Leeds University, in the United Kingdom. This system is a very novel approach, he says, and one that shows the way forward for the unsupervised learning systems that are needed to make this kind of search automatic.

Using speech and visual information together is a powerful combination for machine learning, Hogg says. “In machine learning, it is very likely to be easier the more information there is available about each situation.”

Speech can help remove ambiguities in visual data, and visual data can help disambiguate speech, says Richard Stern, a professor of electrical and computer engineering at Carnegie Mellon University, in Pittsburgh. It’s a natural marriage, he says, but one that’s just beginning to emerge.

Until recently, there has been relatively little use of ASR to aid in search, says Stern. “But this is all changing very rapidly,” he says. “Google has been recruiting speech scientists aggressively for the past several years–another indication that multimedia search is moving from the research lab to the consumer very rapidly.”


Credit: Michael Fleischman, MIT, and Major League Baseball

Tagged: Communications, search, video, sports
