Artificial intelligence

The Next Big Step for AI? Understanding Video

Perceiving dynamic actions could be a huge advance in how software makes sense of the world.
December 6, 2017
A screenshot from one of the videos in the Moments in Time Dataset, which could help AI better understand video content.
Moments in Time Dataset

For a computer, recognizing a cat or a duck in a still image is pretty clever. But a stiffer test for artificial intelligence will be understanding when the cat is riding a Roomba and chasing the duck around a kitchen.

MIT and IBM this week released a vast data set of video clips painstakingly annotated with details of the action being carried out. The Moments in Time Dataset includes three-second snippets of everything from fishing to break-dancing.
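To make the shape of such a data set concrete, here is a minimal sketch (not the data set’s official loader) of reading short, action-labeled clips with PyTorch. The CSV layout of “clip_path,label” rows and the 16-frame sampling are assumptions made for illustration.

```python
# A minimal sketch of loading short, action-labeled video clips.
# The annotation CSV format is an assumption, not the official distribution format.
import csv
import torch
from torch.utils.data import Dataset
from torchvision.io import read_video

class ClipDataset(Dataset):
    def __init__(self, annotation_csv, num_frames=16):
        with open(annotation_csv) as f:
            self.samples = list(csv.reader(f))          # rows like ["fishing/clip_001.mp4", "fishing"]
        labels = sorted({label for _, label in self.samples})
        self.label_to_idx = {label: i for i, label in enumerate(labels)}
        self.num_frames = num_frames

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        frames, _, _ = read_video(path, pts_unit="sec")  # (T, H, W, C) uint8 frames
        step = max(1, frames.shape[0] // self.num_frames)
        frames = frames[::step][: self.num_frames]       # fixed-length frame sample
        clip = frames.permute(3, 0, 1, 2).float() / 255  # (C, T, H, W) in [0, 1]
        return clip, self.label_to_idx[label]
```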

“A lot of things in the world change from one second to the next,” says Aude Oliva, a principal research scientist at MIT and one of the people behind the project. “If you want to understand why something is happening, motion gives you a lot of information that you cannot capture in a single frame.”

The current boom in artificial intelligence was sparked, in part, by success in teaching computers to recognize the contents of static images by training deep neural networks on large labeled data sets (see “The Revolutionary Technique That Quietly Changed Machine Vision Forever”).
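That still-image recipe can be summarized in a few lines of code: a deep convolutional network fit to a labeled image collection. The sketch below uses PyTorch, with the small CIFAR-10 set standing in for the far larger data sets that actually powered the boom.

```python
# A compact sketch of training a deep network on labeled still images.
# CIFAR-10 is a stand-in for the much larger labeled sets used in practice.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=10)          # a standard deep image classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one pass over the labeled images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```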

AI systems that interpret video today, including those found in some self-driving cars, often rely on identifying objects in static frames rather than interpreting actions. On Monday, Google launched a tool capable of recognizing objects in video as part of its Cloud Platform, a service that already includes AI tools for processing images, audio, and text.

The next challenge may be teaching machines to understand not just what a video contains, but what’s happening in the footage. That could have practical benefits, perhaps leading to powerful new ways of searching, annotating, and mining video footage. It could also give robots and self-driving cars a better understanding of how the world around them is unfolding.

The MIT-IBM data set is just one of several designed to spur progress in training machines to understand actions in the physical world. Last year, for example, Google released YouTube-8M, a set of eight million tagged YouTube videos. Facebook is developing an annotated data set of video actions called the Scenes, Actions, and Objects set.

Olga Russakovsky, an assistant professor at Princeton University who specializes in computer vision, says it has proved difficult to develop useful video data sets because they require more storage and computing power than still images do. “I’m excited to play with this new data,” she says. “I think the three-second length is great—it provides temporal context while keeping the storage and computation requirements low.”

Others are taking a more creative approach. Twenty Billion Neurons, a startup based in Toronto and Berlin, created a custom data set by paying crowdsourced workers to perform simple tasks. One of the company’s cofounders, Roland Memisevic, says it also uses a neural network designed specifically to process temporal vision information.
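As a rough illustration of what a network “designed specifically to process temporal vision information” can mean, the sketch below uses 3D convolutions that slide over time as well as space, so its features depend on motion across frames rather than any single frame. It is a generic example, not Twenty Billion Neurons’ actual architecture.

```python
# A generic temporal-vision model: 3D convolutions over (channels, time, height, width).
import torch
import torch.nn as nn

class TinyTemporalNet(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # kernels span time and space
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # one feature vector per clip
        )
        self.classifier = nn.Linear(32, num_actions)

    def forward(self, clip):                             # clip: (batch, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

# Two 16-frame clips at 112x112 resolution; the class count is a placeholder.
logits = TinyTemporalNet(num_actions=100)(torch.randn(2, 3, 16, 112, 112))
```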

“Networks trained on the other data sets can tell you whether the video shows a soccer match or a party,” he says. “Our networks can tell you whether someone just entered the room.”

Danny Gutfreund, a researcher at IBM who collaborated on the project, says that recognizing actions effectively will require machines to learn from, say, a person performing an action and transfer that knowledge to a case where an animal performs the same action. Progress in this area, known as transfer learning, will be important for the future of AI. “Let’s see how machines can do this transfer learning, this analogy, that we do very well,” he says.
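One common way to make that transfer concrete is to keep the representation a video network learned on one set of actions and train only a new output layer for another set. The sketch below is a hedged illustration of that fine-tuning pattern, not IBM’s or MIT’s method; the checkpoint path and the 20-class target set are hypothetical.

```python
# A sketch of transfer learning for action recognition: freeze learned features,
# replace and train only the classification head. Checkpoint and class count are hypothetical.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18()                                     # a standard 3D-ResNet video classifier
# model.load_state_dict(torch.load("actions_pretrained.pt"))  # hypothetical source-task weights

for param in model.parameters():                     # freeze the learned representation
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 20)       # fresh head for the new label set
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# Training then proceeds as usual, updating only the new head.
```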

Gutfreund adds that the technology could have practical applications. “You could use it for elder care, telling if someone has fallen or if they have taken their medicine,” he says. “You can think of devices that help blind people.”
