Yahoo Labs’ Algorithm Identifies Creativity in 6-Second Vine Videos

Nobody knew how to automatically identify creativity until researchers at Yahoo Labs began studying the Vine livestream.

In January 2013, a video sharing service called Vine suddenly hit cyberspace. The service, owned by Twitter, was unique in allowing users to record and share videos no more than six seconds long. But within months, it had become the most popular video sharing application on the web and the most downloaded free app on Apple's App Store.

The time constraint has had an interesting impact on the creative process: it has forced users to tell their stories in just six seconds. That, in turn, has led to an entirely new genre of filmmaking that now has its own six-second filmmaking category at the Tribeca Film Festival in New York.

The extraordinary success of six-second videos offers a curious opportunity. Because the videos are so short, they are relatively easy to analyse using machine vision algorithms and audio analysis techniques. And that raises an interesting question: can these automated techniques tell the difference between six-second videos that humans consider creative and those considered non-creative?

Today, we get an answer thanks to the work of Miriam Redi at Yahoo Labs in Barcelona, Spain, and a few pals who have used crowdsourcing techniques and machine algorithms to analyse some 4,000 six-second videos from the Vine stream. Their results suggest that machines can do a pretty good job of distinguishing between creative and non-creative content—at least in the six-second genre.

The team began by compiling a data set. They chose 1,000 videos that had already been highlighted as creative, selected a further 200 from online articles about Vine creativity, and scoured the output of the authors of those videos to find another 2,300. Finally, they picked a further 500 videos at random from the Vine stream.

The next task was to determine which of these videos were creative and which were not. To find out, the team asked some 300 crowdsourced volunteers to watch the videos and answer the question "Is this video creative?", choosing between positive, negative and "don't know". Each video was rated by five different volunteers.

These workers produced surprisingly consistent results. They were in 100 per cent agreement on 48 per cent of the videos. In other words, all five evaluators gave the same score to almost half the videos. Of these, they agreed that 25 per cent were creative. To put this in perspective, the volunteers identified only 1.9 per cent of the 500 randomly chosen videos as creative, giving a background rate of creativity.
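The agreement figures above are straightforward to compute from the raw ratings. The sketch below uses hypothetical labels (the paper's actual label set and aggregation details are not spelled out here) to show how the full-agreement rate and the creative share among unanimous videos would be derived:

```python
def agreement_stats(ratings):
    """Given a list of per-video rating lists (five labels each), return
    (fraction of videos with unanimous ratings,
     fraction of those unanimous videos labelled 'creative')."""
    unanimous = [r for r in ratings if len(set(r)) == 1]
    full_agreement = len(unanimous) / len(ratings)
    creative = sum(1 for r in unanimous if r[0] == "creative")
    creative_share = creative / len(unanimous) if unanimous else 0.0
    return full_agreement, creative_share

# Hypothetical sample: four videos, five volunteer ratings each.
sample = [
    ["creative"] * 5,                                            # unanimous, creative
    ["not creative"] * 5,                                        # unanimous, not creative
    ["creative", "creative", "not creative", "don't know", "creative"],  # split
    ["not creative"] * 5,                                        # unanimous, not creative
]
full, share = agreement_stats(sample)
```

On this toy sample, three of the four videos are unanimous and one of those three is unanimously creative.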

They then analysed each video with various algorithms. For example, they looked for compositional features such as the rule of thirds and shallow depth of field. They used an algorithm for analysing the content of video scenes that studies the contours and layout in an image. They also looked for any evidence that the videos were stop motion animations or designed to run on a seemingly endless loop by looking for similarities between the first and last frame. And they assessed the novelty of each video by comparing its properties against a randomly selected group of others.
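One of those checks—spotting seamless loops by comparing the first and last frames—can be sketched very simply. The version below is an illustrative stand-in, not the paper's actual feature extractor: it treats frames as normalised grayscale arrays and uses an arbitrary mean-difference threshold.

```python
import numpy as np

def looks_like_loop(first_frame, last_frame, threshold=0.05):
    """Flag a video as a likely seamless loop if its first and last
    frames are nearly identical. Frames are float arrays in [0, 1];
    the 5% mean-absolute-difference threshold is an illustrative choice."""
    diff = np.mean(np.abs(first_frame.astype(float) - last_frame.astype(float)))
    return diff < threshold

# Hypothetical frames: a looping clip ends almost where it began.
frame_a = np.full((4, 4), 0.5)
frame_b = frame_a + 0.01    # nearly identical -> likely loop
frame_c = np.full((4, 4), 0.9)  # clearly different -> not a loop
```

In practice a per-pixel comparison like this would need to tolerate compression noise and slight camera motion; a colour-histogram or feature-based similarity would be more robust.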

They then looked for correlations between the features found by machine algorithms and the videos identified as creative by human volunteers. It turns out that the scene content is most strongly correlated with creativity, followed by compositional features and video novelty.
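Correlating a machine-extracted feature with a binary "creative" label amounts to a point-biserial (Pearson) correlation. The sketch below uses synthetic data—hypothetical labels and a made-up "scene score" feature, not the paper's numbers—to show the computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-video data: binary creative labels and one
# machine-extracted feature score per video.
labels = rng.integers(0, 2, size=100)
# Make the feature loosely track the label, plus noise.
scene_score = labels * 0.5 + rng.normal(0, 0.2, size=100)

# Pearson correlation between feature and binary label
# (a point-biserial correlation in this setting).
r = np.corrcoef(scene_score, labels)[0, 1]
```

A feature strongly associated with creativity, like the scene-content features in the study, would show up here as a correlation well above the noise floor.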

In a final step, they trained a machine learning algorithm to use these features to find creative videos in a data set it had not seen before. The algorithm correctly classified videos as either creative or non-creative 80 per cent of the time.

That’s an interesting result that opens the possibility of automatically filtering the Vine livestream for the most creative content. “This allows us to study audio-visual creativity at a fine-grained level, helping us to understand what, exactly, constitutes creativity in micro-videos,” say Redi and co.

And if it is possible for an algorithm to identify creativity accurately, why wouldn’t it be possible for a computer to generate creative content? In fact, spotting the difference between human-produced creativity and computer generated creativity may one day be an interesting Turing test-style exercise.

Ref: arxiv.org/abs/1411.4080
