Yahoo Labs’ Algorithm Identifies Creativity in 6-Second Vine Videos
In January 2013, a video-sharing service called Vine suddenly appeared on the web. The service, owned by Twitter, was unusual in allowing users to record and share videos no more than six seconds long. Yet within months, it had become the most popular video-sharing application on the web and the most downloaded free app in Apple's App Store.
The time constraint has had an interesting impact on the creative process: it forces users to tell their stories in just six seconds. That, in turn, has led to an entirely new genre of filmmaking, one that now has its own six-second category at the Tribeca Film Festival in New York.
The extraordinary success of six-second videos offers a curious opportunity. Because the videos are so short, they are relatively easy to analyse using machine-vision algorithms and audio-analysis techniques. And that raises an interesting question: can these automated techniques tell the difference between six-second videos that humans consider creative and those they consider non-creative?
Today, we get an answer thanks to the work of Miriam Redi at Yahoo Labs in Barcelona, Spain, and a few pals, who have used crowdsourcing techniques and machine algorithms to analyse some 4,000 six-second videos from the Vine stream. Their results suggest that machines can do a pretty good job of distinguishing between creative and non-creative content—at least in the six-second genre.
The team began by compiling a dataset. They chose 1,000 videos that had already been highlighted as creative, selected a further 200 from online articles about Vine creativity, and scoured the content produced by the creators of those videos to find another 2,300. Finally, they picked a further 500 videos at random from the Vine stream.
The next task was to determine which of these videos were creative and which were non-creative. To find out, they asked some 300 crowdsourced volunteers to look at the videos and answer the question “is this video creative?” with possible answers being positive, negative or don’t know. Each video was rated by five different volunteers.
These workers produced surprisingly consistent results. They were in 100 per cent agreement on 48 per cent of the videos. In other words, all five evaluators gave the same score to almost half the videos. Of these unanimous cases, the evaluators judged 25 per cent to be creative. To put this in perspective, the volunteers identified only 1.9 per cent of the 500 randomly chosen videos as creative, giving a background rate of creativity.
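The agreement statistic described above is straightforward to compute: each video receives five ratings, and a video counts as unanimous when all five match. Here is a minimal sketch; the ratings below are illustrative toy data, not the study's actual responses.

```python
# Each video gets five crowdsourced ratings: "yes", "no" or "dk" (don't know).
# A video is unanimous when all five raters gave the same answer.

def unanimous_fraction(ratings_per_video):
    """Fraction of videos on which all raters agreed."""
    unanimous = sum(1 for ratings in ratings_per_video
                    if len(set(ratings)) == 1)
    return unanimous / len(ratings_per_video)

sample = [
    ["yes"] * 5,                        # all five agree: creative
    ["no"] * 5,                         # all five agree: not creative
    ["yes", "yes", "no", "no", "dk"],   # split decision
    ["no", "no", "no", "yes", "no"],    # near-miss, not unanimous
]
print(unanimous_fraction(sample))  # 0.5 for this toy sample
```

In the study, this fraction came out at 0.48 across roughly 4,000 videos.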
They then analysed each video with various algorithms. For example, they looked for compositional features such as the rule of thirds and shallow depth of field. They used an algorithm for analysing the content of video scenes that studies the contours and layout in an image. They also looked for evidence that a video was a stop-motion animation or designed to run in a seemingly endless loop, by checking for similarities between the first and last frames. And they assessed the novelty of each video by comparing its properties against a randomly selected group of others.
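One of the simpler features mentioned above, loop detection via first-versus-last-frame similarity, can be sketched with a mean-pixel-difference heuristic. This is an assumption about how such a check might work, not the paper's actual method, and for simplicity the frames here are flattened lists of pixel intensities rather than decoded video frames.

```python
# Heuristic loop detector: if the first and last frames are nearly
# identical pixel-for-pixel, the clip probably runs as a seamless loop.
# The threshold value is an illustrative assumption.

def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def looks_like_loop(first_frame, last_frame, threshold=10.0):
    """True when the frames differ by less than the threshold on average."""
    return mean_abs_diff(first_frame, last_frame) < threshold

first          = [128] * 48            # toy grayscale frame, flattened
last_similar   = [128] * 47 + [130]    # nearly identical to the first
last_different = [0] * 48              # completely different scene

print(looks_like_loop(first, last_similar))    # True
print(looks_like_loop(first, last_different))  # False
```

A production pipeline would decode real frames (with OpenCV, say) and likely use a more robust similarity measure, but the principle is the same.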
They then looked for correlations between the features found by the machine algorithms and the videos identified as creative by the human volunteers. It turns out that scene content is most strongly correlated with creativity, followed by compositional features and video novelty.
In a final step, they trained a machine-learning algorithm to use these features to find creative videos in a dataset it had not seen before. The algorithm correctly classified videos as either creative or non-creative 80 per cent of the time.
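The workflow of that final step, fitting a model on labelled feature vectors and then measuring accuracy on held-out examples, can be illustrated with a toy nearest-centroid classifier. The paper's features and model are far richer; this sketch, with invented two-dimensional feature vectors, only shows the train-then-evaluate shape of the experiment.

```python
# Toy nearest-centroid classifier: each class is represented by the mean
# of its training feature vectors; a new video is assigned to the class
# whose centroid is closest. All data below is illustrative.

def train_centroids(features, labels):
    """Compute the mean feature vector for each class."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Return the label of the nearest class centroid."""
    def dist(label):
        c = centroids[label]
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=dist)

# Illustrative 2-D feature vectors: (scene score, composition score).
train_x = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
train_y = ["creative", "creative", "non-creative", "non-creative"]
test_x  = [(0.85, 0.7), (0.15, 0.2)]
test_y  = ["creative", "non-creative"]

model = train_centroids(train_x, train_y)
accuracy = sum(predict(model, x) == y
               for x, y in zip(test_x, test_y)) / len(test_y)
print(accuracy)  # 1.0 on this toy split
```

Evaluating on videos the model has never seen, as here with `test_x`, is what makes the reported 80 per cent figure a meaningful measure of generalisation rather than memorisation.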
That’s an interesting result that opens the possibility of automatically filtering the Vine stream for the most creative content. “This allows us to study audio-visual creativity at a fine-grained level, helping us to understand what, exactly, constitutes creativity in micro-videos,” say Redi and co.
And if it is possible for an algorithm to identify creativity accurately, why wouldn’t it be possible for a computer to generate creative content? In fact, spotting the difference between human-produced creativity and computer-generated creativity may one day be an interesting Turing-test-style exercise.