MIT Technology Review

Neural Network Rates Images for Happiness Levels

Sentiment analysis is booming for blogs and tweets but more or less ignored when it comes to pictures. That looks set to change.

Sentiment analysis is revolutionizing the study of communication, with numerous companies now offering it as a service. The idea is to study the patterns of words in messages such as tweets and blogs to determine to what extent they are positive or negative. That allows companies, organizations, and political parties to automatically track opinions about their brands.

But while this technology has been evolving, little research has focused on the sentiment in pictures. Today, that changes thanks to the work of Can Xu at the University of California, San Diego, and a group of researchers from Yahoo Labs in Sunnyvale. These folks have developed a way to automatically assess the sentiment associated with a picture and say that it outperforms other state-of-the-art techniques.

Xu and co do not start from scratch. While sentiment in pictures has been largely ignored, the problem of object recognition in images is a well-developed field that has improved in leaps and bounds in recent years.

So Xu and co begin with a neural network already trained on a data set of images showing objects divided into 1,000 classifications. When shown an image, this network gives a distribution showing how likely it is that the image falls into each of these 1,000 classifications.
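The idea of turning a classifier's output into a feature vector can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the network's final layer produces raw scores (logits) over the 1,000 object classes, which a softmax converts into the kind of probability distribution described above.

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    z = logits - logits.max()   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw scores from the final layer for one image
rng = np.random.default_rng(0)
logits = rng.normal(size=1000)

# The 1,000-dimensional output used as a feature vector
probs = softmax(logits)
```

Each entry of `probs` is the network's estimated probability that the image belongs to one of the 1,000 object classes; the whole vector is what gets passed downstream.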

It is this 1,000-dimension output that Xu and co use in their research. They first take two datasets of images from Tumblr and Twitter that have already been assessed for sentiment on a five-point scale of very negative, negative, neutral, positive, and very positive.

They then train a machine learning algorithm to find a correlation between the 1,000-dimension output and the sentiment. Having trained the machine, they then compare it with two other state-of-the-art sentiment analysis techniques: one that relies on low-level visual features such as image color, and another called SentiBank, which generates an adjective-noun description of a picture and hence gives a sense of sentiment.
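The training step above can be sketched schematically. The synthetic data here stands in for the labeled Tumblr and Twitter images, and the plain softmax-regression learner is an assumption for illustration; the authors' actual classifier may differ. The point is the shape of the problem: map a 1,000-dimensional class distribution to one of five sentiment labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 200 images' 1,000-dim class distributions and
# their five-point sentiment labels (0 = very negative ... 4 = very positive)
X = rng.dirichlet(np.ones(1000), size=200)
y = rng.integers(0, 5, size=200)

Y = np.eye(5)[y]                           # one-hot encode the labels

W = np.zeros((1000, 5))
for _ in range(300):                       # plain gradient descent
    P = np.exp(X @ W)
    P /= P.sum(axis=1, keepdims=True)      # softmax over the 5 sentiments
    W -= 0.5 * X.T @ (P - Y) / len(X)      # cross-entropy gradient step

pred = (X @ W).argmax(axis=1)              # predicted sentiment per image
```

With real labeled data in place of the random stand-ins, `pred` would be the machine's sentiment rating for each image, directly comparable against approaches like SentiBank.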

Xu and co say their technique dramatically outperforms the existing approaches. “Experiments demonstrate that our proposed models outperform the state-of-the-art methods on both Twitter and Tumblr datasets,” they say.

That’s a useful start in the incipient field of image sentiment analysis. “The results for the first time suggest that Convolutional Neural Networks are highly promising for visual sentiment analysis,” they say.

Nevertheless, there is significant work ahead. One notorious problem with word-based sentiment analysis is that it does not cope with subtle cultural influences, such as sarcasm and irony. And this kind of uniquely human behavior can severely reduce the reliability of sentiment analysis.

Just how important these kinds of idiosyncrasies will be for pictures has yet to be determined, but image sentiment could yet be another area in which human performance will soon be monitored and perhaps even matched by machines.

Ref: arxiv.org/abs/1411.5731 : Visual Sentiment Prediction with Deep Convolutional Neural Networks