
Software with an Eye for Starbucks (and Nike and Coke)

Startup gazeMetrix uses computer vision to glean information from Instagram photos. It may be the future of marketing.
February 21, 2013

Among the 40 million images that people post to Instagram each day are a slew of sunsets, puppies, and—according to Deobrat Singh—Starbucks coffee cups. He would know: he counts them.

Singh is CEO and cofounder of gazeMetrix, a startup that uses computer vision and machine learning to recognize brand logos in photos shared on social-media sites. The company is one of several trying to analyze images for marketing and advertising purposes, making it easier for companies to track and promote their brands online and perhaps target ads more accurately to consumers.

As images become an increasingly popular form of social content, such analysis makes sense: collecting “likes,” tracking hashtags, and mining tweets and comments for mentions of a brand can be helpful, but images show more precisely how people are using (and sharing) products like Nike running shoes.

“One thing that’s absolutely clear to us is it’s an indicator of how visible those brands are in people’s lives,” Singh says.

Companies can use gazeMetrix to see how often their brand logos pop up on Instagram (and soon other services, too), and to respond to the people posting these images. Eventually, gazeMetrix’s information could lead to insights about subjects like which other products a company’s customers prefer. That’s potentially useful for those trying to target ads and make decisions about business partnerships. Singh says big companies such as Coca-Cola and Nike are trying out the service.

Singh and his two cofounders started experimenting with logging logos in social-media photos this past summer. They used image recognition technology they had originally developed for a service that could identify apps on your friends’ smartphones and then find them in the application store on your handset. (Called Bring, it failed to catch on.) They analyzed images shared on Twitter to see how many people uploaded pictures that included the Starbucks logo, which they presumed would be easy to spot and rather common, given the preponderance of Starbucks coffee shops and cup-carrying commuters. It was quickly clear they were on the right track: they spotted more than 10,000 logos the first day.

“I didn’t believe it at first, but we dug deeper into it and realized it was real—people were taking a lot of pictures of Starbucks mugs,” Singh says.

GazeMetrix launched in December on Instagram and has since collected data on 35,000 brands, about 100 of which it’s actively tracking. The company has seen over 250,000 Starbucks logos in February alone, Singh says. In the coming weeks, it also plans to start tracking photos posted to services popular on Twitter, such as Twitpic and Yfrog. But not Facebook; Singh says that most of the publicly posted photos on the largest social network aren’t user generated, so it’s not worth the time.

GazeMetrix takes advantage of Instagram’s application programming interface—which allows third-party programmers to access its data—by using it as a spigot, sending its flood of images to multiple servers where an algorithm determines whether a logo may be present. If one is, gazeMetrix uses other algorithms to try to match the logo with one in its database. If the software is highly confident that a logo in a new photo matches one it already knows, the image is sent on to that company’s logo feed. If it’s a bit more hesitant, a human can review the match.
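
The article doesn’t describe gazeMetrix’s actual code, but the confidence-based routing it outlines follows a familiar pattern. Below is a minimal Python sketch of that last step, assuming hypothetical thresholds and a `Match` record; the detection and database-matching stages are left out, and the numbers are illustrative, not the company’s real values.

```python
from dataclasses import dataclass

# Illustrative thresholds, not gazeMetrix's actual values.
AUTO_ACCEPT = 0.90   # confident match: publish to the brand's logo feed
HUMAN_REVIEW = 0.60  # plausible match: queue for a human reviewer

@dataclass
class Match:
    image_url: str
    brand: str
    confidence: float  # 0.0-1.0 similarity to a logo in the database

def route_match(match: Match, feeds: dict, review_queue: list) -> None:
    """Route a candidate logo match based on the matcher's confidence."""
    if match.confidence >= AUTO_ACCEPT:
        # High confidence: send the image straight to that brand's feed.
        feeds.setdefault(match.brand, []).append(match.image_url)
    elif match.confidence >= HUMAN_REVIEW:
        # Borderline: hold for manual review rather than auto-publishing.
        review_queue.append(match)
    # Below the review threshold, the candidate is simply dropped.

# Example usage with made-up values:
feeds, review_queue = {}, []
route_match(Match("https://example.com/a.jpg", "Starbucks", 0.95), feeds, review_queue)
route_match(Match("https://example.com/b.jpg", "Nike", 0.72), feeds, review_queue)
```

The appeal of this design is that the automated matcher only has to be trusted at the extremes; anything ambiguous falls to a person, which keeps false positives out of the brand feeds without discarding usable images.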

What gazeMetrix is doing doesn’t sound that complicated, computer vision experts say, especially since corporate logos are designed to stand out. Nonetheless, Kevin Bowyer, chair of the University of Notre Dame’s computer science and engineering department, calls it a “cool and interesting application of technology that’s matured over the last two decades.” James Hays, an assistant professor in Brown University’s computer science department, adds: “I definitely expect to see a lot more of this.”
