Facebook is set to get an even better understanding of the 700 million people who use the social network to share details of their personal lives each day.
A new research group within the company is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook could allow for novel features and perhaps boost the company’s ad targeting.
Deep learning has shown potential as the basis for software that could work out the emotions or events described in text even if they aren’t explicitly referenced, recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.
The eight-person group, known internally as the AI team, only recently started work, and details of its experiments are still secret. But Facebook’s chief technology officer, Mike Schroepfer, says one obvious way to use deep learning is to improve the news feed, the personalized list of recent updates he calls Facebook’s “killer app.” The company already uses conventional machine-learning techniques to prune the roughly 1,500 updates an average Facebook user could see down to the 30 to 60 judged most likely to be important to that user. Schroepfer says Facebook needs to get better at picking the best updates because its users are generating more data and using the social network in different ways.
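Facebook has not published how its feed-ranking models actually work. As a purely hypothetical sketch (the feature names, weights, and data below are all invented for illustration), a conventional machine-learning ranker of the kind described above scores each candidate update on hand-chosen features and keeps only the top handful:

```python
# Illustrative only: Facebook's real feed-ranking system is not public.
# A conventional approach scores each candidate update on features an
# engineer has chosen by hand, then keeps the highest-scoring few.

# Hypothetical hand-engineered features for each candidate update.
updates = [
    {"id": "u1", "from_close_friend": 1, "has_photo": 1, "recent_likes": 12},
    {"id": "u2", "from_close_friend": 0, "has_photo": 0, "recent_likes": 1},
    {"id": "u3", "from_close_friend": 1, "has_photo": 0, "recent_likes": 4},
]

# In practice these weights would be learned from engagement data;
# here they are simply fixed by hand.
weights = {"from_close_friend": 2.0, "has_photo": 1.0, "recent_likes": 0.1}

def score(update):
    """Weighted sum of the hand-chosen features."""
    return sum(weights[f] * update[f] for f in weights)

# Keep the top 2 of 3 candidates, mirroring the 1,500 -> 30-60 pruning.
top = sorted(updates, key=score, reverse=True)[:2]
print([u["id"] for u in top])  # → ['u1', 'u3']
```

The limiting factor in this style of system is visible in the sketch: someone has to decide in advance which features the model is allowed to see, which is exactly the bottleneck deep learning is meant to relax.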
“The data set is increasing in size, people are getting more friends, and with the advent of mobile, people are online more frequently,” Schroepfer told MIT Technology Review. “It’s not that I look at my news feed once at the end of the day; I constantly pull out my phone while I’m waiting for my friend or I’m at the coffee shop. We have five minutes to really delight you.”
Schroepfer says deep learning could also be used to help people organize their photos or choose the best one to share on Facebook.
In looking into deep learning, Facebook follows its competitors Google and Microsoft, which have used the approach to impressive effect in the past year. Google has hired and acquired leading talent in the field (see “10 Breakthrough Technologies 2013: Deep Learning”), and last year it created software that taught itself to recognize cats and other objects by reviewing stills from YouTube videos. The underlying technology was later used to slash the error rate of Google’s voice recognition services (see “Google’s Virtual Brain Goes to Work”).
Meanwhile, researchers at Microsoft have used deep learning to build a system that translates speech from English to Mandarin Chinese in real time (see “Microsoft Brings Star Trek’s Voice Translator to Life”). Chinese Web giant Baidu also recently established a Silicon Valley research lab to work on deep learning.
Less complex forms of machine learning have underpinned some of the most useful features developed by major technology companies in recent years, such as spam detection systems and facial recognition in images. The largest companies have now begun investing heavily in deep learning because it can deliver significant gains over those more established techniques, says Elliot Turner, founder and CEO of AlchemyAPI, which rents access to its own deep learning software for text and images.
“Research into understanding images, text, and language has been going on for decades, but the typical improvement a new technique might offer was a fraction of a percent,” he says. “In tasks like vision or speech, we’re seeing 30 percent-plus improvements with deep learning.” The newer technique also allows much faster progress in training a new piece of software, says Turner.
Conventional forms of machine learning are slower because before data can be fed into learning software, experts must manually choose which features of it the software should pay attention to, and they must label the data to signify, for example, that certain images contain cars.
Deep learning systems can learn with much less human intervention because they can figure out for themselves which features of the raw data are most significant. They can even work on data that hasn’t been labeled, as Google’s cat-recognizing software did. Systems able to do that typically process data with software that simulates networks of brain cells, known as neural nets, and running them requires more powerful clusters of computers.
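Google’s cat experiment used vastly larger networks, but the core idea of learning features from unlabeled data can be sketched in miniature (everything below is illustrative, not Facebook’s or Google’s code). A tiny linear autoencoder, a simple neural net trained only to reconstruct its unlabeled input, ends up discovering the direction along which the data actually varies, with no human labeling anything:

```python
# Illustrative sketch of unsupervised feature learning: a tiny linear
# autoencoder learns to compress and reconstruct unlabeled data, so its
# hidden unit comes to encode the data's one significant feature.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 4-dimensional points that really vary along one direction.
direction = np.array([1.0, 0.5, -0.5, 1.0])
X = rng.normal(size=(200, 1)) @ direction[None, :]

# One linear hidden unit: encode (4 -> 1), then decode (1 -> 4).
W_enc = rng.normal(scale=0.1, size=(4, 1))
W_dec = rng.normal(scale=0.1, size=(1, 4))

lr = 0.01
for _ in range(1000):
    H = X @ W_enc        # hidden code for each point
    X_hat = H @ W_dec    # attempted reconstruction
    err = X_hat - X
    # Gradient descent on reconstruction error -- no labels involved.
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(round(loss, 6))
```

After training, the reconstruction error is near zero and the encoder weights align with the hidden direction in the data; the network has, in effect, chosen its own feature. Deep learning stacks many such layers, which is why it needs the powerful computing clusters mentioned above.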
Facebook’s AI group will work on applications that can help the company’s products as well as on more general research that will be made public, says Srinivas Narayanan, an engineering manager at Facebook who’s helping to assemble the new group. He says one way Facebook can help advance deep learning is by drawing on its recent work creating new types of hardware and software to handle large data sets (see “Inside Facebook’s Not-So-Secret New Data Center”). “It’s both a software and a hardware problem together; the way you scale these networks requires very deep integration of the two,” he says.
Facebook hired deep learning expert Marc’Aurelio Ranzato away from Google for its new group. Other members include Yaniv Taigman, cofounder of the facial recognition startup Face.com (see “When You’re Always a Familiar Face”); computer vision expert Lubomir Bourdev; and veteran Facebook engineer Keith Adams.