Apple’s AI Director: Here’s How to Supercharge Deep Learning
Ruslan Salakhutdinov, who leads Apple’s AI efforts, says emerging techniques could make the most popular approach in the field far more powerful.
Apple’s director of artificial intelligence, Ruslan Salakhutdinov, believes that the deep neural networks that have produced spectacular results in recent years could be supercharged in coming years by the addition of memory, attention, and general knowledge.
Speaking at MIT Technology Review’s EmTech Digital conference in San Francisco on Tuesday, Salakhutdinov said these attributes could help solve some of the outstanding problems in artificial intelligence.
Salakhutdinov, who retains a post as an associate professor at Carnegie Mellon University in Pittsburgh, pointed in his talk to limitations of deep-learning-driven machine vision and natural-language understanding.
Deep learning—a technique that involves using vast numbers of roughly simulated neurons arranged in many interconnected layers—has produced dramatic progress in machine perception over recent years, but there are many ways in which these networks are limited.
Salakhutdinov showed, for example, how image-captioning systems based on the technology can mislabel images because they attend to the whole image at once rather than to the objects each word describes. He then pointed to a solution in the form of so-called "attention mechanisms," a tweak to deep learning developed in the last few years. The approach can remedy these errors by having a system focus on specific parts of an image as it generates each word of a caption. The same idea can improve natural-language understanding, too, by letting a machine focus on the relevant part of a sentence in order to infer its meaning.
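The core of an attention mechanism can be illustrated in a few lines. The sketch below (a toy example in pure Python, not any production captioning system) implements scaled dot-product attention: a decoder's current state is compared against feature vectors for different image regions, and the resulting softmax weights determine how much each region contributes to the next word. All vectors here are invented for illustration.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: score each key against the query,
    softmax the scores, and return the weighted average of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Toy setup: two image regions (keys double as values); the query,
# standing in for the decoder state while emitting a word, matches
# the first region far more strongly.
regions = [[1.0, 0.0], [0.0, 1.0]]
query = [5.0, 0.0]
context, weights = attend(query, regions, regions)
print(weights)  # nearly all weight falls on the first region
```

Because the weights concentrate on one region, the caption word is grounded in that part of the image instead of a blur of everything at once, which is exactly the failure mode attention is meant to fix.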
A technique called memory networks, developed by researchers at Facebook, can improve how machines talk with people. As the name suggests, the approach adds a component of long-term memory to neural networks so that they remember the history of a chat.
Memory networks have been shown to improve another kind of AI as well, known as reinforcement learning. For example, two researchers at CMU recently showed how adding this kind of memory could produce a smarter game-playing algorithm. Researchers at DeepMind, an AI-focused subsidiary of Alphabet, have also demonstrated ways for deep-learning systems to build and access a form of memory.
Reinforcement learning is rapidly emerging as a valuable way to solve hard-to-program problems in robotics and automated driving. It was one of MIT Technology Review’s 10 Breakthrough Technologies of 2017.
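The trial-and-error learning at the heart of reinforcement learning fits in a short sketch. The minimal tabular Q-learning example below (illustrative only, unrelated to the CMU or DeepMind work above) has an agent learn, from reward alone, to walk right along a five-cell corridor where only the last cell pays off; all constants are arbitrary choices for the toy.

```python
import random

N_STATES = 5
ACTIONS = [1, -1]              # move right / move left
alpha, gamma, eps = 0.5, 0.9, 0.1

# Q-table: estimated long-term value of taking action a in state s.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):           # 500 episodes of trial and error
    s = 0
    while s < N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: q[(s, b)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted
        # value of the best next action.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

print(q[(0, 1)] > q[(0, -1)])  # moving right learned to be better
```

Nothing told the agent the rule "go right"; the preference emerges purely from the reward signal propagating back through the Q-table, which is why the technique appeals for hard-to-program problems like robotics and driving.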
Another exciting area of future research, Salakhutdinov said, would be finding ways to combine hand-built sources of knowledge with deep learning. He pointed to general-knowledge databases like Freebase and word-meaning repositories like WordNet.
Just as humans rely heavily on general knowledge when parsing language or interpreting a visual scene, this could help make AI systems smarter, Salakhutdinov said. “How can we incorporate all that prior knowledge into deep learning?” he said during his talk. “That’s a big challenge.”
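One concrete way researchers have tried to fold such prior knowledge into learned representations is to nudge word vectors toward their synonyms from a hand-built lexicon, similar in spirit to "retrofitting" word vectors to WordNet. The sketch below is an assumed, simplified illustration of that idea, not Salakhutdinov's own method; the two-entry lexicon and two-dimensional vectors are made up for the example.

```python
# Tiny hand-built stand-in for a WordNet-style synonym lexicon.
SYNONYMS = {
    "car": ["automobile"],
    "automobile": ["car"],
}

def retrofit(vectors, lexicon, alpha=0.5, iterations=10):
    """Repeatedly pull each word's vector toward the mean of its
    listed synonyms' vectors, blending learned and prior knowledge."""
    vecs = {w: list(v) for w, v in vectors.items()}
    for _ in range(iterations):
        for w, syns in lexicon.items():
            neighbors = [vecs[s] for s in syns if s in vecs]
            if not neighbors:
                continue
            mean = [sum(col) / len(neighbors) for col in zip(*neighbors)]
            vecs[w] = [(1 - alpha) * x + alpha * m
                       for x, m in zip(vecs[w], mean)]
    return vecs

# "car" and "automobile" start far apart in the learned space;
# the lexicon pulls them together.
learned = {"car": [1.0, 0.0], "automobile": [0.0, 1.0]}
fitted = retrofit(learned, SYNONYMS)
```

After retrofitting, the two vectors sit close together, so a downstream model treats the words as near-interchangeable even if the training data never made that obvious, which is one answer to the question of how hand-built knowledge can reach a deep network.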
Salakhutdinov spoke during a session that brought together researchers from several different schools of AI. A common theme among the speakers was the need for different approaches in order to take AI to the next level.
During the session Pedro Domingos, a professor at the University of Washington who studies different machine-learning approaches, said there is also a need to keep searching for completely new approaches to AI. “There’s a school of thought in machine learning that we don’t need fancy new algorithms, we just need more data,” he said. “I think there are really deep, fundamental ideas that need to be discovered before we can really solve AI.”