Decades after the idea of artificial intelligence first appeared, we’re starting to see machines learn how to perform some very clever tricks—and recognizing faces and spoken words with impressive accuracy may only be the start. Today, at the MIT Technology Review Digital Summit in San Francisco, two AI experts shed some light on how far this might go.
Adam Cheyer, one of the creators of Apple’s personal assistant, Siri, is a cofounder of Viv Labs, a company that’s trying to build a considerably more capable personal assistant, one that can answer sophisticated questions that connect different concepts. For instance, it might connect weather and geographic knowledge with information from your contact book to respond to a query like: “If it’s going to rain tonight, find me a pizza restaurant near my brother’s place.”
Cheyer said that although advances in AI have given computers remarkable skills, those abilities are still quite narrow. Creating a machine that can answer a question that connects different sources of data, or different concepts, will mean finding ways to connect those sources of existing knowledge without hard coding the connections.
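The kind of chained lookup Cheyer describes can be sketched in a few lines. The sketch below is a toy with hand-wired stub data sources (the function names and data are hypothetical, not Viv’s actual design); Viv’s ambition is for the assistant to compose such a chain automatically rather than have a programmer hard-code it, as is done here.

```python
# Toy sketch of answering "If it's going to rain tonight, find me a pizza
# restaurant near my brother's place." Every data source here is a stub;
# Viv's goal is to wire such steps together automatically, not by hand.

def will_rain_tonight(city):
    # Stub weather service; a real assistant would call a forecast API.
    forecasts = {"San Francisco": True, "Phoenix": False}
    return forecasts.get(city, False)

def city_of_contact(relation):
    # Stub contact-book lookup.
    contacts = {"brother": "San Francisco", "cousin": "Phoenix"}
    return contacts[relation]

def pizza_restaurants_near(city):
    # Stub local-search service.
    listings = {"San Francisco": ["Tony's Pizza", "Golden Boy"]}
    return listings.get(city, [])

def answer_query(relation="brother"):
    # Hand-wired chain: contact book -> weather -> restaurant search.
    city = city_of_contact(relation)
    if will_rain_tonight(city):
        return pizza_restaurants_near(city)
    return []
```

The hard part, in Cheyer’s telling, is not any single lookup but generating the glue code in `answer_query` on the fly for queries no one anticipated.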
Cheyer claims that achieving this means automating some of the underlying programming tasks. So Viv Labs might not only represent an advance in AI, but also an important example of computers collaborating with humans in a new area: programming itself. “The biggest revolution is actually happening under the hood in the way software is built,” Cheyer said. “It’s not just about machines learning narrow functions; they’re going to be helping to program.”
Cheyer suggested that this would go well beyond just programming. “The goal will be how can you get humans and AI working together at scale, where humans are doing the best things they can do, and machines will do the best they can do.”
Much recent progress in AI is due to a field known as deep learning, which involves training networks of simplified virtual neurons to recognize patterns in large quantities of data. Quoc Le, a research scientist at Google Brain, described his latest work in the field, which has produced remarkable results in recent years (see “Deep Learning Catches On in New Industries, from Fashion to Finance”).
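The learning principle behind the field can be shown at its smallest scale. The sketch below trains a single simulated neuron on labeled examples of the logical OR function using gradient descent; this is a minimal illustration of the idea, not Google Brain’s software—deep learning stacks many layers of such units and trains them on vastly larger data.

```python
import math
import random

def sigmoid(z):
    # Squash a weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(examples, epochs=5000, lr=0.5):
    # One "virtual neuron": two weights and a bias, nudged toward the
    # labeled targets by repeated gradient-descent updates.
    random.seed(0)
    w1 = random.uniform(-0.5, 0.5)
    w2 = random.uniform(-0.5, 0.5)
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = y - target  # gradient of cross-entropy loss at the pre-activation
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

# Labeled training data for logical OR.
OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

After training, the neuron’s output rounds to the correct OR value for all four inputs; note that the labels in `OR_DATA` are exactly the supervision that, as Le points out below, truly intelligent systems would need to do without.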
As with Viv Labs, Le’s most recent work involves combining different approaches to produce more than the sum of their parts. This means linking different deep learning systems together to produce impressive results, such as a system that can answer questions about the content of images (see “Google’s Brain-Inspired Software Describes What It Sees in Complex Images”). “Once we understand images, we understand speech, and we understand text, we can connect the domains together,” Le said.
However, Le said that the biggest obstacle to developing truly intelligent computers is finding a way for them to learn without requiring labeled training data—an approach called “unsupervised learning.”
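Unsupervised learning can be shown in miniature with a classic clustering algorithm: the sketch below runs one-dimensional k-means, which groups unlabeled numbers into clusters without any target labels supplied. This is an illustrative stand-in only; Le’s work concerns unsupervised deep learning on far richer data, such as images.

```python
def kmeans_1d(points, k=2, iters=20):
    # Group unlabeled values into k clusters. No labels are ever given;
    # the structure is discovered from the data alone.
    centers = points[:k]  # naive initialization: the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Run on a mix of values near 1 and values near 9, the algorithm settles on two centers, one per group, having been told only how many groups to look for.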
Recent progress in artificial intelligence has prompted some people to worry about the future of employment in many industries, and even about super-smart machines that might pose an existential threat. Neither Cheyer nor Le seemed particularly concerned about the latter idea. “There are many things that humans can do that machines can’t do today,” Cheyer said. “I do think there will be shifts, but I don’t think we’ll be sitting on the couch, letting the robots run our lives. Humans will adapt.”
EmTech Digital will continue today and tomorrow in San Francisco, so check back here frequently for updates.