Google Rolls Out New Automated Helpers
Users of mobile devices powered by Android are set to get a lot more help from Google’s artificial-intelligence algorithms.
Software that can understand our conversations and photos could change how we use our computing devices.
Who needs a personality? Not Google’s artificial-intelligence technology.
Apple and Microsoft offer virtual personal assistants that answer questions and control phone functions with a combination of smart algorithms and programmed sass. New artificial-intelligence products announced by Google today are faceless by comparison. But they suggest the search company has more ambitious ideas than its competitors about using software that’s able to understand language and photos.
At Google’s annual developer conference in San Francisco Thursday, the company showed off software that helps you understand and act on information inside mobile apps. For example, if your spouse sends you a Facebook message asking you to buy milk on the way home, it will offer to set a reminder. A new photo storage service is built around software that recognizes images of people, places, and things. Both features draw on Google’s strength in machine-learning research and software, but they behave more like animated search engines than pretend people.
Google executives have said the company made a deliberate choice not to personify its assistant technology when it launched its closest competitor to Apple’s Siri, an app called Google Now, in 2012. Now uses information from your e-mail account and Google search history to let you know about things like flight delays or package deliveries (see “Google’s Answer to Siri Thinks Ahead”).
At Thursday’s event, Google revealed an extension of Now that allows it to watch, and offer assistance in response to, your activity in any app on a device powered by its Android operating system. The feature, called Now on Tap, is activated when a user holds down the device’s home button.
For example, if someone suggested a particular movie during a text conversation, Now would offer up an information card summarizing reviews of that movie and presenting links to read more or view the trailer. In a conversation in which someone asked you to pick up the dry cleaning, Now would offer to remind you about the chore later. The new feature can also be controlled by voice. For example, saying “Okay Google, what’s his real name?” while playing a Skrillex track in a music app such as Spotify gets you an information card with full details on the artist.
Now on Tap relies heavily on technology that can understand everyday language and uses contextual cues to figure out what words like “his,” “that,” and “this” refer to, said Aparna Chennapragada, who leads work on Google Now and showed off the new feature.
“The article you’re reading or the message you’re replying to is the key to understanding the context of the moment,” she said. “Once it has that understanding, it’s able to get you quick answers and [help you do] quick actions.”
Google’s artificial-intelligence technology is also at the heart of its biggest product announcement of the day, a photo storage service called Google Photos. It offers a way to store, automatically back up, share, and edit photos, much like competing services from Apple and Dropbox. Unlike those services, Google Photos offers unlimited storage and automatically organizes your snapshots using algorithms that recognize the people, places, and things in a photo.
Google’s algorithms can group your photos into albums, such as “stadiums,” “beaches,” or “Santa Cruz.” People who appear frequently in your images get their own dedicated albums. Those will even include photos of children when they were still babies, because Google’s facial-recognition technology has been tuned to cope with gradual changes in appearance. You can search your images with queries like “snowstorm in Toronto.”
All the labels applied by the image-processing algorithms are used only to help a user view his or her own photos, Google says. The new service also automatically compiles collages and video edits for your approval to save you the trouble of making them yourself.
Anil Sabharwal, who led work on the new service, said that Google’s technology provides a way to finally tackle a major inconvenience of modern life. “We thought that taking more photos and videos would make it easier to relive the moments that matter, but it’s actually made it harder,” he said. “Using machine learning, Google Photos understands what’s important and helps you organize your memories.”
Sundar Pichai, senior vice president for products at Google, boasted that Google Photos’ ability to recognize the content of images comes from the company’s investment in an approach to artificial intelligence known as deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”).
Pichai said Google was a leader in the technology and hinted that it would be used to offer users more help. “We believe we have the best capabilities in the world,” he said. “You are deluged with a lot of information on your phones. We are working hard to be more assistive to users.”