On Monday Apple showed off a string of new iPhone features powered by recent advances in artificial intelligence—many of them aping ones already launched by rival Google.
But the features Apple announced during its annual Worldwide Developer Conference, such as facial recognition and software that knows what’s in your photos, were distinct in how much they emphasized privacy.
Craig Federighi, senior vice president of software engineering at Apple, repeatedly stated that machine-learning algorithms able to understand personal data such as photos are being used only within the confines of a person’s iPhone, not on Apple’s cloud servers. “We believe you should have great features and great privacy,” he said.
A new version of Apple’s Photos app, coming this fall with a new version of Apple’s mobile operating system, will use facial recognition to maintain virtual albums of snaps containing people you frequently photograph. It will also look at the contents of your photos, so you can search your collection using keywords such as “horses” or “mountains.”
Federighi said those features are powered by deep learning, a technique that underpins significant recent progress in artificial intelligence. With them, Apple is also playing catch-up with Google, which introduced a photos service with those same features over a year ago (see “Google Rolls Out New Automated Helpers”).
But Federighi said that Apple didn’t want its algorithms to spill data to Apple about the content of user photos. “When it comes to performing advanced deep learning and intelligence of your data, we’re doing it on your device, keeping your personal data under your control,” he said.
Companies including Google and Facebook generally run image-recognition algorithms inside their cloud computing systems, meaning that photos must be uploaded to a company’s servers. Although Google and others have privacy policies governing how data gleaned from customers is used, some experts say it is safer if data such as what’s inside your photos never reaches corporate servers in the first place.
Apple’s mobile operating system update this fall will also see the company’s QuickType mobile keyboard become better at suggesting words, thanks to an ability to understand the context of what you have already typed. For example, it would suggest completing the sentence “The Orioles are playing in the …” with the word “playoffs,” but the sentence “The children are playing in the …” with “yard.”
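Apple has not detailed the model behind QuickType’s suggestions. As a rough illustration of the idea of context-sensitive completion, here is a toy sketch that keys suggestions on the sentence’s subject as well as the immediately preceding words, using simple counts over a tiny made-up corpus (a real keyboard model would be far more sophisticated):

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the typing data a keyboard model learns from.
corpus = [
    "the orioles are playing in the playoffs",
    "the yankees are playing in the playoffs",
    "the children are playing in the yard",
    "the kids are playing in the yard",
]

# Count which word follows each context. The context here is the subject
# word plus the last two words typed: a crude stand-in for the richer
# sentence-level context a production model would consider.
model = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(3, len(words)):
        context = (words[1], words[i - 2], words[i - 1])
        model[context][words[i]] += 1

def suggest(prefix: str) -> str:
    """Suggest the most likely next word given the typed prefix."""
    words = prefix.lower().split()
    context = (words[1], words[-2], words[-1])
    candidates = model.get(context)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest("The Orioles are playing in the"))   # playoffs
print(suggest("The children are playing in the"))  # yard
```

Because the subject word is part of the context, the same prefix “… are playing in the” yields different completions depending on who is doing the playing, which is exactly the behavior the example sentences describe.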
Again, Google has rolled out this kind of technology more aggressively. Google’s Inbox e-mail app can suggest sentence-long replies to e-mails, and the same feature will be built into the company’s Allo messaging app launching this summer (see “Google Finally Launches Siri-Killer in Search Pivot”).
But Federighi said that Apple had invested in research that allowed the company to collect data on what users type while still protecting their privacy. The company has adopted a method called “differential privacy” that’s been widely discussed in academic research. The method makes it possible to collect data such as which words people use most commonly without leaking information about who is using which words, he said.
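Apple has not published the exact mechanism it uses, but one classic differentially private technique for this kind of aggregate collection is randomized response: each device randomly flips some of its answers before reporting, so no single report reveals a user’s true behavior, yet the known noise rate can be inverted to recover accurate population-level statistics. A minimal sketch, with an assumed noise parameter `p`:

```python
import random

def randomized_response(uses_word: bool, p: float = 0.25) -> bool:
    """Report whether this user typed a given word, with plausible deniability.

    With probability 2p the true answer is replaced by a fair coin flip,
    so any individual report could plausibly be noise.
    """
    if random.random() < 2 * p:       # noise branch: ignore the truth
        return random.random() < 0.5  # answer at random
    return uses_word                  # otherwise answer truthfully

def estimate_frequency(reports: list[bool], p: float = 0.25) -> float:
    """Invert the known noise rate to estimate the true population frequency.

    P(report True) = p + (1 - 2p) * true_rate, so we solve for true_rate.
    """
    observed = sum(reports) / len(reports)
    return (observed - p) / (1 - 2 * p)

# Simulate 100,000 users, 30% of whom actually use the word.
random.seed(0)
true_rate = 0.30
reports = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(estimate_frequency(reports))  # close to 0.30
```

The server only ever sees the noisy reports, yet the aggregate estimate converges on the true usage rate as the number of users grows, which is the trade-off Federighi described: useful statistics without learning which individual users typed which words.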
Federighi said that privacy researcher Aaron Roth, an associate professor at the University of Pennsylvania, was given a chance to review that work. “Incorporating differential privacy broadly into Apple’s technology positions Apple as the clear privacy leader among technology companies today,” said Roth, in a statement released by the company.
Apple’s claims for its new uses of machine learning are in line with its history of emphasizing that features involving user data, such as search, are implemented in ways that protect privacy. CEO Tim Cook has said the company is able to take this attitude because Apple’s business model is not built on monetizing customer data.