Apple defends its new anti-child-abuse tech against privacy concerns

Apple’s radical new anti-abuse technology, which scans images directly on iPhones, has provoked both criticism and praise.


Apple has boasted a few iconic ads during the company’s 45-year history, from the famous 1984 Super Bowl ad for Macs to the company’s combative 2019 ad campaign promising that “what happens on your iPhone stays on your iPhone.”

On Thursday, Apple announced new technologies to detect child sexual abuse material (CSAM) right on iPhones—and suddenly, it seems what’s on your iPhone no longer always simply stays there. The controversial new features strike at the heart of concerns about privacy, surveillance, and tech-enabled crimes. 

Apple says these new features preserve privacy as they combat child abuse. For critics, though, the biggest question is not about what the technology might do today; it’s about what it could become tomorrow. 

Will Apple’s new scanning technology enable even broader surveillance around the world? Will governments start demanding that Apple scan for all sorts of forbidden content on the iPhones in their countries? 

Apple says the new detection tool, called NeuralHash, can identify images of child abuse stored on an iPhone without decrypting the image. The company also says it has implemented multiple checks to reduce the chance of errors before images are passed to the National Center for Missing and Exploited Children (NCMEC) and then to law enforcement. (For example, the scanner must detect multiple images rather than just one.) The feature will roll out in the United States this year.

Google, Microsoft, Dropbox, and other big cloud services already scan material stored on their servers for child abuse material, so the general premise is not new. The difference here is that some of Apple’s scans will occur on the iPhone itself—and Apple argues that this is the defining pro-privacy feature of the new technology. 

In a briefing with journalists on Friday, Apple said that while other cloud services scan nearly everything their users upload, its on-device scanning is designed to send the company only an unreadable hash: a code that identifies images depicting child abuse based on a database maintained by NCMEC rather than by Apple.
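To make the mechanism concrete, here is a minimal sketch of how perceptual-hash matching with a reporting threshold can work in general. This is not Apple’s NeuralHash, which uses a neural network to compute its hashes; the simple “average hash” below, the example database, and the distance and threshold values are all illustrative assumptions.

```python
from PIL import Image  # Pillow; used here for a simple "average hash" stand-in


def average_hash(path: str, size: int = 8) -> int:
    """Illustrative perceptual hash: shrink to 8x8 grayscale and threshold each
    pixel at the mean. A stand-in for NeuralHash, which uses a neural network."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of known hashes. In Apple's design, the database comes
# from NCMEC and is stored on the device in a blinded form the phone cannot read.
KNOWN_HASHES = {0x81C3E7FF00FF7E3C}

MATCH_DISTANCE = 5     # how close two hashes must be to count as a match (made up)
REPORT_THRESHOLD = 30  # matches required before anything is flagged (made up)


def count_matches(photo_paths: list[str]) -> int:
    matches = 0
    for path in photo_paths:
        h = average_hash(path)
        if any(hamming(h, known) <= MATCH_DISTANCE for known in KNOWN_HASHES):
            matches += 1
    return matches


def should_flag(photo_paths: list[str]) -> bool:
    # Only once the match count crosses the threshold would any review be triggered.
    return count_matches(photo_paths) >= REPORT_THRESHOLD
```

According to Apple’s technical summary, the real system also wraps the comparison in cryptographic protocols (private set intersection and threshold secret sharing), so neither the device nor Apple learns the outcome of individual comparisons until the threshold is crossed.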

Apple has also pointed out that the feature applies only to photos that users upload to iCloud (automatic uploads are the default setting on iPhones but can be disabled). iCloud accounts are not end-to-end encrypted, so law enforcement can already peer into them.

Why, then, doesn’t Apple simply do what other big tech companies do and scan images when they’re uploaded to the cloud rather than while they’re still on someone’s phone? Why build a new and complex set of technologies when it could use existing ones off the shelf?

Apple’s next move

There is an enormous amount we don’t know about what Apple is doing now and what comes next. One popular theory is that this week’s announcement will be the first of several moves in the coming months.

Apple’s marketing has boasted about privacy for years. The iPhone was one of the first personal devices to be encrypted by default, and iMessage is one of the most popular messaging apps with end-to-end encryption. But many Apple customers automatically back everything up to their iCloud accounts, and iCloud backups have never been end-to-end encrypted. So even in 2019, that famous iPhone privacy ad could have used a tiny asterisk.

Governments have for years exerted enormous pressure on all tech companies, Apple included, to allow law enforcement special access to otherwise encrypted data in order to prevent what they see as the most heinous of crimes. Child abuse is always at the top of that list, followed closely by terrorism. Apple’s public embrace of encryption has made some cops’ jobs more difficult—but it’s made some kinds of surveillance and abuse of power harder too.

The big loophole has always been iCloud. Cops have to work hard to get directly into iPhones, but if most of the data already sits in an iCloud account that isn’t end-to-end encrypted, a warrant will do the trick.

Following this week’s announcement, some experts think Apple will soon announce that iCloud will be encrypted. If iCloud is encrypted but the company can still identify child abuse material, pass evidence along to law enforcement, and suspend the offender, that may relieve some of the political pressure on Apple executives. 

It wouldn’t relieve all the pressure: most of the same governments that want Apple to do more on child abuse also want more action on content related to terrorism and other crimes. But child abuse is a real and sizable problem that big tech companies have so far largely failed to address.

“Apple’s approach preserves privacy better than any other I am aware of,” says David Forsyth, the chair of the computer science department at the University of Illinois Urbana-Champaign, who reviewed Apple’s system. “In my judgement this system will likely significantly increase the likelihood that people who own or traffic in [CSAM] are found; this should help protect children. Harmless users should experience minimal to no loss of privacy, because visual derivatives are revealed only if there are enough matches to CSAM pictures, and only for the images that match known CSAM pictures. The accuracy of the matching system, combined with the threshold, makes it very unlikely that pictures that are not known CSAM pictures will be revealed.”
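Forsyth’s point about the threshold can be illustrated with a back-of-the-envelope calculation: if each innocent photo has only a tiny chance of falsely matching a known hash, the odds of an account crossing a multi-image threshold by accident collapse very quickly. The sketch below uses made-up numbers, not Apple’s published figures, and treats false matches as independent events, which is a simplifying assumption.

```python
from math import exp


def prob_false_flag(n_photos: int, p_false_match: float, threshold: int) -> float:
    """Chance that an account holding only innocent photos still produces at least
    `threshold` false matches, using a Poisson approximation (false matches are
    rare and assumed independent)."""
    lam = n_photos * p_false_match   # expected number of false matches
    term = exp(-lam)                 # P(exactly 0 false matches)
    for k in range(1, threshold + 1):
        term *= lam / k              # now term == P(exactly `threshold` matches)
    tail = 0.0
    for k in range(threshold, threshold + 200):
        tail += term                 # accumulate P(threshold), P(threshold + 1), ...
        term *= lam / (k + 1)
    return tail


# Hypothetical numbers, chosen only to illustrate the effect of a threshold.
print(prob_false_flag(n_photos=10_000, p_false_match=1e-6, threshold=1))   # roughly 1 in 100
print(prob_false_flag(n_photos=10_000, p_false_match=1e-6, threshold=30))  # vanishingly small
```

With these hypothetical numbers, a per-photo false-match rate of one in a million gives a library of 10,000 photos roughly a 1-in-100 chance of producing a single false match, but the chance of producing 30 of them is effectively zero.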

What about WhatsApp?

Every big tech company faces the horrifying reality of child abuse material on its platform. None have approached it like Apple.

Like iMessage, WhatsApp is an end-to-end encrypted messaging platform with billions of users. Like any platform that size, it faces a serious abuse problem.

“I read the information Apple put out yesterday and I'm concerned,” WhatsApp head Will Cathcart tweeted on Friday. “I think this is the wrong approach and a setback for people's privacy all over the world. People have asked if we'll adopt this system for WhatsApp. The answer is no.”

WhatsApp includes reporting features so that any user can flag abusive content to the company. While those features are far from perfect, WhatsApp reported over 400,000 cases to NCMEC last year.

“This is an Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control,” Cathcart said in his tweets. “Countries where iPhones are sold will have different definitions on what is acceptable. Will this system be used in China? What content will they consider illegal there and how will we ever know? How will they manage requests from governments all around the world to add other types of content to the list for scanning?”

In its briefing with journalists, Apple emphasized that the new scanning technology is launching only in the United States for now. But the company went on to argue that it has a track record of fighting for privacy and expects to continue to do so. In that way, much of this comes down to trust in Apple.

The company argued that the new system cannot easily be repurposed by governments, and it emphasized repeatedly that opting out is as simple as turning off iCloud backup.

Despite being one of the most popular messaging platforms on earth, iMessage has long been criticized for lacking the kind of reporting capabilities that are now commonplace across the social internet. As a result, Apple has historically reported a tiny fraction of the cases to NCMEC that companies like Facebook do.

Instead of adopting that solution, Apple has built something entirely different—and the final outcomes are an open and worrying question for privacy hawks. For others, it’s a welcome radical change.

“Apple’s expanded protection for children is a game changer,” John Clark, president of the NCMEC, said in a statement. “The reality is that privacy and child protection can coexist.” 

High stakes

An optimist would say that enabling full encryption of iCloud accounts while still detecting child abuse material is both an anti-abuse and privacy win—and perhaps even a deft political move that blunts anti-encryption rhetoric from American, European, Indian, and Chinese officials.

A realist would worry about what comes next from the world’s most powerful countries. It is a virtual guarantee that Apple will get, and probably already has received, calls from capital cities as government officials begin to imagine the surveillance possibilities of this scanning technology. Political pressure is one thing; regulation and authoritarian control are another. But that threat is neither new nor specific to this system. As a company with a track record of quiet but profitable compromise with China, Apple has a lot of work to do to persuade users that it can resist draconian governments.

All of the above can be true. What comes next will ultimately define Apple’s new technology. If governments weaponize this feature to broaden surveillance, the company will clearly have failed to deliver on its privacy promises.
