
App bans won’t make US security risks disappear

Punishing individual apps like WeChat and TikTok is a short-term fix for a long-standing problem.
September 21, 2020

Will the US government ban TikTok and WeChat, or won’t it—and why? With the Trump administration issuing vaguely phrased executive orders and policies about the apps, even as legal challenges against potential bans move through the courts and the president gives his “blessing” to a deal to keep TikTok in US app stores, it’s hard to make out a coherent story.

The Trump administration’s actions against the two Chinese-owned social-media platforms are driven more by politics and an effort to seem tough on China than by actual privacy, safety, or national security concerns. However, that doesn’t mean there aren’t tough challenges ahead in regulating digital platforms based in China, the United States, or anywhere else.

As the TikTok and WeChat stories unfold—and no one should expect a permanent resolution anytime soon—policymakers, technologists, and citizens should look beyond this chaotic start to the deeper, unresolved questions. Now is the time to develop comprehensive policy tools that protect privacy and national security from threats foreign and domestic.

If the Trump administration were truly serious about stopping malign actors from abusing the personal data of US-based users, or about stopping foreign intelligence agencies from assembling massive datasets describing US society, it would go to the root of the problem: an app economy that collects and monetizes as much data as companies can manage.

TikTok and WeChat critics cite the way the apps collect location data, device identifiers, social connections, browsing histories, and more to argue that the Chinese government could use this data in some kind of machine-learning-driven analysis down the road. Cutting off the apps’ access to US-based users, they say, would shield the country from Chinese intelligence—all while protecting US citizens’ privacy.

Not so fast. In a 2018 study, Oxford scholars analyzed data flows coming out of almost 1 million apps on the US and UK Google Play stores. They found that the median app sent user data to five tracking companies, and 17% of apps sent data to more than 10 trackers. More than 90% of apps analyzed sent data to a US-based company, while 5% sent data to a China-based company. Granted, these numbers only capture the data’s first stop after our smartphones. Some of the data siphoned to advertising networks and trackers is for sale, and both sellers and buyers can be hard to track down.

It’s not as if the US government is unaware that companies based outside China—including those in the United States—could potentially misuse this kind of data store. The Cambridge Analytica scandal, which largely revolved around data obtained from the US tech giant Facebook, showed that the 2016 Trump campaign was well aware of how digital data could be used for political influence.

Nor are authorities blind to the other ways Chinese intelligence is thought to obtain mass data about Americans. Chinese hackers are suspected in the hack, revealed in 2015, of a poorly secured US Office of Personnel Management database, as well as breaches at Anthem health insurance, Marriott hotels, and the credit agency and data broker Equifax.

The true scandal is not that the Chinese government might exploit personal data—a well-documented and unsurprising move from a major intelligence apparatus. It’s that doing so is so easy for them and many others, and will remain so even if TikTok and WeChat are banned.

That said, the Trump administration’s attempts to ban TikTok and WeChat were a mess. They suffered from the administration’s characteristic erraticism as Trump, a beleaguered incumbent, tried to look tough on China after the weak results of a costly trade war. Moreover, they did almost nothing to address the very real privacy and security risks of corporate data exploitation run amok.

There is an upside, however, to all the attention people are paying to the administration’s claims. These would-be bans might finally drive US citizens and institutions to demand comprehensive privacy and data governance. People rightfully concerned about potential foreign threats online should unite to take on the broader challenge.

There is well-organized opposition to enacting serious privacy rules in the United States, and those opponents can far outspend existing efforts to make real progress on the issue. Many of the biggest US tech companies make money by monetizing insights into users’ private lives and preferences, and their attitudes toward privacy vary widely. These companies fear that national data privacy legislation would be burdensome or badly designed.

Input from tech companies will certainly be necessary to strike the right balance in any federal privacy legislation. But companies’ influence and desire to remain free from scrutiny have long drowned out the public interest. If we’re lucky, the conversation around WeChat and TikTok will draw many more people into the process of addressing data security issues.

Fortunately, the United States need not start from nothing. Newly empowered with the European Union’s General Data Protection Regulation (GDPR), which went into effect in May 2018, civil-society organizations and individuals have begun to take on data brokers. The London-based nonprofit Privacy International has challenged the legality of the data broker business under GDPR, targeting Oracle among others. The California Consumer Privacy Act, and a requirement that data brokers in the state register with a public list, are among a batch of recent state and local privacy rules to hit the books.

A well-designed regulatory scheme for data privacy and security would establish rules for collecting, using, and storing user data, and formal mechanisms to provide citizens and national security authorities with the information they need to feel confident that specific apps do not pose a privacy or security risk. It would help ensure that freedom of expression and privacy are honored across our connected lives. And it would lay out a framework for democratic oversight of the moderation and recommendation algorithms that have reshaped the US public sphere. Crucially, any such scheme should be discussed and agreed upon through the standard legislative process.

Chinese-owned apps like TikTok and WeChat would probably not immediately pass muster under a well-designed US regulatory system. It could be that their deficiencies, and the risks inherent in their close ties to China, are so deep that they cannot be overcome, and a fair process would determine that they may not legally continue to do business in the United States. But it’s also possible that they could take steps to meet US requirements and allay legitimate concerns about censorship, algorithmic manipulation, surveillance of targeted individuals, and more. Audits could potentially ensure they meet the mark.

The United States urgently needs to set that mark for all those who handle US private data. A national framework for privacy and data security would not just govern the well-known Chinese apps. It could also help address a huge and largely unregulated data exploitation economy that continues to operate every day in the United States.

Graham Webster is a research scholar and the editor of DigiChina at the Stanford Cyber Policy Center.

