The Chinese surveillance state proves that the idea of privacy is more “malleable” than you’d expect
The authors of "Surveillance State" discuss what the West misunderstands about Chinese state control and whether the invasive trajectory of surveillance tech can still be reversed.
It’s no surprise that last week, when the Biden administration updated its list of Chinese military companies blocked from accessing US technologies, it added Dahua. The second-largest surveillance camera company in the world, just after Hikvision, Dahua sells to over 180 countries. It exemplifies how Chinese companies have leapfrogged to the front of the video surveillance industry and driven the adoption of surveillance tech around the world—especially in China itself.
Over the past decade, the US—and the world more generally—has watched with a growing sense of alarm as China has emerged as a global leader in this space. Indeed, the Chinese government has been at the forefront of exploring ways to apply cutting-edge research in computer vision, the Internet of Things, and hardware manufacturing to day-to-day governance. This has led to a slew of human rights abuses—notably, and perhaps most brutally, in monitoring Muslim ethnic minorities in the country’s western Xinjiang region. At the same time, the state has also used surveillance tech for good: to find abducted children, for example, and to improve traffic control and trash management in populous cities.
As Wall Street Journal reporters Josh Chin and Liza Lin argue in their new book Surveillance State, out last month, the Chinese government has managed to build a new social contract with its citizens: they give up their data in exchange for more precise governance that, ideally, makes their lives safer and easier (even if it doesn’t always work out so simply in reality).
MIT Technology Review recently spoke with Chin and Lin about the five years of reporting that culminated in the book, exploring the misconception that privacy is not valued in China.
“A lot of the foreign media coverage, when they encountered that [question], would just brush it off as ‘Oh, Chinese people just don’t have the concept of privacy … they’re brainwashed into accepting it,’” says Chin. “And we felt it was too easy of a conclusion for us, so we wanted to dig into it.” When they did, they realized that the perception of privacy is actually more pliable than it often appears.
We also spoke about how the pandemic has accelerated the use of surveillance tech in China, whether the technology itself can stay neutral, and the extent to which other countries are following China’s lead.
How the world should respond to the rise of surveillance states “might be one of the most important questions facing global politics at the moment,” Chin says, “because these technologies … really do have the potential to completely alter the way governments interact with and control people.”
Here are the key takeaways from our conversation with Josh Chin and Liza Lin.
China has rewritten the definition of privacy to sell a new social contract
After decades of double-digit GDP growth, China’s economic boom has slowed down over the past three years and is expected to face even stronger headwinds. (The World Bank currently estimates that China’s 2022 annual GDP growth will decrease to 2.8%.) So the old social contract, which promised better returns from an economy steered by an authoritarian government, is strained—and a new one is needed.
As Chin and Lin observe, the Chinese government is now proposing that by collecting every Chinese citizen’s data extensively, it can find out what the people want (without giving them votes) and build a society that meets their needs.
But to sell this to its people—who, like others around the world, are increasingly aware of the importance of privacy—China has had to cleverly redefine that concept, moving from an individualistic understanding to a collectivist one.
The idea of privacy itself is “an incredibly confusing and malleable concept,” says Chin. “In US law, there’s a dozen, if not more, definitions of privacy. And I think the Chinese government grasped that and sensed an opportunity to define privacy in ways that not only didn’t undermine the surveillance state but actually reinforced it.”
What the Chinese government has done is position the state and citizens on the same side of the privacy battle against private companies. Consider recent Chinese legislation like the Personal Information Protection Law (in effect since November 2021) and the Data Security Law (since September 2021), under which private companies face harsh penalties for allowing security breaches or failing to get user consent for data collection. State actors, however, largely get a pass under these laws.
“Cybersecurity hacks and data leaks happen not just to companies. They happen to government agencies, too,” says Lin. “But with something like that, you never hear state media play it up at all.” Enabled by its censorship machine, the Chinese government has often successfully directed people’s fury over privacy violations away from the government and entirely toward private companies.
The pandemic was the perfect excuse to expand surveillance tech
When Chin and Lin were planning the book, they envisioned ending with a thought experiment about what would happen to surveillance tech if something like 9/11 happened again. Then the pandemic came.
And just like 9/11, the authors saw, the coronavirus fast-tracked the global surveillance industry—particularly in China.
Chin and Lin report on the striking parallels between the way China used societal security to justify the surveillance regime it built in Xinjiang and the way it used physical safety to justify the overreaching pandemic control tools. “In the past, it was always a metaphorical virus: ‘someone was infected with terrorist ideas,’” says Lin. In Xinjiang, before the pandemic, the term “virus” was used in internal government documents to describe what the state deemed “Islamic radicalism.” “But with covid,” she says, “we saw China really turn the whole state surveillance apparatus against its entire population and against a virus that was completely invisible and contagious.”
Going back to the idea that the perception of privacy can change greatly depending on the circumstances, the pandemic has also provided the exact context in which ordinary citizens may agree to give up more of their privacy in the name of safety. “In the field of public health, disease surveillance has never been controversial, because of course you would want to track a disease in the way it spreads. Otherwise how do you control it?” says Chin.
“They probably saved millions of lives by using those technologies,” he says, “and the result is that sold [the necessity of] state surveillance to a lot of Chinese people.”
Does “good” surveillance tech exist?
Once someone (or some entity) starts using surveillance tech, the slope is extremely slippery: no matter how noble the motive for developing and deploying it, the tech can always be turned to more malicious purposes. For Chin and Lin, China shows how the “good” and “bad” uses of surveillance tech are always intertwined.
They report extensively on how a surveillance system in Hangzhou, the city that’s home to Alibaba, Hikvision, Dahua, and many other tech companies, was built on the benevolent premise of improving city management. Here, with a dense network of cameras on the street and a cloud-based “city brain” processing data and giving out orders, the “smart city” system is being used to monitor disasters and enable quick emergency responses. In one notable example, the authors talk to a man who accompanied his mother to the hospital in an ambulance in 2019 after she nearly drowned. The city was able to turn all the traffic lights on their path to reduce the time it took to reach the hospital. It’s impossible to argue this isn’t a good use of the technology.
But at the same time, it has come to a point where the “smart city” technologies are almost indistinguishable from “safe city” technologies, which aim to enhance police forces and track down alleged criminals. The surveillance company Hikvision, which partly powers the lifesaving system in Hangzhou, is the same one that facilitated the massive incarceration of Muslim minorities in Xinjiang.
China is far from the only country where police are leaning on a growing number of cameras. Chin and Lin highlight how police in New York City have used and abused cameras to build a facial recognition database and identify suspects, sometimes with legally questionable tactics. (MIT Technology Review also reported earlier this year on how the police in Minnesota built a database to surveil protesters and journalists.)
Chin argues that given this track record, the tech itself can no longer be considered neutral. “Certain technologies by their nature lend themselves to harmful uses. Particularly with AI applied to surveillance, they lend themselves to authoritarian outcomes,” he says. And just like nuclear researchers, for instance, scientists and engineers in these areas should be more careful about the technology’s potential harm.
It’s still possible to disrupt the global supply chain of surveillance tech
It’s easy to feel pessimistic when talking about how surveillance tech will advance in China: its invasive deployment has become so widespread that it’s hard to imagine the country reversing course.
But that doesn’t mean people should give up. One key way to intervene, Chin and Lin argue, is to cut off the global supply chain of surveillance tech (a network MIT Technology Review wrote about just last month).
The development of surveillance technology has always been a global effort, with many American companies participating. The authors recount how American companies like Intel and Cisco were essential in building the bedrock of China’s surveillance system. And they were able to disclaim their own responsibility by saying they simply didn’t know what the end use of their products would be.
That kind of excuse won’t work as easily in the future, because global tech companies are being held to higher standards. Whether they contributed to human rights violations on the opposite side of the globe “has become a thing that companies are worried about and planning around,” Chin says. “That’s a really interesting shift that we haven’t seen in decades.”
Some of these companies have stopped working with China or have been replaced by Chinese firms that have developed similar technologies, but that doesn’t mean China has a self-sufficient surveillance system now. The supply chain for surveillance technology is still distributed around the world, and Chinese tech companies require parts from the US or other Western countries to continue building their products.
The main example here is the GPU, a type of processor originally designed to render video game graphics that has since been repurposed to power mass surveillance systems. For these, China still relies on foreign companies like Nvidia, which is headquartered in California.
“In the last two years, there’s been a huge push to substitute foreign technology with domestic technology, [but] these are the areas [where] they still can’t achieve independence,” Lin says.
This means the West can still try to slow the development of the Chinese surveillance state by putting pressure on industry. But results will depend on how much political will there is to uncover the key links in surveillance supply chains, and to come up with effective responses.
“The other really important thing is just to strengthen your own democratic institutions … like a free press and a strong and vibrant civil society space,” says Lin. China, after all, won’t be the only country with the potential to become a surveillance state. It can happen anywhere, the authors warn, including in countries with democratic institutions.