
Academics Launch Fake Social Network to Get an Inside Look at Chinese Censorship

New research shows China’s online censorship relies on a competitive market where companies vie to offer the best speech-suppressing technology and services.
September 12, 2013

Nine years after Mark Zuckerberg quit Harvard to build Facebook, one of the university’s political science professors, Gary King, decided this year it was time to launch his own social media site. But King didn’t set up his Chinese social network to make money; instead, he wanted to get an insider’s view of Chinese censorship, which relies on Internet providers censoring their own sites in line with government guidelines. King won’t disclose his site’s URL, to protect people involved with his project.

Previous studies of Chinese censorship have mostly involved monitoring Chinese social sites to see which updates censors remove (see “Social Media Censorship Offers Clues to China’s Plans”). Some have relied on rare interviews with insiders willing to talk about their role in censorship. By contracting with a major Chinese provider of Web software to help run his site, King could instead inspect the available censorship tools firsthand. He could also ask the company’s representatives whatever he wanted about how those tools should be used. “When we had questions, we just called customer service,” says King. “They were being paid to help us.”

Along with some parallel experiments on established social sites, King’s dabble in Internet entrepreneurialism has shown that Chinese censorship relies more heavily than was known on automatic filtering that holds posts back for human review before they appear online. The researchers also uncovered evidence that China’s vast censorship system is underpinned by a surprisingly vibrant, capitalistic market where companies compete to offer better censorship technology and services.

Censorship of Chinese sites is sometimes inconsistent and is known to rely heavily on people screening posts manually. But the software the Harvard researchers bought to run their site came with an unexpectedly complex toolkit of automated censorship tools, says King, and the company that provided it was happy to give advice on how to use them. “The options were really quite astounding.”

Not only could new posts be automatically held back for manual review by a human censor based on specific keywords, but they could be treated differently based on their length, where on the site they appeared, and whether they started a conversation or contributed to an existing one. Specific people could be targeted for more aggressive censorship based on their IP address, how recently they had last posted, and their reputation in the community.
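The actual software isn’t public, but the kind of rule-based pre-moderation described above can be sketched in a few lines. Everything in this sketch — the keyword list, thresholds, and field names — is invented for illustration and is not taken from any real censorship product:

```python
# Hypothetical sketch of keyword- and metadata-based pre-moderation.
# Keywords, thresholds, and field names are invented for illustration.

HELD_FOR_REVIEW = "held"
PUBLISHED = "published"

FLAGGED_KEYWORDS = {"protest", "rally"}  # placeholder terms

def moderate(post):
    """Return 'held' if the post should wait for a human censor."""
    text = post["text"].lower()
    # Rule 1: any flagged keyword sends the post to manual review.
    if any(word in text for word in FLAGGED_KEYWORDS):
        return HELD_FOR_REVIEW
    # Rule 2: long posts that start a new thread get extra scrutiny.
    if post["starts_thread"] and len(text) > 500:
        return HELD_FOR_REVIEW
    # Rule 3: low-reputation or rapidly posting users are screened.
    if post["reputation"] < 10 or post["minutes_since_last_post"] < 5:
        return HELD_FOR_REVIEW
    return PUBLISHED
```

In a scheme like this, a short, innocuous post from an established user publishes immediately, while anything matching a rule sits in a queue until a human censor approves it — consistent with the delays the researchers observed.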

Customer service calls to the software provider the team had contracted also revealed a range of extra, paid-for plug-ins offering more sophisticated filtering options. Those conversations shed light on the perennial mystery of just how many censors screen online posts in China. King was told that to keep the government happy, a site should employ two or three censors for every 50,000 users. On that basis, he estimates that between 50,000 and 75,000 censors work at Internet companies inside China.
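The staffing rule King was quoted — two to three censors per 50,000 users — scales linearly, so a site’s obligation follows directly from its user count. The example site size below is hypothetical:

```python
import math

def censors_needed(user_count, low=2, high=3, per_users=50_000):
    """Range of censors implied by the 2-3 per 50,000 users rule of thumb."""
    return (math.ceil(user_count * low / per_users),
            math.ceil(user_count * high / per_users))

# A hypothetical site with 500,000 users would need 20 to 30 censors.
```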

In a parallel experiment, King’s group recruited dozens of people inside China to help post 1,200 different updates to 100 different social sites to see what got censored. Just over 40 percent of all those posts were immediately held back by automated censorship tools. Those filtered posts either appeared within a day or two or never made it online. Watching the fate of different posts suggested sites used a wide variety of different censorship technologies and procedures.

Those findings and King’s experience running his own site suggest that China has created a kind of competitive market in censorship, he says. Companies are free to run their censorship operations mostly as they wish, as long as they don’t allow the wrong kind of speech to flourish. That creates an incentive to find ways to censor more effectively so as to minimize the impact on profitability. “There’s plenty of diversity and room for technical and business innovation in censorship,” says King. “Companies get to experiment and choose from firms trying to sell them censorship technology.”

Jason Q. Ng, a research fellow at the University of Toronto specializing in Chinese censorship, says that King’s look at the options available for censorship is unprecedented. “The authorities seem to recognize that government isn’t best suited for the performance of censorship,” says Ng. “It’s better for private companies to do this not just for innovation but for resources.”

That market operates under the constant threat of punitive government action, says Ng. After the Bo Xilai political scandal broke last year, China’s two largest Twitter-style sites, Tencent Weibo and Sina Weibo, were shut down for three days, while several smaller companies were closed down for good. “A report in [state press agency] Xinhua said this was a response to those companies not doing a good job,” says Ng.

The results from the Harvard group’s experiment in which posts were made to existing sites add further evidence that although China’s censorship is rarely consistent, it is more targeted than often assumed, says Ng. By carefully choosing the content of posts to create a randomized trial, King’s group showed that censors don’t target complaints about the government. Instead, they’re much more concerned about talk of collective action.

Ng says that adds numerical weight to a common perception amongst China experts that the country’s government finds it useful to allow people to vent frustrations online. “Allowing people to post about corrupt officials is a tool government can use,” he says.

Revealing how China censors its citizens is unlikely to prompt the country to change its policy. But Ng hopes that understanding the motives behind China’s censorship could help efforts by outsiders to encourage authorities to loosen their controls on online speech. “It will enhance the conversation with people in charge about balancing the collective good versus freedom of expression.”
