How YouTube’s rules are used to silence human rights activists
Instead of protecting communities, online safety policies are being used to silence them. Just ask those documenting oppression in Xinjiang.
For over a week now, a corner of YouTube frequented by Kazakh dissidents and close observers of human rights in Xinjiang has been only intermittently available.
On June 15, the YouTube channel Atajurt Kazakh Human Rights went dark, its feed of videos replaced by a vague statement that the channel had been “terminated for violating YouTube’s community guidelines.” A few days later, it was reinstated without public explanation. Then, several days after that, 12 of the channel’s earliest videos disappeared from its public feed.
Atajurt collects and publishes video testimonies from family members of people imprisoned in China’s internment camps in Xinjiang. To ensure the credibility of these video statements, each public testimony shows proof of identity for the person testifying and the detained relatives. This also underscores the organization’s integrity, says Serikzhan Bilash, a prominent Kazakh activist and the owner of the channel.
Accuracy is especially important not just because so little information is coming out of Xinjiang, but also because testimonies often face criticism from supporters of the Chinese Communist Party—who, Bilash says, are looking for any excuse to deny what the United Nations has called “grave human rights abuses” in the province.
After being published by Atajurt, the information in the videos is then used by other organizations such as the Xinjiang Victims Database, which documents where detentions are occurring, which communities are most affected, and who has disappeared. One representative of Xinjiang Victims Database told MIT Technology Review that the project linked to the Atajurt videos “thousands of times.”
For years, these videos—which date back as far as 2018—have not been a problem, at least not from YouTube’s perspective. That changed last week.
“A thorough review”
“We have strict policies that prohibit harassment on YouTube, including doxing,” a YouTube representative told MIT Technology Review on Friday, later adding, “We welcome responsible efforts to document important human rights cases around the world. We also have policies that do not allow channels to publish personally identifiable information, in order to prevent harassment.”
This was likely a reference to Atajurt’s display of identity documents, which it uses to confirm the veracity of people’s testimonies.
Nevertheless, shortly after MIT Technology Review sent a list of questions about the June 15 takedown, and its content moderation policies more broadly, YouTube reversed its position. “After thorough review of the context of the video,” it reinstated the channel “with a warning,” a company representative wrote in an email. “We ... are working closely with this organization so that they can remove Personally Identifiable Information from their videos to reinstate them.”
As Atajurt was still weighing whether, or how, to comply with these community guidelines, YouTube took further action on Tuesday, June 22: it locked a dozen of Atajurt’s earliest video testimonies, making them private, on the grounds that they potentially violated its violent criminal organizations policy, which prohibits content produced by or in praise of criminal groups or terrorist organizations.
It’s unclear why YouTube considers video testimonies from family members of detained Chinese Muslims to be potentially pro-violent criminal or terrorist, or how this relates to YouTube’s earlier statements that Atajurt was inappropriately sharing personally identifiable information. YouTube representatives said in an email that its action was the result of “automated messaging that in this case is not related to this creator’s content.”
But it is not the first time that Atajurt and Bilash, its founder, have come under attack.
A battle over YouTube, a battle for narrative
In 2019, Bilash was arrested for his vocal criticism of the Kazakh government’s close ties to China, which he blames for Kazakhstan’s weak support of ethnic Kazakhs caught up in China’s camps. He faced seven years in jail for “inciting inter-ethnic tensions” and was released only after being forced to agree to stop his activism, an agreement he ignored once freed.
Then, in September 2019, after multiple attempts to register Atajurt as a nonprofit in Kazakhstan met with failure, a pro-government group registered a different organization with a similar name and tried to gain control of the YouTube channel. This would have given it access to thousands of unpublished video testimonies that the group keeps private on YouTube at the request of the witnesses.
In 2020, Bilash fled Kazakhstan for Turkey. Today, he is in exile in Texas, where he thought the channel and its video testimonies would be safe.
But that was before his videos ran afoul of YouTube’s community guidelines.
Before the back-and-forth with YouTube this past week, Atajurt had already received two “strikes” in the past two months for “harassment and cyberbullying”—for including identity cards in videos posted in 2018. Appeals were denied. According to YouTube policy, channels are permanently removed if they receive three strikes within 90 days.
But supporters say that the strikes were not evidence of a pattern of bad behavior on Bilash and Atajurt’s part, but rather the result of continued mass reporting campaigns by actors affiliated with the Chinese and Kazakh governments.
Another Atajurt representative showed MIT Technology Review screenshots of what he said were instructional videos shared on WhatsApp, in Kazakh, teaching viewers how to flag Atajurt’s videos en masse to force YouTube to take them down. Earlier this year, similar attacks had caused Atajurt’s Facebook accounts to be temporarily removed.
A common playbook
While there is no definitive proof that either the Chinese or Kazakh government was behind the effort to remove Atajurt’s channel, it follows a playbook that is becoming increasingly common across the world. From the government of Ecuador to the Vietnamese military to US police departments, organizations that do not like critical content are using copyright law and standard social media policies to force—or simply trick—platforms into takedowns.
Hiding behind standard policies and laws that apply to all users is “a way to lend an air of legitimacy to arbitrary political censorship, and it also creates plausible deniability for the censor,” says Nick Monaco, the director of China research at Miburo Solutions and a researcher on state disinformation campaigns.
“It’s also about finding a way to hide from security teams at these companies—the more reports you have against a targeted piece of content, the more legitimate the complaint looks, and the more incentive the companies have to remove that content,” he adds. “As long as you cover your tracks well, you can use a team of humans and bots to convincingly make it seem like a piece of content is genuinely offending diverse audiences, when in reality all the complaints are coming from one place.”
Deborah Brown, a digital rights researcher at Human Rights Watch, adds that Atajurt’s experience underscores how poorly equipped YouTube is to handle this kind of coordinated action. Her organization had alerted YouTube that the channel had probably been removed in error, she says. But this was not HRW’s job. YouTube could do better, she says, if it had “more contextual knowledge” and built “in-house human rights expertise.”
And weaponizing content moderation is not the only way state actors are trying to control the narrative. Recent reporting by the New York Times and ProPublica found evidence of a coordinated propaganda campaign in which thousands of residents of Xinjiang speak out, following similar scripts, about their rosy lives as a counter to the growing proof of mass detentions and human rights abuses in the Western province.
Bilash says he and his team were still considering whether to blur out the personally identifying information in order to comply with YouTube policy when they received the notifications that 12 more videos had been locked for supporting “violent criminal organizations.”
He had already been skeptical of the company’s stated reasons for his channel’s removal: “Nobody cares about the documents. It is just an excuse from YouTube,” he says.
Whatever Atajurt decides, being forced to make the decision at all presents the organization with a difficult choice: change its long-standing methods of documenting abuses in Xinjiang and risk being attacked by the Chinese and Kazakh governments for propagating false information—or keep the information up and risk being taken offline by YouTube.
The strikes, takedowns, and reinstatement may have been intended to deliver a message to Atajurt, but in fact YouTube may be sending an even clearer message to bad actors looking to silence Kazakh dissidents and other human rights organizations: if you want to get rid of critical content, just use YouTube’s own community guidelines as a weapon.
Correction: A previous version of this article said that Human Rights Watch was one of the organizations that uses the content in Atajurt's videos in its own human rights documentation. It does not, but its documentation of Xinjiang’s crisis has in part been facilitated by the volunteers of the Atajurt Kazakh Human Rights Organization.
Do you have an experience with unclear content moderation policies to share? Contact the reporter with tips on Signal at +1 626.765.5489 or email email@example.com.