Humans and technology

YouTube says it is finally trying to make the site less toxic

December 12, 2019

YouTube is responding to allegations that it allows racist and homophobic harassment on its platform. But now it has to actually enforce its new rules.

What happened? On Wednesday, YouTube announced an update to its harassment policy that means material that “maliciously insulted or demeaned others because of their race, gender or sexual orientation” will be removed. It will also ban “veiled or implied” threats or “language suggesting physical violence may occur.”

Why the change: The update comes about six months after YouTube came under fire for refusing to ban Steven Crowder, a right-wing personality who had used racist and homophobic language against a Vox journalist on his channel. At the time, YouTube said Crowder’s words did not breach its policies. This update looks to be a response to the backlash that followed.

Okay, so how will it do this? A sprinkling of AI, but mostly a lot of help from thousands of new moderators who will be hired to watch videos and scan them for problematic content. YouTube’s track record of actually enforcing its own policies is really not great, however.

And haven’t there been issues with using moderators? Yep. Earlier this year, an investigation by The Verge detailed severe mental health issues among moderators at Facebook; another investigation, by the Washington Post, found YouTube moderators suffering similarly. Reddit has tried to aid human moderators with AI-powered “automoderators,” but the system is imperfect and still requires human review.

The other YouTube policy hiccup affects kids: In September, YouTube and the Federal Trade Commission reached a $170 million settlement over the company’s illegally collecting and using data on what children were watching, a violation of the Children’s Online Privacy Protection Act. YouTube was required to create a labeling system for children’s videos; if videos are aimed at kids, their creators aren’t allowed to collect ad money or target potentially interested viewers on the basis of their watch history.

But what exactly is kids’ content? That’s what YouTube and a ton of creators, worried they will lose income, want to know, especially when it comes to content like unboxing videos or animations that might seem to appeal to kids but could have crossover adult appeal. On Wednesday, YouTube wrote a letter to the FTC asking for clarity because its policy is “complex.” Expect a lot of legal tussling and not a lot of clarity in the months to come. 

