Can 10,000 Humans Clean Up YouTube?

December 5, 2017

The algorithms aren’t working. YouTube, like Facebook, comes under constant fire for objectionable content—from extremism to child abuse. Tech leaders promise that artificial intelligence will help solve the problem by automatically identifying offensive content before anyone can see it. But YouTube’s best attempts so far spot only 75 percent of such clips before a user reports them.

Clearly, AI is not yet enough. YouTube’s CEO, Susan Wojcicki, appears to have taken that message on board. In a new blog post, she explains that the video site is swelling its moderation team, bringing the total number of staff reviewing content across Google to 10,000.

What will they do? To a large extent, more of the same. “Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content,” she writes. “We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether.”

But Wojcicki also points out that all the work has another use: training AIs. “Human judgment is critical to making contextualized decisions on content,” she writes, adding that by collecting data about how its moderators work, YouTube will be able to “train ... machine-learning technology to identify similar videos in the future.”
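In machine-learning terms, what Wojcicki describes is ordinary supervised learning: each moderator verdict becomes one more labeled training example for a classifier. Here is a minimal sketch of that idea, assuming scikit-learn and invented toy labels—nothing below reflects YouTube’s actual features, data, or models:

```python
# Toy sketch of learning from moderator verdicts (not YouTube's actual system).
# Assumes scikit-learn is installed; the texts and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical video metadata, each paired with a human moderator's decision:
# 1 = flagged as violating policy, 0 = cleared on review.
texts = [
    "graphic violence compilation uncensored",
    "extremist recruitment speech full video",
    "cute puppy learns to fetch",
    "beginner piano tutorial lesson one",
    "shocking raw footage banned clip",
    "family vacation highlights 2017",
]
labels = [1, 1, 0, 0, 1, 0]

# Every video the trust-and-safety team reviews adds another labeled example.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can then pre-screen new uploads before any user reports them.
print(model.predict_proba(["uncensored extremist violence clip"])[0][1])
```

The more verdicts the human teams log, the more such a model has to learn from—which is the dual purpose Wojcicki is pointing to.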

If this all sounds familiar, well, that’s because Facebook itself added 3,000 extra content moderators earlier this year. At the time, we argued that extra people alone, without robust and reliable AI, would be unlikely to make much of a dent in offensive content—because there’s so damn much of it to sift through.

In YouTube’s case, with 300 hours of footage uploaded every minute, it seems equally unlikely to succeed—at least until its algorithms have learned all they need from the meatspace moderators.
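A back-of-the-envelope calculation shows why. The reviewer head count and an eight-hour viewing day are our assumptions here, not YouTube’s figures:

```python
# Rough arithmetic on YouTube's moderation load.
# Assumptions: 300 hours uploaded per minute (the figure above), and
# 10,000 reviewers each watching 8 hours of footage per workday.
uploaded_per_day = 300 * 60 * 24           # 432,000 hours of new video daily
review_capacity = 10_000 * 8               # 80,000 hours reviewable daily
print(uploaded_per_day / review_capacity)  # ~5.4
```

Even under those generous assumptions, uploads outpace human reviewing capacity by more than five to one.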
