YouTube’s Extremism-Spotting AI Is Working Hard, But Must Work Harder
Policing content on a site where 400 hours of footage are uploaded every minute isn't easy, and it can't realistically be done by humans alone. That's why YouTube, along with others including Facebook, has always been so keen to play up the idea that AI will help it do the job. Now we have a little insight into how that's going. Speaking to the Guardian, a YouTube spokesperson explained that "over 75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag." That's fairly impressive progress on a very thorny problem, but the remaining 25 percent is a sizable miss rate, and those videos must have taken a whole lot of human hours to sniff out. In other words: there's still a ways to go.