YouTube’s Extremism-Spotting AI Is Working Hard, But Must Work Harder
Policing content on a site where 400 hours of footage are uploaded every minute isn't easy, and can't realistically be done by humans alone. That's why YouTube—along with others, including Facebook—has always been so keen to play up the fact that AI will help it do the job. Now we have a little insight into how that's going. Speaking to the Guardian, a YouTube spokesperson explained that "over 75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag." That's fairly impressive progress on a very thorny problem, but the remaining 25 percent is a sizable miss rate, and those videos must have taken a whole lot of human hours to sniff out. In other words: there's still a ways to go.