YouTube’s Extremism-Spotting AI Is Working Hard, But Must Work Harder
Policing content on a site where 400 hours of footage are uploaded every minute isn't easy, and can't realistically be done by humans alone. That's why YouTube—along with others, including Facebook—has always been so keen to play up the fact that AI will help it do the job. Now, we have a little insight into how that's going. Speaking to the Guardian, a YouTube spokesperson explained that "over 75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag." That's fairly impressive progress on a very thorny problem, but the remaining 25 percent is a pretty large miss rate, and those videos must have taken a whole lot of human hours to sniff out. In other words: there's still a ways to go.