MIT Technology Review

YouTube’s Extremism-Spotting AI Is Working Hard, But Must Work Harder

Policing content on a site where 400 hours of footage are uploaded every minute isn’t easy, and can’t realistically be done by humans alone. That’s why YouTube—along with others, including Facebook—has always been so keen to play up the fact that AI will help it do the job. Now we have a little insight into how that’s going. Speaking to the Guardian, a YouTube spokesperson explained that “over 75 percent of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.” That’s fairly impressive progress on a very thorny problem, but the remaining 25 percent is a sizable miss rate, and those videos must have taken a whole lot of human hours to sniff out. In other words: there’s still a ways to go.

 
