YouTube’s Extremism-Spotting AI Is Working Hard, But Must Work Harder
Policing content on a site where 400 hours of footage are uploaded every minute isn't easy, and can't realistically be done by humans. That's why YouTube—along with others, including Facebook—has always been so keen to play up the fact that AI will help it do the job. Now we have a little insight into how that's going. Speaking to the Guardian, a YouTube spokesperson explained that "over 75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag." That's fairly impressive progress on a very thorny problem, but the remaining 25 percent is a pretty large miss rate, and must have taken a whole lot of human hours to sniff out. In other words: there's still a ways to go.