Facebook claims to have proactively found and removed 99% of terrorist-related content on the site for the past three quarters. It’s given some insight into its processes in a blog post.
Some statistics: First, it’s important to note that when it says “terrorism,” Facebook is referring only to ISIS and Al-Qaeda. On average, the firm claims, it now removes terrorist content less than two minutes after it’s posted, versus the 14-hour average earlier this year. Facebook took action on 9.4 million pieces of content in Q2 2018, a figure that declined to 3 million in Q3 2018 thanks to its efforts the quarter before, it said.
Detection systems: Facebook has launched a new machine-learning tool that assesses whether posts signal support for ISIS or Al-Qaeda. It produces a score indicating how likely a post is to violate the company’s counterterrorism policies; posts with higher scores are passed to human reviewers, and those with the highest scores are removed automatically. In the “rare instances” where employees find a possibility of imminent harm, Facebook immediately informs law enforcement, it said.
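The triage logic Facebook describes amounts to routing posts by model score. Here is a minimal sketch of how such a pipeline could work; the threshold values and function name are illustrative assumptions, not anything Facebook has disclosed.

```python
# Hypothetical sketch of score-based content triage. The thresholds
# below are made-up illustrations; Facebook has not published its values.

AUTO_REMOVE_THRESHOLD = 0.99   # highest-scored posts: removed automatically
HUMAN_REVIEW_THRESHOLD = 0.70  # higher-scored posts: queued for human review

def triage(violation_score: float) -> str:
    """Route a post based on its model-assigned policy-violation score."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"

print(triage(0.995))  # auto_remove
print(triage(0.85))   # human_review
print(triage(0.10))   # no_action
```

The design point worth noting is the two-tier cutoff: automation is reserved for the cases where the model is most confident, while ambiguous cases still get human judgment.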