Facebook, Twitter, Microsoft, and YouTube are joining forces to remove extremist content from their websites more efficiently.
Savvy social media strategies have helped ISIS and other terrorist organizations disperse video and images online, helping to recruit new members and inspire attacks. As we wrote last year, ISIS in particular has been “using 21st-century technology to promote a medieval ideology involving mass killings, torture, rape, enslavement, and destruction of antiquities.”
Now the companies whose websites are used to promulgate such content will work together to try to block it. Using a technique known as hashing—which assigns a unique numerical fingerprint to a media file—they will share records of content that each has banned from its site.
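The companies haven't disclosed exactly which hashing scheme the shared database uses (industrial systems typically rely on perceptual hashes that survive re-encoding, such as Microsoft's PhotoDNA). As a simplified illustration of the idea, the sketch below uses an ordinary cryptographic hash, which only matches byte-identical files; the function and database names are hypothetical.

```python
import hashlib

# Hypothetical shared database: fingerprints of content that any
# participating company has banned from its site.
shared_blacklist: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Assign a unique number (here, a SHA-256 digest) to a media file.

    Note: a cryptographic hash only matches exact bytes; real systems
    use perceptual hashes that tolerate re-encoding and resizing.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def ban(media_bytes: bytes) -> None:
    """Record a banned file's fingerprint in the shared database."""
    shared_blacklist.add(fingerprint(media_bytes))

def is_banned(media_bytes: bytes) -> bool:
    """Check an upload against fingerprints shared by all companies."""
    return fingerprint(media_bytes) in shared_blacklist

# One company bans a video; the others can now detect the same file.
banned_video = b"<bytes of a banned video>"
ban(banned_video)
print(is_banned(banned_video))                 # True
print(is_banned(b"<bytes of an unrelated clip>"))  # False
```

The key property this buys the companies is that only fingerprints, not the media itself, need to be exchanged: each site can check uploads against the shared set without redistributing the banned content.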
A report earlier this year claimed that Google and Facebook had begun experimenting with hashing to block extremist content. The new shared database goes further: items banned by one organization will also be removed from the websites of the others. It is expected to go into operation in early 2017.
The blacklisted content won’t be detected automatically by algorithms. Instead, humans will decide what goes into the database, in order to ensure that, say, journalistic reporting on terrorism isn’t wrongly taken down.
Hany Farid, a computer scientist at Dartmouth College, warned the Guardian that the task should itself be carefully monitored. “You want people who have expertise in extremist content making sure it’s up to date,” he explained. “Otherwise you are relying solely on the individual technology companies to do that.”
Still, the news is welcome. Historically, tech companies have resisted calls to police such content, but increasing political pressure has clearly forced them to reevaluate their approaches.
(Read more: The New York Times, The Guardian, “Fighting ISIS Online,” “Facebook and Google May Be Fighting Terrorist Videos with Algorithms,” “What Role Should Silicon Valley Play in Fighting Terrorism?”)