Google and Facebook have both reportedly started to automate the process of removing extremist videos from their websites. While neither company has confirmed the practice, sources tell Reuters that the companies are using techniques similar to those used to identify and remove copyrighted material from the Internet.
The process that Google and Facebook are believed to be using to take down extremist content is known as “hashing.” This is a mathematical operation that takes a long stream of data of arbitrary length, like a video clip or string of DNA, and assigns it a specific value of a fixed length, known as a hash. The same files or DNA strings will be given the same hash, allowing computers to quickly and easily spot duplicates.
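The idea can be sketched in a few lines of Python. The article does not say which hash function the companies use; SHA-256 below is an illustrative stand-in, and the byte strings are placeholders for real video data:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Map arbitrary-length input to a fixed-length hash value."""
    return hashlib.sha256(data).hexdigest()

clip_a = b"contents of an uploaded video"
clip_b = b"contents of an uploaded video"   # byte-identical re-upload
clip_c = b"a completely different video"

# Identical files always share a hash; different files (almost) never do,
# so comparing short hashes is enough to spot duplicate uploads.
print(fingerprint(clip_a) == fingerprint(clip_b))  # True
print(fingerprint(clip_a) == fingerprint(clip_c))  # False
```

Note that the hash is always the same fixed length (64 hex characters for SHA-256) no matter how large the input file is, which is what makes duplicate-spotting cheap at scale.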
In the past, the likes of YouTube and Dropbox have used the technique to spot copyrighted files: the copyright owner simply provides the hash of the material it wishes to protect, and the website removes files uploaded to its servers if they share that hash. According to Reuters, Facebook and YouTube are using the same approach to block extremist video that is being “re-upped” to their sites.
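A minimal sketch of that takedown workflow, assuming a simple set of banned hashes (the names and data here are hypothetical, not from either company):

```python
import hashlib

# Hypothetical blocklist: hashes of files already flagged for removal,
# supplied by a copyright owner or a content moderator.
banned_hashes = {
    hashlib.sha256(b"known extremist clip").hexdigest(),
}

def should_remove(upload: bytes) -> bool:
    """Reject an upload if its hash matches a flagged file's hash."""
    return hashlib.sha256(upload).hexdigest() in banned_hashes

print(should_remove(b"known extremist clip"))    # True: exact re-upload
print(should_remove(b"harmless holiday video"))  # False
```

One limitation worth noting: an exact-hash check of this kind only catches byte-identical re-uploads, so a re-encoded or trimmed copy of the same video would slip past it.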
What’s less clear is how extremist videos are identified in the first place. Some may be reported by regular users. But it’s not known how much human effort these sites expend trawling through video to find potentially extremist content, nor whether there are automated processes in place to mine videos and spot unwelcome footage. At the time of writing, neither Google nor Facebook had responded to our requests for comment.
The initiatives come as politicians have increasingly called on Silicon Valley to help in the fight against terrorism. Last year, Hillary Clinton asked tech companies to “disrupt” ISIS online, while more recently officials from the Obama administration met with senior executives from Facebook, Twitter, Microsoft, LinkedIn, YouTube, and Apple to plot how they could “make it harder for terrorists to [use] the Internet to recruit, radicalize, and mobilize followers to violence.” This month, Clinton reiterated that as president, she would “work with our great tech companies … to [do] a better job intercepting ISIS’s communications, tracking and analyzing social-media posts, and mapping jihadist networks.”
Facebook and Google certainly have enough data to profile each and every user in minute detail. So, the politicians argue, the companies could also use that data to identify patterns that single out terrorist threats. Taking down re-upped videos is not the same thing, of course. But it might be a sign that Silicon Valley is beginning to pay attention to the voices emanating from Washington.