Google and Facebook have both reportedly started to automate the process of removing extremist videos from their websites. While neither company has confirmed the practice, sources tell Reuters that the companies are using techniques similar to those used to identify and remove copyrighted material from the Internet.
The process that Google and Facebook are believed to be using to take down extremist content is known as “hashing.” This is a mathematical operation that takes a stream of data of arbitrary length, like a video clip or a string of DNA, and assigns it a value of fixed length, known as a hash. Identical files or DNA strings will be given the same hash, allowing computers to quickly and easily spot duplicates.
In the past, the likes of YouTube and Dropbox have used the technique to spot copyrighted files: the copyright owner simply provides the hash of the material it wishes to protect, and the website removes files uploaded to its servers if they share that hash. According to Reuters, Facebook and YouTube are using the same approach to block extremist video that is being “re-upped” to their sites.
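The approach described above can be sketched in a few lines. This is a minimal illustration, not either company's actual system: the function names and the use of SHA-256 are assumptions made for the example, and a plain cryptographic hash like this only catches byte-for-byte identical re-uploads, not re-encoded or trimmed copies.

```python
import hashlib

# Hypothetical blocklist of hashes for files already flagged for removal.
BLOCKED_HASHES = set()

def file_hash(data: bytes) -> str:
    """Map a byte stream of any length to a fixed-length value (SHA-256 here)."""
    return hashlib.sha256(data).hexdigest()

def flag_for_removal(data: bytes) -> None:
    """Record a flagged file's hash so future identical uploads can be caught."""
    BLOCKED_HASHES.add(file_hash(data))

def should_block(upload: bytes) -> bool:
    """An identical re-upload produces the same hash, so lookup is instant."""
    return file_hash(upload) in BLOCKED_HASHES

flag_for_removal(b"bytes of a flagged video")
print(should_block(b"bytes of a flagged video"))  # True: exact duplicate
print(should_block(b"some other video"))          # False: hash not on the list
```

Because the hash is fixed-length and cheap to compute, checking an upload against millions of flagged files is a single set lookup rather than a byte-by-byte comparison against each one.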
What’s less clear is how extremist videos are identified in the first place. Some may be reported by regular users. But it’s not known how much human effort these sites expend trawling through video to find potentially extremist content, nor whether there are automated processes in place to mine videos and spot unwelcome footage. At the time of writing, neither Google nor Facebook has responded to us with comment.
The initiatives come as politicians have increasingly called on Silicon Valley to help in the fight against terrorism. Last year, Hillary Clinton asked tech companies to “disrupt” ISIS online, while more recently officials from the Obama administration met with senior executives from Facebook, Twitter, Microsoft, LinkedIn, YouTube, and Apple to plot how they could “make it harder for terrorists to [use] the Internet to recruit, radicalize, and mobilize followers to violence.” This month, Clinton reiterated that as president, she would “work with our great tech companies … to [do] a better job intercepting ISIS’s communications, tracking and analyzing social-media posts, and mapping jihadist networks.”
Facebook and Google certainly have enough data to profile each and every user in minute detail—so, the politicians argue, they could also use that data to spot patterns that single out terrorist threats. Taking down re-upped videos is not the same thing, of course. But it might be a sign that Silicon Valley is beginning to pay attention to the voices emanating from Washington.