When Hillary Clinton called on tech companies to help “disrupt” ISIS, major players like Facebook were quick to point out that they forbid terror-related content on their sites. That’s true. But there’s more they can do.
ISIS has succeeded in part because of skillful leveraging of the Internet industry’s tools to spread medieval messaging, disseminate videos of atrocities, and recruit new adherents. Victoria Grand, Google’s policy director, conceded last summer that “ISIS is having a viral moment on social media” and added that Google was trying to figure out how “not [to allow] ourselves to become a distribution channel for this horrible, but very newsworthy, terrorist propaganda.”
But in some ways, Google remains a major ISIS vector. Yes, the company does a good job of scrubbing terrorist content (and copyrighted music videos) from YouTube. But all anyone has to do is visit another Google property: its search engine. Type “Watch ISIS Drowning Video” or something similar, and in milliseconds Google’s algorithms will point you to an otherwise obscure website hosting the most horrific material imaginable. Policymakers might ask for the data: who refers Web traffic to the sites hosting terrorist propaganda and depictions of atrocities? And the follow-up question: could more be done to protect youth and others from being exposed to it, and to prevent the victims from being revictimized?
Then there’s Facebook. Like Google, Facebook works hard to remove terror content from news feeds. But Facebook has many tools at its disposal, much of the work going on behind the scenes. The $300 billion company has a data science division that slices and dices what users write, link to, and view online, whom they befriend, and much more. It’s reasonable to ask whether the same firepower that micro-profiles users, seeks clues in text, identifies patterns, and figures out who should get which ads might also help identify which young people are most at risk of radicalization (even if they aren’t yet posting brutal content and buying ammo). It’s conceivable that one might then test and deploy methods of intervention for the most isolated and vulnerable young people. We could even, perhaps, identify ways to initiate one-on-one conversations between such vulnerable youth and caring peers and adults (a concept explored recently in a limited study by the Institute for Strategic Dialogue, with some help from Facebook).
Outlandish? Not when you stop to consider that CEO Mark Zuckerberg has made clear that Facebook can, and should, intervene on a number of fronts: to reduce bullying, prevent suicide, encourage organ donation, and promote voter turnout. The voter-turnout effort is a particularly striking example of how these interventions can pay dividends in the real world: Facebook’s voting suggestion meant 340,000 more people actually went out and voted.
Answering the call from Clinton and other policymakers won’t be easy. But given the tech industry’s remarkable achievements on so many fronts, it is worth asking: what other results can the well-honed data science tools of this industry achieve? How can we better protect young people, reduce violence, limit the reach of terrorist propaganda, and promote peace?