What Google and Facebook Can Do to Fight ISIS

Facebook intervenes to prevent suicide. How about to prevent radicalization? And let’s face it: Google and others make it easy to find videos of people being murdered.
December 10, 2015

When Hillary Clinton called on tech companies to help “disrupt” ISIS, major players like Facebook were quick to point out that they forbid terror-related content on their sites. That’s true. But there’s more they can do.

ISIS has succeeded in part because of skillful leveraging of the Internet industry’s tools to spread medieval messaging, disseminate videos of atrocities, and recruit new adherents. Victoria Grand, Google’s policy director, conceded last summer that “ISIS is having a viral moment on social media” and added that Google was trying to figure out how “not [to allow] ourselves to become a distribution channel for this horrible, but very newsworthy, terrorist propaganda.”

But in some ways, Google remains a major ISIS vector. Yes, the company does a good job of scrubbing terrorist content (and copyrighted music videos) from YouTube. But all anyone has to do is visit another Google property: its search engine. Type “Watch ISIS Drowning Video” or something similar, and in milliseconds Google’s algorithms will point you to an otherwise obscure website hosting the most horrific material imaginable. Policymakers might ask for the data: who refers Web traffic to the sites hosting terrorist propaganda and depictions of atrocities? And the follow-up question: could more be done to protect youth and others from being exposed to it, and to prevent the victims from being revictimized?

Then there’s Facebook. Like Google, Facebook works hard to remove terror content from news feeds. But Facebook has many other tools at its disposal, much of their work going on behind the scenes. The $300 billion company has a data science division that slices and dices what users write, link to, and view online, whom they befriend, and much more. It’s reasonable to ask whether the same firepower that micro-profiles users, seeks clues in text, identifies patterns, and figures out who should get which ads might also help identify which young people are most at risk of radicalization (even if they aren’t yet posting brutal content and buying ammo). It’s conceivable that one could then test and deploy methods of intervention for the most isolated and vulnerable young people. We might even identify ways to initiate one-on-one conversations between such vulnerable youth and caring peers and adults (a concept explored recently in a limited study by the Institute for Strategic Dialogue, with some help from Facebook).

Outlandish? Not when you stop to consider that CEO Mark Zuckerberg has made clear that Facebook can, and should, intervene on a number of fronts: to reduce bullying, prevent suicide, encourage organ donation, and promote voter turnout. In one particularly striking example of how these interventions can pay dividends in the real world, Facebook’s voting suggestion meant that 340,000 more people actually went out and voted.

Answering the call from Clinton and other policymakers won’t be easy. But given the tech industry’s remarkable achievements on so many fronts, it is worth asking: what other results can the well-honed data science tools of this industry achieve? How can we better protect young people, reduce violence, limit the reach of terrorist propaganda, and promote peace?
