
What Role Should Silicon Valley Play in Fighting Terrorism?

Politicians are trying to recruit technology companies to help fight ISIS. Does it make sense?
February 23, 2016

On Friday, January 8, several high-level officials from the Obama administration—including the attorney general, the White House chief of staff, and the directors of the FBI and the NSA—met at a federal office in San Jose with senior executives from Facebook, Twitter, Microsoft, LinkedIn, YouTube, and Apple (including CEO Tim Cook himself). On the agenda for the discussion, according to a one-page memo widely leaked to the press, was this question: “How can we make it harder for terrorists to [use] the Internet to recruit, radicalize, and mobilize followers to violence?”

For the previous month, since the ISIS-inspired shootings in San Bernardino, California, President Obama, as well as some of the candidates vying to succeed him, had been calling on Silicon Valley to join the government in this fight. As Hillary Clinton put it in a campaign speech, “We need to put ‘the great disrupters’ at work disrupting ISIS.” In one of the Republican presidential debates, Donald Trump said he would ask “our brilliant people from Silicon Valley” to keep ISIS from using the Internet—a notion that reflected a misunderstanding of how the Internet works but also a widespread desperation for Silicon Valley to do something.

But what do Obama, Clinton, Trump, and the other politicians have in mind? How would the executives respond, and how should they respond? Many tech entrepreneurs—libertarian in leanings and especially leery of open collusion with Washington since Edward Snowden’s revelations—question whether government has any business putting private industry to work on such a venture, which could rub up against the First, Fourth, and Fifth Amendments. And if some coöperative strategy could be mounted, quite apart from any philosophical considerations, would it have much effect?

The main thing, Clinton emphasized, was that, above and beyond questions about specific plans, “the tech community and the government have to stop seeing each other as adversaries.”

Political speeches on ISIS

  • Hillary Clinton: “National Security and the Islamic State”

    November 19, 2015, at the Council on Foreign Relations

  • Donald Trump: Republican debate

    December 15, 2015

  • President Obama: “Keeping the American People Safe”

    December 6, 2015

This enmity, especially from the techies toward the spies, is a fairly new phenomenon. Telecommunications companies have a history of coöperating with U.S. intelligence agencies that dates back to the 1920s, when the Cipher Bureau, which grew out of a World War I espionage unit, persuaded Western Union to grant its agents access to all telegrams. Starting in the 1950s, with the founding of the National Security Agency, AT&T and later the Baby Bells allowed signals-intelligence crews to tap into phone lines. A whole industry grew up to build listening posts, dishes, and satellites that intercepted radio and microwave signals. When the world went digital, the new Internet and cellular companies continued the tradition of complicity—sometimes under court order, more often willingly. Favors were reciprocated. For instance, two senior NSA officials told me that when Microsoft released its first Windows software, the agency’s Information Assurance Directorate inspected the product (as it was obligated to do before approving it for procurement by the Defense Department), found 1,500 points of vulnerability, and helped patch almost all of them (leaving a few of the gaps open so the NSA could exploit them in adversaries’ computer systems).

The Snowden leaks, in June 2013, exposed the extent of this arrangement, embarrassing several executives and fomenting fears that consumers abroad might shop elsewhere because they’d assume that American-made products had built-in back doors for NSA intruders. Apple declared its independence in particularly dramatic fashion, designing the encryption for its iOS 8 operating system, released in 2014, so that the decryption key derived from the consumer’s own passcode: Apple couldn’t hand the government a key, because it didn’t have the key. A month after the meeting in San Jose, the government sought a way into the phone used by the San Bernardino killers, asking Apple to override a security feature so the FBI could brute-force the passcode—trying every possible combination until the phone unlocked. When Apple refused to go along, the FBI took the company to court, and a battle has commenced.
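The arithmetic behind that standoff is simple. The sketch below is purely illustrative (the passcode, the check function, and the one-second delay are invented here; a real iPhone enforces its limits in dedicated hardware), but it shows why a short numeric passcode falls quickly to unthrottled brute force, and why escalating delays and auto-erase are what actually stop the attack.

```python
from itertools import product

# Toy model only: not how an actual iPhone works. The passcode,
# delay schedule, and check function are invented for illustration;
# real iOS ties the passcode check to hardware and enforces delays
# and the erase-after-10-failures policy in the Secure Enclave.

SECRET = "7294"  # hypothetical 4-digit passcode

def check(guess: str) -> bool:
    # Stand-in for the device's passcode check.
    return guess == SECRET

def brute_force(digits: int = 4) -> tuple[str, int]:
    """Try every numeric passcode of the given length, in order."""
    attempts = 0
    for combo in product("0123456789", repeat=digits):
        attempts += 1
        guess = "".join(combo)
        if check(guess):
            return guess, attempts
    raise ValueError("passcode not found")

found, tries = brute_force()
print(f"found {found} after {tries} of 10,000 possible guesses")

# With no rate limiting, 10**4 guesses take well under a second of
# CPU time. If the device imposes even a 1-second delay per attempt,
# the same search averages about 83 minutes; an auto-erase after 10
# failed attempts defeats it entirely -- which is why the FBI wanted
# those features overridden before attempting brute force.
avg_seconds_with_delay = (10**4 / 2) * 1  # expected attempts x delay
print(f"~{avg_seconds_with_delay / 60:.0f} minutes on average with a 1s delay")
```

Overriding the delay and auto-erase features, as the FBI requested, collapses the search back to the trivial case.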

But among the major technology companies, Apple essentially stands alone. The rise of ISIS has altered the general climate and softened the hostility. Even the most freewheeling executives in Silicon Valley, including Cook, have said they have no desire to let outfits like ISIS use their networks, sites, or servers at will. So there is, in principle, a revived willingness to coöperate with Washington—or at least, for the moment, to engage in dialogue—on a cyber counterterrorist campaign.

Some coöperation is going on already. Facebook and Twitter have taken steps to spot terrorist posts and take them down, though their efforts have proved largely futile: new sites and pages spring up as fast as the old ones are shuttered. But there are other ways to disrupt nefarious plots online; and though they weren’t discussed in detail at the San Jose meeting, the history of what has variously been called “information operations,” “information warfare,” and “cyberwarfare” suggests a wide range of possible techniques.

In 2007, four years into the Iraq War, U.S. forces started making headway: American casualties plummeted, insurgent casualties soared. The official story credited the turnaround to President George W. Bush’s troop surge and General David Petraeus’s adoption of a counterinsurgency strategy. There’s something to that story, but another factor, which several officials told me about but no one discussed openly, was a cyberwar campaign. U.S. Special Forces captured insurgent computers. NSA analysts, deployed on the ground, downloaded insurgents’ usernames and passwords, then sent phony e-mails to insurgent fighters, ordering them to meet at a certain location at a certain time—where members of the Special Forces would be waiting to kill them. In the course of several months, 4,000 insurgents were killed in this fashion. (So were 22 NSA analysts, mainly by roadside bombs as they accompanied troops on missions to capture computers.)

The sort of counter-ISIS program discussed by senior officials and Internet executives wouldn’t go quite that far. Actually killing jihadists in this manner would require troops on the ground and (like all cyber-offensive activities that involve killing people or destroying objects) presidential authorization. But it would not be a stretch (and would require no permission from political higher-ups) to capture—or hack into—ISIS computers, track the Twitter feeds and Facebook pages involved in recruiting new fighters, and follow the ensuing e-mails to and from those who respond. The resulting information could be gathered strictly as intelligence—to analyze the personality types, or identify the specific individuals, who are lured. Or messages between the recruiter and the recruited could be disrupted or distorted in a way that undermined the movement’s appeal. Or the pages could be flooded with comments by Muslims—real or invented—disputing or ridiculing the recruiter’s message, snapping susceptible readers out of their reverie or making them think twice before booking a plane ride and taking up arms.

Matt Devost, a cybersecurity specialist who ran the Terrorism Research Center for 13 years, says, “Studies of group dynamics indicate that if dissenting voices are introduced, they can diminish the appeal of propaganda.”

The Obama administration may at least be considering these sorts of approaches. The one-page agenda for the January 8 meeting in San Jose asked: “In what ways can we use technology to help disrupt paths to radicalization to violence, identify recruitment patterns, and provide metrics to help measure our efforts?” And: “How can we help others to create, publish, and amplify alternative content that would undercut [ISIS]?”

To some extent, some pushback is already happening spontaneously. In 2014, a message by Abu Bakr al-Baghdadi, the self-proclaimed caliph of the Islamic State, was tweeted: “We urgently call upon every Muslim to join the fight, especially those in the land of the two shrines” (by which he meant Saudi Arabia). Someone named Mohsin Arain replied, “Sorry mate, I don’t want to risk dying before the next Star Wars comes out.” Another, Zay Zadeh, posted, “Sorry … I’m busy being a real Muslim, giving to charity, etc. Also, your dental plan sucks.” Still another, Hossein Aoulad, responded, “Mum just made couscous, next time maybe.”

Imagine if hundreds of counter-messages flooded an ISIS message board, and if at least some of them were designed to appeal to the sorts of people whom intelligence analysts had pegged as susceptible to recruitment. Keeping a propaganda line open—in order to track and possibly manipulate its contents and controllers—might be far more effective than a whack-a-mole attempt to shut it down.

Another virtue of this approach is that even if the jihadist leaders suspected some of the dissenting voices weren’t genuine, even if they knew the West was using their sites to mount a counter-propaganda campaign, there wouldn’t be much they could do about it; their anonymous readers, in bedrooms and basements around the world, would regard them as real.

Who would decide to run this sort of campaign—the government or the Internet companies? Law enforcement and intelligence agencies have the necessary resources, personnel, and institutional mandate. But the companies would need to play a role as well: they own the networks. Their role could be passive—for instance, receiving notice that some agency is monitoring or disrupting a particular site, so they don’t shut it down. Or it could be active, ranging from providing new ideas (their business model encourages innovative thinking much more than government bureaucracies do) to carving out a back door in the architecture of a site, a server, or a network so that a spy agency’s hackers could enter. Whatever the precise arrangement, the government needs Silicon Valley to at least be a partner. In that sense, Hillary Clinton got it right when she called on each side to stop seeing the other as an adversary.

This dialogue is in an early phase. The January meeting in San Jose, according to one attending official who was not authorized to speak on the record, amounted to a “preliminary discussion,” which was conducted on “an unclassified basis”—meaning none of the ideas or scenarios cited above would have been outlined, except perhaps on an abstract level. But officials hope—while some libertarians fear—that the meeting may presage a softening of Silicon Valley’s resistance.

Just as telecom executives in the last half of the 20th century felt moved by appeals to national security during the Cold War, so Internet executives today—after two decades of relative peace, a go-go economy, and the motto “information wants to be free”—might be drawn back into coöperation with Washington, at least to some extent, by the threat of global terrorism.

 
