By now, U.K. prime minister Theresa May’s response to terrorist attacks should be familiar: She and the Conservative party like to blame the Internet.
Tragically, there have been three major terrorist attacks in the U.K. in less than three months’ time. After the second, in Manchester, May and others said they would look into finding ways to compel tech companies to put cryptographic “back doors” into their services, so that law enforcement agencies could more easily access suspects’ user data.
May repeated her stance in broader terms Sunday, following new attacks in London. “The Internet, and the big companies” are providing “safe spaces” for extremism, she said, and new regulations are needed to “regulate cyberspace.” She offered no specifics, but her party’s line, just days from the June 8 national election, is clear: a country that already grants its government some of the most sweeping digital surveillance powers of any democracy needs more and tougher laws to prevent terrorism (see “New U.K. Surveillance Law Will Have Worldwide Implications”).
The trouble is, this kind of talk ignores how the Internet and modern consumer technology work. As Cory Doctorow points out in a detailed look at how one would actually go about creating services with cryptographic holes, the practicalities of such a demand render it ludicrous, bordering on impossible. Even if all of the necessary state-mandated technical steps were taken by purveyors of commercial software and devices—like Google or Apple, say—anyone who wanted to could easily skirt their restrictions by running open-source versions of the software, or unlocked phones.
That isn’t to say that May and the Conservatives’ general idea that the government should be able to probe user data as part of an investigation should be dismissed out of hand. The balancing act between national security and digital privacy has become one of the central themes of our digital lives (see “What If Apple Is Wrong?”). And while there are advocates aplenty on both sides, simple answers are hard to come by.
That’s what makes May’s statement Sunday so disappointing. Arguing that undoing encryption will somehow defeat the bad guys once and for all is simplistic and wrong. Pointing fingers at tech companies and painting them as the breeding ground for terrorism is, at best, an unhelpful knee-jerk response to a terrible act of violence.
Yes, terrorist groups like ISIS leverage digital tools to spread their hateful, violent messages (see “Fighting ISIS Online”). And services like Twitter, YouTube, and Facebook have become hugely influential in how people share information. As such, they should be pushed to improve their ability to find and take down extremist content, and to ban users who post it.
But this is a problem that will not be solved with heavy-handed pontificating about the dangers of digital life. Take the case of Anjem Choudary, a British lawyer who spent over a decade spewing jihadist polemic, often in person, without breaking any law. According to the New York Times, he may have influenced one of the attackers who struck in London over the weekend. “He was a real-life radical preacher who recruited people face to face,” Peter R. Neumann, a professor of security studies at King’s College London, told the Times. Choudary was “much more important for jihad in Britain than Twitter or Facebook.”