Theresa May Wants to End “Safe Spaces” for Terrorists on the Internet. What Does That Even Mean?

In the wake of the U.K.’s most recent terrorist attacks, its prime minister is talking tough on Internet regulation, but what she’s suggesting is impractical.
British prime minister Theresa May addressed the country Sunday, the morning after a terrorist attack in London.

By now, U.K. prime minister Theresa May’s response to terrorist attacks should be familiar: She and the Conservative party like to blame the Internet.

Tragically, there have been three major terrorist attacks in the U.K. in less than three months’ time. After the second, in Manchester, May and others said they would look into finding ways to compel tech companies to put cryptographic “back doors” into their services, so that law enforcement agencies could more easily access suspects’ user data.

May repeated her stance in broader terms Sunday, following new attacks in London. “The Internet, and the big companies” are providing “safe spaces” for extremism, she said, and new regulations are needed to “regulate cyberspace.” She offered no specifics, but her party’s line, just days from the June 8 national election, is clear: a country that already grants its government some of the most sweeping digital surveillance powers of any democracy needs more and tougher laws to prevent terrorism (see “New U.K. Surveillance Law Will Have Worldwide Implications”).

The trouble is, this kind of talk ignores how the Internet and modern consumer technology work. As Cory Doctorow points out in a detailed look at how you would actually go about creating services with cryptographic holes, the practicalities of such a demand render it ludicrous, bordering on impossible. Even if all of the necessary state-mandated technical steps were taken by purveyors of commercial software and devices—like Google or Apple, say—anyone who wanted to could easily skirt those restrictions by running open-source versions of the software, or unlocked phones.

That isn’t to say the general idea held by May and the Conservatives—that the government should be able to probe user data as part of an investigation—should be dismissed out of hand. The balancing act between national security and digital privacy has become one of the central themes of our digital lives (see “What If Apple Is Wrong?”). And while there are advocates aplenty on both sides, simple answers are hard to come by.

That’s what makes May’s statement Sunday so disappointing. Arguing that undoing encryption will somehow defeat the bad guys once and for all is simplistic and wrong. Pointing fingers at tech companies and painting them as the breeding ground for terrorism is, at best, an unhelpful knee-jerk response to a terrible act of violence.

Yes, terrorist groups like ISIS leverage digital tools to spread their hateful, violent messages (see “Fighting ISIS Online”). And services like Twitter, YouTube, and Facebook have become hugely influential in how people share information. As such, they should be pushed to improve their ability to find and take down extremist content, and to ban the users who post it.

But this is a problem that will not be solved with heavy-handed pontificating about the dangers of digital life. Take the case of Anjem Choudary, a British lawyer who spent over a decade spewing jihadist polemic, often in person, without breaking any law. According to the New York Times, he may have influenced one of the attackers who struck in London over the weekend. “He was a real-life radical preacher who recruited people face to face,” Peter R. Neumann, a professor of security studies at King’s College London, told the Times. Choudary was “much more important for jihad in Britain than Twitter or Facebook.”

(Read more: BBC, the New York Times, BoingBoing, the Guardian, “What If Apple Is Wrong?,” “Fighting ISIS Online”)
