The Internet is Growing More Dangerous. But Does Anyone Care?

Bruce Schneier says “we as a society are headed down a dangerous path.”
April 3, 2013

Whenever I start pursuing a story about a technology that purports to make the Internet more secure, or about a privacy-protecting measure that an Internet company is promoting, I try to check in with the cryptologist and security expert Bruce Schneier. It’s always a good day when Schneier gets back to you–but what he says is usually sobering.

And lately, what he says is downright dystopian. He’s speaking tomorrow night at Harvard with Jonathan Zittrain. Since most of you can’t attend, I thought I’d take the liberty of sharing what he’s thinking about lately – because what Schneier is thinking about is usually worth other people thinking about.

The passage below is from the Berkman Center for Internet & Society’s promotion of the event, quoting Schneier:

From Bruce Schneier:

What I’ve Been Thinking About:

I have been thinking about the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power. This has many facets, including the following:

1. Ubiquitous surveillance for both government and corporate purposes – aided by cloud computing, social networking, and Internet-enabled everything – resulting in a world without any real privacy.

2. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.

3. Ill-conceived laws and regulations on behalf of either government or corporate power, either to prop up their business models (copyright protections), enable more surveillance (increased police access to data), or control our actions in cyberspace.

4. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems.

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, Liars and Outliers.) On the other hand, the risks associated with increasing technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.

It is clear to me that we as a society are headed down a dangerous path, and that we need to make some hard choices about what sort of world we want to live in. It’s not clear if we have the social or political will to address those choices, or even have the conversations necessary to make them. But I believe we need to try.
