Should We Let Internet Companies Define How We Express Ourselves?

Facebook and Twitter, among others, have agreed to enact a more stringent way of policing hate speech on their platforms in Europe.

Google, Facebook, Twitter, and Microsoft have agreed to a “code of conduct” (PDF) in European Union countries that requires the Internet giants to take down hate speech within 24 hours of posting on their platforms. It’s the latest controversial move in what has been a thorny issue for companies trying to strike a balance between freedom of expression online and curtailing abusive or violent content.

“We remain committed to letting the tweets flow,” Karen White, Twitter’s head of public policy for Europe, said in a statement. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”

Well, that’s been the trouble—there isn’t a clear line. Much of the speech protected by the U.S. Constitution, where these companies are based, can be downright offensive. Expressions of racism, homophobia, and religious intolerance may be deplorable, but they’re not illegal in and of themselves.

Platforms like Twitter and Facebook aren’t required to leave comments up simply because those comments are protected by law, of course. They can take down anything they want, and they often do, bowing to pressure from the public or from governments to crack down on abusive content, and sometimes reversing course when their censors go too far.

In Europe, protections on speech aren’t as sweeping. The European Commission defines “illegal hate speech” as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin,” and directs EU member nations to come up with criminal and civil punishments accordingly. The rules agreed to on Tuesday were a response to the recent terror attacks in Paris and Brussels, and are explicitly meant to “counter terrorist propaganda.”

Many Internet companies already have similar language baked into their policies, much of which sounds sensible enough. But the agreement puts the power of deciding what’s acceptable and what’s not in the hands of a company, without the promise of transparency or due process that would normally come if the decision lay with a law enforcement agency.

At least in part because of this, some groups walked out of the EU Internet Forum, where the agreement was drafted. European Digital Rights, a group that advocates for digital freedom, withdrew from the forum and released a statement Tuesday saying the agreement allows companies to simply “sweep offences under the carpet.” Because takedowns will happen outside the legal process, the group added, the arrangement “creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable takedown mechanism.”

Be that as it may, since this is a voluntary, nonbinding agreement, it is unlikely to be much of a watershed. If anything, it highlights governments’ willingness to pawn off some of the most difficult decisions about how we express ourselves online onto private companies. In so doing, it unfortunately leaves us, as citizens living ever larger portions of our lives through the Internet, rather in the dark as to how the rules are being enforced.

(Read more: Financial Times, the Guardian, Bloomberg, Electronic Frontier Foundation, “Fighting ISIS Online”)
