Should We Let Internet Companies Define How We Express Ourselves?
Facebook and Twitter, among others, have agreed to enact a more stringent way of policing hate speech on their platforms in Europe.
Google, Facebook, Twitter, and Microsoft have agreed to a “code of conduct” in European Union countries that requires the Internet giants to take down hate speech within 24 hours of its being posted on their platforms. It’s the latest controversial move in what has been a thorny issue for companies trying to strike a balance between protecting freedom of expression online and curtailing abusive or violent content.
“We remain committed to letting the tweets flow,” Karen White, Twitter’s head of public policy for Europe, said in a statement. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”
Well, that’s been the trouble: there isn’t a clear line. Much of the speech protected by the U.S. Constitution, under which these companies operate at home, can be downright offensive. Expressions of racism, homophobia, and religious intolerance may be deplorable, but they’re not illegal in and of themselves.
Platforms like Twitter and Facebook aren’t required to host comments simply because the law protects them, of course. They can take down anything they want, and they often do, bowing to forces ranging from public opinion to government pressure when they crack down on abusive content, and sometimes reversing course when their censors go too far.
In Europe, protections on speech aren’t as sweeping. The European Commission defines “illegal hate speech” as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin,” and directs EU member nations to come up with criminal and civil punishments accordingly. The rules agreed to on Tuesday were a response to the recent terror attacks in Paris and Brussels, and are explicitly meant to “counter terrorist propaganda.”
Many Internet companies already have similar language baked into their policies, much of which sounds sensible enough. But the agreement puts the power of deciding what’s acceptable and what’s not in the hands of a company, without the promise of transparency or due process that would normally come if the decision lay with a law enforcement agency.
At least in part because of this, some groups walked out of the EU Internet Forum, where the agreement was drafted. European Digital Rights, a group that advocates for digital freedom, withdrew from the forum and released a statement Tuesday saying the agreement allows companies to simply “sweep offences under the carpet.” Because takedowns will happen outside the legal process, the group added, the arrangement “creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable takedown mechanism.”
Be that as it may, since the agreement is voluntary and nonbinding, it is unlikely to be much of a watershed. If anything, it highlights governments’ willingness to pawn off onto private companies some of the most difficult decisions about how we express ourselves online. In so doing, it unfortunately leaves us, as citizens living increasingly large portions of our lives through the Internet, rather in the dark as to how the rules are being enforced.