How Much Should Tech Companies Police Hate Speech?
In the wake of neo-Nazi demonstrations that turned violent in Charlottesville, Virginia, last weekend, the question on many people's minds is: what should tech companies do to curtail hate speech and violent racist groups online?
A piece in Wired puts its finger on the issue. The Daily Stormer, a prominent neo-Nazi website, was kicked off GoDaddy on Monday and then denied a home by Google. Airbnb, meanwhile, blocked users who appeared to be booking lodging to attend the rally. So some firms have clearly chosen not to put up with racists who espouse violence.
But policing content is difficult, and it risks running afoul of users' expectations about free expression online. So while Facebook, YouTube, and others have enlisted AI-powered tools to cope with the deluge of extremist content that comes their way, many still shy away from stronger steps that could root out hate speech more thoroughly. The good news is that the list of technological countermeasures keeps growing, which will make it increasingly hard for violent and hateful users to spread their bile online, even if the problem is unlikely to be fully resolved anytime soon.