Nearly 180,000 people live in Chattanooga, Tennessee’s fourth-biggest city. Famed for its scenery and outdoorsy lifestyle, the city sits on the banks of the Tennessee River, at the foot of the Appalachian Mountains. In 2010 it was the first US city to roll out gigabit internet. The New York Times called it “the undiscovered gem of Tennessee.”
On July 16, 2015, Chattanooga came to be known for something else entirely. Muhammad Youssef Abdulazeez went on a killing spree, shooting seven people at two US military facilities. Four of them, all Marines, died on the spot. The FBI later concluded that the 24-year-old shooter had “self-radicalized” by trawling through Al-Qaeda propaganda online as his life spiraled out of control.
Chattanooga has a small, close-knit Muslim population. In the wake of the attack, angry, racist comments appeared online. Several politicians, including Donald Trump, then a Republican presidential candidate, seized on the shootings for campaigning purposes. The city’s Muslim residents feared revenge attacks.
The situation has only grown worse since. Hate crimes are up nationally, according to the FBI, and Tennessee ranks ninth out of all US states for the total number of such offenses. Chattanooga, meanwhile, recorded more religiously motivated incidents than any other Tennessee city in 2017. Last month the city launched an initiative to tackle the problem by getting residents to report hate speech online. It’s the first US city to start recording information about such incidents in this way. The hope is that this will be an important step in making the city a more unified, tolerant place to live.
Chattanooga’s government now has an online form that people can fill in if they see or experience hate speech, either in person or online. It takes only seconds to complete: you explain what the term was, where it was used, whether it was directed at you or someone else, how you’d define it, and which language it was in. The process is anonymous; no data about the person reporting the terms is collected.
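The fields described above amount to a simple anonymous record. This is a minimal sketch of what one submission might look like; the class and field names are illustrative assumptions, not Chattanooga’s or Hatebase’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HateSpeechReport:
    """One anonymous report. Note there are no fields identifying the reporter."""
    term: str                 # the word or phrase that was used
    location: str             # where it was used (a place, or a URL if online)
    targeted_self: bool       # was it directed at the reporter, or someone else?
    reporter_definition: str  # how the reporter defines the term
    language: str             # language of the term, e.g. "en"
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example submission (the content is a placeholder, not a real slur)
report = HateSpeechReport(
    term="<slur>",
    location="downtown bus stop",
    targeted_self=False,
    reporter_definition="a derogatory term for a religious minority",
    language="en",
)
```

The key design point is what the record omits: nothing ties a report back to the person who filed it.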
All the data submitted to Chattanooga’s form—the specific terms used, where, and how often—goes instantly to Hatebase, a Toronto-based company that spun out of the Sentinel Project, a Canadian nonprofit organization. Hatebase has created the world’s biggest database of hate-speech words, spanning more than 200 countries. These include racist slurs, homophobic terms, sexist phrases, and other forms of derogatory speech targeting a particular group. It’s funded by the company’s work with commercial clients but is free for any local government body that chooses to use it.
Once Hatebase has the data, it is automatically sorted and annotated. These annotations can explain the multiple meanings of the terms used, for example, or their level of offensiveness. The resulting data can also be displayed in a dashboard to make it easier for city officials to visualize the problem.
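One way to picture the annotation step is a lookup table mapping each term to its known meanings and a coarse offensiveness rating. The table, terms, and rating scale below are invented for illustration; the article does not describe Hatebase’s real data model:

```python
# Hypothetical annotation table: each entry lists a term's possible
# meanings and an offensiveness level. All entries are placeholders.
ANNOTATIONS = {
    "term_a": {
        "meanings": ["slur against group X", "reclaimed in-group usage"],
        "offensiveness": "high",
    },
    "term_b": {
        "meanings": ["mildly derogatory nickname"],
        "offensiveness": "medium",
    },
}

def annotate(term: str) -> dict:
    """Return the meanings and offensiveness for a reported term,
    or a neutral default when the term is not in the table."""
    return ANNOTATIONS.get(
        term.lower(), {"meanings": [], "offensiveness": "unknown"}
    )
```

Annotations like these are what let a dashboard distinguish, say, an ambiguous term used in-group from an unambiguous slur.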
Hunt for patterns
Once enough data has been gathered (most likely in a few months’ time), the city will use Hatebase’s system to monitor trends in hate-speech usage across Chattanooga, and see if there are any patterns between the words used against particular groups and subsequent hate crimes. Often, violence against a particular group is preceded by an increase in dehumanizing, abusive language used against that group. The Sentinel Project has already used this sort of language monitoring successfully as an early warning system for armed ethnic conflict in Kenya, Uganda, Burma, and Iraq.
The context in Chattanooga is different, but the goal is the same: track hate speech and nip it in the bud before it spills over into violence. Having all this data in one place lets the city quickly identify specific areas of tension between communities, highlighting hateful terms that are rising in frequency and showing where they are being used.
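The pattern-hunting described above boils down to spike detection: flagging terms whose recent frequency in a given area jumps well above their baseline. A minimal sketch, with invented data and a simple ratio threshold standing in for whatever method Hatebase actually uses:

```python
def rising_terms(weekly_counts: dict, threshold: float = 2.0) -> list:
    """Flag (term, area) pairs whose latest weekly count is at least
    `threshold` times their average over all earlier weeks.
    Keys are (term, area) tuples; values are lists of weekly counts."""
    flagged = []
    for key, counts in weekly_counts.items():
        if len(counts) < 2:
            continue  # not enough history to establish a baseline
        baseline = sum(counts[:-1]) / len(counts[:-1])
        if baseline and counts[-1] >= threshold * baseline:
            flagged.append(key)
    return flagged

# Hypothetical four weeks of reports for two term/area pairs
history = {
    ("term_a", "district 1"): [2, 1, 2, 6],  # sharp recent rise
    ("term_b", "district 2"): [3, 3, 3, 3],  # flat, no alert
}
# rising_terms(history) flags only ("term_a", "district 1")
```

A real early-warning system would need far more care (seasonality, small-sample noise, reporting bias), but the core idea is this comparison of recent usage against a baseline.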
Chattanooga's partnership with Hatebase is also designed to deal with a persistent issue for the city: poor and inconsistent reporting of hate speech to law enforcement, which has made it hard to pinpoint recurring issues. Hayes hopes that making it quick and anonymous to report hate speech—and sidestepping the need for marginalized groups to talk to the police (whom they don’t always trust)—will change that. The data should also tell city officials which particular slurs recur and whether there are any hot spots in the city, and potentially even provide an early indicator of impending violence.
Chattanooga’s officials plan to use the data to inform the policies they produce in response. For example, they might increase security measures at local mosques or churches, set up programs to bring groups together, or open up community centers, Hayes says: “It’s about empathy and mitigating isolation, and developing social bonds between communities.”
Isolated incidents of hate speech may be small on their own, but they can build up into a much bigger problem, says Neil Johnson, a physics professor at George Washington University who studies patterns of hate speech. “This initiative is fantastic,” he says. “It’s data driven, which is crucial—it isn’t just relying on anecdotes. But you’ve got to focus on moving beyond the individual to the wider hate group. And we’ve got to counter these narratives, not just shut them down.”
The plan has drawbacks. It doesn’t include any proactive monitoring of public social-media posts, which would be controversial but useful for anyone trying to keep tabs on racial hatred and how it spills over into real-world incidents. It puts the onus on local citizens to report if they see or hear hate speech. “It’s only as useful as people make it,” Hayes concedes.
That said, the vast majority of cities don’t monitor hate speech at all, according to Timothy Quinn, co-founder of Hatebase. If they want to create policies to tackle divisions between communities, all they can do is guess.