At least Twitter admits it has a problem. In an internal memo leaked last week, CEO Dick Costolo acknowledged what many people on Twitter already knew: 140 characters at a time, many of the service’s users are routinely harassed, abused, or threatened, and the company isn’t doing much to stop it.
Costolo’s note suggested that Twitter would take new action against harassers—a potentially important step at a time when online abuse has reached troubling proportions. Twitter’s effort might offer a template for addressing the wider problem, but it may also show the challenge of stamping out unacceptable behavior without eroding the character of an inherently unruly and combative community. Rules that reduce harassment might have the unintended consequences of slowing the flow of information and turning off some ardent users.
Online bullying—on social networks in particular—is now a mainstream issue. According to a Pew Research Center survey from late last year, 40 percent of people have been bullied on the Web, and the majority of those people (66 percent) say it most recently happened on a social network. Even if you haven’t experienced it firsthand, chances are you’ve spotted it: the same Pew report found that 73 percent of people reported seeing someone else being harassed online. Among people between the ages of 18 and 29, this figure jumps to 92 percent.
The issue was pushed to the forefront last year via Gamergate—a spiraling campaign of online harassment that stemmed from allegations that a female video-game developer slept with a reporter in exchange for positive reviews of her game. Even more recently, the problem was highlighted by writer Lindy West’s January piece for the radio show This American Life that recounted the abuse she suffered on Twitter and how she confronted one particular troll. Costolo’s memo, in fact, was a response to an employee’s post about that piece on an internal Twitter site.
Clay Shirky, an associate professor at New York University who has written two books about social media, thinks the problem has reached a point where Twitter needs to remind users about what is and isn’t okay, and devise some consequences for misbehavior. Even though it may be labor intensive and therefore expensive to keep and enforce such rules, he says, the Twitter community would benefit from it.
“There’s a bigger threat to not taking on this problem than taking on this problem, simply because public sympathy is going to go more in the direction of the abused than the abusers,” Shirky says.
But while the time seems right for Twitter to act, it is far from clear how best to discourage such behavior. Twitter does have methods in place to help deal with abuse, such as the ability to block and report a user who’s bothering you. Yet while that may help if you’re dealing with one or even a few bothersome tweeters, it cannot stop a deluge of nasty posts, and a determined harasser can always just make a new user profile and start the harassment anew.
How to make Twitter safer without turning off some existing users may be an even trickier question. The freewheeling exchange of views and opinions that characterizes Twitter is part of its appeal; if it were to enforce strict controls over who could talk to whom, or—like Facebook—require people to disclose their real identities, that might cut down on the flow of information.
Improvements may come if Twitter carefully combines closer communication with its users and technology that automatically identifies harassers.
Twitter might reduce the burden on its own staff by involving its users more directly. Justin Patchin, a professor of criminal justice at the University of Wisconsin-Eau Claire and co-director of the school’s Cyberbullying Research Center, says Twitter could ask its community to regulate itself—perhaps by allowing users to volunteer to vote on whether or not content is abusive.
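The community-moderation idea Patchin floats could work something like the toy sketch below: volunteer reviewers vote on a reported tweet, and a majority verdict decides whether it is treated as abusive. The vote format and threshold here are invented for illustration; nothing in the article specifies how Twitter would actually tally such votes.

```python
def is_abusive(votes, threshold=0.5):
    """Decide whether reported content is abusive from volunteer votes.

    votes: list of booleans from volunteer reviewers (True = "abusive").
    Returns True when the share of "abusive" votes exceeds the threshold.
    """
    if not votes:
        return False  # no reviews yet, so take no action
    return sum(votes) / len(votes) > threshold

# Two of three volunteers flag the post as abusive -> majority verdict.
assert is_abusive([True, True, False]) is True
# Only one of three does -> the post is left alone.
assert is_abusive([True, False, False]) is False
```

A real system would also need safeguards the sketch omits, such as vetting volunteers and preventing coordinated vote-brigading.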
Kate Crawford, a visiting professor at the MIT Center for Civic Media and a principal researcher at Microsoft Research, suggests that people experiencing harassment could share lists of users they have blocked with others having the same problem. “There is no quick technical fix for a social problem,” says Crawford. “What there can be is a broadening of the understanding of what the problem is.”
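Crawford's shared-block-list idea amounts to pooling the accounts each harassment target has already blocked, so that one person's block protects the whole group. A minimal sketch, with invented account names, is just a set union:

```python
def merge_block_lists(*block_lists):
    """Union several users' block lists into one shared list."""
    shared = set()
    for blocked in block_lists:
        shared.update(blocked)  # add every account this user has blocked
    return shared

# Two harassment targets compare notes; each has blocked different accounts.
alice_blocks = {"troll_01", "troll_02"}
bob_blocks = {"troll_02", "troll_03"}

# Any account blocked by either person is now blocked for both.
shared = merge_block_lists(alice_blocks, bob_blocks)
assert shared == {"troll_01", "troll_02", "troll_03"}
```

Tools along these lines existed at the time outside Twitter itself, letting subscribers automatically inherit one another's blocks.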
Twitter doesn’t preëmptively spot and wipe out offensive content, yet Jerry Zhu, an associate professor of computer science at the University of Wisconsin-Madison who has studied the use of machine learning to track abusive posts on Twitter, says artificial intelligence can help spot and censor nasty posts by looking out for key words and phrases. This is hard to do with certainty if bullying is not explicit, though; sometimes people are being nasty without using obviously mean language. “This is where the current technique is hitting the AI limit in some ways in that computers cannot be that subtle and parse that meaning out of it accurately,” he says.
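The keyword-based flagging Zhu describes can be sketched in a few lines: scan a post for words and phrases from a deny-list. The phrase list below is invented for illustration, and real systems would use machine learning rather than literal matching; the second example shows the limit Zhu points to, where implicit nastiness contains no flaggable phrase.

```python
# Hypothetical deny-list; a production system would learn such signals
# from labeled data rather than hard-coding them.
ABUSIVE_PHRASES = {"kill yourself", "idiot", "loser"}

def flag_tweet(text):
    """Return True if the tweet contains any listed abusive phrase."""
    lowered = text.lower()  # match case-insensitively
    return any(phrase in lowered for phrase in ABUSIVE_PHRASES)

# Explicit abuse is easy to catch...
assert flag_tweet("You are such an IDIOT") is True
# ...but implicit nastiness with no listed phrase slips through.
assert flag_tweet("Nobody would miss you if you left") is False
```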
While any changes will risk upsetting some users, that may be a price that Twitter is willing to pay in the short term to create a more hospitable service over the long run. Says Robin Kowalski, a psychology professor at Clemson University who studies cyberbullying: “If they lose a few users, they’re going to gain a few more.”