For the past four years, Shagun Jhaver has moderated several subreddits, diligently scrolling through pages and blocking posts that violate community rules or are outright offensive.
A PhD student at Georgia Tech whose research focuses on content moderation, Jhaver wondered whether an automated moderator could spare him not only the time but also the mental toll of sifting through psychologically draining content. So along with three colleagues, he set out to determine whether one such tool—in this case, AutoMod—actually works.
The team personally moderated several pages on Reddit and then conducted interviews with 16 other moderators of some of the most popular subreddits on the site—including r/photoshopbattles, r/space, r/explainlikeimfive, r/oddlysatisfying, and r/politics, each of which has millions of subscribers. All rely on AutoMod to help them moderate. Jhaver will present the work next week at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.
Social-media platforms like Facebook, Instagram, and YouTube have long relied on human moderators to manually comb through content and remove violent and offensive material that ranges from racist and sexist hate speech to graphic video of mass shootings. Often working on contract, at minimum wage with few benefits, moderators can find themselves pulling long hours while being pummeled with content that takes a serious toll on their mental health.
Automoderators are an attempt to mitigate the tedium and negative effects of such work. Developed by Redditor Chad Birch as a way to augment his ability to moderate the r/gaming subreddit, AutoMod is a rule-based tool for identifying words that violate a particular page’s posting policies. It has since gone into wide use—Reddit adopted it sitewide in 2015, and the hugely popular game-streaming platform Twitch and chat service Discord followed suit with similar tools soon after.
Whether AutoMod is actually a time-saver is questionable, though. On the one hand, automoderators are very good at what they do—if they’re programmed to find swear words, they will find and block posts that contain them without fail. AutoMod can also notify posters about problematic content, which Jhaver says is “educational,” in that authors learn what was wrong with whatever they posted.
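The mechanism described here—matching posts against a list of banned keywords and drafting a removal notice—can be sketched in a few lines. This is an illustrative Python sketch, not AutoMod's actual rule syntax (Reddit's real tool is configured differently); the word list and message wording are hypothetical.

```python
import re

# Hypothetical rule list; a real subreddit's configuration would be
# maintained by its moderators.
BANNED_WORDS = {"badword", "slur"}

def moderate(post_text):
    """Return a (decision, notification) pair for a submitted post.

    Splits the post into lowercase word tokens, checks them against the
    banned list, and—mirroring AutoMod's "educational" notifications—
    tells the author exactly which words triggered removal.
    """
    words = set(re.findall(r"[a-z']+", post_text.lower()))
    hits = sorted(words & BANNED_WORDS)
    if hits:
        return ("removed",
                f"Your post was removed for prohibited language: {', '.join(hits)}")
    return ("approved", None)
```

Note the trade-off the article goes on to describe: the match is unconditional, so a post *discussing* a banned word is removed just as reliably as a post using it abusively.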
That’s not a small feat. As Jhaver and his colleagues note, about 22% of all submissions on Reddit between March and October of 2018 were removed. That comes out to about 17.4 million posts in that time period.
But suppose an offending word is important for context—a discussion in 2016 of soon-to-be-president Donald Trump’s infamous comment about grabbing a woman’s genitals, for example. Such posts get flagged for the offensive language even though discussing that language is the whole point of the post. Jhaver says this frustrates users, who then have to go back and ask moderators to reinstate the post.
And in a social-media world where troubling content increasingly consists of offensive memes, live-streams of shootings, or other visual, textless content, AutoMod’s reliance on finding keywords is a big liability.
Robert Peck, a moderator for the large subreddits r/pics and r/aww, knows this all too well. Each of those pages is image driven, and each has millions of followers posting far more content than anyone could be reasonably asked to sift through.
Still, he says that even though it cannot analyze images, AutoMod has made his work easier. “Users add descriptors to images directly, and we can check those titles,” he says. “We look for account fattening, or spam accounts that automate posts. They often use parentheses. We can tell AutoMod to look for those patterns.”
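The approach Peck describes—pattern-matching on user-written titles rather than on the images themselves—amounts to a regular-expression check. The sketch below is a hypothetical example of such a pattern, not an actual r/pics or r/aww rule; the assumption, following Peck's remark, is that automated spam accounts append a parenthesized token to their titles.

```python
import re

# Hypothetical spam signature: an all-alphanumeric token of four or more
# characters wrapped in parentheses at the end of a title,
# e.g. "Cute puppy (xk42a)". Real moderator rules would be tuned to
# observed spam, not this guess.
SPAM_TITLE = re.compile(r"\([a-z0-9]{4,}\)\s*$", re.IGNORECASE)

def looks_like_spam(title):
    """Flag a title matching the parenthesized-token pattern."""
    return bool(SPAM_TITLE.search(title))
```

Because the token pattern requires an unbroken run of letters and digits, an ordinary parenthetical like "(3 years old)" passes through untouched—the kind of precision that makes regex rules workable on image-driven pages.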
Like it or not, AutoMod and its ilk are the future of social-platform moderation. They will probably always be imperfect, because machines are still a long way from truly understanding human language. But this is what automation is supposed to be all about: saving people time on tedious or objectionable tasks. Being able to concentrate on posts that require a human touch makes a moderator’s job that much more valuable, and allows both moderators and posters to focus on having better conversations.
It won’t solve the problem of people posting nasty, malicious, or otherwise deleterious content—that will still be one of the thorniest problems afflicting the modern internet. But it is making a difference. Peck says he’s grateful for AutoMod’s ability to help him “batch process” posts. “It’s a powerful piece of technology and quite user friendly—nowhere near the difficulty of programming an equivalent bot,” he says. “[AutoMod] is my most powerful tool, and I’d be lost without it.”