Sponsored

Unsung heroes: Moderators on the front lines of internet safety

Digital first responders screen nefarious content online, so that we don’t have to.

September 12, 2022

Provided by Teleperformance

It’s no secret that digital predators are lurking online in record numbers, exposing others to harmful language, images, videos, and activities. With 300 hours of user-generated content uploaded to the internet every minute, protecting unsuspecting users has become a mammoth task. According to Variety, user-generated content represents 39% of all time spent with media. So, how can companies safeguard online spaces and preserve brand integrity with so much content being generated independently?

Enter the resilient human moderators (also called digital first responders) who willingly accept the challenging task of ensuring that our digital experiences are safe.

What, one might ask, does a content moderator do, exactly? To answer that question, let’s start at the beginning.

What is content moderation?

Although the term moderation is often misconstrued, its central goal is clear—to evaluate user-generated content for its potential to harm others. When it comes to content, moderation is the act of preventing extreme or malicious behaviors, such as offensive language, exposure to graphic images or videos, and user fraud or exploitation.

There are six types of content moderation (pre- and post-moderation are sketched in code after this list):

  1. No moderation: No content oversight or intervention, where bad actors may inflict harm on others
  2. Pre-moderation: Content is screened before it goes live based on predetermined guidelines
  3. Post-moderation: Content is screened after it goes live and removed if deemed inappropriate
  4. Reactive moderation: Content is only screened if other users report it
  5. Automated moderation: Content is proactively filtered and removed using AI-powered automation
  6. Distributed moderation: Inappropriate content is removed based on votes from multiple community members
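
To make the difference between these approaches concrete, here is a minimal, illustrative Python sketch contrasting pre-moderation (screen before publishing) with post-moderation (publish first, then review). The `Post` class, the `violates_guidelines` check, and the banned terms are hypothetical placeholders, not the workflow of any particular platform or vendor.

```python
# Illustrative only: the classes, checks, and banned terms below are hypothetical
# placeholders, not the moderation pipeline of any real platform.

from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    is_live: bool = False
    flags: list = field(default_factory=list)


def violates_guidelines(post: Post) -> bool:
    """Stand-in for a real policy check (keyword lists, image classifiers, etc.)."""
    banned_terms = {"scam-link", "graphic-violence"}
    return any(term in post.text for term in banned_terms)


def pre_moderate(post: Post) -> Post:
    """Pre-moderation: screen the content before it ever goes live."""
    if violates_guidelines(post):
        post.flags.append("rejected-before-publish")
    else:
        post.is_live = True
    return post


def post_moderate(post: Post) -> Post:
    """Post-moderation: publish immediately, then remove if found inappropriate."""
    post.is_live = True
    if violates_guidelines(post):
        post.is_live = False
        post.flags.append("removed-after-publish")
    return post


print(pre_moderate(Post("user42", "click this scam-link now")).flags)
# ['rejected-before-publish']
print(post_moderate(Post("user42", "click this scam-link now")).flags)
# ['removed-after-publish']
```

In practice, reactive and distributed moderation would add user reports or community votes as inputs to the same decision, and automated moderation would swap the simple keyword check for a machine-learning classifier.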

Why is content moderation important to companies?

Malicious and illegal behaviors, perpetrated by bad actors, put companies at significant risk in the following ways:

  • Losing credibility and brand reputation
  • Exposing vulnerable audiences, like children, to harmful content
  • Failing to protect customers from fraudulent activity
  • Losing customers to competitors who can offer safer experiences
  • Allowing fake or imposter accounts

The critical importance of content moderation, though, goes well beyond safeguarding businesses. Managing and removing sensitive and egregious content is important for every age group.

As many third-party trust and safety service experts can attest, it takes a multi-pronged approach to mitigate the broadest range of risks. Content moderators must use both preventative and proactive measures to maximize user safety and protect brand trust. In today’s highly politically and socially charged online environment, taking a wait-and-watch “no moderation” approach is no longer an option.

“The virtue of justice consists in moderation, as regulated by wisdom.” — Aristotle

Why are human content moderators so critical?

Many types of content moderation involve human intervention at some point. However, reactive moderation and distributed moderation are not ideal approaches, because harmful content is not addressed until after users have already been exposed to it. Post-moderation offers an alternative approach, in which AI-powered algorithms monitor content for specific risk factors and then alert a human moderator to verify whether certain posts, images, or videos are in fact harmful and should be removed. With machine learning, the accuracy of these algorithms improves over time.
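
As a rough illustration of that flag-then-verify loop, the sketch below scores each live post with a stand-in classifier and routes anything above a threshold to a human review queue. The scoring function, the threshold value, and the queue are assumptions made for this example; a production system would rely on trained models and dedicated review tooling.

```python
# A minimal flag-then-verify sketch. The risk scorer, threshold, and queue are
# hypothetical stand-ins, not a description of any vendor's actual system.

from queue import Queue

REVIEW_THRESHOLD = 0.3          # assumed tuning value
human_review_queue: Queue = Queue()


def risk_score(text: str) -> float:
    """Stand-in for an ML classifier; returns a score between 0 and 1."""
    risky_terms = ("graphic", "abuse", "scam")
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))


def post_moderate(post_id: str, text: str) -> None:
    """The content is already live; flag high-risk items for a human to verify."""
    score = risk_score(text)
    if score >= REVIEW_THRESHOLD:
        # A human moderator makes the final call on whether to remove the post.
        human_review_queue.put({"post_id": post_id, "score": score, "text": text})


post_moderate("p-001", "harmless vacation photos")
post_moderate("p-002", "a graphic scam offer")
print(human_review_queue.qsize())  # 1 -- only the flagged post awaits human review
```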

Given the nature of the content human moderators are exposed to (including child sexual abuse material, graphic violence, and other harmful online behavior), it would be ideal to eliminate the need for them altogether, but that is unlikely ever to be possible. Human understanding, comprehension, interpretation, and empathy simply can't be replicated through artificial means. These qualities are essential for maintaining integrity and authenticity in communication. In fact, 90% of consumers say authenticity is important when deciding which brands they like and support (up from 86% in 2017).

While the digital age has given us advanced, intelligent tools (such as automation and AI) needed to prevent or mitigate the lion’s share of today’s risks, human content moderators are still needed to act as intermediaries, consciously putting themselves in harm’s way to protect users and brands alike.

Making the digital world a safer place

While the content moderator’s role makes the digital world a safer place for others, it does expose moderators to disturbing content. They are, essentially, digital first responders who shield innocent, unsuspecting users from emotionally unsettling content—especially those users who are more vulnerable, like children.

Some trust and safety service providers believe that a more thoughtful, user-centric way to approach moderation is to view the issue as a parent would: shielding their child from harm. That standard could (and perhaps should) become a baseline for all brands, and it is certainly what motivates the brave moderators around the world to stay the course in combating today's online evil.

The next time you’re scrolling through your social media feed with carefree abandon, take a moment to think about more than just the content you see—consider the unwanted content that you don’t see, and silently thank the frontline moderators for the personal sacrifices they make each day.

This content was produced by Teleperformance. It was not written by MIT Technology Review’s editorial staff.
