Last summer, a TikTok creator named Ziggi Tyler posted a video calling out a disturbing problem he found in the app’s Creator Marketplace, a tool that matches creators with brands looking to pay for sponsored content. Tyler said he was unable to enter phrases like “Black Lives Matter” and “supporting Black excellence” into his Marketplace profile. However, phrases like “white supremacy” and “supporting white excellence” were allowed.
There are two ways to try to understand the impact of content moderation and the algorithms that enforce those rules: by relying on what the platform says, and by asking creators themselves. In Tyler’s case, TikTok apologized and blamed an automatic filter that was set up to flag words associated with hate speech—but that was apparently unable to understand context.
Brooke Erin Duffy, an associate professor at Cornell University, teamed up with graduate student Colten Meisner to interview 30 creators on TikTok, Instagram, Twitch, YouTube, and Twitter around the time Tyler’s video went viral. They wanted to know how creators, particularly those from marginalized groups, navigate the algorithms and moderation practices of the platforms they use.
What they found: Creators invest a lot of labor in understanding the algorithms that shape their experiences and relationships on these platforms. Because many creators use multiple platforms, they must learn the hidden rules for each one. Some creators adapt their entire approach to producing and promoting content in response to the algorithmic and moderation biases they encounter.
Below is our conversation with Duffy about her forthcoming research (edited and condensed for clarity).
Creators have long discussed how algorithms and moderation affect their visibility on the platforms that made them famous. So what most surprised you while doing these interviews?
We had a sense that creators’ experiences are shaped by their understanding of the algorithm, but after doing the interviews, we really started to see how profound [this impact] is in their everyday lives and work … the amount of time, energy, and attention they devote to learning about these algorithms and investing in them. They have this critical awareness that these algorithms are uneven. Despite that, they’re still investing all of this energy in hopes of understanding them. It really draws attention to the lopsided nature of the creator economy.
How often are creators thinking about the possibility of being censored or having their content not reach their audience because of algorithmic suppression or moderation practices?
I think it fundamentally structures their content creation process and also their content promotion process. These algorithms change at whim; there’s no insight. There’s no direct communication from the platforms, in many cases. And this completely, fundamentally impacts not just your experience, but your income.
They would invest so much time and labor in these grassroots experiments, saying things like, “I would do the same kind of content, but I would vary one thing. I would wear this kind of outfit one day, and another kind the next.” Or they’d try different sets of hashtags.
People would say they have both online and offline interactions with their creator community, and they would talk about how to game the algorithm, what’s okay to say, what can potentially be flagged. There are some important forms of collective organization that may not look like what we would traditionally think of as organized workers but are still powerful ways for creators to band together and kind of challenge the top-down systems of power.
One of the things I kept thinking about while reading your findings was the concept of “shadow banning,” the moderation practice of hiding or limiting the reach of content without informing its creator. From a journalist’s perspective, “shadow banning” is hard to report on because it is by definition hidden, but it’s one of the main concerns creators have expressed over the years. How did you consider this concept in your research?
Some people swear they’ve been shadow-banned, and other people say, “Well, your content is just bad.” It’s a very fraught problem, because anyone can issue these claims.
The ambiguity of shadow banning is in part what makes it so powerful. Because there’s no way to actually prove that any one person on these platforms was or was not shadow-banned, that fuels a lot of speculation. But you know, whether it exists or it doesn’t, the fact that people act as if they are punished through limits on their visibility is worth taking seriously.
Is there anything that can be done to help resolve some of these issues?
Platforms tout their benefits to creators all over their websites and [say] if you are talented enough and have the right content, you can connect with audiences and make all kinds of money. The creators are drawing so much money, through data and eyeballs, to these platforms but don’t have much of a say in their content moderation policies or how those are unevenly enacted. Radical transparency is a bit pie-in-the-sky, but I do think creators should have more representation in the decisions that fundamentally impact their businesses.