Why Generation Z falls for online misinformation
We can all learn from how today’s young people evaluate truth online.

A teenage girl peers gravely at the camera, the frame wobbling as she angles her phone at her face. A caption superimposed on her hoodie shares an ominous warning: If Joe Biden is elected president of the United States, “trumpies” will commit mass murder of LGBT individuals and people of color. A second caption announces, “this really is ww3.” That video was posted to TikTok on November 2, 2020, and liked more than 20,000 times. Around that time, dozens of other young people shared similar warnings across social media, and their posts drew hundreds of thousands of views, likes, and comments.
Clearly, the claims were false. Why, then, did so many members of Generation Z—a label applied to people aged roughly 9 to 24, who are presumably more digitally savvy than their predecessors—fall for such flagrant misinformation?
I’ve worked as a research assistant at the Stanford Internet Observatory since last summer, analyzing the spread of online misinformation. I’ve studied foreign influence campaigns on social media and examined how misinformation about the 2020 election and covid-19 vaccines went viral. And I’ve found that young people are more likely to believe and pass on misinformation if they feel a sense of common identity with the person who shared it in the first place.
Offline, when deciding whose claims should be trusted and whose should be ignored or doubted, teenagers are likely to draw on the context that their communities provide. Social connections and individual reputations, developed through years of shared experiences, inform which family members, friends, and classmates teenagers rely on to form their opinions and receive updates on events. In this setting, credibility comes less from the identity of the person making a claim, even an identity the young person shares, than from the community's collective knowledge of whom to trust on which topics.
Social media, however, promotes credibility based on identity rather than community. And when trust is built on identity, authority shifts to influencers. Because they look and sound like their followers, influencers become trusted messengers on topics in which they have no expertise. According to a survey from Common Sense Media, 60% of teenagers who use YouTube to follow current events turn to influencers rather than news organizations. Creators who have built credibility see their claims elevated to the status of facts, while subject-matter experts struggle to gain traction.
This, in large part, is how the rumor of plans for post-election violence went viral. The individuals who shared the warning were deeply relatable to their audience. Many were people of color and openly LGBT, and their past posts discussed familiar topics like family conflict and struggles in math class. This sense of shared experience made them easy to believe, even though they offered no evidence for their claims.
Making matters worse was the information overload many people experience on social media, which can lead them to trust and share lower-quality information. The election rumor appeared among dozens of other posts in teenagers' TikTok feeds, leaving little time to think critically about each claim. Any efforts to challenge the rumor were relegated to the comments.
As young people participate in more political discussions online, we can expect those who have successfully cultivated this identity-based credibility to become de facto community leaders, attracting like-minded people and steering the conversation. While that has the potential to empower marginalized groups, it also exacerbates the threat of misinformation. People united by identity will find themselves vulnerable to misleading narratives that target precisely what brings them together.
Who, then, has a role to play in promoting accountability? Social media platforms can implement recommendation algorithms that prioritize a diversity of voices and value discourse over clickbait. Journalists must acknowledge that many readers get their news from social media posts viewed through the lens of identity, and present information accordingly. Policymakers must pass and enforce regulation that addresses online misinformation. And educators can teach students to assess the credibility of sources and their claims.
Shifting the dynamics of online dialogue will not be easy, but the dangers misinformation can fuel—and the promise of better conversations—compel us to try.
Jennifer Neda John is a sophomore at Stanford University majoring in human biology. She researches online misinformation at the Stanford Internet Observatory.