
How to protect yourself online from misinformation right now

In times of crisis it’s easy to become a spreader of incorrect information online. We asked the experts for tips on how to stay safe—and protect others.
Photo by Joshua Hoehne on Unsplash

There wasn’t a communications blackout in Washington, DC, on Sunday, but #dcblackout trended on Twitter anyway, thanks to some extremely distressing tweets telling people that, mysteriously, no messages were getting out from the nation’s capital. The tweets, Reddit posts, and Facebook messages about the “blackout” got thousands of shares, fueled by pleas to spread the information widely and ominous warnings about what would happen next to protesters.

But I can tell you that there wasn’t a blackout because I live in DC, and I had to assure worried friends that my internet was working as normal. Despite this, the hashtag stayed trending for hours on Monday, with some people questioning its claims, others dismissing attempts to debunk it, and no one clear on exactly how this rumor spread so far.

The logical response to seeing potentially harmful misinformation spread across the internet is to debunk it, and to show others how to avoid falling for it themselves. But it’s difficult to evaluate a river of information when you’re going through something traumatic—in the midst of a global pandemic, and with police escalating their use of force against people protesting police brutality.

“Nothing is okay, and we're going through the same motions that we go through every time there's a crisis,” says Whitney Phillips, an assistant professor of communication and rhetorical studies at Syracuse University. “We have a muscle memory of hitting retweet,” to share something that speaks to a personal experience, or to amplify the voices of others during a crisis. “It feels like it's helping.” But that same impulse can also lead to harm, especially when the content you’re sharing turns out to be misleading or false.

I asked Phillips, who has written about the intersection of toxic online misinformation and mental health, and Shireen Mitchell, the founder of Stop Online Violence Against Women, to give their advice on navigating online misinformation when everything is horrible.

Give yourself some credit

“People often think that because they’re not influencers, they're not politicians, they're not journalists, that what they do [online] doesn’t matter,” Phillips says. But trending hashtags are a good example of how volume, from big and small accounts alike, can drive attention to misinformation. Treating your online presence as if you’re inconsequential, no matter how few followers you have, can be dangerous.

“It doesn’t matter how well intentioned you are,” Phillips says. “By retweeting something that has #dcblackout in it, enough people can make it trend and send people into a panic.”

The good news is that your impulse to share injustice on the internet in order to make the world better can have an impact way beyond your immediate follower count. But it also means that if you share something that’s not true, you can cause more harm than you might think.

Hit pause

Misinformation about racist violence can be particularly difficult to examine as it passes in front of you, because the content itself is re-traumatizing, particularly for black Americans.

“For me, this is what happens with our community. People don’t believe us. So when something bad happens, you want people to share it,” Mitchell says. Misinformation targets this same impulse. The goal is to “evoke an emotion,” Mitchell says. “The minute it evokes an emotion, you have to hit pause.”

The danger is even more acute on the ground during a protest, Mitchell says. If a misleading or false rumor is spreading on social media, protesters have limited means to examine that information on the fly, particularly in an environment that might be unsafe. 

Mitchell recommends stepping away from the center of a protest, if possible, when confronted with a distressing rumor to look into its source. “If you discover it’s not true, come back to the crowd,” she says. Let others know what you found.

Think laterally

Mitchell, like many disinformation experts, has learned how to handle potential misinformation through years of practice. But there are ways to get better at it quickly. One of them is to learn to think laterally about a single piece of content—that is, open up some tabs and do your research before sharing something.

Mike Caulfield, a digital literacy expert, has developed what he calls the SIFT approach to evaluating information: “Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.” Caulfield has said his method was adapted from a 2017 Stanford study of how professional fact checkers evaluate digital information. Many of the students and historians who participated in the study fell into the trap of judging potential misinformation mainly by scrutinizing the content itself for clues to reliability. The fact checkers, by contrast, read laterally: they ran Google searches, read news coverage, and did outside research.

Mitchell’s method is similar. “Every time I go into a trending hashtag, I’m not trying to get the top-level conversation,” Mitchell says. “I’m digging through to find more about it.” And, crucially, she is still on pause.

For example, Mitchell saw a couple of videos that showed protesters acting violently toward bystanders. First, she looked at the source of the videos: Who posted them? Is the video original, or an edited clip from something else? Is the source who they say they are?

Then she looked at where they were being shared; she looked for other videos with angles of the scene; she looked at whether the text accompanying the video accurately portrayed what was going on. It turned out The Intercept had a good rundown of how one of those videos had been edited to be misleading.

Understand that misinformation can still be “real”

Many of the most cited misinformation experts are white. When fact checking information about communities of color, these experts risk causing damage, no matter their intentions.

“Most white people do not believe our lived experience,” Mitchell says. By parachuting into a conversation to tell someone that they just shared a misleading video, you can also be implicitly “tell[ing] black people that their lived experience isn’t true.” That’s particularly problematic when you’re handling misinformation that is literally being shared with the intention of making the lived experience of black Americans more visible.

It’s also problematic to say nothing, Mitchell argues. However, if you are engaging with viral misinformation, don’t assume that your expertise should be immediately believed and heeded, or get defensive when your intentions are challenged. Everybody is worried about people’s motivations, especially when authoritative institutions have released inaccurate information or helped spread misinformation about protests.

Phillips says she tries to think about this in terms of “true” vs. “real” information. Something can be empirically untrue and still speak to something that is real. “There’s a way of affirming ‘This is a reality that people navigate,’ even if this specific video wasn’t taken yesterday,” Phillips says. That understanding should inform your approach to addressing misinformation in the middle of a trauma, whether you’re trying to debunk something that has been shared millions of times or you’re just trying to talk to your mom about one of her Facebook posts.

Consider logging off or stepping away

Examining misinformation can be hard work, and the work is harder when the content itself is traumatizing.

This is true even for experts and veterans. “I don't think that it can be emphasized more explicitly or firmly enough: we are being forced to navigate territory that is absolutely uncharted,” Phillips says. “Some of us have been doing this for years.” But even if you have the media literacy tools and deep emotional reserves, that’s not always enough.

“Maybe on paper some of us have resources we can pull from,” Phillips says. “But the fact is that none of us are prepared for this.”
