On November 3, 2020, Tina Barton ran into a problem. It was Election Day in the US, and Barton, a Republican, was city clerk for Rochester Hills, Michigan, a conservative-leaning community near Detroit. As her team was uploading voting results, a technical issue caused some votes to be counted twice. The error went unnoticed at first, but within 24 hours it was caught and reported to Oakland County officials. The voting data was quickly corrected, but by then the entire country was watching the state's election results.
The correction was very public, and it generated a huge swell of misinformation. This was supercharged on November 6, when Ronna McDaniel, the chair of the Republican National Committee, flew to Oakland County and held a press conference. She claimed that 2,000 ballots had been counted as Republican before being "given" to Democrats, effectively an accusation of election fraud.
“If we are going to come out of this and say this was a fair and free election, what we are hearing from the city of Detroit is deeply troubling,” McDaniel said.
Upset at how the situation was being misrepresented, Barton posted a video on Twitter refuting the claims. She’s been the Rochester Hills clerk for eight years, and when she spoke out against McDaniel, she knew she was putting her career on the line. In the video, which has since been deleted, Barton said, “I am disturbed that this is intentionally being mischaracterized to undermine the election process."
Her remarks went viral, and they were met with threats and anger. In an email to MIT Technology Review, Barton said that “since Ms. McDaniel’s press conference, I have received threatening voice mails and messages.” One caller claimed to be on the way to Michigan. Barton upgraded the security system of her home.
Targeting our natural fears
Data shows that during the election, disinformation was highly targeted locally, with voters in swing states exposed to significantly more online messages about voter intimidation, fraud, ballot glitches, and unrest than voters in other states.
In a data set provided by Zignal Labs, we looked at mentions across social media of over 30 terms related to voter suppression or intimidation, fraud, technical errors, and unrest that focused on a particular polling location. In our sample of 16 states, between October 1 and November 13, swing states saw more than four times as many such mentions: a median of 115,200, compared with a median of 28,000 in non-swing states.
[Chart: Mentions relating to voter intimidation, fraud, technical glitches, and voter suppression at specific polling places, in the days leading up to and after the election]
Bhaskar Chakravorti, dean of global business at Tufts University's Fletcher School, conducts research on the conditions that leave a community particularly vulnerable to disinformation. He says that this local focus is typical of effective disinformation campaigns, which are usually pinned to a specific place and slice the target audience into its smallest, stereotyped parts. "Clever misinformation" is organized, he says, in the same way that political campaigning is.
Disinformation is “targeted at our natural or native hopes and fears, and hopes and fears vary depending on who I am,” he says. “It varies depending on how rich or poor I am. It varies depending on what my ethnicity or race is.”
In some places, this localization was more visible than in others. In Florida, Latino voters were subjected to intense campaigns based on their age, heritage, or neighborhood profiles as both parties fought to win the state. As a result of being flooded with this material, says Chakravorti, voters grew distrustful of political information at large and turned to more private spaces for discourse—which were, in fact, ripe environments for localized disinformation that became particularly hard to confront.
These problems all came despite the fact that election officials were significantly better prepared for the challenges in 2020 than in the previous presidential election. Many secretaries of state conducted media blitzes intended to direct people to trusted sources of information for voting, while also battling specific rumors.
Elizabeth Howard, senior counsel at the Brennan Center for Justice, describes it as a two-pronged approach. It involved “proactively educating voters about what’s going on,” she says, “and then, to varying degrees, election officials working to identify and combat mis- and disinformation at the local and hyper-local level.”
Despite all their efforts, however, disinformation about polling still wreaked havoc—particularly for election officials like Tina Barton, who, says Howard, “are just doing their job in compliance with state law across the country.”
Chakravorti says fighting this disinformation in the future may require the use of small-scale media campaigns, local influencers, and community-level ads that spread trusted content. But these tactics won’t fix the deeper structural issues that make a community vulnerable to disinformation. Chakravorti found, unsurprisingly, that some key indicators of vulnerability for US states include political competitiveness, education levels, polarization, and degree of trust in news sources. And none of those issues are new.
In September 1993, the FBI sent a safety alert to the Chicago police department warning of a rumored “new and murderous initiation ritual” for the city’s most notorious street gang. The supposed ceremony required prospective members to drive at night with their headlights off to lure and kill unsuspecting drivers. The claim turned out to be false—but the rumor spread like wildfire.
According to researchers who have studied the “lights out” urban legend, it flourished partly because the summer of 1993 was one of the worst stretches of crime Chicago had ever seen. Tensions were high, seeded by deep-rooted racial friction and political polarization.
Disinformation—whether it’s gang folklore or rumors of election intimidation—is almost always most effective at a local level. It’s worse in polarized, closed environments. We’re most likely to believe things from our own circle. We still struggle to dispel the neighborhood rumor mill, and we certainly don’t know how to do it at scale.
And while the struggle to fight disinformation continues, local officials like Tina Barton are under increasing pressure.
“These are things that take a huge personal toll on our election officials,” says the Brennan Center’s Howard. “These are big stakes for people. These are their neighbors. These are their friends.”