
Preventing Misinformation from Spreading through Social Media

New platforms for fact-checking and reputation scoring aim to better channel social media’s power in the wake of a disaster.
April 23, 2013

The online crowds weren’t always wise following the Boston Marathon bombings. For example, the online community Reddit and some Twitter users were criticized for pillorying an innocent student as a possible terrorist suspect. But some emerging technologies might be able to help knock down false reports and wring the truth from the fog of social media during crises.

Researchers from the Masdar Institute of Science and Technology and the Qatar Computing Research Institute plan to launch Verily, a platform for verifying social media information, in a beta version this summer. Verily will enlist people in collecting and analyzing evidence to confirm or debunk reports. As an incentive, it will award reputation points—or dings—to its contributors.

Verily will join services like Storyful that use various manual and technical means to fact-check viral information, and apps such as SwiftRiver that, among other things, let people set up filters on social media to give more weight to trusted users in the torrent of posts following major events.

On Reddit, amateur sleuthing to identify possible bombing suspects led to accusations against Sunil Tripathi, a Brown University student who had been reported missing weeks earlier (Reddit has since apologized); that accusation was then tweeted and retweeted many times. “The underlying problem is a fearsome one—people want to share and spread information, whether accurate or not,” says Ethan Zuckerman, who directs the Center for Civic Media at MIT. “We’re very far from a solution. The reporting around the Marathon bombing demonstrates that mainstream media has issues with verification that are as profound as anything we face online.”

Reputation scoring has worked well for e-commerce sites like eBay and Amazon and could help to clean up social media reports in some situations.

Research efforts have also shown how to effectively mobilize many people on social media for a common task. In a 2009 experiment, the U.S. Defense Advanced Research Projects Agency offered $40,000 to the first team that could identify the locations of 10 large red weather balloons lofted by DARPA at undisclosed locations across the United States. The winning team, from MIT, did it in less than nine hours using a recursive incentive structure, fueled by cash rewards, to drum up viral participation on social media. Anyone who found a balloon would get $2,000; whoever invited the finder to join the hunt would get $1,000; whoever invited that person, $500; and so on up the chain, with each level earning half as much. A similar but harder challenge, in 2012, asked teams to find specific individuals in several cities within 12 hours, working from nothing more than a single mug shot of each. There again, a distributed cash-reward system worked best.
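The arithmetic behind that scheme is worth a moment: because each level of the chain earns half the one below it, the total payout per balloon is bounded no matter how long the recruitment chain grows. A minimal sketch in Python makes this concrete (the function and names are invented for illustration; this is not the MIT team's code):

```python
# Sketch of the recursive incentive scheme from the 2009 balloon challenge:
# the finder of a balloon gets $2,000, whoever invited the finder gets
# $1,000, that person's inviter gets $500, and so on, halving each step.
# Illustrative only; not the MIT team's actual implementation.

def balloon_payouts(invite_chain, finder_reward=2000.0):
    """invite_chain lists people from the balloon's finder back up
    through successive inviters. Returns each person's reward."""
    rewards = {}
    amount = finder_reward
    for person in invite_chain:
        rewards[person] = amount
        amount /= 2  # each step up the invitation chain earns half as much
    return rewards

# Example: Carol found a balloon, Bob recruited Carol, Alice recruited Bob.
print(balloon_payouts(["Carol", "Bob", "Alice"]))
# {'Carol': 2000.0, 'Bob': 1000.0, 'Alice': 500.0}
```

Since the geometric series 2,000 + 1,000 + 500 + … never exceeds $4,000 per balloon, the team could promise rewards down arbitrarily long chains and still stay within the $40,000 prize.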

Verily builds on lessons from both contests. The winning mug-shot team included one of Verily’s creators, computer scientist Iyad Rahwan of the Masdar Institute of Science and Technology. “Recruiting people to join is part of the issue, but we also need to figure out how to remove false reports,” Rahwan says. “Where the balloon challenge took nine hours, we hope to facilitate the crowdsourced evaluation of multimedia evidence on individual incidents in less than nine minutes.”

The beta version of Verily will first be tested by its creators during a real-world weather disaster such as a hurricane or flood. Since such disasters come with some warning, Verily’s creators can prepare humanitarian agencies to use the platform. A piece of reported news—such as a photo of a flooded hospital circulating on Twitter—would be posted to Verily with a question: is the hospital really flooded? Users would then examine the photo for signs of authenticity and canvass their own social networks for corroborating evidence.

Humanitarian agencies working in the region could promote participation, as could the press and Twitter. Voters’ reputation scores would rise or fall over time as their judgments proved right or wrong, and future votes from reliable people would carry more weight. Voters would also be encouraged to bring others to the site; anyone recruited by someone with a good reputation would automatically start with a higher reputation score.
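Verily has not published its scoring algorithm, but the mechanics described above (votes weighted by reputation, scores that rise or fall with accuracy, and inherited starting reputation for invitees) can be sketched in a few lines of Python. Every number below, from the baseline score to the inheritance factor, is an invented assumption, not Verily’s implementation:

```python
# Hypothetical sketch of the reputation mechanics described above.
# Verily's actual algorithm is not public; all scores, weights, and the
# inheritance factor here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    reputation: float = 1.0  # assumed baseline for an unknown newcomer

def invite(inviter, name, inherit=0.5):
    # A newcomer inherits part of the inviter's standing, so people
    # recruited by trusted users start out above the baseline.
    return Voter(name, reputation=1.0 + inherit * (inviter.reputation - 1.0))

def verdict(votes):
    # votes is a list of (Voter, bool) pairs: True = "report is genuine".
    # Each vote is weighted by the voter's reputation.
    score = sum(v.reputation if yes else -v.reputation for v, yes in votes)
    return score > 0

def settle(votes, outcome, step=0.1):
    # Once ground truth emerges, nudge reputations up for voters who
    # were right and down for those who were wrong (floored at 0.1).
    for voter, yes in votes:
        change = step if yes == outcome else -step
        voter.reputation = max(0.1, voter.reputation + change)

# Example: "Is the hospital really flooded?"
alice = Voter("alice", reputation=3.0)   # reliable long-time user
bob = invite(alice, "bob")               # starts at 2.0, above baseline
votes = [(alice, True), (bob, True), (Voter("carol"), False)]
print(verdict(votes))                    # True: weighted yeses win, 5.0 vs 1.0
settle(votes, outcome=True)              # alice and bob gain, carol loses
```

The delicate design choice in such a system is the size of the inheritance factor: set it too high and a trusted user can mint instantly credible sock puppets; set it to zero and the viral recruitment incentive disappears.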

In many ways the platform is meant to resolve a design problem inherent in sites like Reddit, adds Patrick Meier, director of innovation at the Qatar Computing Research Institute, a co-creator of Verily, and former director of crisis mapping at Ushahidi, the online incident-reporting platform (see “Crisis Mapping Meets Check In”). “They don’t have the design to facilitate these kinds of workflows and collaboration,” he says. Verily could provide a rapid means to vet reports arising on sites like Reddit.

The other approaches are more basic. Storyful verifies videos to make sure news organizations don’t get duped by phony ones; staffers check veracity based on clues like weather reports, the angle of the sun, and visual landmarks. And the SwiftRiver app is part of a larger platform aimed at letting humanitarian and other agencies manage and make sense of social media reports and other data.

Meanwhile, old-fashioned methods of finding the truth are holding up pretty well. In Boston, the marathon bombing suspects were identified through conventional witness reports and reviews of surveillance-camera footage from retail stores.
