
AI could help scientists fact-check covid claims amid a deluge of research

May 29, 2020

An experimental tool helps researchers wade through the overwhelming amount of coronavirus literature to check whether emerging studies follow scientific consensus.

Why it matters: Since the start of the coronavirus pandemic, there has been a flood of relevant preprints and papers, produced by people with varying degrees of expertise and vetted through varying degrees of peer review. This has made it challenging for researchers trying to advance their understanding of the virus to sort scientific fact from fiction.

How it works: The SciFact tool, developed by the Seattle-based research nonprofit Allen Institute for Artificial Intelligence (AI2), is designed to help with this process. Type a scientific claim into its search bar—say, “hypertension is a comorbidity for covid” (translation: hypertension commonly occurs alongside covid and is associated with worse outcomes)—and it will populate a feed with relevant papers, labeled as either supporting or refuting the assertion. It also displays the abstract of each paper and highlights the specific sentences within it that provide the most relevant evidence for assessing the claim.
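
For readers curious about the shape of such a system, here is a minimal, self-contained sketch of the three-stage pipeline (abstract retrieval, rationale selection, label prediction). The corpus, the keyword-overlap scoring, and the labeling rule are all toy stand-ins for illustration; the real system uses neural models at each stage.

```python
# Toy illustration of a claim-verification pipeline: retrieve candidate
# abstracts, pick rationale sentences, and assign a stance label.
# Everything here (corpus, scoring, labeling rule) is a stand-in, not SciFact code.

CORPUS = {
    "Hypertension and outcomes in covid-19 patients": [
        "We studied 1,000 hospitalized covid-19 patients.",
        "Hypertension was associated with increased risk of severe covid outcomes.",
    ],
    "Dietary habits of urban songbirds": [
        "Sparrows in city parks prefer seeds over insects.",
    ],
}

def overlap(a: str, b: str) -> int:
    """Count shared lowercase words, a crude stand-in for relevance scoring."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def fact_check(claim: str):
    results = []
    for title, sentences in CORPUS.items():
        # 1. Abstract retrieval: keep abstracts that share vocabulary with the claim.
        if overlap(claim, title + " " + " ".join(sentences)) == 0:
            continue
        # 2. Rationale selection: pick the sentence most similar to the claim.
        rationale = max(sentences, key=lambda s: overlap(claim, s))
        # 3. Label prediction: a real system runs a classifier over the
        #    (claim, rationale) pair; here we use a placeholder rule.
        label = "SUPPORTS" if "associated with increased" in rationale else "REFUTES"
        results.append({"title": title, "label": label, "evidence": rationale})
    return results

print(fact_check("hypertension is a comorbidity for covid"))
```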

How it was built: The system is built on top of a neural network called VeriSci. It was trained on an existing fact-checking data set compiled from Wikipedia and fine-tuned on a new scientific fact-checking data set containing 1,409 scientific claims, accompanied by 5,183 abstracts.
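
As a rough illustration of that approach, the sketch below encodes a claim and a candidate evidence sentence as a pair with a RoBERTa-style encoder and scores three stances. The checkpoint name, label order, and Hugging Face Transformers usage are assumptions for illustration, not the released VeriSci model, and the output is meaningless until the model has actually been fine-tuned on the Wikipedia-based and scientific data sets described above.

```python
# Hedged sketch: pair-encode (claim, evidence) and score three stances.
# "roberta-large" is a generic base checkpoint, not the trained VeriSci model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large"  # would be fine-tuned on the Wikipedia data set, then the scientific one
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

claim = "Hypertension is a comorbidity for covid-19."
evidence = "Hypertension was associated with increased risk of severe covid-19."

inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 3)

# Label order is an assumption; an untrained head produces arbitrary predictions.
labels = ["SUPPORTS", "REFUTES", "NOT_ENOUGH_INFO"]
print(labels[int(logits.argmax(dim=-1))])
```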

Researchers at AI2 curated the latter data set using Semantic Scholar, a publicly available database of scientific papers, which the nonprofit launched and has maintained since 2015. They randomly selected a sample of papers from a few dozen well-regarded journals in the life and medical sciences, including Cell, Nature, and JAMA. They then extracted the sentences in the papers that included citations and asked expert annotators to rewrite them into scientific claims that could be corroborated or contradicted by the literature. For every claim, the annotators then read through the abstracts of the corresponding citations and identified the sentences containing supporting or refuting evidence.
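
To make the annotation pipeline concrete, here is roughly what one resulting example might look like; the field names and values are illustrative, not the exact schema the researchers released.

```python
# Illustrative record (field names are assumptions, not the published schema):
# a claim rewritten from a citation sentence, plus the cited abstracts with
# their stance toward the claim and the indices of the evidence sentences.
example = {
    "claim": "Hypertension increases the risk of severe covid-19.",
    "cited_abstracts": [
        {
            "doc_id": 4983,                  # key into the Semantic Scholar corpus
            "label": "SUPPORTS",             # or "REFUTES"
            "evidence_sentence_ids": [2, 5], # indices into the abstract's sentences
        }
    ],
}
```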

How it performs: When the researchers tested VeriSci on 36 scientific claims related to covid-19, it retrieved relevant papers and labeled them correctly for 23 of them. Though imperfect, that result outperforms the same neural network trained only on existing fact-checking data sets, and it serves as the first known proof of concept that an AI-based system for scientific fact-checking is possible. Some of the tool’s errors could be reduced with more training data; others will require further advances in natural-language understanding.

What it should and shouldn’t be used for: SciFact is meant to help scientists researching covid-19 quickly check their hypotheses and emerging claims against the existing scientific literature. It is not meant to dispel the kinds of misinformation or conspiracy theories that circulate on social media (e.g., that covid-19 is a bioweapon), nor to evaluate opinion-based statements (e.g., that the government should require people to stand six feet apart to slow the spread of the virus). Given the tool’s experimental nature, experts should still read the abstracts rather than rely solely on the “support” and “refute” labels. The researchers also note that the tool doesn’t check the legitimacy of the papers it retrieves, so experts should exercise their own judgment.
