Artificial intelligence

AI analyzed 3.3 million scientific abstracts and discovered possible new materials

July 9, 2019
A stack of papers with a magnifying glass (Getty)

A new paper shows how natural-language processing can accelerate scientific discovery.

The context: Natural-language processing has seen major advances in recent years, thanks to unsupervised machine-learning techniques that excel at capturing the relationships between words. These techniques count how often and how closely words appear near one another, and map those relationships into a high-dimensional vector space. The resulting patterns can then be used to complete basic analogies like “man is to king as woman is to queen,” or to construct sentences and power features like autocomplete and other predictive text systems.
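The analogy arithmetic described above can be sketched with toy vectors. Real embeddings (e.g., word2vec) are learned from co-occurrence statistics and have hundreds of dimensions; the hand-picked four-dimensional values below are made up purely to illustrate the mechanics:

```python
import numpy as np

# Toy word vectors, invented for illustration only. In a trained model,
# each word's vector is learned from how it co-occurs with other words.
vectors = {
    "man":   np.array([1.0, 0.0, 0.2, 0.1]),
    "woman": np.array([1.0, 1.0, 0.2, 0.1]),
    "king":  np.array([1.0, 0.0, 0.9, 0.8]),
    "queen": np.array([1.0, 1.0, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "man is to king as woman is to ?"  ->  king - man + woman
target = vectors["king"] - vectors["man"] + vectors["woman"]

# Pick the closest word that isn't one of the query terms.
best = max(
    (w for w in vectors if w not in ("man", "woman", "king")),
    key=lambda w: cosine(vectors[w], target),
)
print(best)  # queen
```

With these toy values the offset `king - man + woman` lands exactly on `queen`; in a real model it only lands nearby, which is why the nearest-neighbor search step matters.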

New application: A group of researchers has now used this technique to munch through 3.3 million scientific abstracts published between 1922 and 2018 in journals likely to contain materials-science research. The resulting word relationships captured fundamental knowledge within the field, including the structure of the periodic table and the way chemicals’ structures relate to their properties. The paper was published in Nature last week.

Because of the technique’s ability to compute analogies, it also found a number of chemical compounds that demonstrate properties similar to those of thermoelectric materials but have not been studied as such before. The researchers believe this could be a new way to mine existing scientific literature for previously unconsidered correlations and accelerate the advancement of research in a field.
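As a rough sketch of how such a literature search might work, candidate compounds can be ranked by the cosine similarity of their embeddings to the vector for a property word like "thermoelectric." The numbers below are invented for illustration, not taken from the paper (though CuGaTe2 was among its reported top predictions):

```python
import numpy as np

# Hypothetical embeddings, as if learned from millions of abstracts.
# Values are made up purely to illustrate the ranking step.
embeddings = {
    "thermoelectric": np.array([0.9, 0.1, 0.4]),
    "CuGaTe2":        np.array([0.8, 0.2, 0.5]),
    "NaCl":           np.array([0.1, 0.9, 0.2]),
    "SnS":            np.array([0.7, 0.3, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank every compound by its similarity to the property word, most
# similar first. Compounds never described as thermoelectric in the
# corpus can still rank highly -- those are the candidates of interest.
query = embeddings["thermoelectric"]
candidates = [w for w in embeddings if w != "thermoelectric"]
ranked = sorted(candidates, key=lambda w: cosine(embeddings[w], query), reverse=True)
print(ranked)  # ['CuGaTe2', 'SnS', 'NaCl']
```

The key idea is that a compound's embedding can sit close to "thermoelectric" even if the two words never co-occur directly, because both share context with intermediate terms like "band gap" or "Seebeck coefficient."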

Related work: This isn’t the first time such techniques have discovered fascinating, sometimes surprising relationships in a vast amount of text. In 2017, for example, a paper published in Science found that the same technique used to process a giant corpus of text from the internet successfully reproduced historical human biases against race and gender, and even computed the ratio of men to women in different professions. These papers show how much rich information about our world is implicit in human language. Machine learning is now giving us the tools to unlock that knowledge.

