
The evolution of scientific ideas

Treat the links between scientific papers as a network, and the changing communities that emerge reveal how scientific ideas evolve.

Science is an evolving discipline. New fields are constantly being born while others die away. Twenty years ago, for example, quantum computing was a mere twinkle in its founders’ eyes, as was proteomics a mere ten years ago. And in the same time scale, a large part of chemistry has morphed into nanotechnology.

But how exactly are these fields changing? And what does it tell us about the evolution of ideas and the changing nature of science?

Now an answer of sorts is emerging from the work of Mark Herrera at the University of Maryland and a few buddies. They have constructed a network from the links between disciplines found in papers published between 1985 and 2006. They consider two disciplines to be linked in this network if both fields are mentioned in the same paper.

They then look for communities within this network and examine how they change over time.
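The two steps above — link disciplines that co-occur in a paper, then find communities in the resulting network — can be sketched in a few lines of Python. The paper list here is toy data, not the team’s dataset, and connected components stand in crudely for the proper community-detection algorithm the researchers actually ran:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical papers, each tagged with the disciplines it mentions
# (illustrative only; the study used real papers from 1985 to 2006).
papers = [
    {"physics", "computer science"},
    {"physics", "computer science", "mathematics"},
    {"biology", "chemistry"},
    {"biology", "chemistry", "medicine"},
    {"chemistry", "materials science"},
]

# Two disciplines are linked if both appear in the same paper;
# the edge weight counts how many papers link them.
weights = defaultdict(int)
graph = defaultdict(set)
for disciplines in papers:
    for a, b in combinations(sorted(disciplines), 2):
        weights[(a, b)] += 1
        graph[a].add(b)
        graph[b].add(a)

def components(graph):
    """Connected components: a crude proxy for communities.
    (Swapping in modularity optimisation or clique percolation
    is the real community-detection step.)"""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

for comp in components(graph):
    print(sorted(comp))
```

Repeating this over successive time windows — one network per window, matched up year to year — is what lets the researchers watch communities merge, split, and die.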

“The communities we identify map to known scientific fields, and their age strongly depends on their size, impact and activity,” say the group.

But this is by no means a static picture. Communities regularly merge and create new groups of ideas. That’s to be expected if the anecdotal evidence is anything to go by, but the team also finds some more interesting phenomena.

For example, communities that are more willing to reinvent themselves tend to be the ones with the most impact per paper. But the data also show that communities with higher impact per paper tend to be shorter-lived.

The team say their discoveries raise the prospect of being able to predict how long various communities will survive and the impact they are likely to have by looking at the current dynamics.

Whether the group has the nerve to publish its predictions is another matter.

Ref: arxiv.org/abs/0904.1234: Mapping the Evolution of Scientific Ideas
