
Why Climate Models Aren’t Better

Even as computer models grow more powerful and more precise, their projections of regional effects remain uncertain.
November 18, 2015

Writing in Science last week, a group of researchers headed by Jeremie Mouginot of the University of California, Irvine, reported that the Zachariae Isstrom glacier, in northeast Greenland, is shrinking rapidly and “will increase sea-level rise from the Greenland Ice Sheet for decades to come.” The new paper also included a statement that has become all too common in scientific journal articles on the effects of global climate change: the rate of melting of Zachariae Isstrom was unexpected.

The rapid melting of major on-land ice sheets is among the phenomena that have climate scientists rethinking their models.

“I think it’s fair to say that we’re seeing things we didn’t expect to see so early,” says Michael Mann, the director of the Earth Systems Science Center at Penn State University. Among the recent examples Mann cites: the very rapid disappearance of Arctic sea ice, the dwindling of the Greenland and West Antarctic ice sheets, and the disruption of ocean circulation patterns detailed last year in work by Mann’s group at Penn State. All of these changes outstrip the rate of change anticipated in today’s most commonly used climate models.

In the run-up to the international negotiations on climate change that begin in Paris on November 29, these findings raise an important question: How good are our models of climate change and its effects?

The Zachariae Isstrom glacier, in northeast Greenland, is shrinking at a rate that surprised many scientists.

The first thing to keep in mind is that, after more than three decades, hundreds of millions of dollars, and countless scientist-hours invested, climate models have gotten much, much better. For example, scientists have learned to integrate models of atmospheric and oceanic changes more tightly, gaining a better sense of the interplay between the two. The models’ spatial resolution has also grown steadily finer, as Moore’s Law delivers the computing power to run simulations with ever more data points. Finally, better observational data (such as measurements of the melting of Zachariae Isstrom) lets scientists improve the inputs to the models, naturally leading to better outputs.

At a general level, those models have been remarkably consistent in establishing a roughly linear relationship between cumulative carbon dioxide emissions and global temperature rise. The second thing to remember, though, is that climate models are not good predictors of specific climate effects, such as the melting of Arctic sea ice or the frequency of major hurricanes in the north Atlantic.

There are two types of widely used climate models. The first are large, complicated, planetary-scale models, generally known as atmosphere-ocean general circulation models, that harness supercomputing capabilities at major research institutes. The second are higher-resolution models that take output from the general circulation models as input to make calculations at regional scales. Around 40 of the general circulation models were used for the Fifth Assessment Report, released by the Intergovernmental Panel on Climate Change in November 2014. They are more accurate for long-term, worldwide forecasts, including the key measure of climate sensitivity—the amount of warming, in global mean temperature, that will happen when the amount of carbon dioxide in the atmosphere doubles from pre-industrial levels. The smaller, high-resolution models are better for examining the likely regional effects of climate change.
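The notion of climate sensitivity can be made concrete with a back-of-envelope calculation. The sketch below is not one of the models described above; it uses the standard simplified expression for CO₂ radiative forcing (5.35 × ln(C/C₀) watts per square meter) and assumes, for illustration, a sensitivity of 3 °C per doubling and a pre-industrial baseline of 280 ppm—the sensitivity value is an assumption chosen from within the commonly cited range, not a result from this article.

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified radiative forcing (W/m^2) for a CO2 concentration
    c (in ppm) relative to a pre-industrial baseline c0."""
    return 5.35 * math.log(c / c0)

def warming(c, sensitivity=3.0, c0=280.0):
    """Equilibrium warming (deg C) at concentration c, assuming
    `sensitivity` degrees of warming per CO2 doubling."""
    f2x = co2_forcing(2 * c0, c0)   # forcing from one doubling (~3.7 W/m^2)
    return sensitivity * co2_forcing(c, c0) / f2x

# By construction, a doubling from 280 to 560 ppm yields the assumed sensitivity:
print(round(warming(560.0), 2))  # 3.0
```

Because forcing grows with the logarithm of concentration, each successive doubling adds roughly the same warming—one reason sensitivity is defined per doubling rather than per ppm.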

So models continue to get better. But most climate scientists acknowledge that there are limits: no matter how sophisticated our models become, there will always be an irreducible element of chaos in the earth’s climate system that no supercomputer will ever eliminate.

“The models are getting more accurate in the sense that they simulate many processes more realistically,” explains Reto Knutti, a professor at the Institute for Atmospheric and Climate Science in Zurich who was one of the lead contributors to the Fifth Assessment Report. “But having said that, all of that has not really helped in decreasing the uncertainty in future projections.”
