Why Climate Models Aren’t Better
Writing in Science last week, a group of researchers headed by Jeremie Mouginot of the University of California, Irvine, reported that the Zachariae Isstrom glacier, in northeast Greenland, is shrinking rapidly and “will increase sea-level rise from the Greenland Ice Sheet for decades to come.” The new paper also included a statement that has become all-too-common in scientific journal articles on the effects of global climate change: the rate of melting of Zachariae Isstrom was unexpected.
“I think it’s fair to say that we’re seeing things we didn’t expect to see so early,” says Michael Mann, the director of the Earth Systems Science Center at Penn State University. Among the recent examples Mann cites: the very rapid disappearance of Arctic sea ice, the dwindling of the Greenland and West Antarctic ice sheets, and the disruption of ocean circulation patterns detailed last year in work by Mann’s group at Penn State. All of these changes outstrip the rate of change anticipated in today’s most commonly used climate models.
In the run-up to the international negotiations on climate change that begin in Paris on November 29, these findings raise an important question: How good are our models of climate change and its effects?
The first thing to keep in mind is that, after more than three decades, hundreds of millions of dollars, and countless scientist-hours invested, climate models have gotten much, much better. For example, scientists have learned to integrate models of atmospheric and oceanic changes more tightly, gaining a clearer sense of the interplay between the two. The spatial resolution of the models has also grown finer, as Moore’s Law delivers the computing power to run simulations with ever more data points. Finally, better observational data (such as the melting of the Zachariae Isstrom) lets scientists improve the inputs to the models, naturally leading to better outputs.
At a general level, those models have been remarkably consistent in establishing a linear relationship between the level of carbon dioxide in the atmosphere and global temperature rise. The second thing to remember, though, is that climate models are not good predictors of specific climate effects, such as the melting of Arctic sea ice or the frequency of major hurricanes in the north Atlantic.
There are two types of widely used climate models: large, complicated, planetary-scale models that harness supercomputing capabilities at major research institutes, generally known as atmosphere-ocean general circulation models, and higher-resolution models that use input from the general circulation models to make calculations at regional scales. Around 40 of the general circulation models were used for the Fifth Assessment Report, released by the Intergovernmental Panel on Climate Change in November 2014; they are more accurate for long-term, worldwide forecasts, including the key measure of climate sensitivity—the amount of warming, in global mean temperature, that will happen when the amount of carbon in the atmosphere doubles from pre-industrial levels. The smaller, high-resolution models are better for examining the likely regional effects of climate change.
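To make the climate-sensitivity measure concrete, here is a minimal sketch. It assumes the standard simplification that equilibrium warming scales with the logarithm of atmospheric CO2 concentration; the function name, the 3.0 °C per-doubling figure, and the 280 ppm pre-industrial baseline are illustrative assumptions, not values from the article.

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_c=3.0, preindustrial_ppm=280.0):
    """Estimate equilibrium warming (deg C) for a given CO2 level.

    Assumes warming grows with the log of CO2 concentration:
        dT = S * log2(C / C0)
    where S is the climate sensitivity (warming per CO2 doubling,
    here an assumed mid-range 3.0 deg C) and C0 is the assumed
    pre-industrial concentration of 280 ppm.
    """
    return sensitivity_c * math.log2(co2_ppm / preindustrial_ppm)

# A doubling from the pre-industrial level (280 -> 560 ppm) returns
# the sensitivity itself, by definition:
print(equilibrium_warming(560.0))  # 3.0
```

Under this simplification, each successive doubling adds the same warming, which is why climate sensitivity is such a compact summary statistic for the large general circulation models.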
So models continue to get better. But most climate scientists acknowledge that there are limits: no matter how sophisticated our models become, there will always be an irreducible element of chaos in the earth’s climate system that no supercomputer will ever eliminate.
“The models are getting more accurate in the sense that they simulate many processes more realistically,” explains Reto Knutti, a professor at the Institute for Atmospheric and Climate Science in Zurich who was one of the lead contributors to the Fifth Assessment Report. “But having said that, all of that has not really helped in decreasing the uncertainty in future projections.”