
The Failing Future For Earthquake Forecasts

The optimism about the future of earthquake prediction is not justified by the quality of the forecasts

Earthquake prediction is a science fraught with difficulty. Humankind has a very real and understandable need to quantify the risks of living and working in areas prone to earthquakes. We want to know when and where the big one will strike.

History is against us on that. Generations of scientists have repeatedly failed to make accurate predictions and a growing body of evidence seems to show that accurate predictions are, to all intents and purposes, impossible to make.

Then there is the debate over what a useful prediction would actually consist of; the forecast of a major land ripper somewhere in northern California in the next 50 years is of little practical use. Residents of San Francisco, Tokyo, Wellington and Rome want a forecast for tomorrow.

None of that is stopping forecasters from surging ahead and, to their credit, they have come up with a way to test their ideas. The Collaboratory for the Study of Earthquake Predictability (CSEP) is an international project to compare, on an equal footing, the forecasts of various earthquake models in different parts of the world.

The beauty of the CSEP project is that it forces forecasters to make earthquake forecasts every day for the day ahead. The goal is to use these forecasts to sort the wheat from the chaff and identify the best forecasting models.

Today, Maximilian Werner from the Swiss Seismological Service in Zurich and a few pals publish a detailed account of a model they have developed for forecasting earthquakes in California. Their model is based on two assumptions. First, that earthquakes are more likely in places where they have occurred before. Second, that the distribution of earthquakes in the future will be the same as the distribution in the past.

Both assumptions seem reasonable, but the devil is in the detail. Geologists face very real difficulties in accurately determining the distribution of past earthquakes because high-quality data goes back only a few dozen years. So very big earthquakes, which recur on timescales of hundreds or thousands of years, are poorly represented.

A very basic limitation of any model is the data used to develop and calibrate it. And since the prospect of getting significantly better data is poor, this is a limitation that earthquake forecasters will have to live with.

The other problem is that this data then has to be “smoothed” geographically so that earthquake data from a particular site can be used to influence predictions in nearby areas. How that should be done is anyone’s guess.
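To make the smoothing idea concrete, here is a minimal sketch of a kernel-smoothed seismicity forecast, written in Python. It is not Werner and co's actual method; the toy catalogue, the 0.1-degree grid and the 30 km Gaussian smoothing length are all illustrative assumptions. Each past epicentre is spread over nearby grid cells, and the resulting density map is read as a relative forecast of where future quakes are most likely.

import numpy as np

def smoothed_seismicity(epicentres, grid_lon, grid_lat, sigma_km=30.0):
    """Return a normalised rate map built from past epicentres (lon, lat)."""
    km_per_deg = 111.0  # rough degrees-to-km conversion, ignores latitude dependence
    lon_mesh, lat_mesh = np.meshgrid(grid_lon, grid_lat)
    rate = np.zeros_like(lon_mesh)
    for lon, lat in epicentres:
        # distance in km from this past epicentre to every grid cell
        d = km_per_deg * np.hypot(lon_mesh - lon, lat_mesh - lat)
        # each past event contributes a Gaussian "bump" of seismicity
        rate += np.exp(-0.5 * (d / sigma_km) ** 2)
    return rate / rate.sum()  # normalise so the whole map sums to 1

# toy catalogue of past events in a small patch of California (illustrative only)
past_events = [(-122.0, 37.8), (-121.9, 37.7), (-120.5, 36.0)]
lons = np.arange(-123.0, -119.0, 0.1)
lats = np.arange(35.0, 39.0, 0.1)

forecast = smoothed_seismicity(past_events, lons, lats)
print(forecast.shape, forecast.max())

The choice of smoothing length is precisely the anyone's-guess part: make it too small and the forecast simply replays the catalogue, make it too large and seismicity gets smeared into regions where none has ever been observed.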

Werner and co are stoical about the problem. Their model is an improvement on one they developed a few years ago, but in testing it against the historical record they declare themselves to be “unsatisfied with its performance”. That’s a damning conclusion, not least because their model is built on patterns derived from this very data.

CSEP should eventually produce models that better capture past geological trends. But will it ever lead to better predictions about the future that can be used to mitigate the consequences of real earthquakes? Probably not. And if earthquake forecasters really believe that history is a good guide to the future, a cursory study of their own field should have told them that already.

Ref: arxiv.org/abs/0910.4981: High Resolution Long- and Short-Term Earthquake Forecasts for California
