Earthquake prediction is a science fraught with difficulty.
Humankind has a very real and understandable need to quantify the
risks of living and working in seismically active areas. We want to
know when and where the big one will strike.
History is against us on that. Generations of scientists have
repeatedly failed to make accurate predictions, and a growing body of
evidence suggests that such predictions are, to all intents and
purposes, impossible to make.
Then there is the debate over what a useful prediction would
actually consist of: the forecast of a major land ripper somewhere in
northern California in the next 50 years is of little practical use.
Residents of San Francisco, Tokyo, Wellington and Rome want a
forecast for tomorrow.
None of that is stopping forecasters from surging ahead and, to their
credit, they have come up with a way to test their ideas.
The Collaboratory for the Study of Earthquake Predictability (CSEP) is
an international project to compare, on an equal footing, the forecasts
of various earthquake models in different parts of the world.
The beauty of the CSEP project is that it forces forecasters to
make daily earthquake forecasts for tomorrow. The goal is to use
these forecasts to sort the wheat from the chaff: to identify the
best forecasting models.
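How do you score a daily forecast? CSEP’s core tool is likelihood: each model states an expected number of earthquakes for every cell of a space grid, and the model that assigns the highest probability to the events that actually occur wins. Here is a minimal sketch of that scoring step, assuming (as CSEP’s standard tests do) that the count in each cell is Poisson-distributed; the function name and inputs are illustrative, not CSEP’s actual code.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed counts under a forecast.

    forecast_rates: expected number of events in each grid cell
    observed_counts: number of events actually observed in each cell
    Higher is better: the forecast put more probability on what happened.
    """
    mu = np.clip(np.asarray(forecast_rates, dtype=float), 1e-12, None)  # avoid log(0)
    n = np.asarray(observed_counts, dtype=float)
    # log P(n | mu) = n*log(mu) - log(n!) - mu in each cell, summed over cells
    return float(np.sum(n * np.log(mu) - mu - gammaln(n + 1)))
```

Comparing two competing models is then a matter of computing this score for each against the same observed catalogue over the same testing period. CSEP’s real test suite is richer, also checking the total event count and the spatial and magnitude distributions separately, but likelihood scoring of gridded rates is at its heart.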
Today, Maximilian Werner from the Swiss Seismological Service in
Zurich and a few pals publish a detailed account of a model they have
developed for forecasting earthquakes in California. Their model is
based on two assumptions: first, that earthquakes are more likely
in places where they have occurred before; second, that the
distribution of earthquakes in the future will be the same as the
distribution in the past.
Both assumptions seem reasonable, but the devil is in the detail.
Geologists face very real difficulties in accurately determining the
distribution of past earthquakes because high-quality data goes back
only a few decades. So very big earthquakes, which recur on timescales
of hundreds or thousands of years, are poorly represented.
A very basic limitation of any model is the data used to develop
and calibrate it. And since the prospect of getting significantly better data is poor, this is a limitation that earthquake forecasters
will have to live with.
The other problem is that this data then has to be “smoothed”
geographically so that earthquake data from a particular site can be used
to influence predictions in nearby areas. How that should be done is
anyone’s guess.
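To make the smoothing step concrete, here is a minimal sketch of one common choice: an isotropic Gaussian kernel with a fixed bandwidth that spreads each past epicentre’s contribution over nearby grid cells. The function name, the toy coordinates and the 30 km bandwidth are illustrative assumptions, not Werner and co’s method, which instead derives adaptive, locally varying bandwidths from the catalogue itself.

```python
import numpy as np

def smoothed_rate_map(epicenters, grid_x, grid_y, bandwidth_km=30.0):
    """Spread past epicentres over a grid with a fixed Gaussian kernel.

    epicenters: (n, 2) array of past event positions (x, y), in km
    grid_x, grid_y: 1-D arrays of cell-centre coordinates, in km
    Returns rate density (events per km^2); multiply by cell area
    to get the expected number of events per cell.
    """
    xx, yy = np.meshgrid(grid_x, grid_y)            # cell centres
    rates = np.zeros_like(xx, dtype=float)
    norm = 1.0 / (2.0 * np.pi * bandwidth_km ** 2)  # 2-D Gaussian normalisation
    for ex, ey in epicenters:
        d2 = (xx - ex) ** 2 + (yy - ey) ** 2
        rates += norm * np.exp(-d2 / (2.0 * bandwidth_km ** 2))
    return rates

# Toy usage: three past events yield a smooth rate surface whose peaks
# sit where earthquakes have already happened (assumption one) and which
# is read off unchanged as tomorrow's forecast (assumption two).
past = np.array([[100.0, 200.0], [110.0, 195.0], [300.0, 50.0]])
grid = np.arange(0.0, 400.0, 10.0)
forecast = smoothed_rate_map(past, grid, grid)
```

The bandwidth is exactly the “anyone’s guess” part: too narrow and the forecast merely memorises the catalogue; too wide and it blurs away the fault structure.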
Werner and co are stoical about the problem. Their model is an
improvement on one they developed a few years ago, but in testing it
against the historical record they declare themselves
“unsatisfied with its performance”. That’s a damning
conclusion, not least because their model is built on patterns
derived from that same data.
CSEP should eventually produce models that better capture past trends
in seismicity. But will it ever lead to better predictions that can be
used to mitigate the consequences of real earthquakes? Probably not.
And if earthquake forecasters really believe that history is a good
guide to the future, then a cursory study of their own field should
have told them that already.
Ref: arxiv.org/abs/0910.4981: High-Resolution Long- and Short-Term Earthquake Forecasts for California