OPINION

March/April 2012

Earthquake Hazard Maps and Objective Testing: The Hazard Mapper’s Point of View

doi:10.1785/gssrl.83.2.231

The recent SRL Opinion article titled “Bad Assumptions or Bad Luck: Why Earthquake Hazard Maps Need Objective Testing,” by Seth Stein, Robert Geller, and Mian Liu (SRL 82(5), 623–626), suggests that probabilistic seismic hazard (PSH) models have been inadequate in forecasting recent devastating earthquakes and expresses a need for objective testing of these models. This response comes from the perspective of a hazard mapper.

PSH MODELS: INTENDED USAGE

First, I’d like to clarify that the PSH models referred to by Stein and his colleagues are not forecasting tools. They are tools developed to provide estimates of hazard for long return periods (e.g., hundreds to thousands of years) for engineering design and planning purposes. They are not developed to provide short-term (e.g., months to years) probabilities for impending earthquakes. Examples of appropriate PSH model applications are in building construction (typically 500- to 2,500-year return periods) and in nuclear facility and hydro-dam developments (typically >10,000-year return periods). Hazard maps for these return periods typically show large differences in hazard across regions like the western United States and New Zealand, reflecting differences in the expected future activity of earthquake sources across those regions. The differences seem logical, given that one would expect sites close to major plate boundary faults to experience more earthquakes in the long term than sites further away. This is useful information for engineering and planning, including the development of loadings standards like the International Building Code.

The PSH-derived hazard estimates can also be disaggregated to identify the most likely (or least likely) earthquake scenarios for the site or region in question, and these scenarios are often used by regional authorities and others to plan for future earthquake hazards. However, to give these PSH models the ability to provide actual short-term earthquake forecasts would require the integration of relevant forecasting models into the PSH model framework. Promising relevant efforts have been happening in California, New Zealand, and elsewhere, but it is still early days in terms of a substantial update to standard PSH methodology.
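As plain background on what these return periods mean in design terms (and not a statement about any particular hazard model), the probability that a mapped ground motion is exceeded during a structure's design life is usually obtained from the standard Poisson relation. A minimal sketch, using generic values, is:

```python
import math

def prob_of_exceedance(return_period_yr, exposure_yr):
    """Poisson probability that the design ground motion is exceeded
    at least once during the exposure time (in years)."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

# Common building-code conventions, shown with a generic 50-year design life:
print(prob_of_exceedance(475, 50))     # ~0.10 (10% in 50 years)
print(prob_of_exceedance(2475, 50))    # ~0.02 (2% in 50 years)
print(prob_of_exceedance(10000, 50))   # ~0.005 (nuclear/dam return-period range)
```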

Stein et al. do raise some perfectly valid issues with regard to the performance of the relevant PSH models. While I have said that the models are not intended to be used as forecasting tools, it is true that model parameters like maximum magnitude and expected ground motions should adequately encompass any event observed in the particular region. In this respect the Japanese PSH models did underestimate the magnitude of the Mw 9, 11 March 2011 Tohoku earthquake. In New Zealand, the Mw 7.1, 4 September 2010 Darfield, Canterbury earthquake occurred on a previously unknown fault, reflecting a partial lack of knowledge about that part of New Zealand. However, the earthquake was to an extent accounted for in the distributed or background seismicity model, which has a maximum magnitude set at Mw 7.2 in the area of the earthquake. The main purpose of a distributed seismicity model is to allow for the occurrence of earthquakes on unknown sources, which is exactly what happened in New Zealand. Some modern PSH models have gone the extra step of incorporating comprehensive epistemic uncertainties into every component of the model to account for all possible surprise events. The Californian UCERF3 model, for instance, allows virtually every possible combination of rupture geometry on the fault sources and uses seismological and geodetic data to define a range of distributed seismicity rates.
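As an aside on how a background (distributed) seismicity source can accommodate an earthquake on an unmapped fault: such sources are commonly described by a Gutenberg-Richter recurrence relation truncated at the maximum magnitude. The sketch below uses entirely hypothetical a- and b-values (not the parameters of the New Zealand national seismic hazard model) simply to show that an Mw 7.1 event carries a small but non-zero rate in a source truncated at Mw 7.2:

```python
def truncated_gr_rate(m, a_value, b_value, m_max):
    """Annual rate of earthquakes with magnitude >= m for a background
    (distributed seismicity) source that follows a Gutenberg-Richter
    recurrence relation with a hard cutoff at m_max. Illustrative only."""
    if m >= m_max:
        return 0.0
    n_m = 10.0 ** (a_value - b_value * m)        # cumulative rate above m
    n_max = 10.0 ** (a_value - b_value * m_max)  # portion removed by truncation
    return n_m - n_max

# Hypothetical recurrence parameters; NOT the Canterbury background source.
print(truncated_gr_rate(7.1, a_value=3.0, b_value=1.0, m_max=7.2))  # small, non-zero
print(truncated_gr_rate(7.3, a_value=3.0, b_value=1.0, m_max=7.2))  # zero: above m_max
```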

PSH MODEL TESTING

Finally, it is helpful to report that research focused on the objective testing of PSH models has been progressing for some years. The Collaboratory for the Study of Earthquake Predictability (CSEP) has been developing testing strategies and methods for a wide variety of applications, and collaborative work has also focused on developing ground motion–based tests of the New Zealand and U.S. national seismic hazard models. The Global Earthquake Model (GEM), a worldwide seismic hazard and loss modeling initiative, is including testing and evaluation as an integral part of the overall model development. The Yucca Mountain seismic hazard modeling efforts have developed an innovative “points in hazard space” approach to considering all viable constraints on ground motions for long return periods. GEM and Yucca Mountain stand as examples of a more holistic approach to PSH modeling and are therefore examples of what needs to happen more widely in the future. It will, after all, be the holistic, versatile, and tested models that best stand the test of time.

Discussions with Pilar Villamor, Ned Field, Matt Gerstenberger, and Nicolas Pondard on this article were very helpful.    

Mark W. Stirling
GNS Science
P.O. Box 30368
Lower Hutt, New Zealand
m.stirling@gns.cri.nz


To send a letter to the editor regarding this opinion or to write your own opinion, you may contact the SRL editor by sending e-mail to <srled@seismosoc.org>.



Posted: 14 February 2012