OPINION

July/August 2010

Operational Earthquake Forecasting: Some Thoughts on Why and How

doi:10.1785/gssrl.81.4.571

Suppose seismologists studying faults near an urbanized area could state with empirical reliability that a major earthquake is 100 to 1,000 times more likely to occur in the upcoming week than during a typical seven-day period. What actions, if any, should be taken for civil protection? Should the forecast be publicly broadcast? How should the public be advised to use such information? These quandaries deserve thoughtful consideration, because we have entered the era of operational earthquake forecasting.

The goal of operational earthquake forecasting is to provide the public with authoritative information on the time dependence of regional seismic hazards. We know that seismic hazards change dynamically in time, because earthquakes suddenly alter the conditions within the fault system that will lead to future earthquakes. Statistical and physical models of earthquake interactions have begun to capture many features of natural seismicity, such as aftershock triggering and the clustering of seismic sequences. These short-term models demonstrate a probability gain in forecasting future earthquakes relative to the long-term, time-independent models typically used in seismic hazard analysis. Data other than seismicity have been considered in earthquake forecasting (e.g., geodetic measurements and geoelectrical signals), but so far, studies of nonseismic precursors have not quantified short-term probability gain, and they therefore cannot be incorporated into operational forecasting methodologies. Accordingly, our focus in this article will be on seismicity-based methods that are enabled by high-performance seismic networks.
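
To make the notion of probability gain concrete, the short sketch below (a hypothetical Python illustration, not any agency's operational code) treats earthquake occurrence as a Poisson process and compares the weekly probability implied by a long-term background rate with the probability implied by a short-term rate one hundred times higher.

    import math

    def probability(rate_per_day, window_days):
        """Chance of at least one event in the window, treating occurrence as a Poisson process."""
        return 1.0 - math.exp(-rate_per_day * window_days)

    background_rate = 2.5e-5                   # assumed long-term rate of a major shock, per day
    short_term_rate = 100.0 * background_rate  # a short-term model reporting a gain factor of 100

    print("gain factor:", short_term_rate / background_rate)
    print("7-day probability, long-term model:  %.4f%%" % (100 * probability(background_rate, 7)))
    print("7-day probability, short-term model: %.2f%%" % (100 * probability(short_term_rate, 7)))

Even with a hundredfold gain, the weekly probability in this example stays below two percent, an early hint of the "low-probability environment" discussed later in this article.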

An example of a seismicity-based operational system is the short-term earthquake probability (STEP) model, an aftershock forecasting Web service provided for California by the U.S. Geological Survey (USGS) since 2005. STEP (http://earthquake.usgs.gov/earthquakes/step) was developed by the USGS in partnership with the Southern California Earthquake Center (SCEC) and the Swiss Federal Institute of Technology (Gerstenberger et al. 2007). STEP uses aftershock statistics to make hourly revisions of the probabilities of strong ground motions (Modified Mercalli Intensity ≥ VI) on a 10-km, statewide grid. The nominal probability gain factors in regions close to the epicenters of small-magnitude (M 3–4) events are often 10–100 relative to the long-term base model. At the time of this writing, aftershocks of the 4 April 2010 El Mayor–Cucapah earthquake (M 7.2), near the central Baja California border, have increased the probability on the U.S. side of the border by almost three orders of magnitude. STEP is a prototype system that needs to be improved. For example, the probability change calculated to result from a particular earthquake does not depend on the proximity of that earthquake to major faults. The USGS, SCEC, and the California Geological Survey (CGS) have set up a new Working Group on California Earthquake Probabilities to incorporate short-term forecasting into the next version of the fault-based Uniform California Earthquake Rupture Forecast (UCERF3), which is due to be submitted to the California Earthquake Authority in mid-2012.
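
For readers who want to see the kind of arithmetic such a system performs, here is a minimal sketch, assuming a Reasenberg–Jones-style aftershock rate with generic, illustrative parameter values; it is not taken from the STEP code, which works on a spatial grid and translates earthquake rates into ground-motion probabilities. The sketch combines a time-decaying aftershock rate with an assumed background rate for a single cell and reports the resulting probability gain.

    import math

    # Illustrative Reasenberg-Jones-style parameters (not STEP's operational values)
    A, B, C, P = -1.67, 0.91, 0.05, 1.08   # productivity, b-value, Omori c and p

    def aftershock_rate(mainshock_mag, target_mag, t_days):
        """Expected rate (events per day) of aftershocks >= target_mag, t_days after a mainshock."""
        return 10 ** (A + B * (mainshock_mag - target_mag)) / (t_days + C) ** P

    def prob_at_least_one(rate_per_day, window_days):
        """Poisson probability of one or more events in the window."""
        return 1.0 - math.exp(-rate_per_day * window_days)

    background_rate = 1.0e-4       # assumed long-term rate of M >= 5 shocks in this cell, per day
    mainshock, target = 4.0, 5.0   # hypothetical M 4.0 event; forecast M >= 5 earthquakes
    t = 1.0                        # one day after the event

    total_rate = background_rate + aftershock_rate(mainshock, target, t)
    print("24-hour probability of an M >= 5 shock: %.4f" % prob_at_least_one(total_rate, 1.0))
    print("probability gain over background: %.0f" % (total_rate / background_rate))

With these illustrative numbers, a felt M 4 event raises the one-day rate of M ≥ 5 shocks in its cell by a factor of roughly 25, of the same order as the nominal gains quoted above, while the absolute one-day probability remains around a quarter of a percent.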

The need to move quickly toward operational earthquake forecasting was underscored by the L’Aquila earthquake disaster of 6 April 2009, which killed about 300 people and destroyed or rendered uninhabitable approximately 20,000 buildings. Seismic activity in the L’Aquila area increased in January 2009. A number of small earthquakes were widely felt and prompted school evacuations and other preparedness measures. The situation was complicated by a series of earthquake predictions issued by Mr. G. Giuliani, a technician working at the Laboratori Nazionali del Gran Sasso and a resident of L’Aquila. These predictions, which were based on radon concentration in the air measured with gamma-ray detectors and analyzed using unpublished techniques, had no official auspices. At least two of Mr. Giuliani’s specific predictions were false alarms; however, they generated widespread public concern and official reactions. Representatives of the Dipartimento della Protezione Civile (DPC) and Istituto Nazionale di Geofisica e Vulcanologia (INGV) responded with statements that 1) there were no scientifically validated methods for earthquake prediction, 2) such swarm activity was common in this part of Italy, and 3) the probability of substantially larger earthquakes remained small. The Commissione Nazionale per la Previsione e la Prevenzione dei Grandi Rischi, convened by the DPC on 31 March, concluded that “there is no reason to say that the sequence of events of low magnitude can be considered precursory to a strong event.”

In this situation, few seismologists would be comfortable with a categorical forecast of no increase in the seismic hazard, which is the way many interpreted the DPC and INGV statements. It is true that foreshocks cannot be discriminated a priori from background seismicity. Worldwide, less than 10 percent of earthquakes are followed by something larger within 10 kilometers and three days; less than half of the large earthquakes have such foreshocks. In Italy, seismic swarms that do not include large earthquakes are much more common than those that turn out to be foreshocks. Nevertheless, owing to the statistics of clustering, most seismologists would agree that the short-term probability of a large earthquake in the L’Aquila region was higher in the weeks before the 2009 mainshock than in a typical, quiescent week. A forecast consistent with this seismological understanding was not communicated to the public, and the need for a better narrative was consequently filled by amateur predictions rather than authoritative information.

The DPC is now revising its operational forecasting procedures according to the findings and recommendations of an International Commission on Earthquake Forecasting (ICEF), which was convened by the Italian government and chaired by one of us (Jordan et al. 2009). The ICEF has recommended that the DPC deploy the infrastructure and expertise needed to utilize probabilistic information for operational purposes, and it has offered guidelines for the implementation of operational forecasting systems.

The case for the public dissemination of short-term, authoritative forecasts is bolstered by the experience accumulated over the last two decades by the California Earthquake Prediction Evaluation Council (CEPEC), on which we both serve. In March 2009, just a few weeks before the L’Aquila earthquake, a swarm of more than 50 small earthquakes occurred within a few kilometers of the southern end of the San Andreas fault (SAF), near Bombay Beach, California, including an M 4.8 event on 24 March. The hypocenters and focal mechanisms of the swarm were aligned with the left-lateral Extra fault, which experienced small surface offsets west of the Salton Sea during the 1987 Superstition Hills earthquake sequence. However, the 24 March event was the largest earthquake located within 10 km of the southern half of the SAF’s Coachella segment since instrumental recording began in 1932. According to the UCERF2 time-dependent model, the 30-year probability of an M ≥ 7 earthquake on the Coachella segment—which has not ruptured since circa 1680—is fairly high, about 24%, corresponding to a probability rate of 2.5 × 10⁻⁵ per day (Field et al. 2009).

CEPEC met by teleconference three and a half hours after the M 4.8 event at the request of the California Emergency Management Agency (CalEMA)—the new agency that had recently replaced the state’s Office of Emergency Services (OES)—and issued the following statement: “CEPEC believes that stresses associated with this earthquake swarm may increase the probability of a major earthquake on the San Andreas Fault to values between 1 to 5 percent over the next several days. This is based on methodology developed for assessing foreshocks on the San Andreas Fault. This potential will rapidly diminish over this time period.” The CEPEC methodology was based on the formulation by Agnew and Jones (1991); for a recent review, see Michael (2010). The short-term probability estimated by CEPEC corresponded to a gain factor of about 100–500 relative to UCERF2.
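
The arithmetic connecting these numbers is simple enough to verify. The following check (a small Python illustration using only the figures quoted above, under a Poisson assumption) converts the UCERF2 30-year probability into an equivalent daily rate and then expresses the CEPEC short-term estimate as a gain over that baseline; the three-day window is our assumption, since the statement says only "several days."

    import math

    # UCERF2: 30-year probability of about 24% for an M >= 7 Coachella-segment earthquake
    p_30yr = 0.24
    daily_rate = -math.log(1.0 - p_30yr) / (30.0 * 365.25)   # Poisson-equivalent rate per day
    print("long-term rate: %.1e per day" % daily_rate)       # approximately 2.5e-5 per day

    # CEPEC statement: 1 to 5 percent over "the next several days" (taken here as 3 days)
    window_days = 3.0
    baseline_prob = 1.0 - math.exp(-daily_rate * window_days)
    for p_short in (0.01, 0.05):
        print("%.0f%% over %.0f days -> gain of about %.0f" %
              (100 * p_short, window_days, p_short / baseline_prob))

The result is of the same order as the factor of 100–500 quoted above; the exact figure depends on how "several days" is interpreted.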

In issuing this operational forecast, CEPEC adhered to a notification protocol developed by the Southern San Andreas Working Group (1991). This protocol categorizes alerts for major earthquakes (M ≥ 7) at four levels of three-day probability: D (0.1–1%), C (1–5%), B (5–25%), and A (> 25%); a simple mapping from probability to alert level is sketched after the list below. Level D alerts occur in most years and have not prompted any action by CEPEC or OES. The 2009 Bombay Beach event initiated a Level C alert. Other relevant examples come from four earlier periods of increased seismicity near the southern San Andreas fault:

  • 23 April 1992 M 6.1 Joshua Tree earthquake. The epicenter of this earthquake was only 8 km from the SAF. An M 4.6 foreshock about two-and-a-half hours before the mainshock initiated a Level C alert. About two hours after the mainshock, the OES, on advice from the USGS, officially stated that the probability of a major earthquake was 5–25%, raising the alert to Level B, and recommended that local jurisdictions take appropriate response actions.
  • 28 June 1992 M 7.3 Landers earthquake. The epicenter and rupture zone were located away from the SAF, but aftershocks extended into the SAF zone in two locations. At a CEPEC meeting 36 hours after the earthquake, it was decided to establish a protocol informally called the “go-to-war scenario.” If certain earthquakes were to occur, such as an M ≥ 6.0 within 3 km of the Coachella or Carrizo segments of the southern San Andreas, the USGS would notify OES within 20 minutes, and OES would act assuming a 1-in-4 chance of a major San Andreas earthquake, essentially a Level A alert. The governor taped a video message to the state, and plans were in place for deployment of the National Guard. This augmented protocol remained in effect for five years but was never invoked.
  • 13 November 2001 Bombay Beach swarm, Mmax 4.1. This swarm began with an M 2.4 event just before 6 a.m., continued with several M 3+ events between 8 a.m. and 9 a.m., and had its largest event, M 4.1, at 12:43 p.m. CEPEC met by conference call at 9:30 a.m. and again at 11:00 a.m. the same day. OES finally issued a statement about an increased risk of a San Andreas earthquake at about 2 p.m. The public scarcely noticed this Level C alert, and there was little media interest in the situation.
  • 30 September 2004 M 5.9 Parkfield earthquake. Under the Parkfield protocol (Bakun et al. 1987), from which the southern San Andreas protocol had been derived, the USGS stated that the probability of an 1857-type earthquake was about 10%, implying a Level B alert. However, CEPEC was not convened, and no direct action was taken by OES based on this alert.
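
Because the protocol defines its alert levels purely by probability thresholds, the translation from an estimated three-day probability to an alert level is mechanical. The fragment below is a hypothetical illustration of that mapping, not code used by CEPEC or CalEMA; only the level boundaries are taken from the protocol described above.

    def southern_saf_alert_level(three_day_probability):
        """Map a three-day probability of an M >= 7 earthquake to a protocol alert level."""
        if three_day_probability > 0.25:
            return "A"        # > 25%
        if three_day_probability >= 0.05:
            return "B"        # 5-25%
        if three_day_probability >= 0.01:
            return "C"        # 1-5%
        if three_day_probability >= 0.001:
            return "D"        # 0.1-1%
        return "no alert"

    # The 2009 Bombay Beach estimate of 1-5 percent falls in the Level C range:
    print(southern_saf_alert_level(0.03))   # -> C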

This brief history demonstrates that operational earthquake forecasting is already being practiced in California, and the dissemination of forecasting products is becoming more automated.

For every earthquake recorded above M 5.0, the California Integrated Seismic Network, a component of the USGS Advanced National Seismic System, now automatically posts the probability of an M ≥ 5 aftershock and the number of M ≥ 3 aftershocks expected in the next week. Authoritative short-term forecasts are becoming more widely used in other regions as well. For instance, beginning on the morning of 7 April 2009, the day after the L’Aquila mainshock, INGV began to post 24-hour forecasts of aftershock activity in that region of Italy.

The California experience also indicates that operational forecasting will typically be done in a “low-probability environment.” Earthquake probabilities derived from current seismicity models can vary over several orders of magnitude, but their absolute values usually remain low. Since the adoption of the southern San Andreas protocol nearly 20 years ago, the Level A probability threshold of 25% has never been reached, and the Level B threshold of 5% has been exceeded only twice (after the Joshua Tree and Parkfield events). Thus, reliable and skillful earthquake prediction—i.e., casting high-probability space-time-magnitude alarms with low false-alarm and failure-to-predict rates—is still not possible (and may never be).

In this age of nearly instant information and high-bandwidth communication, public expectations regarding the availability of authoritative short-term forecasts appear to be evolving rather rapidly. The 2001 Bombay Beach swarm provoked concern at the state level but received little play in the public media. By 2009, the media and many individuals had become accustomed to tracking earthquakes on the Web, and seismological organizations received hundreds of inquiries from the public within hours of the 24 March event.

Information vacuums invite informal predictions and misinformation. Prediction rumors are often spawned in the wake of a large earthquake in southern California. These rumors usually say that seismologists know that another large earthquake will happen within a few days, but they are not broadcasting this knowledge to avoid a panic. After the Landers earthquake in 1992, this rumor continued to grow over several weeks. Similar rumors after the 2010 El Mayor–Cucapah earthquake developed much more quickly, with hundreds of messages passing through Twitter in just a few hours. These rumors pose a particular challenge for seismologists because they posit that we will deny the truth; to many people, an official denial suggests a confirmation. The best defense against such circular reasoning is to demonstrate that the scientific information is always available through an open and transparent forecasting process.

The appropriate choices at this juncture seem fairly obvious. The public needs an open source of authoritative, scientific information about the short-term probabilities of future earthquakes, and this source needs to properly convey the epistemic uncertainties in these forecasts. In the past, CEPEC and its federal equivalent, the National Earthquake Prediction Evaluation Council (NEPEC, rechartered by the USGS in 2006 after a decade-long hiatus), have fulfilled these public requirements to a point, but the procedures display several deficiencies. The alerts have generally relied on generic short-term earthquake probabilities or ad hoc estimates calculated informally, rather than probabilities based on operationally qualified, regularly updated seismicity forecasting systems. The procedures are unwieldy, requiring the scheduling of meetings or teleconferences, which leads to delayed and inconsistent alert actions. Finally, how the alerts are used is quite variable, depending on decisions at different levels of government and among the public. For example, the 2001 Bombay Beach M 4.1 earthquake led to a formal advisory from the state, but the 2009 Bombay Beach M 4.8 earthquake, which was even closer to the San Andreas fault, did not.

In the future, earthquake forecasting procedures should be qualified for usage by the responsible agencies according to three standards for “operational fitness” commonly applied in weather forecasting: they should display quality, a good correspondence between the forecasts and actual earthquake behavior; consistency, compatibility among procedures used at different spatial or temporal scales; and value, realizable benefits (relative to costs incurred) by individuals or organizations who use the forecasts to guide their choices among alternative courses of action. (Our criteria for operational fitness correspond to the “goodness” norms described by Murphy 1993.) All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of forecasts, and they should be formally approved by CEPEC and NEPEC, as appropriate.

Operational forecasts should incorporate the results of validated short-term seismicity models that are consistent with the authoritative long-term forecasts. As recommended by the ICEF, the quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing against established long-term forecasts and a wide variety of alternative, time-dependent models. The Collaboratory for the Study of Earthquake Predictability (CSEP) has begun to establish standards and an international infrastructure for the comparative, prospective testing of short- and medium-term forecasting models. Regional experiments are now underway in California, New Zealand, Japan, and Italy, and will soon be started in China; a program for global testing has also been initiated. Continuous testing in a variety of tectonic environments will be critical in demonstrating the reliability and skill of the operational forecasts and quantifying their uncertainties. At present, seismicity-based forecasts can display order-of-magnitude differences in probability gain, depending on the methodology, and there remain substantial issues about how to assimilate the data from ongoing seismic sequences into the models.
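
A core ingredient of such testing is a likelihood comparison between a candidate forecast and a reference forecast over the same space-time-magnitude bins. The sketch below is a generic illustration of that idea, not CSEP code, and the bin values are invented; it scores each forecast by the Poisson log-likelihood of the observed counts and reports the difference as an information gain per earthquake.

    import math

    def poisson_log_likelihood(expected, observed):
        """Sum of Poisson log-likelihoods of the observed counts over the forecast bins."""
        return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
                   for lam, n in zip(expected, observed))

    # Invented expected counts per bin for a reference and a candidate forecast,
    # and the counts observed during a hypothetical testing period.
    reference = [0.10, 0.10, 0.10, 0.10]
    candidate = [0.02, 0.05, 0.30, 0.03]
    observed  = [0,    0,    1,    0]

    n_events = sum(observed)
    gain = (poisson_log_likelihood(candidate, observed) -
            poisson_log_likelihood(reference, observed)) / n_events
    print("information gain per earthquake: %.2f (positive favors the candidate)" % gain)

A positive gain per earthquake means the candidate concentrated its forecast rate where the earthquakes actually occurred; prospective testing repeats this comparison on data collected only after the competing models have been fixed.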

Earthquake forecasts possess no intrinsic value; rather, they acquire value through their ability to influence decisions made by users. A critical issue is whether decision-makers can derive significant value from short-term forecasts if their probability gain relative to long-term forecasts is high, but the absolute probability remains low. In the late 1980s, when many scientists were still optimistic about the prospects for earthquake prediction, both the City of Los Angeles and the State of California developed plans to respond to the issuance of high-probability, short-term predictions (as did NEPEC on a national scale). Much less work has been done on the benefits and costs of mitigation and preparedness actions in a low-probability environment. Forecast value is usually measured in terms of economic benefits or in terms of lives saved, but assessments of value are difficult because the measures must take into account the information available to decision-makers in the absence of the forecasts.

Moreover, most value measures do not fully represent less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. The public’s fear of earthquakes is often disproportionate to the risk earthquakes pose to life safety. Psychological studies of post-traumatic stress disorder have shown that the symptoms are increased by a lack of predictability of the trauma. Authoritative statements of increased risk, even when the absolute probability is low, provide a psychological benefit to the public in terms of an increased perception of regularity and control. The regular issuance of such statements also conditions the public to be more aware of ongoing risk and to learn how to make appropriate decisions based on the available information.

The agencies with statutory responsibilities for operational forecasting are uncertain about their audience and their message, and they have been cautious in developing new operational capabilities. But that may soon change. The USGS has proposed to establish a prototype operational earthquake forecasting activity in southern California in fiscal year 2011. If approved by Congress, the USGS will develop a formal process for issuing forecasts in response to seismic activity. This activity will include forecast research and development, testing, validation, and application assessments. Research will consider earthquake shaking as well as earthquake occurrence. The coupling of physics-based ground motion models, such as SCEC’s CyberShake simulation platform, with earthquake forecasting models offers new possibilities for developing ground motion forecasts. Scientists from the USGS and academic organizations will work with the user community and communication specialists to determine the value of the forecasting and alert procedures, and a vigorous program of public education on the utility and limitations of low-probability forecasting will be conducted.

We close with an important perspective: Although we are still learning what value can be derived from short-term forecasts, the value of long-term forecasts for ensuring seismic safety is indisputable. This was tragically illustrated by the Mw 7.0 Haiti earthquake of 12 January 2010, which currently ranks as the fifth-deadliest seismic disaster in recorded history. Though events of this magnitude were anticipated from regional geodetic measurements, buildings in the Port-au-Prince region were not designed to withstand intense seismic shaking. The mainshock struck without warning; no foreshocks or other short-term precursors have been reported. Preparing for earthquakes means being always ready for the unexpected, which is a long-term proposition.

REFERENCES

Agnew, D., and L. M. Jones (1991). Prediction probabilities from foreshocks. Journal of Geophysical Research 96, 11,959–11,971.

Bakun, W. H., K. S. Breckenridge, J. Bredehoeft, R. O. Burford, W. L. Ellsworth, M. J. S. Johnston, L. Jones, A. G. Lindh, C. Mortensen, R. J. Mueller, C. M. Poley, E. Roeloffs, S. Schulz, P. Segall, and W. Thatcher (1987). Parkfield, California, Earthquake Prediction Scenarios and Response Plans. USGS Open File Report 87-192.

Field, E. H., T. E. Dawson, K. R. Felzer, A. D. Frankel, V. Gupta, T. H. Jordan, T. Parsons, M. D. Petersen, R. S. Stein, R. J. Weldon II, and C. J. Wills (2009). Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). Bulletin of the Seismological Society of America 99, 2,053–2,107; doi:10.1785/0120080049.

Gerstenberger, M. C., L. M. Jones, and S. Wiemer (2007). Short-term aftershock probabilities: Case studies in California. Seismological Research Letters 78, 66–77; doi:10.1785/gssrl.78.1.66.

Jordan, T. H., Y.-T. Chen, P. Gasparini, R. Madariaga, I. Main, W. Marzocchi, G. Papadopoulos, G. Sobolev, K. Yamaoka, and J. Zschau (2009). Operational Earthquake Forecasting: State of Knowledge and Guidelines for Implementation. Findings and Recommendations of the International Commission on Earthquake Forecasting for Civil Protection, released by the Dipartimento della Protezione Civile, Rome, Italy, 2 October, 2009.

Michael, A. J. (2010). Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks. Bulletin of the Seismological Society of America, in press.

Murphy, A. H. (1993). What is a good forecast? An essay on the nature of goodness in weather forecasting. Weather and Forecasting 8, 281–293.

Southern San Andreas Working Group (1991). Short-Term Earthquake Hazard Assessment for the San Andreas Fault in Southern California. USGS Open File Report 91-32.

Thomas H. Jordan
Director, Southern California Earthquake Center
University of Southern California
Los Angeles, California 90089-0742 U.S.A.
tjordan [at] usc [dot] edu

Lucile M. Jones
Chief Scientist, Multi-Hazards Demonstration Project
U.S. Geological Survey
525 South Wilson Avenue
Pasadena, California 91125 U.S.A.
jones [at] usgs [dot] gov





