OPINION

November/December 2012

Characteristic Earthquake Model, 1884–2011, R.I.P.

doi: 10.1785/0220120107

A precept of science is that theories unsupported by observations and experiments must be corrected or rejected, however intuitively appealing they might be. Unfortunately, working scientists sometimes reflexively continue to use buzz phrases grounded in once-prevalent paradigms that have been subsequently refuted. This can impede both earthquake research and hazard mitigation.

Well-worn seismological buzz phrases include “earthquake cycle” (66 instances recorded in the ISI Web of Science database for the period 2009–2012), “seismic gap” (84), and “characteristic earthquake” (22). And the grand prize goes to…“seismic cycle,” with 88 hits. Each phrase carries heavy baggage of implicit assumptions. The primary assumption underlying these phrases is that there are sequences of earthquakes that are nearly identical except for the times of their occurrence. If so, the complex process of earthquake occurrence could be reduced to a description of one characteristic earthquake plus the times of the others in the sequence. Often, such a characteristic earthquake sequence is assumed to dominate the displacement on fault or plate boundary segments. This view holds that characteristic earthquakes should be the largest on a given segment and exhibit quasi-periodic recurrence; it thus has characteristic earthquakes occurring at a rate higher than that implied by the classic Gutenberg–Richter distribution.
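For reference, the Gutenberg–Richter distribution invoked throughout this piece is the familiar straight-line magnitude–frequency law

\[
\log_{10} N(\geq m) = a - b\,m ,
\]

where N(≥ m) is the number of earthquakes of magnitude at least m in a given region and period, and a and b are regional constants, with b typically near 1. The characteristic model, by contrast, posits a surplus of events near the characteristic magnitude, rising above this line.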

The problem is that the surmised properties of characteristic earthquakes were inferred by selecting examples from the past and have proven too imprecise to apply to future earthquakes. Perhaps the best-known example of a characteristic sequence is that near Parkfield, California, which was the basis for a 1985 prediction that there was a 95% probability of a repeat before 1993 (see Bakun et al., 2005 and its references). The example sequence included six events, of which only two were recorded by a California seismic network. There were many published descriptions of the hypothetical characteristic earthquake, but the only consistent features were “on the San Andreas fault,” “near Parkfield,” and “about magnitude 6.” Much attention was paid to the fact that no qualifying event occurred before 2004 (11 years after the end of the prediction window), but little was focused on the ambiguities of what was predicted. Any event with magnitude between 5.5 and 7.5 and rupture length over 20 km would arguably have satisfied at least some of the published descriptions.
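To see the kind of arithmetic behind such forecasts, here is a minimal sketch of a conditional probability computed from a generic lognormal renewal model. The mean interval and aperiodicity below are assumed round numbers for illustration, not the actual parameters or method behind the 1985 prediction:

import numpy as np
from scipy.stats import lognorm

# Assumed illustrative parameters -- NOT the actual 1985 prediction's inputs
mean_interval = 22.0   # yr, roughly the average Parkfield interval, 1857-1966
cv = 0.25              # assumed aperiodicity (coefficient of variation)

# Lognormal recurrence distribution parameterized from mean and cv
sigma = np.sqrt(np.log(1.0 + cv**2))
mu = np.log(mean_interval) - 0.5 * sigma**2
recurrence = lognorm(s=sigma, scale=np.exp(mu))

t0 = 1985 - 1966       # years already elapsed since the last event
t1 = 1993 - 1966       # end of the prediction window

# P(next event by 1993, given no event through 1985)
p = (recurrence.cdf(t1) - recurrence.cdf(t0)) / recurrence.sf(t0)
print(f"conditional probability of a repeat by 1993: {p:.2f}")

The point is not the number that comes out but how sensitively it depends on the assumed distribution and its parameters, none of which the short, heterogeneous Parkfield record could constrain.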


Jordan (2006) pointed out that a scientifically valid hypothesis must be prospectively testable. Ironically, his article made the untestable assertion that “the northern San Andreas is entering a mature stage of the Reid cycle.” Buzz phrases die hard. Retrospective analyses cannot provide a rigorous foundation for any model of earthquake occurrence including, but not limited to, the “seismic cycle.” Even the simplest spatial window, a circle, has three degrees of freedom for its characterization. The famous mathematician and physicist John von Neumann remarked that with four parameters he could “fit an elephant … ” (Dyson, 2004). Furthermore, retrospective searches of seismicity patterns can usually find seemingly significant features in completely random simulations (Shearer and Stark, 2012).
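A minimal simulation makes the Shearer and Stark point concrete. In the sketch below (all parameters invented for illustration), we generate a purely random catalog, hunt for its longest quiet interval, and compare the naive significance of that “gap” with its honest significance once the search itself is accounted for:

import numpy as np

rng = np.random.default_rng(0)
T, n = 100.0, 200                  # assumed catalog length (yr) and event count

def longest_gap(times, T):
    """Longest event-free interval, including the catalog edges."""
    t = np.sort(times)
    return np.diff(np.concatenate(([0.0], t, [T]))).max()

g = longest_gap(rng.uniform(0.0, T, n), T)

# Naive p-value: probability that one PRE-SPECIFIED window of length g
# would be empty under a Poisson process with rate n/T
p_naive = np.exp(-(n / T) * g)

# Honest p-value: how often a purely random catalog produces a gap this long
sims = [longest_gap(rng.uniform(0.0, T, n), T) for _ in range(2000)]
p_honest = np.mean(np.array(sims) >= g)

print(f"longest gap {g:.1f} yr: naive p = {p_naive:.4f}, honest p = {p_honest:.2f}")

The naive p-value looks impressively small; the honest one, by construction, hovers around one half. Retrospective pattern-hunting manufactures exactly this kind of false significance.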

The case of Parkfield shows how retrospective analysis can mislead. The presumed characteristic earthquakes were selected from several different types of catalog without a clear set of guidelines. By contrast, Figure 1 shows a magnitude distribution plot for cataloged, instrumentally measured earthquakes within the first published polygon describing where the characteristic events should occur. The recorded earthquakes, including the late-arriving 2004 magnitude-6 event, are well fit by a Gutenberg–Richter distribution, falling within its 95% confidence limits. In contrast, the characteristic earthquake model would imply a surplus of magnitude-6 events exceeding the Gutenberg–Richter upper confidence limit.
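For readers who wish to reproduce the flavor of Figure 1, the sketch below shows how Poisson 95% limits around a Gutenberg–Richter forecast can be computed. The a- and b-values are assumed placeholders, not fits to the actual Parkfield catalog:

import numpy as np
from scipy.stats import poisson

a, b, dm = 5.0, 1.0, 0.5                     # assumed illustrative G-R parameters
edges = np.arange(3.0, 6.5, dm)              # lower edges of magnitude bins

# Expected count in each bin [m, m + dm) under Gutenberg-Richter
expected = 10**(a - b * edges) - 10**(a - b * (edges + dm))

for m, lam in zip(edges, expected):
    lo, hi = poisson.interval(0.95, lam)     # 95% Poisson limits for the bin
    print(f"M {m:.1f}-{m + dm:.1f}: expect {lam:7.1f}, 95% range [{lo:.0f}, {hi:.0f}]")

An observed count falling inside these limits in every bin, as at Parkfield, is exactly what the Gutenberg–Richter hypothesis predicts; a characteristic surplus would puncture the upper limit in the magnitude-6 bin.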

The concept of seismic gaps was broached by Gilbert (1884) and Reid (1911) well before plate tectonics was proposed. Implicitly assuming that recent large earthquakes were characteristic (although he did not use that term), Fedotov (1965) postulated that segments (he did not use that term either) of plate boundaries (to use modern terminology) that had not ruptured for some time were due for large earthquakes. His hypothesis, if true, would have significantly advanced long-term forecasting.


The “seismic gap” (or the effectively equivalent “seismic cycle”) model depends entirely on the “characteristic” assumption. Gap models assume quasi-periodic behavior of something, and that something must be characteristic earthquakes. By itself, the term “characteristic” may describe spatial or size properties without implying quasi-periodicity, but it explicitly or implicitly connotes quasi-periodicity when grouped with “gap” or “cycle.” As used in the gap model, the characteristic hypothesis often brings even more baggage: characteristic events are assumed to be the largest possible on a segment.

The seismic gap model has been used to forecast large earthquakes around the Pacific Rim. However, testing of these forecasts in the 1990s and later revealed that they performed worse than did random Poisson forecasts (see Rong et al., 2003 and its references). Similarly, the characteristic earthquake model has not survived statistical testing (see Jackson and Kagan, 2011 and its references). Yet, despite these clear negative results, the characteristic earthquake and seismic gap models continue to be invoked.
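Such tests are, schematically, likelihood comparisons. Here is a toy version (all rates and counts hypothetical, not the actual procedure of Rong et al.): a gap-style forecast concentrates expected events in “overdue” zones, and we score it against a spatially uniform Poisson baseline:

import numpy as np
from scipy.stats import poisson

gap_model = np.array([2.0, 0.2, 1.5, 0.3, 2.5])       # hypothetical zone forecasts
baseline = np.full(gap_model.size, gap_model.mean())  # uniform Poisson reference
observed = np.array([0, 1, 0, 2, 1])                  # hypothetical outcomes

def loglik(counts, rates):
    """Joint Poisson log-likelihood of the observed zone counts."""
    return poisson.logpmf(counts, rates).sum()

gain = loglik(observed, gap_model) - loglik(observed, baseline)
print(f"log-likelihood gain over the Poisson baseline: {gain:.2f}")

A forecast that consistently produces a negative gain is performing worse than the uninformative baseline, which is precisely the verdict the prospective tests delivered on the gap model.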

Figure 1. Magnitude–frequency relation for earthquakes from 1967 through 2005 in the Parkfield box proposed by Michael and Jones (1998). The solid line is the best-fit Gutenberg–Richter approximation; dashed lines are the 95% confidence limits based on Poisson occurrence. The observations fall within the range expected for a Gutenberg–Richter distribution, contrary to the characteristic model, which would imply a significant surplus of magnitude-6 “characteristic” events. (Figure adapted from Jackson and Kagan, 2006.)
Some proponents of quasi-periodic characteristic earthquakes draw support from paleoseismic data, which provide radiometric dates of sediments bracketing successive earthquake ruptures at specific trench sites along a fault. In some cases, these data give additional information such as components of the displacement vector between the sides of a fault. However, it is hard to draw firm conclusions about temporal regularities from such generally imprecise and irreproducible data. Why is this so? Some earthquakes may not displace the trench sites. Sample collection and analysis are subjective. Unless significant sedimentation occurs between successive earthquakes, the successive events cannot be distinguished in the record. The data cannot determine magnitudes or other properties that could be used to differentiate characteristic earthquakes from others. Nevertheless, some measured sequences of dates appear quasi-periodic and inconsistent with Poisson behavior. Comprehensive studies (e.g., Parsons, 2008) suggest that earthquakes at some sites appear quasi-periodic, whereas others do not (e.g., Grant, 1996). Rarely discussed is the fact that some short sequences drawn from a random process would appear quasi-periodic, others Poissonian, and others clustered. Also, multiple earthquakes separated by time intervals within the error bars could be interpreted as one event, biasing the interpretation toward quasi-periodic occurrence.
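That point about short random sequences is easy to check. The sketch below (thresholds chosen arbitrarily for labeling) draws five inter-event intervals at a time from an exponential distribution, i.e., from a Poisson process with no memory at all, and tallies how often the little sequence would be labeled quasi-periodic, Poissonian, or clustered by its coefficient of variation:

import numpy as np

rng = np.random.default_rng(42)
n_events, n_trials = 6, 10_000

# Inter-event times of a memoryless (Poisson) process
intervals = rng.exponential(scale=1.0, size=(n_trials, n_events - 1))
cv = intervals.std(axis=1, ddof=1) / intervals.mean(axis=1)

# Arbitrary labeling thresholds, for illustration only
print(f"'quasi-periodic' (cv < 0.5):      {np.mean(cv < 0.5):.1%}")
print(f"'Poisson-like' (0.5 <= cv < 1.5): {np.mean((cv >= 0.5) & (cv < 1.5)):.1%}")
print(f"'clustered' (cv >= 1.5):          {np.mean(cv >= 1.5):.1%}")

None of the three bins comes out empty: even a featureless process produces short sequences that pass for quasi-periodic or for clustered.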

The unproven assumptions in the characteristic earthquake and seismic gap models are far from benign. They lead to overestimating the rates of characteristic-magnitude earthquakes, often accompanied by underestimating the maximum size and rates of larger events not envisioned in the characteristic model. Clear examples of such “uncharacteristic” earthquakes include the disastrous 2004 Sumatra and 2011 Tohoku megaquakes, each of which ripped through several previously hypothesized “segment boundaries.” These tragic failures should have been the last straw. What additional evidence could possibly be required to refute the characteristic earthquake hypothesis? Worse yet, the gap model sends a false message of relative safety: it implies that in the aftermath of a characteristic earthquake a region is immune from further large shocks, yet comprehensive studies (e.g., Kagan and Jackson, 1999) show that large earthquakes increase the probability of subsequent events at all magnitudes.

Yet, the hit counts above show how time-worn models can linger on. Their baggage of implicit assumptions often arrives undeclared and goes unquestioned. What can be done to overcome this intellectual inertia?

First, reviewers, editors, and funding agency officials must recognize that there is a problem. The buzz words discussed above, along with the words “forecast” and “prediction,” have been used quite loosely. Terms in publications and proposals should be defined clearly, evidence for assumed segmentation and characteristic behavior should be critically examined, dependence on and rules for selected data should be made explicit, and full prospective tests of unproven assumptions should be described.

Second, those responsible for hazard estimates should emphasize models that deal with the whole spectrum of earthquake occurrence, not characteristic and other models based on small subsets of arbitrarily selected data. Statistical studies based on all earthquakes within given time, space, and size limits, not just those which fit the investigator’s preconceptions, are advancing rapidly and should become our new standard. All of us in earthquake science must wake up to the problems caused by relying on selected data. Arbitrarily chosen data sets are fine for formulating hypotheses, but not for validating them.


Third, characteristic advocates and statistical hypothesis testers should collaborate to develop appropriate tests. Collaboratory for the Study of Earthquake Predictability (CSEP) centers in California, Japan, Switzerland, and New Zealand are prospectively testing hundreds of regional and global earthquake models (see http://cseptesting.org/centers/scec). As far as we know, no characteristic earthquake or seismic cycle models are included, but they could be. Nishenko (1991) and the coauthors of his earlier papers deserve enormous credit for articulating the seismic gap model in a testable form. Their specific models failed, but that is how science works. True believers should now either throw in the towel or reformulate the characteristic model for another explicit prospective test. A problem for such testing is that modern models often address limited regions in which definitive earthquakes may not occur for centuries. A model must, like Nishenko’s, be broad enough to cover a region with a combined rate of relevant events high enough that a sufficient number (usually 15–20) will occur for decisive testing within a few years.
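The quoted 15–20 figure can be motivated with a back-of-the-envelope power calculation. In the sketch below we assume, purely for illustration, that the rival model predicts twice the baseline rate and ask for 90% power in a one-sided test at 95% significance:

from scipy.stats import poisson

# Assumed setup: rival model predicts 2x the baseline rate; one-sided
# test at 95% significance; 90% power desired.
for n0 in range(1, 40):                   # expected events under the baseline
    crit = poisson.ppf(0.95, n0)          # rejection threshold
    power = poisson.sf(crit, 2 * n0)      # P(reject) if the rival is right
    if power >= 0.9:
        print(f"~{n0} expected baseline events ({2 * n0} under the rival) suffice")
        break

The count that emerges is of the same order as the 15–20 quoted above; sharper rate contrasts need fewer events, subtler ones need many more.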

In summary, the time for case studies, anecdotes, speculation, and Band-Aids for failed models has passed. We must take a rigorous look at characteristic earthquake models and their implicit assumptions. In the play Helen, Euripides wrote that “Man’s most valuable trait is a judicious sense of what not to believe.” Let us begin by scrapping ideas, however long-standing, that have been rejected by objective testing or are too vague to be testable. Otherwise, earthquake science will not deserve the name.

REFERENCES

Bakun, W. H., B. Aagaard, B. Dost, W. L. Ellsworth, J. L. Hardebeck, R. A. Harris, C. Ji, M. J. S. Johnston, J. Langbein, J. J. Lienkaemper, A. J. Michael, J. R. Murray, R. M. Nadeau, P. A. Reasenberg, M. S. Reichle, E. A. Roeloffs, A. Shakal, R. W. Simpson, and F. Waldhauser (2005). Implications for prediction and hazard assessment from the 2004 Parkfield earthquake, Nature 437, 969–974.

Dyson, F. (2004). A meeting with Enrico Fermi—How one intuitive physicist rescued a team from fruitless research, Nature 427, 297.

Fedotov, S. A. (1965). Regularities of the distribution of strong earthquakes in Kamchatka, the Kurile Islands and northeastern Japan, Tr. Inst. Fiz. Zemli Akad. Nauk SSSR 36, no. 203, 66–93 (in Russian).

Gilbert, G. K. (1884). A theory of the earthquakes of the Great Basin, with a practical application, Am. J. Sci., ser. 3, 27, no. 157, 49–54.

Grant, L. B. (1996). Uncharacteristic earthquakes on the San Andreas fault, Science 272, 826–827.

Jackson, D. D., and Y. Y. Kagan (2006). The 2004 Parkfield earthquake, the 1985 prediction, and characteristic earthquakes: Lessons for the future, Bull. Seismol. Soc. Am. 96, S397–S409.

Jackson, D. D., and Y. Y. Kagan (2011). Characteristic earthquakes and seismic gaps, in Encyclopedia of Solid Earth Geophysics, H. K. Gupta (Editor), 37–40, Springer, doi: 10.1007/978-90-481-8702-7.

Jordan, T. H. (2006). Earthquake predictability, brick by brick, Seismol. Res. Lett. 77, no. 1, 3–6.

Kagan, Y. Y., and D. D. Jackson (1999). Worldwide doublets of large shallow earthquakes, Bull. Seismol. Soc. Am. 89, 1147–1155.

Michael, A. J., and L. M. Jones (1998). Seismicity alert probabilities at Parkfield, California, revisited, Bull. Seismol. Soc. Am. 88, 117–130.

Nishenko, S. P. (1991). Circum-Pacific seismic potential—1989–1999, Pure Appl. Geophys. 135, 169–259.

Parsons, T. (2008). Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California, J. Geophys. Res. 113, no. B03302, 14, doi: 10.1029/2007JB004998.

Reid, H. F. (1911). The elastic-rebound theory of earthquakes, Univ. Calif. Publ. Bull. Dept. Geol. Sci. 6, 413–444.

Rong, Y.-F., D. D. Jackson, and Y. Y. Kagan (2003). Seismic gaps and earthquakes, J. Geophys. Res. 108, no. 2471, 14, doi: 10.1029/2002JB002334.

Shearer, P., and P. B. Stark (2012). Global risk of big earthquakes has not recently increased, Proc. Natl. Acad. Sci. 109, no. 3, 717–721, doi: 10.1073/pnas.1118525109.

Yan Y. Kagan
David D. Jackson
Department of Earth and Space Science
University of California at Los Angeles
595 Charles E. Young Dr. East
Los Angeles, California 90095-1567 U.S.A.
kagan@moho.ess.ucla.edu
david.d.jackson@ucla.edu

Robert J. Geller
Department of Earth and Planetary Science
Graduate School of Science
University of Tokyo
Hongo 7-3-1, Bunkyo-ku
Tokyo 113-0033 Japan
bob@eps.s.u-tokyo.ac.jp


To send a letter to the editor regarding this opinion or to write your own opinion, you may contact the SRL editor by sending e-mail to srled@seismosoc.org.


