OPINION
May/June 1996

THE CASE AGAINST HUGE EARTHQUAKES

In a recent opinion article, Jackson (SRL, Jan/Feb 1996, pp. 3-5; henceforth J96) provides stimulating arguments regarding an issue central to long-term seismic hazard assessment: the maximum magnitude, Mmax, imposed for a given fault, fault system, or region. Combined with an overall strain-accrual rate from geologic or geodetic studies and an assumed distribution of magnitudes, the choice of Mmax constrains the long-term rate of earthquakes of all magnitudes (assuming strain release to be entirely seismic) and thus significantly affects long-term probabilistic seismic hazard assessment. Moreover, because it is difficult to prove a negative (i.e., that a certain Mmax is not possible), it is unlikely that this issue will ever be settled with conclusive evidence.

However, as I will discuss, if one formalizes Jackson's case for huge earthquakes and implements it in hazard assessment, the result will be to lower the probabilistic hazard in many regions, including much of Southern California. I will argue that, given alternative models that can be considered equally (or perhaps more) plausible, this course of action may be unwise.

J96 does raise several important points regarding prior assumptions that have been used to calculate Mmax. In particular, the existence of recent earthquakes that have "run the stops" is difficult to dispute. In addition to the examples given by J96, the 1988 Armenia earthquake stands as testimony to the potential destructive power of events that rupture multiple, apparently distinct, fault segments (or systems). In fact, detailed source investigations seem to reveal that more and more earthquakes are multiple-fault ruptures. These can involve either contiguous fault segments, as in the case of the 1992 Landers earthquake, or conjugate fault rupture, as Laura Jones and I argued (BSSA, 6/95, pp. 688-704) for the M6.5 Big Bear aftershock to the Landers event.

In light of these recent examples, many earth scientists have looked at apparently disjoint fault segments, like the complex system of north- and south-dipping thrust faults that underlies the San Gabriel Mountains in Southern California, and concluded that they are no more disjoint than faults known to have ruptured together elsewhere.

The question remains, however: What Mmax is possible? Jackson and Kagan have argued for use of worldwide distributions for all tectonically similar fault systems; this is certainly a possible approach, and one not without merit. However, other lines of evidence and/or argument exist and have been made in the literature. It is instructive at this point to differentiate between two cases: (1) a well-developed individual fault such as the San Andreas, and (2) a complex regional system of faults such as the Southern California thrust system(s) (the reader might note a minor California bias in this exchange, which hopefully can be forgiven). Different lines of argument can be developed for these two cases. Addressing the latter case first, one can, for example, argue for an overall (truncated) Gutenberg-Richter (G-R) distribution of magnitudes for the region reflecting the fractal nature of a complex fault system. One can further argue that the known rate of small-to-moderate (M5-6.5) events should not be grossly misrepresentative of the long-term average. Following through this calculation, I have shown in a previous study (Science, 1/17/95, pp. 211-213) that an Mmax of approximately 7.5 for the greater Los Angeles region implies a recurrence interval on the order of 150-300 years for this magnitude and approximately 50 years for M6.5 events.
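The flavor of this kind of moment-balance calculation can be sketched in a few lines of Python. This is only an illustration, not the computation from the Science paper: the b-value, magnitude range, and moment-accrual rate below are hypothetical placeholders, and the distribution is simply normalized so that its integrated moment release matches the assumed accrual rate.

```python
import numpy as np

# Hypothetical illustrative parameters -- not the values used in the study.
b = 1.0                  # G-R b-value
M_min, M_max = 5.0, 7.5  # truncation limits of the magnitude distribution
accrual = 1.5e18         # assumed seismic moment accrual rate, N*m/yr

def moment(M):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * M + 9.05)

# Incremental truncated G-R density n(M) = C * 10^(-b*M) on [M_min, M_max].
dM = 0.001
Ms = np.arange(M_min, M_max, dM) + dM / 2.0
density = 10.0 ** (-b * Ms)

# Normalize so that the integrated moment release balances the accrual rate.
C = accrual / np.sum(density * moment(Ms) * dM)

def recurrence_interval(M):
    """Average recurrence interval (yr) of events with magnitude >= M."""
    sel = Ms >= M
    return 1.0 / np.sum(C * density[sel] * dM)

for M in (6.5, 7.4):
    print(f"M >= {M}: about one event per {recurrence_interval(M):.0f} yr")
```

The qualitative point survives any reasonable choice of parameters: once the moment budget is fixed, the assumed Mmax controls the rates of all smaller events.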

Although J96 is correct that extrapolation of fault area-magnitude results to higher magnitudes is somewhat tenuous, M7.5 is essentially the magnitude predicted if one postulates simultaneous rupture of the longest contiguous fault systems presented in Dolan's (Science, 1/17/95, pp. 199-205) evaluation of the geologic slip rate data for the greater Los Angeles region. Without grossly discounting existing regression results, arguments can perhaps be made to increase Mmax to 7.7-7.8. Interestingly, one can pose the following question: If one assumes a truncated G-R distribution and that the historic rate of M6-6.9 events in the greater LA region is precisely representative, what Mmax is implied? The answer turns out to be approximately M7.8.
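The implied-Mmax question can be posed numerically in the same framework: fix the moment accrual rate and the observed rate of M6.0-6.9 events, and solve for the Mmax at which the moment-balanced truncated G-R distribution reproduces that rate. The sketch below uses simple bisection; as before, all numerical values are hypothetical stand-ins rather than the actual inputs behind the M7.8 figure.

```python
import numpy as np

b = 1.0            # hypothetical b-value
M_min = 5.0
accrual = 1.5e18   # assumed moment accrual rate, N*m/yr (hypothetical)
rate_obs = 0.05    # assumed historic rate of M6.0-6.9 events, per yr

def moment(M):
    return 10.0 ** (1.5 * M + 9.05)

def predicted_rate(M_max, dM=0.001):
    """Rate of 6.0 <= M <= 6.9 events for a moment-balanced truncated G-R."""
    Ms = np.arange(M_min, M_max, dM) + dM / 2.0
    density = 10.0 ** (-b * Ms)
    C = accrual / np.sum(density * moment(Ms) * dM)
    sel = (Ms >= 6.0) & (Ms <= 6.9)
    return np.sum(C * density[sel] * dM)

# Bisect on M_max: raising M_max diverts more of the moment budget into
# rare large events, lowering the predicted rate of moderate ones.
lo, hi = 7.0, 9.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if predicted_rate(mid) > rate_obs else (lo, mid)
print(f"implied M_max ~ {0.5 * (lo + hi):.2f}")
```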

However, magnitudes larger than M7.8 become increasingly improbable given almost any assessment of known fault system length and any demand that the area-moment relationship bear some semblance to existing regression results. One then may ask, is the example of the 1957 Gobi-Altai earthquake relevant? Although some parallel can be found between the two tectonic settings, a disparity in the sheer size of the fault systems must be noted. This observation is at the heart of reservations regarding the use of global earthquake distribution observations (for a given type of faulting): Just as all evidence suggests that some subduction zones rupture in characteristically smaller events than others (such as coastal Mexico compared to Chile), is it unreasonable to assume that some continental thrust systems will give rise to smaller Mmax than others? In the case of the greater Los Angeles region, two scenarios must at least be considered equally plausible: (1) the "huge earthquake" scenario presented by J96, or (2) the "semihuge earthquake" scenario discussed above.

The difference between these two scenarios is not without significant consequence for seismic hazard assessment, as mentioned by J96. Given the semihuge earthquake scenario for the greater L.A. region, events with magnitudes near M7.5 are expected on the order of every few hundred years. In typical probabilistic assessments, which may examine shaking levels expected at 10% probability of exceedance over 50 years (equivalent to a return period of roughly 475 years), the Mmax event is expected and will likely provide the dominant control on predicted peak ground motions. However, allowing an Mmax of M8+, the recurrence interval of the Mmax event becomes several thousand years, and the rate of M7-7.5 events will likely be lowered to well under one in 500 years. The use of Mmax = 8+ can therefore substantially lower predicted ground motions over the types of return periods typically considered in hazard assessment.
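The consequence for a fixed exposure window reduces to a one-line Poisson calculation. In the sketch below the recurrence intervals simply paraphrase the order-of-magnitude scenario numbers above and should not be read as preferred values.

```python
from math import exp

def p_at_least_one(recurrence_yr, window_yr):
    """Poisson probability of at least one event in the window."""
    return 1.0 - exp(-window_yr / recurrence_yr)

# Recurrence intervals paraphrasing the two scenarios (order of magnitude).
for label, recurrence in [("semihuge: M~7.5 every ~250 yr", 250.0),
                          ("huge: M8+ every ~3000 yr", 3000.0)]:
    print(f"{label}: P(>=1 event in 500 yr) = "
          f"{p_at_least_one(recurrence, 500.0):.2f}")
```

With these placeholder numbers the Mmax-class event is all but expected (P ~ 0.86) in 500 years under the semihuge scenario but quite unlikely (P ~ 0.15) under the huge scenario.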

Returning to the case of a single well-developed fault, the same effect can be seen. In a recent study by Ward (BSSA, 10/94, pp. 1,293-1,309), events larger than M8 are allowed, with very long repeat times, for various segments of the San Andreas Fault. For example, the Carrizo segment is allowed to produce M8.0 events with a ~7,000-yr recurrence interval. The recurrence interval of M7.5 ("characteristic") events for this segment is then 684 yrs. The consequences of these assumptions are easily seen: Peak predicted ground motion along much of the San Andreas (again at the 10% chance of exceedance in 50 years level) is only on the order of 0.4-0.5g, because the odds of seeing either an M7.5 event or the Mmax event in a given 500-year window are substantially reduced.
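Plugging Ward's quoted Carrizo intervals into the same Poisson bookkeeping makes those odds explicit (a sketch; the exact probabilities would depend on the renewal model actually used):

```python
from math import exp

window = 500.0  # yr, the nominal hazard window

p_char = 1.0 - exp(-window / 684.0)    # M7.5 "characteristic" events
p_max = 1.0 - exp(-window / 7000.0)    # M8.0 Mmax events
p_either = 1.0 - exp(-window * (1.0 / 684.0 + 1.0 / 7000.0))

print(f"P(M7.5) = {p_char:.2f}, P(M8.0) = {p_max:.2f}, "
      f"P(either) = {p_either:.2f}")
```

Under these assumptions the chance of seeing either event in 500 years is only about 55%, versus above 80% for characteristic recurrence intervals in the 150-300 yr range.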

This leads to the other line of evidence that must be considered in the assessment of individual known faults: paleoseismic estimates of repeat time. Although the issue is not settled definitively even for the San Andreas, considerable evidence suggests a repeat time for M~7.5 events on the order of 150-300 years. Thus, most assessments of the peak exceedance acceleration expected along the San Andreas over a 500-year return period would be commensurate with near-field ground motions for an M7.5 event, or closer to twice the 0.4g result obtained by Ward.

Although the San Andreas is arguably the best-studied fault in the world, Wesnousky and his colleagues (e.g., BSSA, 12/94, pp. 1,940-1,959) compiled available geologic/paleoseismic information for a number of individual faults in Southern California and compared the inferred rate of the largest events with the current background rate of small earthquakes. They find that the inferred rates of the Mmax events are systematically higher than extrapolations of current b-value curves would predict, consistent with the characteristic, or semicharacteristic, earthquake scenario. Of the faults studied by Wesnousky, the one that comes the closest to having a pure G-R distribution is the San Jacinto, a relatively immature and poorly developed fault compared to the San Andreas. It is thus argued that, as fault systems develop and "mature," they will be increasingly well described by the characteristic earthquake model. Similar effects can be seen, in fact, in computer simulations of faulting, such as those by Ben-Zion and Rice (e.g., JGR, 7/95, pp. 12,959-12,983): Faults that can be modeled as relatively heterogeneous, in terms of some estimate of strength properties (perhaps a rough equivalent to geometric complexity), produce magnitude statistics closer to G-R, while more uniform fault segments tend to produce characteristic ruptures.
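The kind of comparison Wesnousky performed can be sketched in a few lines; the a- and b-values and the paleoseismic interval below are hypothetical, chosen only to illustrate what a characteristic-type discrepancy looks like.

```python
# Hypothetical G-R fit to a single fault's background seismicity:
# log10 N(>= M) = a - b * M, in events per year.
a, b = 2.5, 1.0
M_char = 7.5  # magnitude of the fault's largest ("characteristic") event

extrapolated_interval = 1.0 / 10.0 ** (a - b * M_char)  # yr
paleoseismic_interval = 300.0                           # yr (hypothetical)

print(f"b-value extrapolation: one M>={M_char} per {extrapolated_interval:.0f} yr")
print(f"paleoseismic estimate: one per {paleoseismic_interval:.0f} yr")

# A paleoseismic rate far above the extrapolated rate is the signature of
# characteristic (or semicharacteristic) behavior on an individual fault.
```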

A final point concerns the "earthquake deficit" phenomenon that J96 refers to: that earthquake catalogs do not appear to keep pace with strain accrual rates in general (thus lending support to the existence of very infrequent, very large events). However, this "accounting" of the "earthquake budget" is of course extremely difficult, given fundamental uncertainties in strain rate, seismogenic thickness, fault geometries, etc. (not to mention the possibility of some aseismic strain release). It is true that I and others (e.g., Hauksson, in Eng. Geol. Practice in S. Ca., 1993) have argued that the historic rate of earthquakes in the greater Los Angeles region is lower than the long-term rate expected based on available geologic and geodetic results. In fact, simple Coulomb failure calculations show that a relative quiescence in the greater L.A. region during the historic period is consistent with the static stress changes caused by the great 1857 Ft. Tejon earthquake. However, our ability to make this sort of calculation must be considered to be in its infancy, even for a region as well studied as Southern California.
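For reference, the "simple Coulomb failure calculation" invoked here reduces, in its most basic static form, to a single expression. The sketch below is generic rather than a reproduction of any published calculation; the sign convention, the effective friction value, and the input stress changes are all assumptions.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault.

    d_shear:  shear stress change resolved in the slip direction
              (positive promotes failure)
    d_normal: normal stress change (positive = unclamping, in this convention)
    mu_eff:   effective friction coefficient (0.4 is a common but
              assumption-laden choice)
    """
    return d_shear + mu_eff * d_normal

# A negative change (a "stress shadow") on receiver faults is consistent
# with relative quiescence after a large nearby event; inputs in MPa are
# hypothetical.
print(coulomb_stress_change(d_shear=-0.1, d_normal=-0.05))
```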

Returning to the central issue of whether to incorporate very infrequent, "huge" earthquakes in current seismic hazard evaluations: it seems imprudent to adopt a model that can drastically lower probabilistic ground motion predictions when plausible, mathematically and geologically consistent alternative models exist.

S. E. Hough
United States Geological Survey
Pasadena, CA 91106, USA


To send a letter to the editor regarding this opinion or to write your own opinion, contact Editor John Ebel by email or telephone him at (617) 552-8300.
