OPINION
January/February 1996

THE CASE FOR HUGE EARTHQUAKES

We generally assume that the limiting magnitudes of future earthquakes can be inferred from the length of mapped faults or fault segments. That assumption is based on empirical observations relating earthquake magnitude to fault length or fault area. It conflicts, however, with seismological observations and with other reasonable assumptions, namely the conservation of seismic moment and the stationarity of earthquake occurrence. In many regions, if earthquake magnitudes are limited by fault length, the rate of earthquakes required to explain the observed seismic moment rate exceeds the historic rate.

The rare occurrence of huge earthquakes (moment magnitude of 8 or greater) could resolve the conflict by providing the moment rate without the need for an excessive rate of "ordinary" large earthquakes. One magnitude 8 earthquake releases the seismic moment of about 30 magnitude 7's. On average, the huge earthquake may even do less damage than the many large ones, although the damage might be qualitatively different. The frequency of very large earthquakes is one of the most important unknowns in earthquake hazard studies: It largely controls the rate of moderate to large earthquakes, and it determines the probability of really severe damage.
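
The factor of about 30 follows directly from the standard Hanks-Kanamori moment-magnitude relation; as a quick check (a textbook calculation, not one taken from this article):

$$ M_0 = 10^{1.5 M_w + 9.05}\ \mathrm{N\,m}, \qquad \frac{M_0(M_w = 8)}{M_0(M_w = 7)} = 10^{1.5} \approx 32. $$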

The assumption that seismic moment is conserved is widely accepted, at least implicitly. Many studies employ the measured slip rate and mean earthquake displacement to estimate a mean recurrence time. Such calculations assume that faults are segmented and that the displacements of successive earthquakes are about the same. Because the slip is assumed to represent the average over the fault segment's area, this model implies conservation of seismic moment. In the "time-predictable" method, the time until the next earthquake is estimated by matching the displacement in the last earthquake against the accumulated fault slip. Again, this implicitly assumes conservation of seismic moment. A much less demanding assumption is that over a very long period of time, over the entire length of a fault, and over the elastic thickness of that fault, earthquakes will produce a nearly uniform slip equal to the fault slip rate times the relevant time interval. The principle of conservation of seismic moment is assumed implicitly in the 1988 and 1990 reports of the Working Group on California Earthquake Probabilities, and it is assumed explicitly in the 1995 report of the same group.
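
As a concrete illustration of this bookkeeping, the short sketch below computes a moment accumulation rate and a mean recurrence time for a single fault segment. The rigidity, fault dimensions, slip rate, and characteristic displacement are all assumed values chosen only for illustration, not figures from the article.

    # Moment-balance bookkeeping for a single fault segment.
    # All numbers below are illustrative assumptions, not data from the article.

    MU = 3.0e10          # shear modulus (Pa), a conventional crustal value

    def moment_rate(length_m, seismogenic_depth_m, slip_rate_m_per_yr):
        """Tectonic moment accumulation rate (N*m/yr) = mu * area * slip rate."""
        return MU * length_m * seismogenic_depth_m * slip_rate_m_per_yr

    def recurrence_time(mean_slip_m, slip_rate_m_per_yr):
        """Mean recurrence time (yr) if each event repeats the same mean slip."""
        return mean_slip_m / slip_rate_m_per_yr

    # Example: a 100 km segment, 15 km seismogenic depth, 25 mm/yr slip rate,
    # releasing its slip in characteristic events with 4 m of mean displacement.
    mdot = moment_rate(100e3, 15e3, 0.025)      # ~1.1e18 N*m/yr
    t_rec = recurrence_time(4.0, 0.025)         # 160 yr
    print(f"moment rate ~ {mdot:.2e} N*m/yr, recurrence ~ {t_rec:.0f} yr")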

The idea that future earthquake rates can be estimated well by past rates is also widely accepted, although it seems a weaker assumption than moment conservation. Many influential hazard studies, including the 1982 report by Algermissen and others, assume stationarity. Of course, the observed earthquake rate will vary from one time interval to another, but such variations are expected for a random process. Thus variations in the observed seismicity do not necessarily imply that there is a change in the underlying rate process causing the earthquakes.
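
To illustrate how large such chance variations can be, the sketch below simulates counts from a strictly stationary Poisson-like process; the rate and the number of observation windows are arbitrary assumptions.

    import random

    # A stationary process with a fixed long-term rate still produces
    # noticeably different counts in successive observation windows.
    # The rate of 5 events per window is an arbitrary illustrative choice.
    random.seed(1)
    rate = 5.0
    counts = [sum(random.random() < rate / 10000 for _ in range(10000))
              for _ in range(10)]
    print(counts)   # counts scatter around 5 even though the rate never changed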

The notion that earthquake magnitudes are limited by the length of faults or fault segments comes from empirical studies showing that surface rupture length and the length of aftershock zones generally increase with increasing magnitude. Regression studies quantify this relationship, providing least squares, maximum likelihood, or other optimal relationships between magnitude, fault length, and fault area. These relationships can be inverted to estimate the magnitude of future earthquakes from the length of mapped faults or fault segments, assuming that future earthquakes will be limited to individual segments.
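
Relations of this kind have the generic form M = a + b log10(L). The sketch below shows the forward and inverted use of such a relation; the coefficients are placeholders chosen only for illustration, not values from the article or from any particular published regression.

    import math

    # Generic magnitude-length regression of the form M = a + b*log10(L_km).
    # The coefficients a and b here are illustrative placeholders.
    A, B = 5.0, 1.2

    def magnitude_from_length(length_km):
        """Forward relation: expected magnitude for a given rupture length."""
        return A + B * math.log10(length_km)

    def length_from_magnitude(mag):
        """Inverted relation: rupture length implied by a given magnitude."""
        return 10 ** ((mag - A) / B)

    # A mapped 40 km segment would then be assigned roughly M 6.9,
    # while an M 8 event would require a rupture of several hundred km.
    print(round(magnitude_from_length(40.0), 1), round(length_from_magnitude(8.0)))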

When we put these three assumptions together, we generally find a conflict between the observed rate of moderate to large earthquakes and the moment rate estimated from plate tectonics or fault slip rates. Thus, one of the three assumptions must be wrong, or else the earthquake or slip rate data must be grossly erroneous. The conflict is most apparent when the characteristic earthquake assumption is used to estimate earthquake frequency on separate faults. The characteristic earthquake assumption, as used here, is that on a given fault segment the slip, or seismic moment, will be released from time to time in earthquakes of about the same size. Then one can easily calculate the frequency of such earthquakes as the ratio of the fault slip rate to the displacement of a characteristic earthquake. In several studies that I have done myself and with my colleague Yan Kagan at UCLA, the result has been that the sum of the rates calculated as above exceeds the observed rate of earthquakes near the characteristic magnitudes by a factor of two to five. Examples include studies of the Pacific rim, of all of California, and of Southern California, each over different time periods depending on the adequacy of the available earthquake catalogs. A study by Dolan and others in 1995, which covered the central Transverse Ranges near Los Angeles, came to similar conclusions: the predicted earthquake rate far exceeds the observed rate if the 1994 Northridge earthquake is taken as an example of a characteristic earthquake.
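
A minimal sketch of that comparison is given below; every slip rate, characteristic displacement, and observed rate in it is an invented illustrative value, not data from the studies cited.

    # Predicted rate of characteristic earthquakes on each segment is
    # slip rate / characteristic displacement; summing over segments gives a
    # regional rate that can be compared with the catalog.  All values below
    # are invented for illustration only.

    segments = [
        # (slip rate in m/yr, characteristic displacement in m)
        (0.025, 4.0),
        (0.010, 2.5),
        (0.005, 1.5),
    ]

    predicted_rate = sum(slip / disp for slip, disp in segments)   # events/yr
    observed_rate = 0.005                                          # events/yr (assumed)
    print(f"predicted/observed = {predicted_rate / observed_rate:.1f}")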

The California studies could all be explained by temporal variations in earthquake rate, that is, by abandoning the stationarity assumption. In fact the term "earthquake deficit" has become part of the seismological lexicon to describe the observation that earthquakes haven't been keeping up with the predicted rate. While it is possible that temporal rate variations could explain the apparent deficit in each of the cases mentioned above, it would require a dramatic coincidence, especially in the Pacific rim example. Perhaps the entire state of California has been relatively quiescent during the last century and a half, but is it reasonable to assume that the whole Pacific margin has been resting?

Specific details of the predictions in the above cases might have introduced errors. For example, the characteristic magnitudes may have been underestimated, causing their frequencies to be overestimated. Perhaps the slip rates on important faults were overestimated. While these effects may explain the difference between observed and estimated rates on a few faults, it is hard to see how they could explain the excess prediction over large areas. It could also be true that the characteristic earthquake model has been taken too literally, and that earthquakes of various sizes contribute to the moment rate. Many studies use a truncated version of the Gutenberg-Richter magnitude distribution rather than the characteristic earthquake model. Some truncate the cumulative distribution, and some the density. These models give rates similar to those of the characteristic model, at the characteristic magnitude, if the truncation magnitude is near the characteristic magnitude. Seismic moment increases rapidly with magnitude, so small to moderate earthquakes do not contribute much moment. Stated another way, including them in a recurrence model does not change the predicted rates much; it is the very large earthquakes that matter.
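
The point about moment is easy to quantify: summing the moment carried by each magnitude bin of a Gutenberg-Richter population (with an assumed b-value of 1 and illustrative counts) shows the largest events dominating the budget.

    # Moment carried by each magnitude bin of a Gutenberg-Richter population
    # with b = 1 (each unit drop in magnitude has 10x more events but
    # ~32x less moment per event).  Counts are illustrative assumptions.
    B_VALUE = 1.0

    def moment_nm(mw):
        """Hanks-Kanamori: seismic moment in N*m for moment magnitude mw."""
        return 10 ** (1.5 * mw + 9.05)

    mags = [5, 6, 7, 8]
    counts = [10 ** (B_VALUE * (8 - m)) for m in mags]   # 1000, 100, 10, 1 events
    moments = [n * moment_nm(m) for n, m in zip(counts, mags)]
    total = sum(moments)
    for m, frac in zip(mags, (x / total for x in moments)):
        print(f"M{m}: {frac:.1%} of total moment")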

Conservation of seismic moment is assumed over only the "seismogenic" part of the lithosphere, below which moment is released by aseismic means. If we have overestimated that depth, then we have also overestimated the rate of moment accumulation, and probably the size of characteristic earthquakes as well. If so, then we could correct the discrepancy by using a smaller elastic thickness to estimate the earthquake rate. This explanation does not agree well with the characteristic model, because on most fault segments earthquakes have already been observed with magnitudes comparable to those estimated from the length, displacement, and elastic thickness traditionally assumed. Perhaps some fraction of the slip over the entire seismogenic fault surface is released by aseismic slip. In that case we should see creep at the surface of most faults. While there may be much more creep occurring than we can detect, especially if faults are geometrically diffuse, geodetic studies put strict limits on the creep rate for some important faults. It seems unlikely that pervasive creep could be releasing much seismic moment.

The most likely possibility is that earthquakes are not limited by the length of mapped faults or fault segments. In this explanation, seismic moment is conserved, but it is conserved primarily by very rare huge earthquakes. Note that these monsters must occur not just on long faults like the San Andreas, but also on the more limited faults. Otherwise, these limited faults would collectively have more moderate earthquakes than appear in the observed catalog. Under this hypothesis, the observed earthquake catalogs could represent well the rate of earthquakes up to the magnitude of adequate sampling (say, the magnitude such that five or more larger earthquakes have occurred). For most regions, the catalog rates agree well with the Gutenberg-Richter relationship. If the Gutenberg-Richter relationship is extrapolated to events larger than any yet observed, it can also explain the geologically observed moment rate. A consequence of this reasoning is that the maximum magnitude needed to explain the moment rate is very large, usually in the range from 8 to 9. The actual value depends on the assumed magnitude distribution, but whatever it is, huge earthquakes are required, with recurrence times on the order of thousands of years.
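
One way to see where a maximum magnitude of 8 to 9 comes from is to balance the moment rate of a truncated Gutenberg-Richter population against a prescribed tectonic moment rate and solve for the truncation magnitude. In the sketch below the b-value, activity rate, and tectonic moment rate are assumptions chosen for illustration, not the article's data.

    import math

    # Solve for the maximum magnitude Mmax at which a Gutenberg-Richter
    # population (cumulative rate N(>=m) = 10**(a - b*m), truncated at Mmax)
    # releases a prescribed tectonic moment rate.  All inputs are illustrative.
    B = 1.0            # assumed b-value
    C, D = 1.5, 9.05   # Hanks-Kanamori constants: log10(M0) = C*m + D (N*m)

    def gr_moment_rate(a, b, mmax):
        """Total moment rate (N*m/yr) of the truncated G-R distribution."""
        return (b / (C - b)) * 10 ** (a + D + (C - b) * mmax)

    def mmax_for_moment_rate(target, a, b):
        """Invert the expression above for Mmax."""
        return (math.log10(target * (C - b) / b) - a - D) / (C - b)

    a = math.log10(2.0) + 5 * B      # assumed: two M>=5 events per year
    target = 1.0e19                  # assumed tectonic moment rate, N*m/yr
    mmax = mmax_for_moment_rate(target, a, B)
    assert abs(gr_moment_rate(a, B, mmax) - target) / target < 1e-9
    print(f"Mmax ~ {mmax:.1f}")      # ~8-9 for these assumed inputs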

One objection comes quickly to mind. The empirical relations relating earthquake size to fault length suggest that a magnitude 8 earthquake must have a fault length of 500 km or more, and an average displacement near 10 m. "Where could you put such large earthquakes?" is one phrasing of a persistent question. Most of the mapped fault segments are far shorter than 500 km. However, the argument that earthquake sizes should be limited to those implied by regression relationships is tenuous, at best. The regression relationships give average magnitude for a given length (or vice versa), rather than the extremes that may be more relevant in dealing with enormous events. The regression relationships are supported by very few data for large magnitudes, precisely because these are so rare. The most important qualification is that the regression relationships are based on fault lengths determined after the events to which they are associated. These lengths may be substantially longer, especially for giant earthquakes, than those that would have been assigned before the earthquakes. Recent history is replete with examples of earthquakes that did not obey the "stop signs" at the ends of mapped fault segments. In California, the 1952 Kern County earthquake, with moment magnitude about 7.5, occurred on a previously obscure fault that would not, using present methods, have been expected to produce such a large earthquake. The 1992 Landers earthquake connected several faults previously mapped as separate, and it was larger than would have been predicted for any of them. The 1957 Gobi-Altai earthquake, with magnitude over 8, apparently ruptured a combination of strike-slip and thrust faults as far as 30 km apart. These examples show that earthquakes do not necessarily confine themselves to existing, linearly connected fault segments. And certainly earthquakes must be able to extend existing faults, and to create new ones, as the faults that we see today must have been created and extended in the past.
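
As a rough consistency check of the 500 km and 10 m figures quoted above (taking a conventional rigidity of 3 x 10^10 Pa and an assumed seismogenic width of 15 km, neither of which appears in the article):

$$ M_0 = \mu L W \bar{D} \approx (3\times10^{10}\,\mathrm{Pa})(500\,\mathrm{km})(15\,\mathrm{km})(10\,\mathrm{m}) \approx 2.3\times10^{21}\ \mathrm{N\,m}, \qquad M_w = \tfrac{2}{3}\left(\log_{10}M_0 - 9.05\right) \approx 8.2. $$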

Massive earthquakes have occurred on the San Andreas, where they might be expected, but also in the New Madrid area, where fault zone segmentation would not suggest them. If huge earthquakes can occur there, why couldn't they occur in any seismic area? If they can, that possibility obviously has profound consequences for seismic hazard estimation and for scientific models relating earthquakes to plate tectonics and regional deformation. The possible damage from a magnitude 8 earthquake in an urban area like Los Angeles, San Francisco, Seattle, Salt Lake City, Denver, or other areas would be unprecedented, and it would probably cause failures of physical and cultural systems that we hardly even know we rely on. Dollar losses could be in the trillions. Yet ironically, the possibility of huge earthquakes may be good news, at least for some. Massive earthquakes must be extremely rare, and their destructive potential must be viewed in light of the very small rate at which they occur. Furthermore, they may preclude many more moderate-to-large events.

David D. Jackson, Professor of Geophysics
Southern California Earthquake Center
Department of Earth & Space Sciences
UCLA, Los Angeles, CA 90095-1567
Telephone (310) 825-0421 Fax (310) 825-2779


To send a letter to the editor regarding this opinion or to write your own opinion, contact Editor John Ebel by email or telephone him at (617) 552-8300.
