January/February 2005

Success and Failure at Parkfield

I knew that I could never meet Keats's standard, even as I knew that all my "answers" might eventually be proven wrong. I took solace, once again, in Francis Bacon, who praised error, because only from wrong answers can the Truth emerge. -- Robert Kaplan, 1996

Our whole problem is to make the mistakes as fast as possible. -- John Wheeler, 1956

Now that the long-awaited M6 Parkfield earthquake (28 September 2004, 17:15:24 GMT) has occurred, a summary of what hypotheses the "Parkfield Prediction Experiment" (PPE) was designed to test is in order. This is particularly true since many years have passed since the first proposals and papers were written (Lindh, USGS Project Report 8-9930-02098, 1978; Bakun and McEvilly, Science, 1979), and because not all the hypotheses were stated with perfect clarity in the original publications.

As I understood the problem at the time, the main questions posed concerning the next Parkfield earthquake had to do with:

1. the extent of rupture;
2. the magnitude;
3. the direction of rupture;
4. the time of occurrence; and
5. the possibility of short-term precursors.

The first two refer to what is called the Characteristic Earthquake hypothesis, the fourth to the Elastic Rebound (or time-predictable) hypothesis, and the fifth to the question of short-term predictability. The details of the recent event appear to validate our assumptions about the first two and to falsify the next two, and while there were no obvious foreshocks or large-scale premonitory slip preceding the recent event, the jury is out on the fifth question until there is time to examine all the high-resolution data sets carefully. Some of the downhole strainmeters may show something in the hours prior to the main event, but the signal is very small, and very careful analysis will be required before even tentative conclusions can be reached.

In addition to these aspects of the "Prediction Experiment," beginning in 1981 specific arrays of strong-motion accelerometers and strainmeters were installed by the State of California, USGS, and others to document thoroughly the nature of the pre-, co-, and postseismic fields for purposes of inferring the nature of the failure processes and their relation to damaging levels of near-source strong ground shaking. These strong-motion arrays have provided an unprecedented set of on-scale strong-motion measurements at more than 50 stations, at distances ranging from the immediate rupture zone out to about 20 km from the surface projection of the rupture zone. The spatial density of the measurements in the near field provides an exceptionally complete set of empirical measurements to infer the distribution of seismic rupture at improved spatial resolution, and improved estimates of strong ground motion and associated uncertainty for purposes of earthquake-resistant design (see Shakal et al., this issue).

In the end, however, the lasting legacy of the PPE may well be the work with state and local officials on procedures for communicating short-term predictions and warnings to the public. Legislation was passed in the 1980's by the State of California clarifying the associated responsibilities and liability for public officials and scientists, and in the process all involved became more aware of the others' concerns and perspectives.

Extent of Rupture

The 2004 aftershocks during the first day seem to match almost exactly those of 1966 and 1934 (see Langbein et al., this issue); with respect to this measure the 2004 event looks like a near-perfect repeat. Preliminary estimates of the slip patch in the recent earthquake suggest that, as in 1966, slip was predominantly below about 4 km, resulting in only a small fraction of the 30-50 cm of slip being expressed at the surface. Total surface slip averaged about 10 cm, just as it did in 1966, and appears to have occurred almost entirely as exponentially decaying afterslip.

Lindh and Boore (BSSA, 1981), however, also argued for what might be called the "strong version" of the hypothesis. They suggested that not only do events repeat on the same segment, but that these segments are demarcated by discontinuities in the fault structure, microseismicity, geology, or aseismic creep rates and thus may be identifiable even in the absence of a well characterized historic rupture. They concluded that the 1966 rupture occurred between a small bend in the fault at the north end beneath Middle Mountain, and the prominent en-echelon offset at the south end near Gold Hill. While others have contested these conclusions, the rupture of the recent event between these same two discontinuities helps strengthen the "strong version" of the Characteristic Earthquake hypothesis. (The concept of physically controlled segment boundaries originated with Clarence Allen, Lynn Sykes, and others in the 1960's; Allen cited the en-echelon offset at Gold Hill as one such example.)


Magnitude

The magnitude of the recent event was the same as in previous events, although the uncertainties and differences between the various magnitude scales likely limit this conclusion to a precision of about 0.2 magnitude units. It would, of course, have been surprising had they differed, given the apparently identical rupture zones. It will be interesting to see, as the seismic and geodetic data are analyzed, how close the slip distributions in the 1966 and 2004 events are.

Direction of Rupture

One major difference between the recent event and the two previous ones is that rupture in 2004 initiated at the south end near Gold Hill and propagated to the north. An explicit assumption of the PPE was that rupture would again initiate at the north end, as it had in 1934 and 1966. That assumption proved false: the rupture was again unilateral, but in the opposite direction. It is interesting that even given this difference, the ends of the rupture were apparently the same, suggesting that the dynamic effects of the rupture were less important than the barriers on the fault zone (whatever form they take) in controlling the extent of rupture.

The implications of this observation are clear, and potentially important, given that there are so few repeats of well characterized earthquakes anywhere in the world. Even if you have a segment with a well characterized prior event, do not assume rupture direction is a fixed feature in computing strong-motion estimates for future events.

Time of Occurrence

Most estimates in the 1980's of the expected time for the next Parkfield event were very similar, but they varied in the uncertainty they associated with that estimate (Table 1). The original "prediction window" of 1985 to 1993 has long since closed, and the associated recurrence model (Bakun and Lindh, Science, 1985) is well falsified. Most of the other published recurrence estimates also do poorly, although the best of the rest is the physically based model of Ben-Zion et al. (JGR, 1993), which attempted to account for the long interval from 1934 to 1966 as the result of an exponentially decreasing strain rate, in response to viscoelastic effects following the great 1857 earthquake (Table 1). Their model, with progressively increasing recurrence times, is clearly strengthened by the 38-year interval from 1966 to 2004; it remains to be seen whether the geodetic data can resolve the small decrease in strain rate with time implied by their model.
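The qualitative effect such a model invokes can be illustrated with a toy calculation. This is an illustration of the idea only, not the published model, and every parameter below is made up: if the driving strain rate decays exponentially after a great earthquake, and each characteristic event requires the same increment of accumulated strain, the inter-event intervals necessarily lengthen with time.

```python
import math

# Toy illustration (not the published model): the strain rate decays
# exponentially after a great earthquake, and a characteristic event
# occurs each time a fixed strain increment has accumulated.
r0, tau = 1.0, 200.0          # initial rate (arbitrary units/yr), decay time (yr)
threshold = 20.0              # strain increment per event (arbitrary units)

def accumulated(t):
    # Integral of r0 * exp(-t'/tau) from 0 to t.
    return r0 * tau * (1.0 - math.exp(-t / tau))

def event_time(n):
    # Invert accumulated(t) = n * threshold analytically.
    return -tau * math.log(1.0 - n * threshold / (r0 * tau))

times = [event_time(n) for n in range(1, 6)]
intervals = [b - a for a, b in zip([0.0] + times, times)]
print([round(i, 1) for i in intervals])  # each interval longer than the last
```

With these arbitrary numbers the successive intervals come out roughly 21, 24, 27, 31, and 36 years, each longer than the last, which is the qualitative pattern the lengthening 1934-1966-2004 sequence suggests.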

Short-term Precursors

Another principal component of the PPE was the attempt to detect short-term precursors and provide a warning that the earthquake was about to occur. This effort was based in large part on the near-universal occurrence of premonitory slip in laboratory stick-slip experiments, although two field observations figured prominently as well: the immediate foreshocks that preceded both the 1934 and 1966 mainshocks, and reports of premonitory creep shortly before the 1966 event.

There were no obvious foreshocks of M 1 or greater in the days preceding the recent event, certainly nothing like the foreshocks in 1934 and 1966. Similarly there were no obvious premonitory creep signals, even though there were seven creepmeters operating along the rupture zone, each capable of detecting as little as 0.1 mm of slip. The failure to detect large premonitory deformation is consistent, however, with other negative observations over the last decade which imply that premonitory deformation, if it occurs at all, is small and will likely only be detected with high-precision downhole instruments. That is precisely why an array of high-precision downhole strain instruments was installed at Parkfield, and it is encouraging that there does appear to be a small premonitory signal at three or four Parkfield strainmeters. The signals are very small, however (of the order of 10 nanostrain, if real), and the jury will remain out until a careful and exhaustive analysis of the records is complete. While this possible signal prior to a M 6 is very small, if it scaled with moment, the expected signal for a M 7 earthquake would be easily observable with current downhole instrumentation, if the instruments were deployed with reasonable density as they originally were at Parkfield. Thus, first impressions notwithstanding, and thanks to the capabilities of modern high-resolution borehole strainmeters, the recent Parkfield earthquake may yet breathe life back into the search for short-term premonitory slip.

It is interesting to note that even if there were optimally sited continuous GPS instruments capable of resolving 1 mm on either side of the fault, 10 km apart, they would only be able to detect a signal larger than 10⁻⁷. The possible signal on the downhole strainmeters preceding the recent Parkfield earthquake is about 10⁻⁸, far smaller than even a GPS array could detect. Even if a signal ten times larger were to precede an M 7 earthquake, for instance, it would still not be resolvable with any confidence on a GPS array, although it would be a large, clear signal on good downhole strainmeters. Continuous GPS arrays are a powerful tool for studying strain accumulation and release, but at periods of hours to days they fall short of the resolution of good downhole arrays by about two orders of magnitude.
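The arithmetic behind this comparison is simply displacement resolution divided by baseline length, using the numbers quoted above:

```python
# Strain resolution of a GPS station pair vs. a borehole strainmeter.
# Strain is dimensionless: displacement resolution / baseline length.

gps_displacement_resolution = 1e-3   # 1 mm, in meters
gps_baseline = 10e3                  # 10 km between stations, in meters
gps_strain_resolution = gps_displacement_resolution / gps_baseline
print(f"GPS pair resolves strains above ~{gps_strain_resolution:.0e}")

strainmeter_signal = 1e-8            # possible precursory signal at Parkfield
print(f"Candidate signal: {strainmeter_signal:.0e}, "
      f"{gps_strain_resolution / strainmeter_signal:.0f}x below GPS resolution")
```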


The recent Parkfield event can also be viewed from a broader perspective. A series of probability reports and journal articles in the 1980's computed long-term probabilities for most of the San Andreas system (see references in Table 1). The approach was simple and direct, and assumed that characteristic earthquakes occurred on relatively fixed segments and were "quasiperiodic", that is, obeyed Reid's elastic rebound model approximately, with coefficients of variation (CV) of about one third. Of the segments assigned high 30-year probabilities, two have now produced the expected earthquakes: the M 6.9 Loma Prieta earthquake in 1989 and the recent Parkfield event. Equally important, none of the segments assigned low 30-year probabilities has had an event anywhere close to the expected magnitude. This is certainly not sufficient to confirm the correctness of this approach or the parameters assumed, but the models were clearly enough stated that they are in fact falsifiable; twenty years have passed, two critical earthquakes have occurred, and the overall picture has held up remarkably well.

Starting in the mid-1990's, however, a somewhat different approach was adopted, and large committees began producing consensus reports that modeled the earthquake occurrence in the San Andreas system as a much more random phenomenon, with variances so large that the results scarcely differ from Poisson estimates in space or time. These reports may have had some societal usefulness, but I would argue that as far as scientific progress on understanding the San Andreas system is concerned, they are of little help, for they are unfalsifiable. They are so broad in their estimates that almost anything that happens is consistent with their predictions; they can't be wrong, and thus the myriad of assumptions and hypotheses that went into them cannot be tested. More seriously, I would argue that in light of the large amount of valuable new information that has been collected on the San Andreas system in the last decade, they are no longer accurate enough to be used for broader societal purposes. A few examples serve to highlight the problem that has emerged.

In 1988 I estimated there was a 0.44 chance in 30 years of a M 8 earthquake on the southern San Andreas, between Cajon Pass and the Salton Sea; Sykes and Nishenko (JGR, 1984) had earlier obtained comparable numbers. These estimates were based on the absence of microseismicity along much of that segment, the clear geodetic evidence for strain accumulation, and Kerry Sieh's paleoseismic evidence from Indio Hills that there had been repeated great earthquakes in the past, but none since about 1680. But WG95 estimated the 30-year probability for the main Coachella Valley portion of the segment at 0.22, little different from their Poisson estimate of 0.17, even though we know with near certainty that there has not been a great earthquake on that segment in about 300 years. Since that time Tom Fumal and others have collected and brought together a great deal of new paleoseismic data, and it is now clear that the segment has been the source of five great earthquakes in the last 1,200 years, with the last event in the latter portion of the 17th century. They state that this segment is "near failure and could rupture in a single large magnitude earthquake in the near future" (Fumal et al., BSSA, 2002); clearly an update is called for. (It may be relevant that WG95 also determined that the Parkfield segment was characterized by earthquakes of M 6.8 and estimated their annual rate at one to three events per thousand years.)
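The Poisson estimate contrasted with above is time-independent: it depends only on the long-term rate, not on the roughly 300 years elapsed since the last event. A minimal sketch of that calculation (the mean return time here is back-calculated from the 0.17 figure quoted above, purely for illustration):

```python
import math

# Time-independent (Poisson) 30-yr probability: 1 - exp(-window / mean_return).
# It depends only on the long-term rate, not on time since the last event.
mean_return = 161.0   # yr; back-calculated from the 0.17 quoted in the text
window = 30.0         # forecast window, yr

p_poisson = 1.0 - math.exp(-window / mean_return)
print(f"Poisson 30-yr probability ~ {p_poisson:.2f}")
```

The same number results whether the last great earthquake was 10 years ago or 300, which is exactly the feature being criticized.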

Similarly, in the San Francisco Bay area (SFBA), the southern segment of the Hayward Fault last failed in 1868 in a M 6.8 event, and about 1.3 m of displacement has accumulated across that segment since then. Recently Lienkaemper et al. (BSSA, 2002) have published new data documenting four similar earthquakes on that segment in about 500 years, with an average return time of 130-140 years. WG90 (USGS Circ. 1053) estimated a 30-year probability of 0.23, but WG02 (USGS OFR 03-214) reduced this to 0.12; Lynn Sykes and I get 0.64 and 0.39, respectively, for the same segment using Lienkaemper's new data.
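For comparison, a generic quasiperiodic (renewal) calculation can be sketched as follows. This is not the method of any of the studies cited above: a Gaussian recurrence distribution and a coefficient of variation of about one third (the value the 1980's studies used) are assumed here, with the date of the last event and the mean return time taken from the text.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Generic renewal (quasiperiodic) calculation -- NOT the method of any
# study cited in the text. Assumptions: Gaussian recurrence distribution,
# CV of 1/3; mean return time and date of last event from the text.
mean, cv = 135.0, 1.0 / 3.0         # midpoint of the 130-140 yr return time
sigma = cv * mean
elapsed = 2004 - 1868               # yr since the last Hayward event
window = 30.0                       # forecast window, yr

def cdf(t):
    return norm_cdf((t - mean) / sigma)

# P(event in the next 30 yr | no event in the first `elapsed` yr)
p30 = (cdf(elapsed + window) - cdf(elapsed)) / (1.0 - cdf(elapsed))
print(f"conditional 30-yr probability ~ {p30:.2f}")
```

The point is not the particular number, which under these assumptions comes out near 0.5, but that a renewal model responds strongly to the 136 years already elapsed, whereas a Poisson model does not; the published estimates differ because they use different distributions, rate models, and uncertainty treatments.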

Looking at the entire SFBA, WG02 averaged across five different models and estimated that all seven of the major strike-slip segments had 30-year probabilities between 0.10 and 0.17 for a M 6.7 or larger event. Yet one of these segments on the San Andreas, the San Francisco Peninsula segment (SFP), has produced two major ruptures in historic time, and about 2 m of displacement have accumulated across this segment since 1906. Using very simple models, and an updated geodetic estimate of slip in 1906, Lynn Sykes and I have independently estimated the 30-year probability at 0.27 and 0.32, respectively, compared to 0.13 by WG02.

That this is more than an "academic concern", and has serious implications for society at large, is clear when we compare the estimate of 0.13 by WG02 for the SFP to the estimate of 0.10 they published for the 70-km-long northern segment of the Calaveras Fault (CN). The CN segment has a long-term slip rate of about 5 mm/yr, as contrasted with about 20 mm/yr for the SFP. All we know about the seismic history of the CN comes from one trench site, which contains possible evidence for several large events in the last 2,000 years, as contrasted with two large historic events on SFP. We have no knowledge of the time of the last large event on the CN, if one has ever occurred, yet on the SFP we have measured slip of about 2.6 m in 1906. In addition, it is unclear how much of the 5 mm/yr on the CN segment is even going into strain accumulation, given that aseismic slip at rates of 3-4 mm/yr has been observed at two sites on that segment; in contrast, the SFP segment is fully locked. It is simply not plausible that two segments which differ so radically in their history and characteristics have essentially the same 30-year probability. Society is, in my opinion, not well served by such analysis--it is simply misled. Something has gone terribly wrong in the "Working Group" process, and a careful review by an independent group is called for.

I believe that such a review would concur strongly with the opinion expressed above, that several segments are odds-on favorites for the next large California earthquake. In light of the recent successes and failures at Parkfield, a very positive and forward-looking approach to the seismic threat in California would be to designate the three segments highlighted above, with the possible addition of one or two others, as sites of focused "prediction experiments." (After all, there won't be much need for another PPE for about 20 years.)

While I understand that some in the field of seismology are afraid of the "P word", the public is not; they think it's what seismologists are working on. It is my opinion that the public would respond very positively to our highlighting some of the most serious threats to their lives and welfare, particularly if it were accompanied by a serious commitment to do everything we could to further our understanding of those segments, and maybe in the process even reduce the risk they represent.

Everyone has something to criticize in the PPE, and with the benefit of hindsight it is easy to see many things that could have been done differently. Yet almost no one would deny that by focusing a great deal of attention and instruments and some money on one short segment of the fault, a great deal has been learned, and I believe the ratio of journal articles to dollars spent has been very high. By learning from our experience at Parkfield, much better experiments could be designed today for the segments mentioned above. Moreover, given the instruments that are already in these areas, or are planned as part of the NSF-funded Plate Boundary Observatory initiative, beginnings might be possible with current budgets, badly stretched as they are. (As in most things, what is needed most in the beginning is a few good ideas, a large dose of perseverance, a little nerve, and maybe a little luck.) The impact on societal awareness of the threat posed by these segments would be enormous, I believe, and in the wake of overwhelming public support it might be that public officials and politicians would take note and augment the monies available as well.

The original impetus to do a focused experiment at Parkfield was the widely perceived need to escape the conundrum posed by the prevailing philosophy of "putting instruments everywhere", and then when an earthquake happened, ending up with too little data near the event to resolve much of anything. This problem still exists today, and while the details have changed, it is still true that dense arrays of the best instruments cannot be deployed everywhere in California. The Parkfield Experiment was made possible by the work done in the early 1980's in roughing out the probabilistic seismic landscape of the San Andreas system. The estimates made then were far from perfect, but they were sufficient to separate the "contenders" from the "pretenders", and that is all that is required to greatly increase the likelihood of "capturing" a significant earthquake. Since that time the amount of information available for estimating long-term probabilities in California has increased severalfold, and much better estimates are possible today. It is clear, to me anyway, that three or four segments are odds-on favorites to produce a large earthquake in coming decades. They are all capable of M 7 or larger events, and all are close enough to major urban areas to constitute real threats to the people and economy of California. These segments can and should be the subjects of focused experiments in the coming decades--not like Parkfield, but much, much better. Call it prediction, call it monitoring, call it anything you like; the only interesting question is whether our current best understanding of earthquakes in California will be used to design experiments that will advance our understanding even further in the decades to come.

Following the 1966 Parkfield earthquake, two seminal papers were published in the Bulletin by Tom McEvilly of UC Berkeley (McEvilly et al., 1967) and Jerry Eaton of USGS (Eaton et al., 1970). One can hardly conceive of seismology in central California in the last three decades without the contributions of these two remarkable gentlemen, neither of whom quite lived long enough to see the "next" Parkfield earthquake. I believe the enormous volume of high-quality data collected for the 2004 event, much of which is the product of their unrelenting and extremely resourceful efforts to understand the San Andreas Fault system, is the kind of memorial each would have wished for.

Allan Goddard Lindh
USGS (Emeritus)
Menlo Park, CA 94025




Posted: 23 June 2005