OPINION

May/June 2013

A Dispassionate View of Seismic-Hazard Assessment

doi: 10.1785/0220130005

PRELIMINARY REFLECTIONS

An intuitive exercise conducted by Allin Cornell in the 1960s, intended to determine a rational seismic design level for a small earth-fill dam somewhere in Turkey, has become the linchpin of performance-based engineering and many other state-of-the-art tools of earthquake engineering. It is an active area of research and development that currently supports the activities of a good many professionals in the applied engineering sciences. In these post-Fukushima days, the nuclear industry is crucially dependent on the so-called accurate determination of the seismic threat to its assets in countries with power-generating plants. The quest for the proper determination of seismic hazard using probabilistic tools has assumed surreally abstruse dimensions that are beyond the comprehension of most engineers, or at least this one.

But is the study of seismic hazard a science as we understand it? Does the output of the hazard study upon which so much depends fulfill the attributes of other products of science? On closer reflection, one becomes dismayed by how readily (or naively) some of the fundamental questions related to seismic hazard (where occasionally the ubiquitous 2,475.4-year return-period ground-motion level is expressed with astronomic exactitude) have become absorbed into the lexicon of everyday practice. These numbers, which correspond to the probabilities of predefined levels of some ground-motion parameter being surpassed during a predefined interval of time, have become the authoritative metrics of hazard, having been imported into the canonical syntax of the trade. But the provenance of the so-called 2/50 rule that corresponds to the roughly 2,500-year recurrence interval is itself a matter of pretty questionable collective discretion, so that in many cases codes allow design to be based on two-thirds of that level. In the interest of pulling a rabbit out of the hat, we turn a blind eye to the episodic nature of seismic activity, the magnitude clustering of earthquakes, and the ineradicable vagaries in ground motion that have been observed routinely by everyone in the earthquake field. There has never been a confirmation, even for short repeat periods, that 64% of all ground-motion measurements at a given location match the hazard calculated for that location. In many cases, nature has exposed the falsity of the hazard estimates, often much to the chagrin not only of those who made them, but also of those who were exposed to them.
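For readers wondering where these numbers come from, they follow from the standard Poisson assumption: a ground motion with probability p of being exceeded in t years has a return period of −t/ln(1 − p), and the chance of at least one exceedance within one full return period is 1 − exp(−1) ≈ 63%, the figure the text rounds to 64%. A minimal sketch, purely to reproduce the arithmetic:

```python
import math

def return_period(p_exceed: float, t_years: float) -> float:
    """Return period implied by a probability of exceedance p_exceed
    in a window of t_years, under the usual Poisson assumption."""
    return -t_years / math.log(1.0 - p_exceed)

# The "2/50 rule": 2% probability of exceedance in 50 years.
print(return_period(0.02, 50.0))   # ~2474.9 years, the "2,475-year" level

# The "10/50" level mentioned later in the article:
print(return_period(0.10, 50.0))   # ~474.6 years, the "475-year" level

# Probability of at least one exceedance within one full return period:
print(1.0 - math.exp(-1.0))        # ~0.632, i.e., the ~63-64% figure
```

Note that the 2/50 rule yields about 2,474.9 years rather than a round 2,500, which is part of the author's point about spurious exactitude.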

As someone who has also dabbled in seismic-hazard studies, I think I have some claim to admitting publicly my hitherto private misgivings, given the empirical refutations of my own (and other experts') elaborate calculations.

Perhaps we need to examine, in the light of scientific clarity, the very notion upon which any hazard study is based, conscious that something as vulnerable as the linguistically controversial but ever-present buzzword “exceedence”¹ may lie furtively behind the veil of complacent assurance. We are faced with the basic challenge in the methodology of science: How does one move from a few observations to universal laws? How can one validly infer a universal statement from any number of existential observations? The prediction of ground-motion characteristics over much longer periods than have been subject to empirical verification cannot instill much confidence in its use. Yet, for consistency, prediction must draw a conclusion about a future individual event from a sample of past events, not from untestable conjectures. In seismic-hazard assessment, we must often come up with numbers that define the hazard in some convenient way at a given, specific location (or at a collection of locations, which then becomes a hazard map). The approach is to generalize past experience and measurements from many different seismic environments by blending geological inferences with ground-motion-prediction instruments, in the hope that they will all unite to throw a beacon of light on a specific coordinate somewhere on the surface of the earth.

I am not surrendering to Schadenfreude by recalling that we have witnessed during the past decade or so a number of notable earthquakes that proved to be not only deadly to those who experienced them, but also fully unexpected, as it were, by the scientists and engineers who had prepared hazard maps and other graphical manifestations of the seismic threat. The existence of a very old earthen fort in the ancient city of Bam in Iran, its intact walls rising from a hill, would have been considered by many a sign that, at least during the two-millennia-long lifetime of that weak, brittle enclosure, no major earthquake had visited the place. Yet, in December 2003, visited it was. In the ensuing debate about why Bam had not been included in the highest hazard zone of the map, there was the customary earth-science hand-wringing about active faults traversing the region that had been known but not yet mapped. The same type of exculpation has been offered after other earthquakes, most recently in the aftermath of the February 2011 Canterbury earthquake and its sequel, which were attributed to the rupturing of a fault splay adjacent to the one that had gone off in September of the previous year near Darfield. No seismic-hazard study would have provided the ground accelerations of about 2g that were measured in Christchurch during the second, shallow event. The geological consensus was that these earthquakes were the result of a blind (an admirable euphemism for unknown-by-us) fault system that had ruptured. If we don't know where and how many unknown faults there are, then how well can we foretell what the ground motion will be like when, or if, they do rupture? If seismic-hazard estimations are apparently torpedoed every time an earthquake occurs, and if we are lucky to record the ground motions it causes at a meaningful number of stations, then the exercise would seem a bit too optimistic, if not outright moot. No responsible, sane seismologist subscribing to conventional wisdom would have estimated ground motions of the levels that were measured in the Central Business District of Christchurch unless, at the risk of suffering derision by his peers, he invoked return periods of many powers of ten and heaped ground-motion uncertainty figures on top of that. Yet once the measurements are made and the facts lay bare our impotence at making even rough estimates, we fall back on the inexhaustibly verbose toolkits of strong-motion seismology and geotechnical earthquake engineering to rationalize why those measurements say what they say. Hindsight is indeed 20/20.

The fundamental requirement for making a statement like "the peak acceleration of the earthquake that recurs every v years at the geographical location defined by coordinates x and y is, on average, z, expressed as a multiple of g" must be that we have at hand many measurements, spanning a much longer period of observation than v years, to underpin that claim, itself based on some sort of fundamental scientific theory. This is never the case. Thus, according to the premise of the scientific method, because no product of seismic hazard is testable (or falsifiable), it cannot be considered strictly scientific. The concept was first popularized by Karl Popper, who, in his philosophical criticism of the popular positivist view of the scientific method, concluded that a hypothesis, proposition, or theory talks about the observable only if it is falsifiable. Falsifiable is often taken loosely to mean testable. An aphorism states it loosely as, "If it's not falsifiable, then it's not scientific." But the state of being falsifiable or scientific says nothing about truth, soundness, or validity, as is shown, for example, by the unfalsifiable statements "That sunrise is beautiful" and "Moonrise will be observable tomorrow from somewhere on earth." There is no need to become immersed in the depths of Popperian scientific logic and method, in which this author is manifestly uncomfortable. I will also avoid any discussion of the nagging issue of earthquake prediction, which is beyond the scope of this article. Naïveté is an essential part of new discovery in science, but it should be treated with caution, for too much of it may lead us to tumble over the cliff of sanity.

During the last few years, the Opinion column of SRL, a favorite platform where seismologists (and occasionally engineers) like to kibitz, has served as a forum for public soul-searching and open debate about whether earthquake prediction and its tangential offspring, probabilistic seismic-hazard assessment (PSHA), in which features of the ground motion are predicted, possess the stringent attributes of what we understand the products of the scientific method to be. These contributions touch on the utility of probabilistic hazard studies and earthquake forecasts. I have listed some of them at the end of this article (Jordan, 1997; Whiteside, 1998; Lomnitz, 1999; Anderson, 2001; Stein, 2006; Heaton, 2007; Stein et al., 2009, 2011; Mulargia, 2010; Stirling, 2012; Kagan et al., 2012; Jordan, 2013). It would seem that, even among these respected scientists, unanimity of opinion is in short supply. That should be grounds for discomfort on the part of end users.

There seems to be some consensus that, whereas individual earthquakes are inherently not forecastable, seismic activity over a long period of time (measured in hundreds or thousands of years) is fairly stable, so the hazard it represents may be expressed in probabilistic format once the ground-motion-prediction component (itself the subject of heated debate about the merits of increasingly sophisticated, but ultimately statistical, models) has been incorporated. This consensus has found its expression in hazard maps, where contours of ground-motion parameters with prescribed annual probabilities of occurrence are drawn above the visual representation of the territory. Those maps repudiate the classical observationalist/inductivist form of the scientific method in favor of empirical falsification.
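For concreteness, the machinery alluded to above (a recurrence model for the sources blended with a ground-motion-prediction component) can be reduced to a Cornell-type hazard integral. The sketch below is a deliberately toy version: the single point source 20 km away, the Gutenberg-Richter a- and b-values, the one-line GMPE, and its scatter sigma are all invented for illustration and calibrated to nothing.

```python
import math

# Toy Cornell-type hazard integral for a single point source 20 km away.
# All parameter values below are invented for illustration only.

def gr_rates(mmin=5.0, mmax=8.0, a=4.0, b=1.0, dm=0.1):
    """Annual rates of magnitude bins from a truncated Gutenberg-Richter law."""
    def n(m):  # cumulative annual rate of events with magnitude >= m
        return 10 ** (a - b * m)
    mags, rates = [], []
    m = mmin
    while m < mmax - 1e-9:
        mags.append(m + dm / 2)
        rates.append(n(m) - n(m + dm))
        m += dm
    return mags, rates

def prob_exceed(pga, m, r_km=20.0, sigma=0.6):
    """P(PGA > pga | magnitude m), with a made-up GMPE:
    ln(median PGA in g) = -4.0 + 1.0*m - 1.3*ln(r); lognormal scatter sigma."""
    ln_med = -4.0 + 1.0 * m - 1.3 * math.log(r_km)
    z = (math.log(pga) - ln_med) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # standard normal survival fn

def annual_exceedance_rate(pga):
    """Sum over magnitude bins: rate of the bin times P(exceedance | m)."""
    mags, rates = gr_rates()
    return sum(rate * prob_exceed(pga, m) for m, rate in zip(mags, rates))

for pga in (0.1, 0.2, 0.4, 0.8):
    lam = annual_exceedance_rate(pga)
    print(f"PGA > {pga:.1f} g: rate {lam:.2e}/yr, return period {1/lam:,.0f} yr")
```

Real studies integrate over many sources, distances, and epistemic branches, but the structure (recurrence rates convolved with a probabilistic ground-motion model) is exactly this.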

Engineers pride themselves on being pragmatists and are typically much more at home with cutting corners than scientists are. For engineers, the principle of utility commands paramount respect: if it works, they don't question its path all that much. Treading into the dark chambers of probability theory, inference, and Bayesian reflection is considered to return little added value and is blithely ignored. Seismic hazard yields larger values of the hazard metric for lower probabilities of occurrence, just as in sizing spillways to discharge excess water from overfilled dams or in setting snow loads on roofs: the rarer the event, the greater the potential hazard it represents. Those ground-motion values are then used by other professionals, for example, by engineers deciding whether expenditures for building retrofit are economically justifiable, or by insurance risk modelers figuring out what annualized earthquake losses will be in a city so that they can arrive at defensible premiums. Hazard and risk are of regional character, so single, point-wise disconfirmations are claimed not to invalidate the basic quantification of either. Yet regional confirmations of hazard are equally illusory.
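The insurance use mentioned above amounts to integrating a vulnerability function against the hazard curve to obtain an average annual loss. The following is a hypothetical sketch; both curves (hazard_rate and damage_ratio) and all their constants are made up:

```python
# Hypothetical annualized-loss calculation of the kind insurance modelers
# perform: integrate a (made-up) vulnerability curve against the occurrence
# rates implied by a (made-up) hazard curve. Nothing here is calibrated.

def hazard_rate(pga):
    """Toy hazard curve: annual rate of exceeding pga (in g)."""
    return 1e-3 * (0.4 / pga) ** 2.5

def damage_ratio(pga):
    """Toy vulnerability: fraction of asset value lost at a given pga."""
    return min(1.0, (pga / 1.5) ** 2)

def average_annual_loss(value, pga_lo=0.05, pga_hi=3.0, n=2000):
    """AAL: sum damage_ratio times the annual rate of motions in each bin."""
    da = (pga_hi - pga_lo) / n
    aal = 0.0
    for i in range(n):
        a0, a1 = pga_lo + i * da, pga_lo + (i + 1) * da
        rate_of_events_in_bin = hazard_rate(a0) - hazard_rate(a1)
        aal += damage_ratio(0.5 * (a0 + a1)) * rate_of_events_in_bin * value
    return aal

print(average_annual_loss(1_000_000))  # toy annualized loss, currency units
```

Risk modelers would of course use event-based simulation and calibrated curves; the point here is only the structure of the calculation that turns a hazard curve into a premium.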

Seismic-hazard assessment brings together a number of different scientific disciplines, steeped in disparate traditions. It starts with geologic time spans, the culmination of which, coupled with the chaotic processes that govern the way the outermost (and highly opaque) 50-km-thick crust of the planet functions, leads to the nucleation of an earthquake that requires only seconds to transpire. The engineer needs to design and construct the built environment so that it can sustain the effects it experiences during those few seconds. Earthquake preparedness and risk mitigation are served by hazard studies. Reinsurance companies need them when they plan for fiscal contingencies crafted after their loss models. The works of man cover a wide spectrum of construction types with different levels of criticality, so discrimination must be made among the individual elements with respect to how much resistive capacity to inject into each. Logical reasoning immediately suggests that we should classify structures on the basis of the importance they represent to society, so a greater degree of strictness, as it were, should apply to those structures we judge to be more critical. As a consequence, critical structures must be able to sustain the effects of less frequent ground motions, while ordinary ones may be assigned capacities corresponding to less demanding, more frequent motions (a sketch following this passage illustrates the mechanics). Empirical evidence shows that ground motions during less frequent earthquakes are often more violent than during frequent ones. This is mirrored by many other natural phenomena: the amount of rainfall in a day, wind speed during storms, and wave height in rough seas all recur at longer intervals as their descriptive metric increases.

For a statement to be questioned using observation, it must be at least theoretically possible for it to come into conflict with observation. A key observation of falsificationism is thus that a criterion of demarcation is needed to distinguish statements that can come into conflict with observation from those that cannot. Popper chose falsifiability as the name of this criterion. The probabilistic format for describing hazard does not abrogate this fundamental requirement, because the time window that characterizes the hazard cannot elude empirical verification forever.
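Returning to the importance-class discussion above, this is the sketch promised there. In practice the classification is implemented by assigning longer return periods to more critical structures and reading the corresponding motion off the hazard curve. The class-to-return-period mapping below is hypothetical, loosely echoing common code practice, and the curve is the same toy power law as in the loss sketch:

```python
# Reading design ground motions for different "importance classes" off a
# hazard curve. The curve (rate = 1e-3 * (0.4/pga)**2.5) and the mapping of
# class to return period are both invented for illustration.

def pga_for_return_period(t_years):
    """Invert the toy power-law hazard curve for the pga with rate 1/t."""
    rate = 1.0 / t_years
    return 0.4 * (1e-3 / rate) ** (1.0 / 2.5)

for label, t in (("ordinary building", 475),
                 ("school / hospital", 2475),
                 ("critical facility", 10000)):
    print(f"{label:18s}: {t:>6} yr -> design PGA ~ {pga_for_return_period(t):.2f} g")
```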

REFUTABILITY

All scientific propositions need to be subjected to the grueling test of confirmation. Scientific knowledge is closely tied to empirical findings and always remains subject to falsification if new experimental observations are found to be incompatible with it. That is, no theory can ever be considered completely certain, because new evidence falsifying it might be discovered. What distinguishes science from all other human endeavors is that the accounts of the world that our best, mature sciences deliver are strongly supported by evidence, and this evidence gives us the strongest reason to believe them. But there is a pressing need to separate the smoke from the science. Debunking a given seismic-hazard assessment is easy when, all other parameters remaining the same, a location closer to an active fault is said to have a lower hazard metric than another location farther removed from it. This is not evolution with time but a sure giveaway that the supplier bends to the needs of the customer. It is seldom the result of new scientific borders being conquered. All too frequently, the results of seismic-hazard studies for the same site by two different hazard teams differ by an order of magnitude.

The question of whether ground motion has an upper limit is existential for seismic-hazard studies, because probability theory points to overwhelmingly large values of ground acceleration for very small probabilities. The question is more than just mystifying because, for radioactive waste repositories, such long periods must be considered: only after such spans will the harmful material have become less lethal. Hanks et al. (2005) examine the upper bound of ground motions. A statement of the kind "ground accelerations of 12g might occur" is constrained only by what we have so far measured or observed, and it is not possible to falsify the proposition that such ground acceleration will never be observed (doing so would presuppose that we can measure all ground motions yet to occur, everywhere, which is of course impossible in practice). It is for that reason that we must resort to inferential tools, such as precariously balanced rocks near faults (Anderson et al., 2005), which may help constrain ground motions over very long periods, though that too is a path not without its own stones.
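The "overwhelmingly large values for very small probabilities" can be made concrete. With the customary untruncated lognormal scatter on a ground-motion model, nothing in the mathematics caps the motion; one simply walks out along the tail. The median and sigma below are invented:

```python
import math

# How an untruncated lognormal GMPE manufactures enormous accelerations at
# tiny probabilities. Median and sigma are hypothetical, for illustration.

median_pga = 0.3   # g, made-up median motion for some scenario
sigma_ln = 0.7     # made-up lognormal standard deviation

for n_sigma in (1, 2, 3, 4, 5, 6):
    pga = median_pga * math.exp(n_sigma * sigma_ln)        # n-sigma motion
    p = 0.5 * math.erfc(n_sigma / math.sqrt(2.0))          # P(exceed | event)
    print(f"{n_sigma} sigma: {pga:6.1f} g, exceeded with prob {p:.1e} per event")
```

With these toy numbers, six sigma already exceeds the 12g of the text at a per-event probability near 10^-9, which is why the upper-bound question, and whether to truncate sigma, is debated at all.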

An important attribute of strong-motion seismology is that no ground motion ever recorded can be linked to the estimated (or claimed) return period of ground motion at its location. It is thus a one-way street: we can estimate the characteristics of yet-to-occur earthquake ground motion anywhere on the globe, but once the ground motion from an earthquake has been recorded by an instrument stationed there, we have no idea how frequent an occurrence it is for that spot, because a multiplicity of return periods would be attributed to the same earthquake from other nearby records. When we recall that, in a well-instrumented region, the same earthquake causes vastly different ground motions at similar distances, the non-uniqueness of the hazard estimation becomes even more manifest.
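The one-way street can be illustrated by running the inference backward: take a single recorded peak acceleration and ask the hazard curves of three nearby (entirely hypothetical) sites what return period it "had." The curves below are invented power laws; the answers span more than an order of magnitude, which is the non-uniqueness the paragraph describes:

```python
# Invert three made-up site hazard curves for the return period implied by
# one and the same recorded PGA. All curve constants are hypothetical.

def implied_return_period(pga, k, ref=0.4, slope=2.5):
    """Invert a power-law hazard curve: rate = k * (ref/pga)**slope."""
    return 1.0 / (k * (ref / pga) ** slope)

recorded_pga = 0.9  # g, a single hypothetical recording
for site, k in (("station A", 2e-3), ("station B", 8e-4), ("station C", 1e-4)):
    t = implied_return_period(recorded_pga, k)
    print(f"{site}: implied return period ~ {t:,.0f} yr")
```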

IN SHORT

This is neither a funeral oration for seismic-hazard assessment nor a cynical appraisal. PSHA will stay with us for a long time. It is a call for us to exercise greater temperance about what it represents so that we don't join this or that religious camp to debate how it is best done or reflect abstrusely about its interpretation. Controversies about the correctness of any set of seismic-hazard assessment exercises must be conducted with a great deal of equanimity. It is perhaps time that we confess that this flamboyant emperor does not wear invisible but otherwise ostentatiously splendid clothes: he is in fact naked. The explanatory power of probability theory is puny in revealing the workings of earthquake occurrence and therefore of the hazard it represents. Its ambit, even when bedecked with a farrago of mathematical artifices, is just too short. Seismic-hazard estimation with probabilistic tools, unlike much else in the scientific endeavor as we know it, is neither true nor false, because none of it is testable. It cannot be debunked (except in the self-evident case) or confirmed. It is just a useful tool in the interest of preparedness and societal well-being. It is also, for now, the only way to go. But that way is covered with mines. Probabilistic format or not, a single case of promises not delivered for the so-called 10/50 earthquake, causing insufferable damage to an asset of an owner with deep pockets, may spell the end of hazard assessment if that owner happens to retain a steely-eyed New York attorney who argues his case before a complaisant judge. That combination would destroy the shaky castle of performance-based engineering, including its principal pillar, PSHA.

CREDITS

I have made use of freely accessible material in preparing this article. It has enabled me to express my thoughts in a more academically correct way. Most have been extracted from Wikipedia at various times under the following headings: “Falsifiability,” “Karl Popper,” “Inductive Reasoning,” and “Scientific Method.”

The discerning reader will notice my abject emulation of the erudite literary style of Eric Hobsbawm, the historian. His books that have inspired a number of quotes and metaphors in this article are included below.   

¹ A word that the venerable Oxford English Dictionary has not yet admitted exists. Merriam-Webster does list it.

REFERENCES

Anderson, J. G. (2001). Precautionary principle: Applications to seismic hazard analysis, Seismol. Res. Lett. 72, no. 3, 319–322.

Anderson, J. G., J. N. Brune, A. Anooshehpoor, and M. D. Purvance (2005). Data needs for improved seismic hazard analysis, in Directions in Ground Motion Instrumentation, P. Gülkan and J. G. Anderson (Editors), Springer, Dordrecht, The Netherlands, 303 pp.

Hanks, T. C., N. A. Abrahamson, M. Board, D. M. Boore, J. N. Brune, and C. A. Cornell (2005). Observed ground motions, extreme ground motions and physical limits to ground motions, in Directions in Ground Motion Instrumentation, P. Gülkan and J. G. Anderson (Editors), Springer, Dordrecht, The Netherlands, 303 pp.

Heaton, T. H. (2007). Will performance-based earthquake engineering break the power law? Seismol. Res. Lett. 78, no. 2, 183–185.

Hobsbawm, E. (1987). The Age of Empire, Vintage, New York, 404 pp.

Hobsbawm, E. (1994). The Age of Extremes, Vintage, New York, 627 pp.

Jordan, T. H. (1997). Is the study of earthquakes a basic science? Seismol. Res. Lett. 68, no. 2, 259–261.

Jordan, T. H. (2013). Lessons of L’Aquila for operational earthquake forecasting, Seismol. Res. Lett. 84, no. 1, 4–7.

Kagan, Y. Y., D. D. Jackson, and R. J. Geller (2012). Characteristic earthquake model, 1884–2011, R.I.P., Seismol. Res. Lett. 83, no. 6, 951–953.

Lomnitz, C. (1999). The end of earthquake hazard, Seismol. Res. Lett. 70, no. 4, 387–388.

Mulargia, F. (2010). Extending the usefulness of seismic hazard studies, Seismol. Res. Lett. 81, no. 3, 423–424.

Stein, S. (2006). Limitations of a young science, Seismol. Res. Lett. 77, no. 3, 351–353.

Stein, S., R. J. Geller, and M. Liu (2011). Bad assumptions or bad luck: Why earthquake hazard maps need objective testing, Seismol. Res. Lett. 82, no. 5, 623–626.

Stein, S., M. Liu, E. Calais, and Q. Li (2009). Mid-continent earthquakes as a complex system, Seismol. Res. Lett. 80, no. 4, 551–553.

Stirling, M. W. (2012). Earthquake hazard maps and objective testing: The hazard mapper’s point of view, Seismol. Res. Lett. 83, no. 2, 231–232.

Whiteside, L. S. (1998). Earthquake prediction is possible, Seismol. Res. Lett. 69, no. 4, 287–288.

Polat Gülkan*
Department of Civil Engineering
Çankaya University
Ankara 06810, Turkey
polatgulkan [at] cankaya [dot] edu [dot] tr

* The author is President of the International Association for Earthquake Engineering.


To send a letter to the editor regarding this opinion or to write your own opinion, contact the SRL editor by e-mail at <srled [at] seismosoc [dot] org>.




Posted: 3 May 2013