OPINION

March/April 2007

Will Performance-based Earthquake Engineering Break the Power Law?

It seems that the entire community of earthquake professionals was stunned by the number of fatalities (approximately 300,000 dead or missing and presumed dead) in the 2004 Sumatran–Andaman earthquake and tsunami. It took us by surprise and seemed so out of proportion with anything that occurred in the decades prior. It was a rare confluence of circumstances that led to such massive loss. If, through our earthquake studies, we had been able to prevent just 5% of those deaths, then we would have saved more lives than have been lost in all other tsunamis for many decades. One clear lesson stands out from this tragedy: We must do a better job on tsunami hazard mitigation efforts for very large earthquakes (M > 9). While these events are rare, they account for most of the total hazard.

But what about earthquake hazards in general? Will most of our losses come from only a few of the deadliest earthquakes? I did a global search of 20th-century earthquake deaths as listed in the Significant Earthquake Database maintained by the National Geophysical Data Center and discovered that the seven deadliest events accounted for half of the 2.7 million total deaths. While the earthquakes with the largest magnitudes (1960 Chile M 9.5; 1964 Alaska M 9.2) were not the deadliest, all of the deadliest were M > 7.5. It is clear, however, that most earthquake deaths in the 20th century resulted from the most infrequent events. In fact, the number of events N_E with deaths exceeding N_D is given by N_E ≈ 8 × 10^4 N_D^−0.86 for the collection of 500 earthquakes with more than 1,000 deaths apiece. This can be restated as a kind of Gutenberg-Richter law, log N_E ≈ a − b log N_D, where the a-value is 4.9 and the b-value is 0.86. This raises an interesting question: If we could run the same experiment for present-day California, what would we get? Would it be a power law, and what would the a- and b-values be? There are clear differences between this hypothetical experiment and the global 20th-century data; California earthquakes are a small fraction of the global total, and California buildings are clearly more earthquake resistant than the buildings whose collapses caused most of the 20th-century earthquake deaths. Still, it is more than an academic question to ask whether our greatest threat comes from the collection of Northridge- and Loma Prieta-sized events, from a repeat of the 1906 earthquake, or from something even worse, such as a 1952 Tehachapi-sized earthquake centered in the Los Angeles Basin.
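
As a rough illustration of how such a- and b-values can be extracted, here is a minimal Python sketch that fits a straight line to the exceedance counts in log-log space. The death tolls in the list are hypothetical placeholders for illustration, not the NGDC catalog values.

```python
# Minimal sketch: estimating the a- and b-values of a frequency-loss power law,
# log10(N_E) ~ a - b*log10(N_D), from a list of per-event death tolls.
# The death tolls below are hypothetical placeholders, not the NGDC catalog.
import numpy as np

deaths = np.array([1200, 1800, 2500, 4000, 6600, 11000, 20000,
                   33000, 70000, 110000, 230000], dtype=float)

# Exceedance counts: N_E(N_D) = number of events with deaths >= N_D
sorted_deaths = np.sort(deaths)[::-1]
N_E = np.arange(1, len(sorted_deaths) + 1)

# Least-squares fit of a straight line in log-log space
slope, a = np.polyfit(np.log10(sorted_deaths), np.log10(N_E), 1)
print(f"a-value ~ {a:.2f}, b-value ~ {-slope:.2f}")
```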

John Doyle, a colleague of mine in engineering here at Caltech, has studied the reliability of complex systems such as power grids and the Internet. He has shown me that the failures of many different types of engineered systems can also be described by power laws (often referred to as a Pareto probability model). It is often the case that the most important system failures are also the most infrequent. John tells me that the power-law character of failure statistics seems to be related to the interconnectivity of the different elements; cascading failures are a common underlying factor for phenomena with power-law statistics. John also tells me that it is very difficult to change the b-value by re-engineering a system; the b-value is built into the overall geometry of the problem. Of course, good engineering is its own reward, and it inevitably results in a lower overall failure rate, that is, a lower a-value. He also tells me that it is difficult, if not impossible, to predict the mean failure rate for a power-law process.

As an example, consider our previous problem, in which the number of 20th-century global earthquakes N_E with deaths exceeding N_D is given by log N_E ≈ a − b log N_D. Then the total number of deaths N_D-total for all events with death tolls between N_D-min and N_D-max is given by

N_D-total = [b/(1 − b)] 10^a (N_D-max^(1−b) − N_D-min^(1−b)) ≈ [b/(1 − b)] 10^a N_D-max^(1−b)

if b < 1 and N_D-max is much larger than N_D-min. In our particular example, this means that the expected total number of deaths should grow as N_D-total ≈ 5 × 10^5 N_D-max^0.14. If the largest number of deaths possible in a single event grows by a factor of 10, then the expected total number of deaths increases by 38% (a factor of 10^0.14 ≈ 1.38). To make matters worse, the actual number of deaths depends on a small number of very deadly events; that is, we cannot reliably estimate the variance of this total.
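
The scaling above can be checked with a few lines of Python. This is only a sketch of the arithmetic implied by the a- and b-values quoted in the text; the truncated power-law form is my reconstruction of the missing equation.

```python
# Sketch checking the cumulative-loss scaling implied by the power law
# N_E(>N_D) = 10**a * N_D**(-b), with a = 4.9 and b = 0.86 as quoted in the text.
# For b < 1 and N_D-max >> N_D-min, the expected total is approximately
#   N_D-total ~ (b / (1 - b)) * 10**a * N_D-max**(1 - b)
a, b = 4.9, 0.86

def approx_total(N_max):
    return (b / (1.0 - b)) * 10**a * N_max**(1.0 - b)

print(f"prefactor ~ {(b / (1.0 - b)) * 10**a:.2e}")       # ~5e5, as in the text
print(f"tenfold larger N_D-max -> x{10**(1.0 - b):.2f}")  # ~1.38, i.e. +38%
```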

As a separate example that is more familiar to seismologists, consider earthquake moment statistics, which are described by a variation of the Gutenberg-Richter law, log N(>M_0) ≈ a − (2/3) log M_0. In this case the expected total moment M_0-total for any given time period grows as

M_0-total ≈ 2 × 10^a M_0-max^(1/3) ∝ 10^(0.5 M_max),

where M_0-max is the largest possible seismic moment and M_max is its corresponding magnitude. This means that, for a fixed a-value, the overall moment rate doubles when the maximum possible magnitude increases by 0.6 units.
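
A one-line check of that 0.6-unit figure, assuming the standard moment-magnitude relation log10 M_0 = 1.5 M + 9.1 (my assumption; the article does not state which relation it uses):

```python
# With M0-total ~ M0-max**(1/3) and log10(M0) = 1.5*M + 9.1 (assumed relation),
# the cumulative moment scales as 10**(0.5*Mmax), so the rate doubles when
# Mmax increases by log10(2)/0.5 ~ 0.6 magnitude units.
import math

dM = math.log10(2.0) / 0.5
print(f"magnitude increase that doubles the moment rate: ~{dM:.2f}")
```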

Tsunami statistics are even more tail-heavy. Tsunami energy E_T scales as E_T ~ (uplift)^2 × (uplifted area) ~ M_0^(4/3). This means that E_T ~ 10^(2M) and E_T-total ~ 10^(M_max). That is, increasing M_max by 0.3 units doubles the total expected tsunami energy. Regrettably, I am unsure how to relate tsunami energy to the expected deaths from a tsunami, but based on what we saw in the 2004 Sumatran earthquake, it seems clear that the distribution of tsunami deaths versus frequency is also very tail-heavy. The point is that if you want to know the expected cumulative loss for a set of events described by a power law, then you need to know the size of the largest event to which that power law applies.
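
Here is a small numerical sketch of that factor-of-two claim, integrating the power-law event density against the E_T ~ M_0^(4/3) scaling described in the text; the lower moment cutoff and the normalization are arbitrary choices of mine.

```python
# Sketch: cumulative tsunami energy under a truncated power law of seismic
# moment.  Assumes E_T ~ M0**(4/3) and event density ~ M0**(-5/3), as in the
# text; the lower cutoff and normalization below are arbitrary.
import numpy as np

def total_energy(M0_max, M0_min=1e18, n=200_000):
    M0 = np.logspace(np.log10(M0_min), np.log10(M0_max), n)
    density = M0 ** (-5.0 / 3.0)   # relative number of events per unit moment
    energy = M0 ** (4.0 / 3.0)     # relative tsunami energy per event
    return np.trapz(energy * density, M0)

# Raising M0_max by a factor of 10**0.45 (i.e. Mmax + 0.3 units) roughly doubles
# the cumulative energy, since (10**0.45)**(2/3) = 10**0.3 ~ 2.
print(total_energy(10 ** 21.45) / total_energy(1e21))
```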

So what has all of this got to do with performance-based earthquake engineering (PBEE)? In PBEE, the attributes of a building are chosen such that the expected lifetime cost is some agreed-upon value. For a given design, the expected failure rate is determined by combining the probability of different levels of ground shaking (usually obtained from probabilistic seismic hazard analysis, PSHA) with the corresponding probability that the building fails at each level of shaking (obtained from building fragility analysis). Now if either of these two elements is fundamentally described by a power law, then we are faced with the problem that the things we know least about (infrequent occurrences) may be very important in the overall performance.
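
For readers who have not seen the calculation, the sketch below shows the kind of hazard-fragility convolution described here. The hazard curve and fragility parameters are hypothetical illustrations of mine, not values from the article or from any code document.

```python
# Minimal sketch of a PBEE-style risk integral: annual failure rate obtained by
# combining a ground-motion hazard curve (PSHA) with a building fragility curve.
# Both curves below are hypothetical illustrations, not real hazard or code values.
import numpy as np
from scipy.stats import lognorm

def annual_exceedance_rate(a, k0=0.02, k=2.5):
    """Hypothetical hazard curve: annual rate of exceeding acceleration a (in g)."""
    return k0 * a ** (-k)

fragility = lognorm(s=0.5, scale=0.8)   # hypothetical: median capacity 0.8 g, beta 0.5

a = np.linspace(0.05, 3.0, 2000)
rate_density = -np.gradient(annual_exceedance_rate(a), a)  # events/yr per unit of a
annual_failure_rate = np.trapz(fragility.cdf(a) * rate_density, a)
print(f"annual failure rate ~ {annual_failure_rate:.2e}")
```

Whatever numbers one plugs in, the article's point stands: if either curve has a heavy power-law tail, the integral is controlled by the extreme values we know least about.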

Is it reasonable to describe the probability of ground shaking with a power law? This is somewhat of a trick question. We all know of the Gutenberg-Richter law, which is a power law that describes the frequency of occurrence of seismogram amplitudes. In PSHA, however, things are more complicated. We want to know the sizes and locations of future events, which are not so easy to determine. After U.S. Geological Survey seismologist Lucy Jones stated at a press conference that the 1994 Northridge earthquake was on an unrecognized fault, one insightful reporter asked, “Just how many unknown faults are there?”

We also want to know the level of ground shaking for a given magnitude of event. It is this question of shaking amplitude that potentially transforms the power-law statistics of Gutenberg-Richter into a more manageable log-normal statistical problem. In particular, it seems clear that near-source high-frequency ground-motion amplitudes saturate with increasing magnitude. For example, peak ground accelerations are log-normally distributed about 0.5 g (plus or minus a factor of two) for sites within 10 km of the rupture surface of earthquakes larger than M 6, independent of the magnitude. If we are worried about peak acceleration, then this saturation phenomenon removes the heavy tails of the Gutenberg-Richter law.
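
The contrast between a saturated, log-normal peak acceleration and a heavy-tailed quantity is easy to see numerically. In the sketch below, "plus or minus a factor of two" is interpreted as the one-sigma band of the log-normal distribution; that interpretation is my assumption, not a statement from the article.

```python
# Sketch: near-source PGA treated as log-normal with median 0.5 g.  "Plus or
# minus a factor of two" is interpreted here as the one-sigma band
# (sigma_ln = ln 2); that interpretation is an assumption.
import numpy as np

rng = np.random.default_rng(0)
pga = 0.5 * np.exp(rng.normal(0.0, np.log(2.0), size=100_000))  # in g

within = np.mean((pga > 0.25) & (pga < 1.0))
print(f"median ~ {np.median(pga):.2f} g, fraction within 0.25-1.0 g ~ {within:.2f}")
# ~68% of samples lie within a factor of two of 0.5 g, and the tail thins
# exponentially in log(pga), unlike the heavy tail of a power law.
```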

However, if we are worried about peak ground displacement, then there is no corresponding saturation with increasing magnitude. Instead, we are faced with the problem of estimating the probability of slip amplitudes on faults. That is, the peak ground displacement in Los Angeles is closely related to the peak slip that can occur on faults (both known and unknown) beneath the basin. Can we determine the peak slip on these faults? While high-frequency motions may be described by log-normal statistics, it seems likely that the statistics of long-period motions are best described by heavy-tailed power laws.

Is it reasonable to describe building fragilities with a power law? This is certainly a trick question. If we subjected a large collection of buildings to the same ground motion, would their failure statistics be described by a normal distribution or by a power law? If all of those buildings were nominally “identical,” then we would expect the failures to be normally distributed about a mean, with scatter due to variations in the actual construction. Without real full-scale testing, however, even this conclusion may be questionable. There have been notable examples in which nominally identical adjacent buildings experienced completely different outcomes (e.g., three adjacent steel-frame towers in the 1985 M 8.1 Mexico City earthquake: one collapsed, one was permanently deformed, and one appeared undamaged).

If, on the other hand, we simply collected all buildings of all designs throughout the world, then I suspect that certain designs would account for most of the failures. To ask the question another way, what would be the statistics of failure of six-story buildings in California for a given ground motion? I don’t know the answer to this question, but it seems unlikely that it would be a normal distribution, because there is a wide variety of six-story building types: unreinforced masonry, nonductile moment frames, ductile moment frames, concrete shear walls, etc. One of our great fears is that strong shaking will strike a region with a high density of nonductile concrete-frame buildings. This could result in tremendous losses, thereby providing a new point far out on the tail of the power law that describes earthquake losses.
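
A toy calculation makes the point about mixed building stocks. The fragility parameters and class shares below are invented for illustration; they are not measured values.

```python
# Sketch: the failure statistics of a mixed building stock are dominated by a
# few vulnerable classes.  Medians, betas, and population shares are invented
# for illustration only.
from scipy.stats import lognorm

classes = {                        # (median capacity in g, lognormal beta, share)
    "unreinforced masonry": (0.2, 0.4, 0.10),
    "nonductile frame":     (0.4, 0.5, 0.30),
    "ductile frame":        (0.9, 0.4, 0.40),
    "concrete shear wall":  (1.2, 0.4, 0.20),
}

a = 0.5  # a single ground-motion level, in g
for name, (med, beta, share) in classes.items():
    print(f"{name:22s} P(fail | {a} g) = {lognorm(s=beta, scale=med).cdf(a):.2f}")

overall = sum(share * lognorm(s=beta, scale=med).cdf(a)
              for med, beta, share in classes.values())
print(f"overall failure fraction ~ {overall:.2f}  (most of it from two classes)")
```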

One of the main reasons for such a variety of building fragilities is that our understanding of structural engineering has changed with time. No one purposefully designs buildings with recognizable flaws. It was the 1971 San Fernando earthquake that revealed the flaws in steel-reinforced concrete, and it was the 1994 Northridge earthquake that revealed the flaws in welded-steel moment connections. If we are to achieve fragilities that are normally distributed about a predictable mean, then we must ensure that there will be no future “surprising” lessons about structural design.

It seems to me that the design of long-period buildings may present the greatest challenge. Large long-period ground motions occur only during infrequent, very large earthquakes, and we have not really had the opportunity to fully test how long-period buildings will behave in such a situation. Furthermore, recent end-to-end simulations of the effect of large earthquakes on high-rise buildings suggest that collapse is a real possibility with ground motions similar to those that we think occurred during the great San Andreas fault earthquakes of 1857 and 1906. It is clear that if a future large earthquake causes multiple collapses of high-rise buildings, then building codes will change as a result.

Will performance-based earthquake engineering break the power law? This question, which doubles as the title of this article, is intentionally ambiguous. If by it I mean, “Will the frequency-versus-loss statistics continue to be described by a power law even after we implement PBEE?” then I think the answer is probably yes. The fact that the statistics of frequency versus population size for cities are described by a power law suggests that overall loss statistics will be as well; a direct hit by a large earthquake on a large city will do far more total damage than a similar event striking a small town. However, if I mean, “Will infrequent great earthquakes still account for much or most of the threat to a particular building?” then I think the jury is still out. There are reasons, however, to suspect that short-period ground motions (less than 0.5 sec) and the responses of short-period buildings are more likely to follow normal (i.e., Gaussian) distributions than are those of long-period buildings. That is, PBEE may provide reasonable failure-rate estimates for short-period buildings.

Ironically, while short-period ground motions may be more reliably predicted by PSHA, in the sense that peak ground accelerations are distributed about a predictable median, there is less need to use these values in building design. Short, stiff buildings typically are designed using a set of prescriptive rules (the building code) based on past performance in actual earthquakes. That is, the building code for short-period buildings has been developed independently of any characterization of ground motions. Given the complex dynamic characteristics of these buildings, and given that there have been far more shaken buildings than strong-motion records, the current procedure of designing short-period structures based on past performance seems appropriate. It is certainly inappropriate to imply that new probabilistic predictions of high-frequency ground-motion amplitudes should be used to modify existing design procedures that were developed from the actual performance of buildings in earthquakes.

What’s the best design philosophy when faced with a power law? The design philosophy of PBEE can be summarized in three steps: (1) architects define the geometry of a building, (2) geotechnical engineers specify the probabilistic hazard, and (3) structural engineers determine the attributes of structural elements that satisfy statistical limits. All of this should work very well, unless something important was overlooked.

I would suggest that something is missing. If we were truly able to quantify a building’s deformation characteristics, then it should be possible to provide examples of ground motions that the structural engineer is assuming will not happen. At that point, the designers should have a discussion with earth scientists about how confident we can be that such motions will not occur. I am sure that many of you are thinking that if we did that, then nothing would ever get built. I seriously doubt that; we all need buildings. However, we just might end up with a different set of buildings if we acknowledged the nature of the uncertainties in this problem.

My office is in Caltech’s Thomas Laboratory, a three-story reinforced-concrete shear-wall building constructed in the 1930s. When Caltech built the Thomas Laboratory, there were no strong-motion records and no understanding of PSHA. Nevertheless, the Thomas Laboratory is still considered a robust building when it comes to seismic vulnerability. How could such ignorance of seismic hazards have produced a functional building that is still considered sound 70 years after its conception? The design philosophy was straightforward and effective: choose an affordable design that is least vulnerable to unknown factors and that also meets functional requirements. When I asked George Housner, the father of modern earthquake engineering, about the design of the Thomas Laboratory, he told me: “It’s a simple concrete box. … There’s not much you can do to a box.” When I asked him about the design philosophy he used when he advised the Caltech Administration about campus buildings from the 1950s through the 1990s, he told me, “I kind of knew what I didn’t know.”

Truly achieving PBEE implies that we have an adequate characterization of the expected ground motions and of the corresponding building responses. If either the ground motions or the building responses have the character of power laws, then it is especially important that we understand the tails of the statistical distributions. It troubles me that existing procedures formally calculate the reliability of long-period buildings when there is still great debate about what might actually happen to these buildings in some future great earthquake.

It seems to me that implementing PBEE tells society that we can estimate structural reliability independent of the nature of the building: whether it is a high-rise building or a low-rise shear-wall building, we can design each to the same expected failure rate. I suspect, however, that a high-rise building’s integrity depends heavily on what we don’t yet know about large earthquakes, while a low-rise building’s integrity depends on these unknowns far less.

Thomas H. Heaton
E-mail: heaton_t [at] caltech.edu

 

