July/August 2005

The Challenge of Earthquake Risk Assessment

We have seen over the last few decades the development of seismic hazard assessment. It is now fairly routine to set up source models for point, line, and area seismic sources and to combine these with attenuation models to produce hazard assessments that are specific to given locations. Source models take into account earthquake mechanisms and recurrence intervals for active faults, and strong-motion attenuation functions incorporate site conditions. Future developments will no doubt improve our assessment of hazard, perhaps even including time-dependence, as our understanding of the dynamics of earthquake generation improves. But there is another development that is arguably more pressing: risk assessment.

While hazard assessment combines source and attenuation modeling, risk assessment goes one step further, to estimate likely losses to structures by modeling their vulnerability. This results in probabilistic estimates of losses for specific portfolios of assets. Seismology has traditionally been close to the engineering profession, and this has resulted in the development of procedures for earthquake hazard assessment that are useful for engineering design, but it has not been as close to the insurance industry or to other risk management sectors, so techniques for risk assessment are not so well developed.

Earthquake risk assessment is a fertile area of research which needs more input from seismologists and engineers. In December 2004 I attended the Annual Meeting of the Society for Risk Analysis, in Palm Springs, California. I found a society devoted to developing procedures for analyzing risk and presenting risk assessments in ways that are useful to risk managers. SRA has traditionally had a strong interest in biological risks, so toxicology, exposure, contamination, and ecological risks featured prominently. A new focus has recently been added with the risk of terrorism and issues of homeland security. The papers offered were wide-ranging, and with nine parallel sessions there was always something interesting to listen to. But where were the seismologists? In fact, the whole field of natural hazards was poorly represented. This is an area in which SRA would like to strengthen its activities. Seismologists and engineers may well find in SRA the collaborations they need to develop the science and practice of earthquake risk assessment.

Incorporating engineering expertise to include the vulnerability of structures is something that ought to be straightforward for seismologists, given the close associations we have with the engineering profession. But while the design of structures is familiar territory for engineers, assessing their vulnerability, i.e., the likely damage that will be inflicted on them, is an area of ongoing research. Initiatives such as HAZUS have made a substantial start on assessment of vulnerability, using the predicted ground-motion spectrum to estimate the amount of damage that is likely to be inflicted on buildings of known design in a given earthquake scenario. A simpler approach is currently being used in New Zealand, where a large amount of insurance-derived data expresses damage ratios in terms of MM intensities. For the modeling, ground motions are estimated in intensities directly and converted to damage ratios with appropriate probability distributions. Both approaches have shortcomings, and much work needs to be done to improve vulnerability assessments.

The procedure for risk assessment involves considering all earthquake sources likely to affect a given portfolio of assets, and estimating the ground motion and hence the amount of damage to each structure in each event. The degree of correlation between damage levels at separate locations tends to decrease with increasing separation between the locations, so you need to work at the event level and accumulate losses across the entire portfolio. The procedure thus involves a lot more than estimating losses in selected scenario events. The combination of damage cost and frequency-of-occurrence information is what the insurance industry calls the EP (Exceedance Probability) curve. It is a complementary cumulative distribution, giving the annual probability that a given level of loss will be equaled or exceeded. The probability drops from 1.0 for losses greater than or equal to zero to 0.0 at some maximum value of loss. Several measures of risk can be derived from the EP curve.
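The event-level accumulation into an EP curve can be sketched in a few lines. The event-loss table below is entirely invented for illustration, and events are assumed to occur as independent Poisson processes:

```python
import math

# Hypothetical event-loss table for a portfolio:
# (annual rate of occurrence, total portfolio loss in $M).
# All figures are invented for illustration.
event_loss_table = [
    (0.20, 5.0),     # frequent, small event
    (0.05, 40.0),
    (0.01, 150.0),
    (0.002, 600.0),  # rare, large event
]

def exceedance_probability(loss_threshold):
    """Annual probability of at least one event causing a loss >= threshold,
    assuming events occur as independent Poisson processes."""
    rate = sum(r for r, loss in event_loss_table if loss >= loss_threshold)
    return 1.0 - math.exp(-rate)

# Sampling the curve at a few thresholds shows it falling toward zero
# as the loss level rises.
for x in (1.0, 10.0, 100.0, 500.0):
    print(f"P(annual loss >= {x:>5} $M) = {exceedance_probability(x):.4f}")
```

The key point the sketch makes is that each event's loss is accumulated across the whole portfolio first, and only then combined with occurrence rates; site-by-site hazard curves cannot be combined this way because losses at separate sites are correlated within an event.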

Risk assessment is all about risk management. The only reason you do an assessment is because somebody has to make a risk-management decision. An asset owner needs to decide how much insurance cover to purchase. An insurance company wants to know how much premium to charge. A city risk manager is faced with strengthening important buildings and facilities to protect against earthquake damage and wonders what level of protection is appropriate. Or should he instead spend the money enhancing the city's flood protection system? Risk assessments can provide information to assist these risk managers in their decision-making, but the risk analyst needs to know the nature of the decision, and the constraints under which the decision-maker is working, in order to express the results in the most helpful way.

Take the issue of the degree to which a building should be strengthened. What the risk analyst must do is assess the risk as it is at present, then model it under the various mitigation strategies that are proposed. The cost of each strategy can then be compared with the reduction in the risk that would result from implementing the strategy. So we need a measure of the risk. In particular, we need a measure of the change in the EP curve. One mitigation strategy may result in reduction of risk for frequent events but essentially no change for the rare (and larger) events. Or it could be the other way around. The risk analyst needs to present the risk measure in a way that is helpful to the risk manager.
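A toy version of that comparison (strategy names, costs, and losses are all invented) shows how two strategies can change the EP curve in quite different places, one mainly for frequent events and the other mainly for rare ones:

```python
# Toy comparison of two hypothetical mitigation strategies by how they
# change modeled losses at two points on the EP curve.
# All figures (in $M) are invented for illustration.
baseline = {"10-year": 20.0, "1000-year": 500.0}  # losses before mitigation

strategies = {
    "brace parapets":      {"cost": 2.0,  "10-year": 5.0, "1000-year": 480.0},
    "full base isolation": {"cost": 40.0, "10-year": 4.0, "1000-year": 150.0},
}

for name, s in strategies.items():
    for period in baseline:
        reduction = baseline[period] - s[period]
        print(f"{name}: cost {s['cost']} $M, "
              f"cuts the {period} loss by {reduction} $M")
```

Cost alone does not decide the question: the cheap strategy buys most of its benefit in frequent events, the expensive one in rare ones, and which matters more depends on the risk manager's constraints.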

One measure of the overall risk is the Average Annual Loss (AAL). Statistically, this is the expected value of the loss distribution. We could model how the AAL changes under various mitigation proposals and use that to make the risk-management decision. But the AAL is a very limited and often not very useful measure, as a consideration of insurance risk shows. When an insurance company writes domestic fire insurance policies, for instance, it wants to know the AAL because that will enable it to set premiums. Let's say it is $200 per year for houses like mine. But for me, as the purchaser of insurance, that figure is useless. I am more worried about the possibility of substantial loss, even total loss. So for the seller of insurance the AAL is a useful measure, but for the buyer it is not. I need other measures: the maximum loss that I am exposed to, the probability that it will occur, and the cost of protection against that loss.

For catastrophe insurance, such as for earthquakes, the situation becomes more complicated. Unlike fire insurance, for which losses are largely uncorrelated, for earthquake insurance a large number of policyholders claim at the same time. So the insurance company itself becomes a buyer when it purchases reinsurance for protection, and it needs other measures of the risk to its whole portfolio, because the AAL is no longer useful: it needs to know how much it might lose, and with what probability. The reinsurance company, in turn, seeks to spread its risk by writing business internationally. The situation was summarized neatly by Kaplan and Garrick in 1981: "A single number is not a big enough concept to communicate the idea of risk." And when we come to other ways of managing risk, the risk manager is like the buyer of insurance: the consequences of rare events are probably more important than the annualized loss.
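A minimal sketch makes the seller's-versus-buyer's view concrete. The AAL is just the rate-weighted sum over an event-loss table (all numbers invented), and as a single number it says nothing about the size of the worst case:

```python
# Minimal sketch: the AAL is the expected value of the annual loss,
# i.e. the rate-weighted sum over an event-loss table.
# All numbers are invented for illustration.
event_loss_table = [
    (0.20, 5.0),     # (annual rate, portfolio loss in $M)
    (0.05, 40.0),
    (0.01, 150.0),
    (0.002, 600.0),
]

aal = sum(rate * loss for rate, loss in event_loss_table)
max_loss = max(loss for _, loss in event_loss_table)

# The single number hides the tail: a modest AAL coexists here with a
# possible loss two orders of magnitude larger.
print(f"AAL = {aal:.1f} $M/yr; largest modeled loss = {max_loss:.0f} $M")
```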

Probable Maximum Loss (PML) is another measure that is used in the insurance industry, and this is relevant to the buyer. Unfortunately, the adjective "probable" is often not well defined. Losses could go higher, depending on what is meant by "probable maximum." Another measure that has been suggested is the Conditional Expected Value. This is the expected value of the loss for those events whose probability of occurrence lies in a given range (or, equivalently, for losses in a given range).

I have proposed that three measures be used to characterize changes in the EP curve: the 10-year event, the 100-year event, and the 1,000-year event. I define these as the conditional expected losses for events with annual probabilities in the ranges 0.032 to 0.32, 0.0032 to 0.032, and 0.00032 to 0.0032, respectively. These limits are the geometric midpoints between adjacent powers of ten, so that on a log scale each interval is centered on 0.1, 0.01, or 0.001. The scheme could obviously be extended to lower-probability measures if necessary (e.g., for nuclear power plants). Unlike the AAL, which is of limited value to the risk manager, these are rather like scenario events that can readily be envisioned. Together they give a coarse discretization of the EP curve, and they can be derived from it readily. The method for decision-making needs to be developed further, but it seems to me that this technique for providing information to the risk manager has considerable merit.
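Under these definitions the three measures are straightforward to compute. The following sketch uses an invented event-loss table, with each event's annual probability standing in for its rate:

```python
import math

# Band limits: geometric midpoints between adjacent powers of ten, so
# each band is centered on 0.1, 0.01, or 0.001 on a log scale.
bands = {}
for centre in (0.1, 0.01, 0.001):
    bands[centre] = (centre / math.sqrt(10), centre * math.sqrt(10))
# e.g. bands[0.01] is approximately (0.0032, 0.032)

# Hypothetical event-loss table: (annual probability, loss in $M).
# All numbers are invented for illustration.
events = [(0.2, 3.0), (0.08, 12.0), (0.02, 60.0),
          (0.005, 200.0), (0.001, 900.0)]

def conditional_expected_loss(lo, hi):
    """Probability-weighted mean loss over events whose annual
    probability falls in [lo, hi)."""
    in_band = [(p, loss) for p, loss in events if lo <= p < hi]
    total_p = sum(p for p, _ in in_band)
    return sum(p * loss for p, loss in in_band) / total_p if in_band else float("nan")

for centre, (lo, hi) in bands.items():
    years = round(1 / centre)
    print(f"{years}-year event: E[loss | {lo:.2g} <= p < {hi:.2g}] "
          f"= {conditional_expected_loss(lo, hi):.1f} $M")
```

Because the three numbers are conditional means over bands of the EP curve, they summarize its shape rather than collapsing it to a single figure, which is what makes them usable as envisionable scenarios.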

The actual risk-management decision will be made by weighing multiple criteria. Nothing will be simple, because as well as economic issues there will be social and no doubt political issues affecting the decision. Multiple criteria decision making is a topic that features quite often in the publications of the Society for Risk Analysis. However the decision is made, the better the objective information that is supplied, and the more specific it is to the situation at hand, the better that decision is likely to be.

Risk assessment is an area that needs more input from seismologists and other natural hazards scientists, together with engineers. But it needs wide collaboration. We have in my institute a research program that is seeking to assess risks across a variety of natural hazards. We propose to estimate the 10-, 100- and 1,000-year events for each hazard, as applied initially in pilot studies to small communities and regions that are affected by earthquake, flood, tsunami, storm, and volcanic ash deposition. These hazards are of course all modeled in different ways, and we are collaborating with a sister institute to use their expertise to model storm, flooding, and tsunami risks. Local risk managers are calling for a decision support model to assist them in their risk management decisions. Providing it will be quite a challenge.

Warwick Smith
Institute of Geological and Nuclear Sciences
P.O. Box 30-368
Lower Hutt
New Zealand

To send a letter to the editor regarding this opinion or to write your own opinion, contact the SRL editor by e-mail.



Posted: 23 June 2005