Electronic Supplement to
Analysis of the Severe Damage Online Felt Reports for the Canterbury (New Zealand) 2011 Aftershocks on 22 February Mw 6.2, 13 June Mw 6.0 and 23 December Mw 6.0

by T. Goded, W. J. Cousins, and K. F. Fenaughty

The review of the online felt reports from the Canterbury (New Zealand) 2010-2012 earthquake sequence has been very useful in identifying possible improvements to the current version of the questionnaire. These improvements are currently being revised for implementation in future versions. The manual review of the felt reports for the February, June and December 2011 shocks has also enabled us to select criteria and develop a method for automatically assigning damage grades to GeoNet's felt reports, to be used for future earthquakes. Both are described in this electronic supplement.


Algorithm to Assign Damage Grades in the New Zealand MMI Scale

A simple algorithm has been developed to automatically assign damage grades to the online questionnaires, based on the questions from stage 4. Its purpose is to reduce the time spent manually analyzing each questionnaire. The most relevant set of questions and answers was chosen from the experience gained during the manual assignment of damage grades using the MMI and EMS-98 scales. These include questions FR4-3, FR4-6, FR4-8 and FR4-9, the questions most suitable for assigning a damage grade. The algorithm, developed in Visual Basic, consists of conditional “if” statements that follow an upside-down pyramidal scheme in which progressively higher observed damage is required to assign a higher damage grade. It starts by examining questions FR4-6 (damage to exterior walls) and FR4-8 (damage to the entire building), dividing the damage levels according to the answers to these two questions. It then adds the answers to other questions, mainly FR4-4 (damage to chimneys) and FR4-9 (other types of damage, paying special attention to the existence of major cracks in interior walls), to make a final decision on the damage grade. If there is an inconsistency between the answers to these questions (e.g., if there are only hairline cracks or no damage to the exterior walls but the building is considered “severely distorted”), then that report is reviewed manually. A minimal sketch of this type of cascade is given below.
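To illustrate the structure of such a cascade of conditional checks, a minimal Python sketch follows. This is not the published Visual Basic code: the field names, answer categories and cut-offs are simplifying assumptions made for illustration only.

```python
# Illustrative sketch only: a simplified, hypothetical version of the kind of
# "if"-cascade described above. The real GeoNet algorithm is written in Visual
# Basic; the answer categories and thresholds here are assumptions.

def assign_damage_grade(report):
    """Return a damage grade (1-4), or None if the report needs manual review.

    `report` is assumed to be a dict with hypothetical keys:
      'FR4_6'  damage to exterior walls ('none', 'hairline cracks',
               'large cracks', 'partial collapse')
      'FR4_8'  damage to the entire building ('none', 'minor',
               'severely distorted')
      'FR4_4'  damage to chimneys ('none', 'cracked', 'fallen')
      'FR4_9'  other damage, e.g. a set of flags such as
               'major interior cracks'
    """
    walls = report.get('FR4_6', 'none')
    building = report.get('FR4_8', 'none')
    chimney = report.get('FR4_4', 'none')
    other = report.get('FR4_9', set())

    # Consistency check: low exterior-wall damage but a "severely distorted"
    # building is flagged for manual review rather than graded automatically.
    if building == 'severely distorted' and walls in ('none', 'hairline cracks'):
        return None

    # Upside-down pyramid: higher grades require stronger damage evidence.
    if building == 'severely distorted' or walls == 'partial collapse':
        return 4
    if walls == 'large cracks' or chimney == 'fallen' or 'major interior cracks' in other:
        return 3
    if walls == 'hairline cracks' or chimney == 'cracked' or building == 'minor':
        return 2
    return 1
```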

The algorithm has been tested with data from the February, June and December earthquakes. For this, all the reports with manually assigned damage grades have been used, even those without an assigned building type, in order to retain as much data as possible. Thus, all of the MMI≥8 reports for the June (41) and December (10) earthquakes have been used, together with 166 reports for the February earthquake. This number corresponds to the total number of February reports (341) minus those rejected as duplicates or as reports from big buildings. Reports corresponding to missing entries in building databases have also been included, as their damage grades have still been assigned manually, in case a building type can be assigned in the future.
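The report selection described above amounts to a simple filter; the sketch below is an illustration only, and the field names (manual_grade, is_duplicate, is_big_building) are hypothetical, not GeoNet's actual schema.

```python
# Minimal sketch of the test-data selection, assuming each report is a dict
# with hypothetical keys; field names do not reflect GeoNet's real database.

def select_test_reports(reports):
    """Keep every manually graded report, even without a building type,
    but drop duplicates and reports from big buildings."""
    return [
        r for r in reports
        if r.get('manual_grade') is not None
        and not r.get('is_duplicate', False)
        and not r.get('is_big_building', False)
    ]
```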

For the February earthquake (Figure S1), damage grades 1 and 2 seem to have been adequately reproduced by the automatic algorithm, with very similar results (6% manual and 7% automatic for grade 1, and 42% manual and 45% automatic for grade 2). For damage grade 3, the algorithm seems to underestimate the number of reports with that damage, with 32% automatic versus 43% manual reports. The opposite occurs for damage grade 4 reports: the automatic assignment overestimates the manual results, with values of 15% and 9%, respectively. This could indicate an overestimation of damage in the automatic procedure, which assigns more grade 4 and less grade 3 damage. Nevertheless, looking at each individual report (Figure S1c), a total of 69% of the reports have been assigned the same damage grade as with the manual procedure, 17% and 20% have been assigned one grade above and below the manual assignments, respectively, and 3% and 1% have been assigned two grades above and below the manual results. This indicates similar overall overestimation and underestimation of the damage levels, of about 20-21% in each case. A relevant conclusion could be that the algorithm does not yet capture the complexity of the human decisions involved in the manual assignments, where all the answers to the questions are considered together to arrive at a damage level.
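The agreement statistics quoted here and for the June and December events (percentages of reports matching the manual grade, or differing by one or two grades) can be tabulated as in the following sketch; the function and data layout are illustrative assumptions, not part of the published procedure.

```python
# Sketch of how the automatic-vs-manual agreement can be tabulated: take the
# automatic grade minus the manual grade for each report and count the offsets.

from collections import Counter

def grade_agreement(manual, automatic):
    """manual, automatic: equal-length lists of damage grades (1-4).
    Returns the percentage of reports at each automatic-minus-manual offset."""
    diffs = Counter(a - m for m, a in zip(manual, automatic))
    n = len(manual)
    return {offset: 100.0 * count / n for offset, count in sorted(diffs.items())}

# An offset of 0 is exact agreement, +1 is overestimation by one grade,
# -1 is underestimation by one grade.
# print(grade_agreement([2, 3, 3, 4], [2, 4, 3, 3]))  # {-1: 25.0, 0: 50.0, 1: 25.0}
```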

The comparison between the automatic and manual damage grade assignments for the June event is shown in Figure S2. The automatic procedure produces a significantly higher proportion of damage grade 3 reports (22% vs 7%) and slightly lower proportions of damage grades 1 (5% vs 2%) and 2 (73% vs 80%) than the manual results. Comparing individual reports, 73% have been assigned the same damage grade by the two procedures, 24% have been overestimated by one grade, and 2% have been underestimated by one grade, with no differences of two or more grades between the two procedures. The plots for the December event (Figure S3) show the same distribution of damage for both methods, with 50% grade 2 and 50% grade 3 damage. A total of 70% of the reports have been assigned the same damage grade as in the manual procedure, while 20% have been overestimated and 10% underestimated by one grade.

Although the algorithm seems to provide consistent damage grade values for the three earthquakes (with 69-73% of reports assigned the same grade as in the manual procedure), it overestimates the damage, providing a higher percentage of damage grades 3 (moderate) and 4 (heavy) and a lower percentage of damage grades 1 (slight) and 2 (minor). This attempt has thus shown the difficulty inherent in automating an assignment process that involves so many human decisions and considerations, and that further work needs to be carried out.


Improvements in the Questionnaire and the New Zealand MMI Scale

Apart from assigning intensity values, the review of the above 350 felt reports with MMI≥8 from the February, June and December 2011 events identified advantages and disadvantages of the 2004 version of the questionnaire. These included three changes to the wording of questions (to make them easier to understand and avoid confusion); changes to the answers of two questions (e.g., for question FR2-3, “What were you doing when the earthquake occurred?”, the answer “Sleeping” being split into “Sleeping and slept through it” and “Sleeping and was woken up”); changes in the order of the answers to three questions (from lower to higher levels of damage, for consistency throughout the questionnaire); and the addition of five new questions (e.g., question FR2-5, “What was your reaction?”; FR2-6, “How was the earthquake felt by other people?”; and FR3-3, “Did doors and/or windows rattle?”, the latter being a classic test for intensity 4 or above; Dowrick, 1996; Musson, 2006; Musson et al., 2010). The usability of the web interface is also being reviewed to remedy common mistakes made by users and to adopt concepts common in other online questionnaires. Information will be provided to clarify issues such as why certain questions are asked, what each possible answer means, and how the answers to each question help to assign an intensity value. These improvements will lead to a more comprehensible tool, with which end users will find it easier to answer the questionnaire reliably. All of the proposed improvements are being considered for implementation in the next version of GeoNet’s online questionnaire, as a result of lessons learned from the Canterbury earthquake sequence and the public reporting of felt effects (Goded et al., 2012).

Never before in New Zealand has there been such a large number of high-damage reports to analyse, providing the opportunity to check the ability of the New Zealand MMI scale to assign adequate intensity values above MMI 7. The following relevant conclusions have been obtained:

These changes will make the New Zealand MMI scale more robust and easier to use for assigning intensity values in future earthquakes, as well as better aligned with other international scales and questionnaires (Euro-Mediterranean Seismological Centre, EMSC; European Seismological Commission, ESC; the USA “Did You Feel It?” questionnaire, Wald et al., 1999; British Geological Survey, BGS). Work in this area is currently being undertaken within our group (Pondard et al., 2012).


Figures

Figure S1. Manual (a), automatic (b), and automatic-manual (c) assignments of damage grades to the online felt reports from the 22 February 2011 Mw 6.2 earthquake.

Figure S2. Manual (a), automatic (b), and automatic-manual (c) assignments of damage grades to the online felt reports from the 13 June 2011 Mw 6.0 earthquake.

Figure S3. Manual (a), automatic (b), and automatic-manual (c) assignments of damage grades to the online felt reports from the 23 December 2011 Mw 6.0 earthquake.


References

BGS. British Geological Survey Earthquake Questionnaire. http://www.earthquakes.bgs.ac.uk/questionnaire/EqQuestIntro.html

Dowrick, D.J. (1996). The modified Mercalli earthquake intensity scale; revisions arising from recent studies of New Zealand earthquakes, Bulletin of the New Zealand National Society for Earthquake Engineering 29, 92–106.

EMSC. Euro-Mediterranean Seismological Centre online questionnaire. http://www.emsc-csem.org/Earthquake/Contribute/?lang=en

ESC. European Seismological Commission. Standard Internet Macroseismic Questionnaire. Draft version. October 2010. http://seismologist.co.uk/ESC_internet_macroseismology.html

Goded, T., K.F. Fenaughty, and R.J. Michell (2012). Lessons from the Canterbury events: preliminary improvements to the online felt reports. Proceedings of the New Zealand Society for Earthquake Engineering Technical Conference 2012, Christchurch (New Zealand), April 2012, Paper 049, 8 pp.

Grünthal, G. (editor) (1998). European Macroseismic Scale 1998 (EMS-98). Cahiers du Centre Européen de Géodynamique et de Séismologie 15. Luxembourg, 99 pp.

Musson, R.M.W. (2006). Automatic assessment of EMS-98 intensities. British Geological Survey. Internal Report IR/06/048, 16 pp.

Musson, R.M.W., G. Grünthal, and M. Stucchi (2010). The comparison of macroseismic intensity scales, Journal of Seismology 14, 413-428.

Pondard, N., S. Giovinazzi, S. Uma, S. Lin, T. Goded and M. Nayyerloo (2012). Acquiring and Applying Building Damage and Loss Data from Buildings during the Canterbury Earthquake Sequence. New Zealand Platform Negotiable round 2012.

Wald, D.J., L.A. Dengler, and J.W. Dewey (1999). Utilization of the Internet for rapid community intensity maps, Seismological Research Letters 70, 680-697.
