This paper examines the consistency of seismicity and ground motion models used for seismic hazard analysis in New Zealand with the observations in the Canterbury earthquakes. An overview is first given of seismicity and ground motion modelling as inputs to probabilistic seismic hazard analysis, whose results form the basis for elastic response spectra in NZS1170.5:2004. The magnitudes of earthquakes in the Canterbury earthquake sequence are adequately allowed for in the current NZ seismicity model; however, the treatment of ‘background’ earthquakes as point sources at a minimum depth of 10 km results in up to a 60% underestimation of the ground motions that such events produce. The ground motion model used in conventional NZ seismic hazard analysis is shown to provide biased predictions of response spectra (over-prediction near T=0.2 s, and under-prediction at moderate-to-large vibration periods). Improved ground motion prediction can be achieved using more recent NZ-specific models.
This presentation discusses recent empirical ground motion modelling efforts in New Zealand. Firstly, the active shallow crustal, subduction interface and slab ground motion prediction equations (GMPEs) employed in the 2010 update of the national seismic hazard model (NSHM) are discussed. Other NZ-specific GMPEs developed, but not incorporated in the 2010 update, are then discussed, in particular the active shallow crustal model of Bradley (2010). A brief comparison of the NZ-specific GMPEs with the near-source ground motions recorded in the Canterbury earthquakes is then presented, given that these recordings collectively provide a significant increase in observed strong motions in the NZ catalogue. The ground motion prediction expert elicitation process undertaken following the Canterbury earthquakes for active shallow crustal earthquakes is then discussed. Finally, ongoing GMPE-related activities are discussed, including: ground motion and metadata database refinement, improved site characterization of strong motion stations, and predictions for subduction zone earthquakes.
Natural catastrophes are increasing worldwide, becoming more frequent but also more severe and more impactful on our built environment, leading to extensive damage and losses. Earthquakes account for the smallest share of natural catastrophe events; nevertheless, seismic damage led to the most fatalities and to significant losses over the period 1981-2016 (Munich Re). Damage prediction is helpful for emergency management and for the development of earthquake risk mitigation projects. Recent design efforts have focused on the application of performance-based earthquake engineering, where damage estimation methodologies use fragility and vulnerability functions. However, this approach does not explicitly specify the essential criteria leading to economic losses. There is thus a need for an improved methodology that identifies the critical building elements related to significant losses. The methodology presented here uses data science techniques to identify key building features that contribute to the bulk of losses. It uses empirical data collected on site during earthquake reconnaissance missions to train a machine learning model that can further be used for the estimation of building damage post-earthquake. The first model is developed for Christchurch. Empirical building damage data from the 2010-2011 earthquake events is analysed to find the building features that contributed the most to damage. Once processed, the data is used to train a machine-learning model that can be applied to estimate losses in future earthquake events.
Despite over a century of study, the relationship between lunar cycles and earthquakes remains controversial and difficult to quantitatively investigate. Perhaps as a consequence, major earthquakes around the globe are frequently followed by 'prediction' claims, using lunar cycles, that generate media furore and pressure scientists to provide resolute answers. The 2010-2011 Canterbury earthquakes in New Zealand were no exception; significant media attention was given to lunar-derived earthquake predictions by non-scientists, even though the predictions were merely 'opinions' and were not based on any statistically robust temporal or causal relationships. This thesis provides a framework for studying lunisolar earthquake temporal relationships by developing replicable statistical methodology based on peer-reviewed literature. Notable in the methodology is a high-accuracy ephemeris, called ECLPSE, designed specifically by the author for use on earthquake catalogs, and a model for performing phase angle analysis. The statistical tests were carried out on two 'declustered' seismic catalogs, one containing the aftershocks from the Mw7.1 earthquake in Canterbury, and the other containing Australian seismicity from the past two decades. Australia is an intraplate setting far removed from active plate boundaries, whereas Canterbury is proximal to a plate boundary, thus allowing for comparison based on tectonic regime and corresponding tectonic loading rate. No strong, conclusive statistical correlations were found at any level of the earthquake catalogs, looking at large events, onshore events, offshore events, and the fault type of some events. This was concluded using Schuster's test of significance with α=5% and analysis of standard deviations. A few weak correlations, with p-values of 5-10% for rejecting the null hypothesis, and anomalous standard deviations were found, but these are difficult to interpret.
The results invalidate the statistical robustness of 'earthquake predictions' using lunisolar parameters in this instance. An ambitious researcher could improve on the quality of the results and on the range of parameters analyzed. The conclusions of the thesis raise more questions than answers, but the thesis provides an adaptable methodology that can be used to investigate the problem further.
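The core significance test used above, Schuster's test, reduces to summing unit phasors at each event's lunar phase angle: for N events with angles θᵢ, the resultant length R gives p = exp(−R²/N), the probability that the observed clustering arises by chance under uniformity. A minimal sketch (with synthetic phase angles, not the ECLPSE-derived catalogue values):

```python
import math
import random

def schuster_test(phase_angles_deg):
    """Schuster's test: p-value for the null hypothesis that event
    phase angles are uniformly distributed. A small p-value (e.g.
    < 0.05) suggests a lunisolar correlation."""
    n = len(phase_angles_deg)
    c = sum(math.cos(math.radians(a)) for a in phase_angles_deg)
    s = sum(math.sin(math.radians(a)) for a in phase_angles_deg)
    r_squared = c * c + s * s
    return math.exp(-r_squared / n)

random.seed(1)
# Uniformly scattered phases: typically cannot reject uniformity.
uniform = [random.uniform(0.0, 360.0) for _ in range(500)]
# Phases clustered near 0 deg: p becomes vanishingly small.
clustered = [random.gauss(0.0, 20.0) % 360.0 for _ in range(500)]

print(schuster_test(uniform))
print(schuster_test(clustered))
```

The same declustering caveat applies as in the thesis: aftershock sequences violate the independence assumption behind the test, so it should only be run on declustered catalogues.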
Background: Liquefaction-induced land damage has been identified in more than 13 notable New Zealand earthquakes within the past 150 years, as presented on the timeline below. Following the 2010-2011 Canterbury Earthquake Sequence (CES), the consequences of liquefaction were witnessed first-hand in the city of Christchurch, and as a result the demand for understanding this phenomenon was heightened. Government, local councils, insurers and many other stakeholders are now looking to research and understand their exposure to this natural hazard.
This paper presents on-going challenges in the present paradigm shift of earthquake-induced ground motion prediction from empirical to physics-based simulation methods. The 2010-2011 Canterbury and 2016 Kaikōura earthquakes are used to illustrate the predictive potential of the different methods. On-going efforts on simulation validation and theoretical developments are then presented, as well as the demands associated with the need for explicit consideration of modelling uncertainties. Finally, discussion is also given to the tools and databases needed for the efficient utilization of simulated ground motions both in specific engineering projects as well as for near-real-time impact assessment.
This paper presents a critical evaluation of vertical ground motions observed in the Canterbury earthquake sequence. The abundance of strong near-source ground-motion recordings provides an opportunity to comprehensively review the estimation of vertical ground motions via the New Zealand Standard for earthquake loading, NZS1170.5:2004, and empirical ground motion prediction equations (GMPEs). An in-depth review of current GMPEs is carried out to determine the existing trends and characteristics present in the empirical models. Results illustrate that vertical ground motion amplitudes estimated based on NZS1170.5:2004 are significantly unconservative at short periods and near-source distances. While conventional GMPEs provide an improved prediction, in many instances they too underpredict vertical ground motion accelerations at short periods and near-source distances.
The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including evaluating the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) is first used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects from significant events in the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative ‘manual’ approach is proposed to ensure 'correct' pulse extraction, with the classification process also guided by examination of the horizontal velocity trajectory plots and source-to-site geometry. Based on the above analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the Baker (2007) algorithm developed by Shahi (2013) is subsequently utilised, but without any notable improvement in the pulse classification results. A series of three chapters is dedicated to assessing the capabilities of empirical models to predict: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquakes.
Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the most improved predictions in comparison to its predecessors. Pulse probability contour maps are developed to scrutinise observations of pulses/non-pulses against predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias demonstrated by the residuals associated with all models at longer vibration periods (in the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and non-linear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred as being a result of both the effect of nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw6.0) and December (Mw5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km. At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed.
The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
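The pulse-occurrence step above rests on logistic regression of binary pulse/non-pulse observations against source-to-site geometry. A minimal single-predictor sketch on synthetic data (distance as the only predictor; actual models such as Shahi (2013) use several geometry parameters, and the data here are invented):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.01, epochs=4000):
    """Fit p(pulse) = 1 / (1 + exp(-(b0 + b1*x))) by batch gradient
    descent on the log-likelihood; a stand-in for a statistics
    package's logistic regression routine."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic observations: pulses become less likely with distance (km).
random.seed(0)
dist = [random.uniform(0.0, 30.0) for _ in range(200)]
true_p = [1.0 / (1.0 + math.exp(-(2.0 - 0.25 * d))) for d in dist]
pulse = [1 if random.random() < p else 0 for p in true_p]

b0, b1 = fit_logistic(dist, pulse)
print(round(b0, 2), round(b1, 2))  # b1 expected negative: decay with distance
```

Contour maps like those described in the text then follow by evaluating the fitted probability over a grid of the geometry predictors.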
The 2010-2011 Canterbury earthquakes were recorded over a dense strong motion network in the near-source region, yielding significant observational evidence of seismic complexities, and a basis for interpretation of multi-disciplinary datasets and induced damage to the natural and built environment. This paper provides an overview of observed strong motions from these events and retrospective comparisons with both empirical and physics-based ground motion models. Both empirical and physics-based methods provide good predictions of observations at short vibration periods in an average sense. However, observed ground motion amplitudes at specific locations, such as Heathcote Valley, are seen to systematically depart from ‘average’ empirical predictions as a result of near surface stratigraphic and topographic features which are well modelled via site-specific response analyses. Significant insight into the long period bias in empirical predictions is obtained from the use of hybrid broadband ground motion simulation. The comparison of both empirical and physics-based simulations against a set of 10 events in the sequence clearly illustrates the potential for simulations to improve ground motion and site response prediction, both at present, and further in the future.
In this paper, the characteristics of near-fault ground motions recorded during the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes are examined and compared with existing empirical models. The characteristics of forward-directivity effects are first examined using a wavelet-based pulse-classification algorithm. This is followed by an assessment of the adequacy of empirical models which aim to capture the effects of directivity on amplifying the acceleration response spectra, and on the period and peak velocity of the forward-directivity pulse. It is illustrated that broadband directivity models developed by Somerville et al. (1997) and Abrahamson (2000) generally under-predict the observed amplification of response spectral ordinates at longer vibration periods. In contrast, a recently developed narrowband model by Shahi and Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods surrounding the directivity pulse period. Although the empirical predictions of the pulse period are generally favourable for the Christchurch earthquake, the observations from the Darfield earthquake are significantly under-predicted. The elongation in observed pulse periods is inferred as being a result of the soft sedimentary soils of the Canterbury basin. However, empirical predictions of the observed peak velocity associated with the directivity pulse are generally adequate for both events.
As a consequence of the 2010-2011 Canterbury earthquake sequence, Christchurch experienced widespread liquefaction, vertical settlement and lateral spreading. These geological processes caused extensive damage to both housing and infrastructure, and substantially increased the need for geotechnical investigation. Cone Penetration Testing (CPT) has become the most common method for liquefaction assessment in Christchurch, and issues have been identified with the soil behaviour type, liquefaction potential and vertical settlement estimates, particularly in the north-western suburbs of Christchurch where soils consist mostly of silts, clayey silts and silty clays. The CPT soil behaviour type often appears to over-estimate the fines content within a soil, while the liquefaction potential and vertical settlement are often calculated higher than those measured after the Canterbury earthquake sequence. To investigate these issues, laboratory work was carried out on three adjacent CPT/borehole pairs from the Groynes Park subdivision in northern Christchurch. Boreholes were logged according to NZGS standards, separated into stratigraphic layers, and laboratory tests were conducted on representative samples. Comparison of these results with the CPT soil behaviour types provided valuable information: on average, 62% of soils at the Groynes Park subdivision were specified by the CPT as finer than what was actually present, 20% were specified as coarser than what was actually present, and only 18% were correctly classified by the CPT. Hence the CPT soil behaviour type does not accurately describe the stratigraphic profile at the Groynes Park subdivision, and it is understood that this is also the case in much of northwest Christchurch where similar soils are found.
The computer software CLiq, by GeoLogismiki, uses assessment parameter constants which can be adjusted for each CPT file in an attempt to make each analysis more accurate. These parameter changes can in some cases substantially alter the results of the liquefaction analysis. The sensitivity of the overall assessment method to raising and lowering the water table, lowering the soil behaviour type index (Ic) liquefaction cutoff value, the layer detection option, and the weighting factor option was analysed by comparison with a set of ‘base settings’. The investigation confirmed that liquefaction analysis results can be very sensitive to the parameters selected, and demonstrated the dependency of the soil behaviour type on the soil behaviour type index, as the tested assessment parameters made little to no change to the soil behaviour type plots. The soil behaviour type index, Ic, developed by Robertson and Wride (1998), defines a soil’s behaviour type according to a set of numerical boundaries. In addition, the liquefaction cutoff point is defined as Ic > 2.6, whereby it is assumed that any soil with an Ic value above this will not liquefy due to clay-like tendencies (Robertson and Wride, 1998). This method has been identified in this thesis as potentially unsuitable for some areas of Christchurch, as it was developed for mostly sandy soils. An alternative methodology involving adjustment of the Robertson and Wride (1998) soil behaviour type boundaries is proposed as follows:
Ic < 1.31 – Gravelly sand to dense sand
1.31 < Ic < 1.90 – Sands: clean sand to silty sand
1.90 < Ic < 2.50 – Sand mixtures: silty sand to sandy silt
2.50 < Ic < 3.20 – Silt mixtures: clayey silt to silty clay
3.20 < Ic < 3.60 – Clays: silty clay to clay
Ic > 3.60 – Organic soils: peats.
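The proposed boundary scheme lends itself to a direct lookup; a minimal sketch (function names are illustrative, not from CLiq; the 2.5 cutoff is the thesis's adjustment of the Robertson and Wride (1998) value of 2.6):

```python
def soil_behaviour_type(ic):
    """Classify soil behaviour type from the CPT index Ic using the
    adjusted boundaries proposed in this thesis (after Robertson and
    Wride, 1998)."""
    if ic < 1.31:
        return "Gravelly sand to dense sand"
    elif ic < 1.90:
        return "Sands: clean sand to silty sand"
    elif ic < 2.50:
        return "Sand mixtures: silty sand to sandy silt"
    elif ic < 3.20:
        return "Silt mixtures: clayey silt to silty clay"
    elif ic < 3.60:
        return "Clays: silty clay to clay"
    return "Organic soils: peats"

def may_liquefy(ic, cutoff=2.5):
    """Liquefaction screening: soils with Ic above the cutoff are
    assumed too clay-like to liquefy."""
    return ic <= cutoff

print(soil_behaviour_type(2.2))  # Sand mixtures: silty sand to sandy silt
print(may_liquefy(2.55))         # False under the proposed 2.5 cutoff
```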
When the soil behaviour type boundary changes were applied to 15 test sites throughout Christchurch, 67% showed an improved soil behaviour type classification, while the remaining 33% were unchanged because they consisted almost entirely of sand. As part of these boundary changes, the liquefaction cutoff point was moved from Ic > 2.6 to Ic > 2.5, which altered the liquefaction potential and vertical settlement to more realistic values. This confirmed that the overall soil behaviour type boundary changes appear both to resolve the soil behaviour type issues and to reduce the overestimation of liquefaction potential and vertical settlement. This thesis acts as a starting point for research into the issues discussed. In particular, useful future work includes investigation of the CLiq assessment parameter adjustments, identifying those most suitable for use in clay-rich soils such as those in Christchurch, and consideration of how the water table can be better assessed when perched layers of water exist, given the limitation that only one elevation can be entered into CLiq. Additionally, a useful investigation would be a comparison of the known liquefaction and settlements from the Canterbury earthquake sequence with the liquefaction and settlement potentials calculated in CLiq for equivalent shaking conditions. This would enable the difference between the two to be accurately defined and a suitable adjustment applied. Finally, inconsistencies between the Laser-Sizer and Hydrometer should be investigated, as the Laser-Sizer under-estimated the fines content by up to one third of the Hydrometer values.
The Canterbury Earthquake Sequence (CES) induced extensive damage in residential buildings and led to over NZ$40 billion in total economic losses. Due to the unique insurance setting in New Zealand, up to 80% of the financial losses were insured. Over the CES, the Earthquake Commission (EQC) received more than 412,000 insurance claims for residential buildings. The 4 September 2010 earthquake is the event for which most of the claims were lodged, with more than 138,000 residential claims for this event alone. This research project uses the EQC claim database to develop a seismic loss prediction model for residential buildings in Christchurch. It uses machine learning to create a procedure capable of highlighting the critical features that most affected building loss. A further study of those features enables the generation of insights that can be used by various stakeholders, for example, to better understand the influence of a structural system on the building loss or to select appropriate risk mitigation measures. Prior to training the machine learning model, the claim dataset was supplemented with additional data sourced from private and open-access databases, giving complementary information related to the building characteristics, seismic demand, liquefaction occurrence and soil conditions. This poster presents the results of a machine learning model trained on a merged dataset using residential claims from the 4 September 2010 event.
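The dataset supplementation described above amounts to a key-based join of site attributes onto each claim record before training. A minimal sketch with invented field names and values (not the actual EQC claim schema):

```python
# Illustrative claim records and supplementary site attributes;
# all keys and values are made up for the sketch.
claims = [
    {"property_id": "A1", "loss_nzd": 45_000},
    {"property_id": "B2", "loss_nzd": 120_000},
]
site_data = {
    "A1": {"pga_g": 0.25, "liquefaction": False, "soil_class": "D"},
    "B2": {"pga_g": 0.42, "liquefaction": True, "soil_class": "E"},
}

def merge_claims(claims, site_data):
    """Left-join supplementary site attributes onto each claim record,
    producing the rows used to train the loss prediction model."""
    merged = []
    for claim in claims:
        row = dict(claim)  # copy so the source record is untouched
        row.update(site_data.get(row["property_id"], {}))
        merged.append(row)
    return merged

training_rows = merge_claims(claims, site_data)
print(training_rows[1]["liquefaction"])  # True
```

In practice such a join would be done with a dataframe library keyed on a common property identifier; the sketch only shows the shape of the merged training rows.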
Ground motion observations from the 10 most significant events in the 2010-2011 Canterbury earthquake sequence at near-source sites are utilized to scrutinize New Zealand (NZ)-specific pseudo-spectral acceleration (SA) empirical ground motion prediction equations (GMPEs) (Bradley 2010, Bradley 2013, McVerry et al. 2006). Region-specific modification factors, based on relaxing the conventional ergodic assumption in GMPE development, were developed for the Bradley (2010) model. Because of the observed biases with magnitude and source-to-site distance for the McVerry et al. (2006) model, it is not possible to develop region-specific modification factors for it in a reliable manner. The theory of non-ergodic empirical ground motion prediction is then outlined and applied to this 10-event dataset to determine systematic effects in the between- and within-event residuals, which lead to modifications in the predicted median and standard deviation of the GMPE. By examining these systematic effects over sub-regions containing a total of 20 strong motion stations within the Canterbury area, modification factors for use in region-specific ground motion prediction are proposed. These modification factors, in particular, are suggested for use with the Bradley (2010) model in Canterbury-specific probabilistic seismic hazard analysis (PSHA) to develop revised design response spectra, particularly for long vibration periods.
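The partitioning of total residuals into between-event terms and within-event (site) terms that underlies the non-ergodic approach can be sketched as follows (residual values in ln-units are invented for illustration; a real analysis would use mixed-effects regression over the full dataset):

```python
from collections import defaultdict

# Illustrative records: (event, station, total residual of an
# observation relative to the GMPE median prediction).
records = [
    ("ev1", "stnA", 0.30), ("ev1", "stnB", 0.10), ("ev1", "stnC", 0.20),
    ("ev2", "stnA", 0.45), ("ev2", "stnB", -0.15), ("ev2", "stnC", 0.00),
]

def partition_residuals(records):
    """Split total residuals into a between-event term (the event
    mean) and within-event residuals; the mean within-event residual
    at a station estimates its systematic site term."""
    by_event = defaultdict(list)
    for ev, stn, r in records:
        by_event[ev].append((stn, r))
    event_term = {ev: sum(r for _, r in obs) / len(obs)
                  for ev, obs in by_event.items()}
    within = defaultdict(list)
    for ev, obs in by_event.items():
        for stn, r in obs:
            within[stn].append(r - event_term[ev])
    site_term = {stn: sum(rs) / len(rs) for stn, rs in within.items()}
    return event_term, site_term

event_term, site_term = partition_residuals(records)
print(event_term["ev1"])  # event mean (≈0.20 here)
print(site_term["stnA"])  # positive: systematic under-prediction there
```

A persistent positive or negative site term, as in this toy case, is exactly the kind of systematic effect the region-specific modification factors are built from.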
© 2017 The Royal Society of New Zealand. This paper discusses simulated ground motion intensity, and its underlying modelling assumptions, for great earthquakes on the Alpine Fault. The simulations utilise the latest understanding of wave propagation physics, kinematic earthquake rupture descriptions and the three-dimensional nature of the Earth's crust in the South Island of New Zealand. The effect of hypocentre location is explicitly examined, which is found to lead to significant differences in ground motion intensities (quantified in the form of peak ground velocity, PGV) over the northern half and southwest of the South Island. Comparison with previously adopted empirical ground motion models also illustrates that the simulations, which explicitly model rupture directivity and basin-generated surface waves, lead to notably larger PGV amplitudes than the empirical predictions in the northern half of the South Island and Canterbury. The simulations performed in this paper have been adopted, as one possible ground motion prediction, in the ‘Project AF8’ Civil Defence Emergency Management exercise scenario. The similarity of the modelled ground motion features with those observed in recent worldwide earthquakes as well as similar simulations in other regions, and the notably higher simulated amplitudes than those from empirical predictions, may warrant a re-examination of regional impact assessments for major Alpine Fault earthquakes.
The 2010 Darfield and 2011 Christchurch Earthquakes triggered extensive liquefaction-induced lateral spreading proximate to streams and rivers in the Christchurch area, causing significant damage to structures and lifelines. A case study in central Christchurch is presented which compares field observations with predicted displacements from the widely adopted empirical model of Youd et al. (2002). Cone penetration testing (CPT), with measured soil gradation indices (fines content and median grain size) on typical fluvial deposits along the Avon River, was used to determine the required geotechnical parameters for the model input. The method presented attempts to enable the adoption of the extensive post-quake CPT records in place of the lower quality and less available Standard Penetration Test (SPT) data required by the original Youd model. The results indicate some agreement between the Youd model predictions and the field observations, while the majority of computed displacements err on the side of over-prediction by more than a factor of two. A sensitivity analysis was performed with respect to the uncertainties in the model input, illustrating the model’s high sensitivity to the input parameters, with median grain size and fines content among the most influential, and suggesting that the use of CPT data to quantify these parameters may lead to variable results.
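The one-at-a-time sensitivity analysis described above can be sketched as follows; the displacement function is a hypothetical placeholder with plausible qualitative trends (displacement decreasing with fines content, increasing with ground slope), NOT the Youd et al. (2002) equation, and all base values are invented:

```python
import math

def displacement_model(fines_pct, d50_mm, slope_pct):
    """Placeholder lateral-spread displacement model (metres); a
    stand-in so the sensitivity machinery below has something to
    exercise, not the actual Youd et al. (2002) regression."""
    return max(0.0, 2.0 * slope_pct * math.exp(-0.08 * fines_pct) * d50_mm ** -0.2)

base = {"fines_pct": 10.0, "d50_mm": 0.25, "slope_pct": 1.5}

def one_at_a_time(model, base, rel_change=0.25):
    """Vary each input by +/- rel_change (here 25%) while holding the
    others at their base values; return (low, base, high) displacement
    for each parameter."""
    d0 = model(**base)
    out = {}
    for key in base:
        lo = dict(base); lo[key] = base[key] * (1.0 - rel_change)
        hi = dict(base); hi[key] = base[key] * (1.0 + rel_change)
        out[key] = (model(**lo), d0, model(**hi))
    return out

results = one_at_a_time(displacement_model, base)
for key, (d_lo, d0, d_hi) in results.items():
    print(key, round(d_lo, 2), round(d0, 2), round(d_hi, 2))
```

Ranking the parameters by the spread between their low and high displacements reproduces the kind of influence ordering reported in the paper, with grain-size and fines-content terms dominating when their exponents are large.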
The recent instances of seismic activity in Canterbury (2010/11) and Kaikōura (2016) in New Zealand have exposed an unexpected level of damage to non-structural components, such as buried pipelines and building envelope systems. The cost of broken buried infrastructure, such as pipeline systems, to the Christchurch Council was excessive, as was the cost of repairing building envelopes to building owners in both Christchurch and Wellington (due to the Kaikōura earthquake), which indicates there are problems with compliance pathways for both of these systems. Councils rely on product testing and robust engineering design practices to provide compliance certification on the suitability of product systems, while asset and building owners rely on the compliance as proof of an acceptable design. In addition, forensic engineers and lifeline analysts rely on the same product testing and design techniques to analyse earthquake-related failures or predict future outcomes pre-earthquake, respectively. The aim of this research was to record field observations of seismic damage to buried pipeline and building envelope systems from the Canterbury and Kaikōura earthquakes, develop suitable testing protocols to test the systems’ seismic resilience, and produce prediction design tools that reflect the collected field observations with better accuracy than the present tools used by forensic engineers and lifeline analysts. The main research chapters of this thesis comprise four publications that describe the gathering of seismic damage to pipes (Publication 1 of 4) and building envelopes (Publication 2 of 4). Experimental testing and the development of prediction design tools for both systems are described in Publications 3 and 4.
The field observations (discussed in Publication 1 of 4) revealed that segmented pipe joints, such as those used in thick-walled PVC pipes, were particularly unsatisfactory with respect to the joint’s seismic resilience capabilities. Once the joint was damaged, silt and other deleterious material were able to penetrate the pipeline, causing blockages and the shutdown of key infrastructure services. At present, the governing Standards for PVC pipes are AS/NZS 1477 (pressure systems) and AS/NZS 1260 (gravity systems), which do not include a protocol for evaluating PVC pipe joints for seismic resilience. Testing methodologies were designed to test a PVC pipe joint under various simultaneously applied axial and transverse loads (discussed in Publication 3 of 4). The goal of the laboratory experiment was to establish an easy-to-apply testing protocol that could fill the void in the mentioned Standards and produce boundary data that could be used to develop a design tool able to predict the observed failures given the site-specific conditions surrounding the pipe. A tremendous amount of building envelope glazing system damage was recorded in the CBDs of both Christchurch and Wellington, including gasket dislodgement, cracked glazing, and dislodged glazing. The observational research (Publication 2 of 4) concluded that the glazing systems were a good indication of building envelope damage, as the glazing had consistent breaking characteristics, like a ballistic fuse used in forensic blast analysis. The compliance testing protocol recognised in the New Zealand Building Code, Verification Method E2/VM1, relies on the testing method from the Standard AS/NZS 4284 and stipulates the inclusion of typical penetrations, such as glazing systems, in the test specimen.
Some of the building envelope systems that failed in the recent New Zealand earthquakes were assessed with glazing systems using either the AS/NZS 4284 or E2/VM1 methods and still failed unexpectedly, which suggests that improvements to the testing protocols are required. An experiment was designed to mimic the observed earthquake damage using bi-directional loading (discussed in Publication 4 of 4) and to identify improvements to the current testing protocol. In a similar way to pipes, the observational and test data was then used to develop a design prediction tool. For both pipes (Publication 3 of 4) and glazing systems (Publication 4 of 4), experimentation suggests that modifying the existing testing Standards would yield more realistic earthquake damage results. The research indicates that including a specific joint testing regime for pipes and positioning the glazing system in a specific location in the specimen would improve the relevant Standards with respect to seismic resilience of these systems. Improving seismic resilience in pipe joints and glazing systems would improve existing Council compliance pathways, which would potentially reduce the liability of damage claims against the government after an earthquake event. The developed design prediction tool, for both pipe and glazing systems, uses local data specific to the system being scrutinised, such as local geology, dimensional characteristics of the system, actual or predicted peak ground accelerations (both vertically and horizontally) and results of product-specific bi-directional testing. The design prediction tools would improve the accuracy of existing techniques used by forensic engineers examining the cause of failure after an earthquake and for lifeline analysts examining predictive earthquake damage scenarios.
Since the early 1980s, seismic hazard assessment in New Zealand has been based on Probabilistic Seismic Hazard Analysis (PSHA). The most recent version of the New Zealand National Seismic Hazard Model, a PSHA model, was published by Stirling et al. (2012). This model follows standard PSHA principles and combines a nation-wide model of active faults with a gridded point-source model based on the earthquake catalogue since 1840. These models are coupled with the ground-motion prediction equation of McVerry et al. (2006). Additionally, we developed a time-dependent, clustering-based PSHA model for the Canterbury region (Gerstenberger et al., 2014) in response to the Canterbury earthquake sequence. We are now in the process of revising the national model. In doing so, we are re-examining several of the fundamental assumptions of traditional PSHA and of how we have modelled hazard in the past. The project has three main focus areas: 1) designing an optimal combination of multiple sources of information to produce the best forecast of earthquake rates in the next 50 years: can we improve upon a simple hybrid of fault sources and background sources, and can we better handle the uncertainties in the data and models (e.g., fault segmentation, frequency-magnitude distributions, time-dependence and clustering, low strain-rate areas, and subduction zone modelling)? 2) developing revised and new ground-motion prediction models that better capture epistemic uncertainty, a key element of which is compiling a new strong ground motion catalogue for model development; and 3) quantifying whether the changes we have made to our modelling are truly improvements. Throughout this process we are working toward incorporating numerical results from physics-based synthetic seismicity and ground-motion models.
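The standard PSHA combination of source rates with a ground-motion model can be sketched as follows; the source rates, median intensities, and dispersions below are purely illustrative, not values from the NSHM:

```python
import math

def p_exceed(im, median, beta):
    """P(IM > im) for a lognormal ground-motion model with the given
    median and logarithmic standard deviation (beta)."""
    z = (math.log(im) - math.log(median)) / beta
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical sources: (annual rate, median PGA in g at the site, beta)
sources = [
    (0.01, 0.30, 0.6),  # nearby fault source
    (0.10, 0.08, 0.6),  # distributed 'background' seismicity
]

def hazard(im):
    """Annual rate of exceeding im: sum of rate * P(exceed) over sources."""
    return sum(nu * p_exceed(im, med, beta) for nu, med, beta in sources)

lam = hazard(0.2)               # annual rate of exceeding PGA = 0.2 g
p50 = 1.0 - math.exp(-lam * 50)  # Poissonian probability in 50 years
```

The resulting hazard curve decreases monotonically with intensity, and the 50-year exceedance probability follows from the usual Poisson assumption.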
This thesis presents the application of data science techniques, especially machine learning, to the development of seismic damage and loss prediction models for residential buildings. Current post-earthquake building damage evaluation forms are developed with a particular country in mind. This lack of consistency hinders comparison of building damage between different regions. A new paper form has been developed to address the need for a globally universal methodology for post-earthquake building damage assessment. The form was successfully trialled on the street ‘La Morena’ in Mexico City following the 2017 Puebla earthquake. Aside from developing a framework for better input data for performance-based earthquake engineering, this project also extended current techniques for deriving insights from post-earthquake observations. Machine learning (ML) was applied to seismic damage data for residential buildings in Mexico City following the 2017 Puebla earthquake and in Christchurch following the 2010-2011 Canterbury earthquake sequence (CES). The experience showed that it is readily possible to develop purely data-driven empirical models that can successfully identify key damage drivers and hidden underlying correlations without prior engineering knowledge. With adequate maintenance, such models can be rapidly and easily updated, allowing improved damage and loss prediction accuracy and greater ability to generalise. Of the ML models developed for the key events of the CES, the model trained using data from the 22 February 2011 event generalised best for loss prediction. This is thought to be because of the large number of instances available for this event and the relatively limited class imbalance between the categories of the target attribute.
For the CES, ML highlighted the importance of peak ground acceleration (PGA), building age, building size, liquefaction occurrence, and soil conditions as the main factors affecting losses in residential buildings in Christchurch. ML also highlighted the influence of liquefaction on building losses in the 22 February 2011 event. Beyond model development, the application of post-hoc methodologies was shown to be an effective way to derive insights from ML algorithms that are not intrinsically interpretable. Overall, these provide a basis for the development of ‘greybox’ ML models.
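As a minimal illustration of such post-hoc interpretability, the following sketch computes permutation importance against a stand-in classifier; the features, decision rule, and data are all hypothetical and are not the thesis's actual models:

```python
import random

# Hypothetical toy dataset: features [pga, building_age, liquefaction] -> damage class
random.seed(0)
def make_row():
    pga, age, liq = random.random(), random.random(), random.randint(0, 1)
    label = 1 if (pga > 0.5 or liq == 1) else 0  # true drivers: pga and liquefaction
    return [pga, age, liq], label

data = [make_row() for _ in range(500)]

def model(x):
    """Stand-in for a trained ML classifier (hypothetical rule)."""
    return 1 if (x[0] > 0.5 or x[2] == 1) else 0

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, j):
    """Accuracy drop when feature j is shuffled across instances."""
    shuffled = [x[j] for x, _ in rows]
    random.shuffle(shuffled)
    permuted = [(x[:j] + [v] + x[j + 1:], y) for (x, y), v in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)

scores = [permutation_importance(data, j) for j in range(3)]
```

Because the stand-in model never consults building age, shuffling that feature leaves accuracy unchanged, while shuffling PGA or liquefaction degrades it, which is exactly the kind of signal a post-hoc method surfaces.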
A major hazard accompanying earthquake shaking in areas of steep topography is the detachment of rocks from bedrock outcrops that subsequently slide, roll, or bounce downslope (i.e. rockfalls). The 2010-2011 Canterbury earthquake sequence caused recurrent and severe rockfall in parts of southern Christchurch. Coseismic rockfall caused five fatalities and significant infrastructural damage during the 2011 Mw 6.2 Christchurch earthquake. Here we examine a rockfall site in southern Christchurch in detail using geomorphic mapping, lidar analysis, geochronology (cosmogenic 3He dating, radiocarbon dating, optically stimulated luminescence (OSL) from quartz, infrared stimulated luminescence from K-feldspar), numerical modeling of rockfall boulder trajectories, and ground motion prediction equations (GMPEs). Rocks fell from the source cliff only in earthquakes with interpolated peak ground velocities exceeding ~10 cm/s; hundreds of smaller earthquakes did not produce rockfall. On the basis of empirical observations, GMPEs and age chronologies we attribute paleo-rockfalls to strong shaking in prehistoric earthquakes. We conclude that earthquake shaking of comparable intensity to the strongest contemporary earthquakes in Christchurch last occurred at this site approximately 5000 to 7000 years ago, and that in some settings, rockfall deposits provide useful proxies for past strong ground motions.
The seismic performance and parameter identification of the base-isolated Christchurch Women’s Hospital (CWH) building are investigated using seismic accelerations recorded during two large earthquakes in Christchurch. A four-degree-of-freedom shear model is applied to characterize the dynamic behaviour of the CWH building during these earthquakes. A modified Gauss-Newton method is employed to identify the equivalent stiffness and Rayleigh damping coefficients of the building. The identification method is first validated on a simulated example structure and then applied to the CWH building using recorded measurements from the Mw 6.0 and Mw 5.8 Christchurch earthquakes of December 23, 2011. The estimated and recorded responses for both earthquakes are compared, with cross-correlation coefficients and mean absolute percentage errors reported. The results indicate that the dynamic behaviour of the superstructure and base isolators remained essentially within the elastic range, and that the proposed linear shear model is sufficient for predicting the structural response of the CWH building during these events.
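The Gauss-Newton identification idea can be sketched on a simplified single-mode free-vibration model rather than the paper's four-degree-of-freedom formulation; the parameter values, starting guesses, and the forward-difference Jacobian are illustrative assumptions:

```python
import math

def model(t, c, w):
    """Free-vibration response of a damped SDOF system (normalised amplitude):
    decay rate c and circular frequency w."""
    return math.exp(-c * t) * math.cos(w * t)

# Synthetic 'recorded' data from assumed true parameters
c_true, w_true = 0.4, 6.0
ts = [0.02 * i for i in range(200)]
data = [model(t, c_true, w_true) for t in ts]

def gauss_newton(c, w, iters=30, h=1e-6):
    """Fit (c, w) to the data by iterating the Gauss-Newton normal equations."""
    for _ in range(iters):
        r = [model(t, c, w) - d for t, d in zip(ts, data)]
        # forward-difference Jacobian columns for c and w
        Jc = [(model(t, c + h, w) - model(t, c, w)) / h for t in ts]
        Jw = [(model(t, c, w + h) - model(t, c, w)) / h for t in ts]
        # solve (J^T J) dx = -J^T r for the 2x2 case
        a = sum(x * x for x in Jc)
        b = sum(x * y for x, y in zip(Jc, Jw))
        d2 = sum(y * y for y in Jw)
        g1 = -sum(x * e for x, e in zip(Jc, r))
        g2 = -sum(y * e for y, e in zip(Jw, r))
        det = a * d2 - b * b
        c += (d2 * g1 - b * g2) / det
        w += (a * g2 - b * g1) / det
    return c, w

c_est, w_est = gauss_newton(0.35, 5.9)
```

Starting close to the true values, the iteration recovers the decay rate and frequency of the noiseless synthetic record; the paper's modified variant additionally handles measurement noise and multiple storeys.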
The 2010–2011 Canterbury earthquake sequence began with the 4 September 2010, Mw7.1 Darfield earthquake and includes up to ten events that induced liquefaction. Most notably, widespread liquefaction was induced by the Darfield and Mw6.2 Christchurch earthquakes. The combination of well-documented liquefaction response during multiple events, densely recorded ground motions for the events, and detailed subsurface characterization provides an unprecedented opportunity to add well-documented case histories to the liquefaction database. This paper presents and applies 50 high-quality cone penetration test (CPT) liquefaction case histories to evaluate three commonly used, deterministic, CPT-based simplified liquefaction evaluation procedures. While all the procedures predicted the majority of the cases correctly, the procedure proposed by Idriss and Boulanger (2008) results in the lowest error index for the case histories analyzed, thus indicating better predictions of the observed liquefaction response.
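The cyclic-stress calculation underlying such "simplified" CPT-based procedures can be sketched as follows, using the NCEER (1997) linear approximation for the stress-reduction coefficient; the stresses, ground motion, and CRR value are hypothetical:

```python
def cyclic_stress_ratio(amax_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss 'simplified' CSR: 0.65 * (a_max/g) * (sigma_v/sigma'_v) * rd.
    rd uses the NCEER (1997) linear form, valid for depth <= 9.15 m."""
    rd = 1.0 - 0.00765 * depth_m
    return 0.65 * amax_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr):
    """FS < 1 indicates predicted liquefaction triggering."""
    return crr / csr

# Hypothetical case: 5 m depth, high water table (stresses in kPa)
csr = cyclic_stress_ratio(amax_g=0.35, sigma_v=90.0, sigma_v_eff=50.0, depth_m=5.0)
fs = factor_of_safety(crr=0.20, csr=csr)  # CRR from a CPT-based correlation
```

For this illustrative case the computed factor of safety falls below 1, i.e. triggering would be predicted; the procedures compared in the paper differ mainly in how CRR and the correction factors are evaluated.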
This paper provides a brief discussion of observed strong ground motions from the 14 November 2016 Mw7.8 Kaikoura earthquake. Specific attention is given to examining observations in the near-source region, where several ground motions exceeding 1.0g horizontal were recorded, as well as up to 2.7g in the vertical direction at one location. Ground motion response spectra in the near-source, North Canterbury, Marlborough and Wellington regions are also examined and compared with design levels. Observed spectral amplitudes are also compared with predictions from empirical and physics-based ground motion modelling.
Heathcote Valley school strong motion station (HVSC) consistently recorded ground motions of higher intensity than nearby stations during the 2010-2011 Canterbury earthquakes. For example, as shown in Figure 1, during the 22 February 2011 Christchurch earthquake, peak ground acceleration at HVSC reached 1.4 g (horizontal) and 2 g (vertical), the largest ever recorded in New Zealand. Strong amplification of ground motions is expected at Heathcote Valley due to: 1) the high impedance contrast at the soil-rock interface, and 2) the interference of incident and surface waves within the valley. However, both conventional empirical ground motion prediction equations (GMPEs) and physics-based large-scale ground motion simulations (with empirical site response) are ineffective in predicting such amplification due to their respective inherent limitations.
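A first-order sense of the impedance-contrast effect comes from the square root of the impedance ratio between bedrock and soil; the densities and velocities below are assumed round numbers, not measured HVSC values:

```python
import math

def impedance_amplification(rho_rock, vs_rock, rho_soil, vs_soil):
    """First-order amplification across an impedance contrast (from
    conservation of seismic energy): A = sqrt((rho_r*Vs_r)/(rho_s*Vs_s))."""
    return math.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# Assumed stiff volcanic rock (kg/m^3, m/s) over soft valley sediments
A = impedance_amplification(rho_rock=2600.0, vs_rock=1500.0,
                            rho_soil=1800.0, vs_soil=300.0)
```

Even this crude estimate gives amplification of roughly a factor of 2.5-3 for such a contrast, before any resonance or wave-interference effects within the valley are considered.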
Observations of out-of-plane (OOP) instability in the 2010 Chile earthquake and the 2011 Christchurch earthquake raised concerns about the current design provisions for structural walls. This mode of failure had previously been observed in the experimental response of some wall specimens subjected to in-plane loading. Consequently, the postulations proposed for predicting the limit states corresponding to OOP instability of rectangular walls are generally based on stability analysis under in-plane loading only. These approaches address the stability of a cracked wall section subjected to compression, treating the level of residual strain developed in the reinforcement as the parameter that prevents timely crack closure of the wall section and induces instability failure. The New Zealand code requirements addressing the OOP instability of structural walls are based on the assumptions used in the literature and on the analytical methods proposed for mathematical determination of the critical strain values. In this study, a parametric study is conducted using a numerical model capable of simulating OOP instability of rectangular walls, to evaluate the sensitivity of the OOP response to variation of the parameters identified as governing this failure mechanism. The effects of wall slenderness (unsupported height-to-thickness) ratio, longitudinal reinforcement ratio of the boundary regions, and wall length on the OOP response are evaluated. A clear trend was observed in the influence of these parameters on the initiation of OOP displacement, on the basis of which simple equations are proposed for predicting OOP instability in rectangular walls.
Semi-empirical models based on in-situ geotechnical tests have become the standard of practice for predicting soil liquefaction. Since the inception of the “simplified” cyclic-stress model in 1971, variants based on various in-situ tests have been developed, including the Cone Penetration Test (CPT). More recently, prediction models based solely on remotely-sensed data were developed. Similar to systems that provide automated content on earthquake impacts, these “geospatial” models aim to predict liquefaction for rapid response and loss estimation using readily available data. This data includes (i) common ground-motion intensity measures (e.g., PGA), which can either be provided in near-real-time following an earthquake, or predicted for a future event; and (ii) geospatial parameters derived from digital elevation models, which are used to infer characteristics of the subsurface relevant to liquefaction. However, the predictive capabilities of geospatial and geotechnical models have not been directly compared, which could elucidate techniques for improving the geospatial models and would provide a baseline for measuring improvements. Accordingly, this study assesses the relative efficacy of liquefaction models based on geospatial vs. CPT data using 9,908 case studies from the 2010-2016 Canterbury earthquakes. While the top-performing models are CPT-based, the geospatial models perform relatively well given their simplicity and low cost. Although further research is needed (e.g., to improve upon the performance of current models), the findings of this study suggest that geospatial models have the potential to provide valuable first-order predictions of liquefaction occurrence and consequence. Towards this end, performance assessments of geospatial vs. geotechnical models are ongoing for more than 20 additional global earthquakes.
1. Background and Objectives
This poster presents results from ground motion simulations of small-to-moderate magnitude (3.5≤Mw≤5.0) earthquake events in the Canterbury, New Zealand region using the Graves and Pitarka (2010, 2015) methodology. Subsequent investigation of systematic ground motion effects highlights the prediction bias in the simulations, which are also benchmarked against empirical ground motion models (e.g. Bradley (2013)). In this study, 144 earthquake ruptures, modelled as point sources, are considered, with 1924 quality-assured ground motions recorded across 45 strong motion stations throughout the Canterbury region, as shown in Figure 1. The majority of sources are Mw≥4.0 and have centroid depths (CD) of 10 km or shallower. Earthquake source descriptions were obtained from the GeoNet New Zealand earthquake catalogue. The ground motion simulations were performed within a computational domain of 140 km x 120 km x 46 km with a finite difference grid spacing of 0.1 km. The low-frequency (LF) simulations utilize the 3D Canterbury Velocity Model, while the high-frequency (HF) simulations utilize a generic regional 1D velocity model. In the LF simulations, a minimum shear wave velocity of 500 m/s is enforced, yielding a maximum frequency of 1.0 Hz.
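The stated 1.0 Hz limit follows directly from the minimum shear wave velocity and the grid spacing, assuming roughly five grid points per minimum wavelength (a common rule of thumb for finite-difference schemes):

```python
def max_resolved_frequency(vs_min, dx, points_per_wavelength=5):
    """Highest frequency a finite-difference grid can resolve:
    f_max = Vs_min / (n * dx), with n grid points per minimum wavelength."""
    return vs_min / (points_per_wavelength * dx)

# Values from the simulation setup above (dx = 0.1 km = 100 m)
f_max = max_resolved_frequency(vs_min=500.0, dx=100.0)  # 1.0 Hz
```

Raising the maximum frequency therefore requires either a finer grid (at substantial computational cost, which scales roughly with dx to the fourth power) or a higher enforced minimum velocity.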
The focus of the study presented herein is an assessment of the relative efficacy of recent Cone Penetration Test (CPT) and small-strain shear wave velocity (Vs) based variants of the simplified procedure. Towards this end, Receiver Operating Characteristic (ROC) analyses were performed on the CPT- and Vs-based procedures using the field case history databases from which the respective procedures were developed. The ROC analyses show that Factors of Safety (FS) against liquefaction computed using the most recent Vs-based simplified procedure are better able to separate the “liquefaction” from the “no liquefaction” case histories in the Vs liquefaction database than the CPT-based procedure is able to separate the “liquefaction” from the “no liquefaction” case histories in the CPT liquefaction database. However, this finding somewhat contradicts the assessed predictive capabilities of the CPT- and Vs-based procedures as quantified using select, high-quality liquefaction case histories from the 2010-2011 Canterbury, New Zealand, Earthquake Sequence (CES), wherein the CPT-based procedure was found to yield more accurate predictions. The dichotomy of these findings may result from the fact that different liquefaction field case history databases were used in the respective ROC analyses for Vs and CPT, while the same case histories were used to evaluate both the CPT- and Vs-based procedures.
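The ROC comparison reduces to computing an area under the curve (AUC) for each procedure's Factors of Safety; a minimal rank-based sketch, with hypothetical FS values, is:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen 'liquefaction' case scores higher
    than a randomly chosen 'no liquefaction' case (ties count 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical FS values; lower FS should accompany observed liquefaction,
# so each case is scored with -FS before computing AUC.
fs_liq = [0.6, 0.8, 0.9, 1.1]    # observed liquefaction
fs_noliq = [0.9, 1.2, 1.5, 2.0]  # no observed liquefaction
score = auc([-f for f in fs_liq], [-f for f in fs_noliq])
```

An AUC of 0.5 indicates no separation and 1.0 perfect separation; comparing such AUC values across the CPT and Vs databases is the essence of the analysis described above.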
Prediction of building collapse due to significant seismic motion is a principal objective of earthquake engineers, particularly after a major seismic event, when the structure is damaged and decisions may need to be made rapidly concerning the safe occupation of a building or surrounding areas. Traditional model-based pushover analyses are effective, but only if the structural properties are well understood, which is not the case after an event, when that information is most useful. This paper combines hysteresis loop analysis (HLA) structural health monitoring (SHM) and incremental dynamic analysis (IDA) methods to identify and then analyse the collapse capacity and probability of collapse for a specific structure, at any time, under a range of earthquake excitations to ensure robustness. This nonlinear dynamic analysis enables constant updating of building performance predictions following a given earthquake and subsequent events, which can cause difficult-to-identify deterioration of structural components and their resulting capacity, all of which is far more difficult to capture using static pushover analysis. The combined methods provide near real-time updating of the collapse fragility curves as events progress, thus quantifying the change in collapse probability or seismically induced losses very soon after an earthquake for decision-making. This combination of methods thus enables a novel, higher-resolution analysis of risk that was not previously available. The methods are not computationally expensive and require no validated numerical model, providing a relatively simple means of assessing collapse probability immediately post-event, when such speed can provide better information for critical decision-making.
Finally, the results also show a clear need to extend the area of SHM toward creating improved predictive models for analysis of subsequent events, where the Christchurch series of 2010–2011 had significant post-event aftershocks.
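A collapse fragility curve of the kind updated by the combined HLA/IDA approach is commonly parameterised as a lognormal distribution; the sketch below evaluates such a curve before and after a hypothetical capacity reduction, with the median capacities and dispersion being illustrative values only:

```python
import math

def collapse_fragility(im, theta, beta):
    """Lognormal collapse fragility: P(collapse | IM = im), with median
    collapse capacity theta and lognormal dispersion beta (both assumed
    here; in practice they would be fitted to IDA collapse points)."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A damaging event may reduce the identified capacity, shifting the
# fragility curve left (hypothetical numbers):
p_intact = collapse_fragility(im=0.5, theta=1.0, beta=0.4)
p_damaged = collapse_fragility(im=0.5, theta=0.7, beta=0.4)
```

Re-estimating theta and beta from monitoring data after each event is what allows the collapse probability at a given shaking level to be updated in near real time.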
In this paper, we perform hybrid broadband (0-10 Hz) ground motion simulations for the ten most significant events (Mw 4.7-7.1) in the 2010-2011 Canterbury earthquake sequence. Taking advantage of repeated recordings at the same stations, we validate our simulations against both recordings and an empirically developed ground motion prediction equation (GMPE). The simulations clearly capture the sedimentary basin amplification and rupture directivity effects. Quantitative comparisons of the simulations with both the recordings and the GMPE, as well as analyses of the total residuals (indicating model bias), show that the simulations outperform the empirical GMPE, especially at long periods. To scrutinize the ground motion variability, we partition the total residuals into different components. The total residual appears to be unbiased, and the use of a 3D velocity structure reduces the long-period systematic bias, particularly for stations located close to the Banks Peninsula volcanic area.
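The residual partitioning can be sketched, in its simplest form, as splitting each total residual (ln observed minus ln predicted) into a between-event term (the event mean) and a within-event remainder; the event labels and residual values below are hypothetical:

```python
# Total residuals grouped by event (hypothetical values, ln units)
residuals = {
    "event_A": [0.30, 0.10, 0.20],
    "event_B": [-0.20, -0.40, -0.30],
}

def partition(residuals_by_event):
    """Split residuals into between-event terms (per-event means) and
    within-event remainders (what is left at each station)."""
    between = {ev: sum(r) / len(r) for ev, r in residuals_by_event.items()}
    within = {ev: [x - between[ev] for x in r]
              for ev, r in residuals_by_event.items()}
    return between, within

between, within = partition(residuals)
# Overall model bias: mean of all total residuals
bias = (sum(sum(r) for r in residuals.values())
        / sum(len(r) for r in residuals.values()))
```

By construction, the within-event residuals for each event average to zero; a full analysis would use a mixed-effects regression, but the decomposition logic is the same.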
This paper presents a methodology by which both site-specific and spatially distributed ground motion intensity can be obtained immediately following an earthquake event. The methodology makes use of prediction models for both ground motion intensity and its correlation over spatial distances. A key benefit of the methodology is that the ground motion intensity at a given location is not a single value but a distribution of values. The distribution comprises a mean and a standard deviation, with the standard deviation being a function of the distance to nearby strong motion stations. The methodology is illustrated for two applications. Firstly, maps of conditional peak ground acceleration (PGA) have been developed for the major events in the Canterbury earthquake sequence. It is illustrated how these conditional maps can be used for post-event evaluation of liquefaction triggering criteria which have been adopted by the Department of Building and Housing (DBH). Secondly, the conditional distribution of response spectral ordinates is obtained at a specific location for the purposes of determining appropriate ground motion records for use in seismic response analyses of important structures at locations where direct recordings are absent.
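The conditioning step can be sketched for the simplest case of a single nearby recording and an assumed exponential spatial correlation model; all parameter values below are illustrative, not the paper's calibrated models:

```python
import math

def conditional_ln_pga(mu_site, mu_station, sigma, ln_obs, dist_km, range_km=30.0):
    """Conditional distribution of ln PGA at a site given one nearby
    recording, assuming a joint normal model with exponential spatial
    correlation rho(h) = exp(-3h / range). Returns (mean, std dev)."""
    rho = math.exp(-3.0 * dist_km / range_km)
    z = (ln_obs - mu_station) / sigma          # normalised residual at station
    mu_cond = mu_site + rho * sigma * z         # mean shifts toward observation
    sigma_cond = sigma * math.sqrt(1.0 - rho ** 2)  # uncertainty shrinks nearby
    return mu_cond, sigma_cond

# A station 2 km away recorded PGA one sigma above its predicted median:
mu_c, sig_c = conditional_ln_pga(mu_site=math.log(0.3), mu_station=math.log(0.3),
                                 sigma=0.6, ln_obs=math.log(0.3) + 0.6, dist_km=2.0)
```

As the distance to the station grows, the correlation decays, the conditional mean reverts to the unconditional prediction, and the conditional standard deviation returns to the full model value, which is exactly the distance-dependence of uncertainty described above.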