The 2010-2011 Canterbury earthquakes were recorded by a dense strong motion network in the near-source region, yielding significant observational evidence of seismic complexities and a basis for interpreting multi-disciplinary datasets and induced damage to the natural and built environment. This paper provides an overview of observed strong motions from these events and retrospective comparisons with both empirical and physics-based ground motion models. Both approaches provide good predictions of observations at short vibration periods in an average sense. However, observed ground motion amplitudes at specific locations, such as Heathcote Valley, systematically depart from ‘average’ empirical predictions as a result of near-surface stratigraphic and topographic features, which are well modelled via site-specific response analyses. Significant insight into the long-period bias in empirical predictions is obtained from the use of hybrid broadband ground motion simulation. The comparison of both empirical and physics-based simulations against a set of 10 events in the sequence clearly illustrates the potential for simulations to improve ground motion and site response prediction, both at present and increasingly in the future.
This paper examines the consistency of the seismicity and ground motion models used for seismic hazard analysis in New Zealand with observations from the Canterbury earthquakes. An overview is first given of seismicity and ground motion modelling as inputs to probabilistic seismic hazard analysis, whose results form the basis for the elastic response spectra in NZS1170.5:2004. The magnitudes of earthquakes in the Canterbury earthquake sequence are adequately allowed for in the current NZ seismicity model; however, the treatment of ‘background’ earthquakes as point sources at a minimum depth of 10 km results in up to a 60% underestimation of the ground motions that such events produce. The ground motion model used in conventional NZ seismic hazard analysis is shown to provide biased predictions of response spectra (over-prediction near T = 0.2 s, and under-prediction at moderate-to-large vibration periods). Improved ground motion prediction can be achieved using more recent NZ-specific models.
This presentation discusses recent empirical ground motion modelling efforts in New Zealand. Firstly, the active shallow crustal and subduction interface and slab ground motion prediction equations (GMPEs) employed in the 2010 update of the national seismic hazard model (NSHM) are discussed. Other NZ-specific GMPEs developed, but not incorporated in the 2010 update, are then discussed, in particular the active shallow crustal model of Bradley (2010). A brief comparison of the NZ-specific GMPEs with the near-source ground motions recorded in the Canterbury earthquakes is then presented, given that these recordings collectively provide a significant increase in observed strong motions in the NZ catalogue. The ground motion prediction expert elicitation process undertaken for active shallow crustal earthquakes following the Canterbury earthquakes is then discussed. Finally, ongoing GMPE-related activities are discussed, including: ground motion and metadata database refinement, improved site characterization of strong motion stations, and predictions for subduction zone earthquakes.
The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including evaluating the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) is first used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects from significant events in the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative ‘manual’ approach is proposed to ensure 'correct' pulse extraction, with the classification process also guided by examination of the horizontal velocity trajectory plots and the source-to-site geometry. Based on this analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the B07 algorithm developed by Shahi (2013) is subsequently utilised, but without any notable improvement in the pulse classification results. A series of three chapters is dedicated to assessing the capability of empirical models to predict: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquakes.
Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the most improved predictions relative to its predecessors. Pulse probability contour maps are developed to scrutinise observations of pulses/non-pulses against predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias in the residuals of all models at longer vibration periods (in the Mw 7.1 Darfield and Mw 6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and nonlinear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred to result from both nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw 6.0) and December (Mw 5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km. At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed.
The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
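The logistic-regression form that underlies pulse occurrence models of this kind can be sketched as follows. The predictor set and coefficient values below are hypothetical placeholders chosen for illustration only, not the fitted values of Shahi (2013) or any other published model.

```python
import math

def pulse_probability(r_rup_km, s_km, intercept=0.7, b_r=-0.14, b_s=0.05):
    """Illustrative logistic model for the probability of observing a
    forward-directivity pulse at a site, as a function of closest rupture
    distance (r_rup_km) and the length of rupture between the epicentre
    and the site (s_km). All coefficients are hypothetical placeholders."""
    z = intercept + b_r * r_rup_km + b_s * s_km
    return 1.0 / (1.0 + math.exp(-z))
```

Contour maps of the kind described above are then produced by evaluating such a function over a grid of source-to-site geometries.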
Background: Liquefaction-induced land damage has been identified in more than 13 notable New Zealand earthquakes within the past 150 years, as presented on the timeline below. Following the 2010-2011 Canterbury Earthquake Sequence (CES), the consequences of liquefaction were witnessed first-hand in the city of Christchurch, and as a result the demand for understanding this phenomenon was heightened. Government, local councils, insurers and many other stakeholders are now looking to research and understand their exposure to this natural hazard.
Natural catastrophes are increasing worldwide. They are becoming more frequent and also more severe in their impact on the built environment, leading to extensive damage and losses. Earthquakes account for the smallest share of natural catastrophe events; nevertheless, seismic damage caused the most fatalities and significant losses over the period 1981-2016 (Munich Re). Damage prediction is helpful for emergency management and the development of earthquake risk mitigation projects. Recent design efforts have focused on performance-based earthquake engineering, where damage estimation methodologies use fragility and vulnerability functions. However, this approach does not explicitly identify the essential criteria leading to economic losses. There is thus a need for an improved methodology that finds the critical building elements related to significant losses. The methodology presented here uses data science techniques to identify key building features that contribute to the bulk of losses. It uses empirical data collected on site during earthquake reconnaissance missions to train a machine learning model that can further be used for the estimation of building damage post-earthquake. The first model is developed for Christchurch. Empirical building damage data from the 2010-2011 earthquake events is analysed to find the building features that contributed most to damage. Once processed, the data is used to train a machine-learning model that can be applied to estimate losses in future earthquake events.
The Canterbury Earthquake Sequence (CES) induced extensive damage in residential buildings and led to over NZ$40 billion in total economic losses. Due to the unique insurance setting in New Zealand, up to 80% of the financial losses were insured. Over the CES, the Earthquake Commission (EQC) received more than 412,000 insurance claims for residential buildings. The 4 September 2010 earthquake is the event for which most claims were lodged, with more than 138,000 residential claims for that event alone. This research project uses the EQC claim database to develop a seismic loss prediction model for residential buildings in Christchurch. It uses machine learning to create a procedure capable of highlighting the critical features that most affected building losses. Further study of those features enables the generation of insights that can be used by various stakeholders, for example, to better understand the influence of a structural system on building loss or to select appropriate risk mitigation measures. Prior to training the machine learning model, the claim dataset was supplemented with additional data sourced from private and open-access databases, giving complementary information on building characteristics, seismic demand, liquefaction occurrence and soil conditions. This poster presents results of a machine learning model trained on a merged dataset of residential claims from the 4 September 2010 earthquake.
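As a minimal illustration of the kind of exploratory analysis that precedes model training, the sketch below groups claims by a single building feature and compares mean loss ratios per category. The feature names and values are invented for illustration; the actual procedure described above trains a machine-learning model on many merged features rather than relying on grouped means.

```python
from collections import defaultdict

# Hypothetical mini-dataset of residential claims: (wall_type, loss_ratio),
# where loss_ratio is repair cost divided by replacement value.
claims = [
    ("masonry", 0.42), ("masonry", 0.35), ("timber", 0.08),
    ("timber", 0.12), ("timber", 0.05), ("masonry", 0.50),
]

def mean_loss_by_feature(records):
    """Group claims by a building feature category and return the mean
    loss ratio per category -- a crude first look at which features
    align with high losses, before fitting a full model."""
    totals = defaultdict(lambda: [0.0, 0])
    for category, loss in records:
        totals[category][0] += loss
        totals[category][1] += 1
    return {k: s / n for k, (s, n) in totals.items()}
```

A trained model replaces these grouped means with predictions conditioned jointly on structural system, seismic demand, liquefaction occurrence and soil conditions.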
Since the early 1980s, seismic hazard assessment in New Zealand has been based on Probabilistic Seismic Hazard Analysis (PSHA). The most recent version of the New Zealand National Seismic Hazard Model, a PSHA model, was published by Stirling et al. in 2012. This model follows standard PSHA principles and combines a nation-wide model of active faults with a gridded point-source model based on the earthquake catalogue since 1840. These models are coupled with the ground-motion prediction equation of McVerry et al. (2006). Additionally, we have developed a time-dependent clustering-based PSHA model for the Canterbury region (Gerstenberger et al., 2014) in response to the Canterbury earthquake sequence. We are now in the process of revising the national model. In this process we are investigating several of the fundamental assumptions of traditional PSHA and of how we have modelled hazard in the past. The project has three main focuses: 1) how do we design an optimal combination of multiple sources of information to produce the best forecast of earthquake rates in the next 50 years: can we improve upon a simple hybrid of fault sources and background sources, and can we better handle the uncertainties in the data and models (e.g., fault segmentation, frequency-magnitude distributions, time-dependence and clustering, low strain-rate areas, and subduction zone modelling)? 2) developing revised and new ground-motion prediction models with better capture of epistemic uncertainty; a key focus in this work is developing a new strong ground motion catalogue for model development; and 3) how can we best quantify whether changes we have made in our modelling are truly improvements? Throughout this process we are working toward incorporating numerical modelling results from physics-based synthetic seismicity and ground-motion models.
This paper presents a critical evaluation of vertical ground motions observed in the Canterbury earthquake sequence. The abundance of strong near-source ground-motion recordings provides an opportunity to comprehensively review the estimation of vertical ground motions via the New Zealand Standard for earthquake loading, NZS1170.5:2004, and empirical ground motion prediction equations (GMPEs). An in-depth review of current GMPEs is carried out to determine the existing trends and characteristics present in the empirical models. Results illustrate that vertical ground motion amplitudes estimated based on NZS1170.5:2004 are significantly unconservative at short periods and near-source distances. While conventional GMPEs provide an improved prediction, in many instances they too underpredict vertical ground motion accelerations at short periods and near-source distances.
In this paper, the characteristics of near-fault ground motions recorded during the Mw 7.1 Darfield and Mw 6.2 Christchurch earthquakes are examined and compared with existing empirical models. The characteristics of forward-directivity effects are first examined using a wavelet-based pulse-classification algorithm. This is followed by an assessment of the adequacy of empirical models which aim to capture the influence of directivity on amplifying the acceleration response spectra, and on the period and peak velocity of the forward-directivity pulse. It is illustrated that the broadband directivity models developed by Somerville et al. (1997) and Abrahamson (2000) generally under-predict the observed amplification of response spectral ordinates at longer vibration periods. In contrast, a recently developed narrowband model by Shahi and Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods surrounding the directivity pulse period. Although the empirical predictions of the pulse period are generally favourable for the Christchurch earthquake, the observations from the Darfield earthquake are significantly under-predicted. The elongation in observed pulse periods is inferred to result from the soft sedimentary soils of the Canterbury basin. However, empirical predictions of the observed peak velocity associated with the directivity pulse are generally adequate for both events.
Ground motion observations from the 10 most significant events in the 2010-2011 Canterbury earthquake sequence at near-source sites are utilized to scrutinize New Zealand (NZ)-specific pseudo-spectral acceleration (SA) empirical ground motion prediction equations (GMPEs) (Bradley 2010, Bradley 2013, McVerry et al. 2006). Region-specific modification factors, based on relaxing the conventional ergodic assumption in GMPE development, were developed for the Bradley (2010) model. Because of the observed biases with magnitude and source-to-site distance for the McVerry et al. (2006) model, it is not possible to develop region-specific modification factors for it in a reliable manner. The theory of non-ergodic empirical ground motion prediction is then outlined and applied to this 10-event dataset to determine systematic effects in the between- and within-event residuals, which lead to modifications in the predicted median and standard deviation of the GMPE. By examining these systematic effects over sub-regions containing a total of 20 strong motion stations within the Canterbury area, modification factors for use in region-specific ground motion prediction are proposed. These modification factors, in particular, are suggested for use with the Bradley (2010) model in Canterbury-specific probabilistic seismic hazard analysis (PSHA) to develop revised design response spectra, particularly for long vibration periods.
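The core of such a non-ergodic adjustment, estimating a systematic (repeatable) site term as the mean within-event residual at each station pooled over events, can be sketched as below. Station codes and residual values here are illustrative only, and the full methodology also partitions between-event and sub-region terms.

```python
from collections import defaultdict
from statistics import mean

def systematic_site_terms(residuals):
    """residuals: iterable of (station, within_event_residual) pairs pooled
    over multiple events. The mean residual per station estimates a
    systematic site term that can be removed from the GMPE median, with
    the remaining scatter giving a reduced single-site standard deviation."""
    by_station = defaultdict(list)
    for station, r in residuals:
        by_station[station].append(r)
    return {s: mean(rs) for s, rs in by_station.items()}
```

A station with a persistently positive mean residual (e.g. a soft-soil valley site) would receive a positive median modification factor in region-specific prediction.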
Despite over a century of study, the relationship between lunar cycles and earthquakes remains controversial and difficult to quantitatively investigate. Perhaps as a consequence, major earthquakes around the globe are frequently followed by 'prediction' claims, using lunar cycles, that generate media furore and pressure scientists to provide resolute answers. The 2010-2011 Canterbury earthquakes in New Zealand were no exception; significant media attention was given to lunar-derived earthquake predictions by non-scientists, even though the predictions were merely 'opinions' and were not based on any statistically robust temporal or causal relationships. This thesis provides a framework for studying lunisolar earthquake temporal relationships by developing replicable statistical methodology based on peer-reviewed literature. Notable in the methodology is a high-accuracy ephemeris, called ECLPSE, designed specifically by the author for use on earthquake catalogs, and a model for performing phase angle analysis. The statistical tests were carried out on two 'declustered' seismic catalogs, one containing the aftershocks from the Mw 7.1 earthquake in Canterbury, and the other containing Australian seismicity from the past two decades. Australia is an intraplate setting far removed from active plate boundaries, whereas Canterbury is proximal to a plate boundary, thus allowing for comparison based on tectonic regime and corresponding tectonic loading rate. No strong, conclusive statistical correlations were found at any level of the earthquake catalogs, considering large events, onshore events, offshore events, and the fault type of some events. This was concluded using Schuster's test of significance with α = 5% and analysis of standard deviations. A few weak correlations, with p-values of 5-10% for rejecting the null hypothesis, and anomalous standard deviations were found, but these are difficult to interpret.
The results invalidate the statistical robustness of 'earthquake predictions' using lunisolar parameters in this instance. An ambitious researcher could improve on the quality of the results and on the range of parameters analyzed. The conclusions of the thesis raise more questions than answers, but the thesis provides an adaptable methodology that can be used to further investigate the problem.
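Schuster's test, as used above, reduces to a simple calculation on the phase angles of events within the lunar or tidal cycle: the squared resultant length of the unit phase vectors yields a p-value for the null hypothesis of uniformly distributed phases. A minimal sketch:

```python
import math

def schuster_p_value(phase_angles_rad):
    """Schuster's test for non-uniformity of earthquake phase angles with
    respect to a periodic (e.g. lunisolar) cycle. With n events and
    resultant vector length D, p = exp(-D^2 / n); small p-values indicate
    that event phases cluster within the cycle."""
    n = len(phase_angles_rad)
    c = sum(math.cos(t) for t in phase_angles_rad)
    s = sum(math.sin(t) for t in phase_angles_rad)
    return math.exp(-(c * c + s * s) / n)
```

Phases spread evenly around the cycle give p near 1, while strongly clustered phases drive p toward 0; the thesis rejects the null hypothesis only when p falls below α = 5%.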
As a consequence of the 2010-2011 Canterbury earthquake sequence, Christchurch experienced widespread liquefaction, vertical settlement and lateral spreading. These geological processes caused extensive damage to both housing and infrastructure, and substantially increased the need for geotechnical investigation. Cone Penetration Testing (CPT) has become the most common method for liquefaction assessment in Christchurch, and issues have been identified with the soil behaviour type, liquefaction potential and vertical settlement estimates, particularly in the north-western suburbs of Christchurch where soils consist mostly of silts, clayey silts and silty clays. The CPT soil behaviour type often appears to over-estimate the fines content within a soil, while the calculated liquefaction potential and vertical settlement are often higher than those measured after the Canterbury earthquake sequence. To investigate these issues, laboratory work was carried out on three adjacent CPT/borehole pairs from the Groynes Park subdivision in northern Christchurch. Boreholes were logged according to NZGS standards, separated into stratigraphic layers, and laboratory tests were conducted on representative samples. Comparison of these results with the CPT soil behaviour types provided valuable information: on average, 62% of soils at the Groynes Park subdivision were specified by the CPT as finer than what was actually present, 20% were specified as coarser than what was actually present, and only 18% were correctly classified by the CPT. Hence the CPT soil behaviour type does not accurately describe the stratigraphic profile at the Groynes Park subdivision, and it is understood that this is also the case in much of northwest Christchurch where similar soils are found.
The computer software CLiq, by GeoLogismiki, uses assessment parameter constants which can be adjusted for each CPT file in an attempt to make each analysis more accurate. These parameter changes can in some cases substantially alter the results of a liquefaction analysis. The sensitivity of the overall assessment method to raising and lowering the water table, lowering the soil behaviour type index (Ic) liquefaction cutoff value, the layer detection option, and the weighting factor option was analysed by comparison with a set of ‘base settings’. The investigation confirmed that liquefaction analysis results can be very sensitive to the parameters selected, and demonstrated the dependency of the soil behaviour type on the soil behaviour type index, as the tested assessment parameters made little to no change to the soil behaviour type plots. The soil behaviour type index, Ic, developed by Robertson and Wride (1998), defines a soil's behaviour type according to a set of numerical boundaries. In addition, the liquefaction cutoff point is defined as Ic > 2.6, whereby it is assumed that any soil with an Ic value above this will not liquefy due to clay-like tendencies (Robertson and Wride, 1998). This method has been identified in this thesis as potentially unsuitable for some areas of Christchurch, as it was developed for mostly sandy soils. An alternative methodology involving adjustment of the Robertson and Wride (1998) soil behaviour type boundaries is proposed as follows: Ic < 1.31, gravelly sand to dense sand; 1.31 < Ic < 1.90, sands (clean sand to silty sand); 1.90 < Ic < 2.50, sand mixtures (silty sand to sandy silt); 2.50 < Ic < 3.20, silt mixtures (clayey silt to silty clay); 3.20 < Ic < 3.60, clays (silty clay to clay); Ic > 3.60, organic soils (peats).
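The proposed boundaries amount to a simple threshold classification on Ic, which can be sketched as a function; this sketch merely encodes the adjusted Robertson and Wride (1998) thresholds listed above.

```python
def soil_behaviour_type(ic):
    """Classify a soil by behaviour type index Ic using the adjusted
    Robertson & Wride (1998) boundaries proposed in this thesis.
    Under the revised scheme, soils with Ic > 2.5 are treated as
    non-liquefiable (clay-like behaviour)."""
    if ic < 1.31:
        return "Gravelly sand to dense sand"
    if ic < 1.90:
        return "Sands: clean sand to silty sand"
    if ic < 2.50:
        return "Sand mixtures: silty sand to sandy silt"
    if ic < 3.20:
        return "Silt mixtures: clayey silt to silty clay"
    if ic < 3.60:
        return "Clays: silty clay to clay"
    return "Organic soils: peats"
```

In practice such a function is applied point-by-point down a CPT sounding, with the liquefaction analysis then restricted to layers below the cutoff.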
When the soil behaviour type boundary changes were applied to 15 test sites throughout Christchurch, 67% showed an improved change of soil behaviour type, while the remaining 33% remained unchanged because they consisted almost entirely of sand. Within these boundary changes, the liquefaction cutoff point was moved from Ic > 2.6 to Ic > 2.5, which altered the liquefaction potential and vertical settlement to more realistic values. This confirmed that the overall soil behaviour type boundary changes appear to both resolve the soil behaviour type issues and reduce the overestimation of liquefaction potential and vertical settlement. This thesis acts as a starting point for research into the issues discussed. In particular, useful future work includes investigation of the CLiq assessment parameter adjustments, and of those which would be most suitable for use in clay-rich soils such as those in Christchurch; in particular, consideration of how the water table can be better assessed when perched layers of water exist, given the limitation that only one elevation can be entered into CLiq. Additionally, a useful investigation would be a comparison of the known liquefaction and settlements from the Canterbury earthquake sequence with the liquefaction and settlement potentials calculated in CLiq for equivalent shaking conditions. This would enable the difference between the two to be accurately defined and a suitable adjustment applied. Finally, inconsistencies between the Laser-Sizer and Hydrometer should be investigated, as the Laser-Sizer under-estimated the fines content by up to one third of the Hydrometer values.
This paper presents on-going challenges in the present paradigm shift in earthquake-induced ground motion prediction from empirical to physics-based simulation methods. The 2010-2011 Canterbury and 2016 Kaikoura earthquakes are used to illustrate the predictive potential of the different methods. On-going efforts in simulation validation and theoretical developments are then presented, as well as the demands associated with the need for explicit consideration of modelling uncertainties. Finally, discussion is also given to the tools and databases needed for the efficient utilization of simulated ground motions, both in specific engineering projects and for near-real-time impact assessment.
© 2017 The Royal Society of New Zealand. This paper discusses simulated ground motion intensity, and its underlying modelling assumptions, for great earthquakes on the Alpine Fault. The simulations utilise the latest understanding of wave propagation physics, kinematic earthquake rupture descriptions and the three-dimensional nature of the Earth's crust in the South Island of New Zealand. The effect of hypocentre location is explicitly examined, which is found to lead to significant differences in ground motion intensities (quantified in the form of peak ground velocity, PGV) over the northern half and southwest of the South Island. Comparison with previously adopted empirical ground motion models also illustrates that the simulations, which explicitly model rupture directivity and basin-generated surface waves, lead to notably larger PGV amplitudes than the empirical predictions in the northern half of the South Island and Canterbury. The simulations performed in this paper have been adopted, as one possible ground motion prediction, in the ‘Project AF8’ Civil Defence Emergency Management exercise scenario. The similarity of the modelled ground motion features with those observed in recent worldwide earthquakes as well as similar simulations in other regions, and the notably higher simulated amplitudes than those from empirical predictions, may warrant a re-examination of regional impact assessments for major Alpine Fault earthquakes.
The 2010-2011 Canterbury Earthquake Sequence (CES) induced widespread liquefaction in many parts of Christchurch city. Liquefaction was more commonly observed in the eastern suburbs and along the Avon River, where the soils are characterised by thick sandy deposits with a shallow water table. On the other hand, suburbs to the north, west and south of the CBD (e.g. Riccarton, Papanui) exhibited less severe to no liquefaction. These soils are more commonly characterised by inter-layered liquefiable and non-liquefiable deposits. As part of a related large-scale study of the performance of Christchurch soils during the CES, detailed borehole data including CPT, Vs and Vp have been collected for 55 sites in Christchurch. For this subset of Christchurch sites, predictions of liquefaction triggering using the simplified method (Boulanger & Idriss, 2014) indicated that liquefaction was over-predicted for 94% of sites that did not manifest liquefaction during the CES, and under-predicted for 50% of sites that did manifest liquefaction. The focus of this study was to investigate these discrepancies between prediction and observation: to assess whether they were due to soil-layer interaction, and to determine the effect that soil stratification has on the development of liquefaction and the system response of soil deposits.
The 2010 Darfield and 2011 Christchurch earthquakes triggered extensive liquefaction-induced lateral spreading proximate to streams and rivers in the Christchurch area, causing significant damage to structures and lifelines. A case study in central Christchurch is presented, comparing field observations with displacements predicted by the widely adopted empirical model of Youd et al. (2002). Cone penetration testing (CPT), with measured soil gradation indices (fines content and median grain size) on typical fluvial deposits along the Avon River, was used to determine the required geotechnical parameters for the model input. The method presented attempts to enable the adoption of the extensive post-earthquake CPT records in place of the lower quality and less available Standard Penetration Test (SPT) data required by the original Youd model. The results indicate some agreement between the Youd model predictions and the field observations, although the majority of computed displacements err on the side of over-prediction by more than a factor of two. A sensitivity analysis was performed with respect to the uncertainties in the model inputs, illustrating the model's high sensitivity to the input parameters, with median grain size and fines content among the most influential, and suggesting that the use of CPT data to quantify these parameters may lead to variable results.
The overarching goal of this dissertation is to improve the predictive capabilities of geotechnical seismic site response analyses by incorporating additional salient physical phenomena that influence site effects. Specifically, multidimensional wave-propagation effects that are neglected in conventional 1D site response analyses are incorporated by: (1) combining results of 3D regional-scale simulations with 1D nonlinear wave-propagation site response analysis, and (2) modelling soil heterogeneity in 2D site response analyses using spatially-correlated random fields to perturb soil properties. A method to combine results from 3D hybrid physics-based ground motion simulations with site-specific nonlinear site response analyses was developed. The 3D simulations capture 3D ground motion phenomena on a regional scale, while the 1D nonlinear site response, which is informed by detailed site-specific soil characterization data, can capture site effects more rigorously. Simulations of 11 moderate-to-large earthquakes from the 2010-2011 Canterbury Earthquake Sequence (CES) at 20 strong motion stations (SMS) were used to validate the simulations against observed ground motions. The predictions were compared to those from an empirically-based ground motion model (GMM), and from 3D simulations with simplified VS30-based site effects modelling. By comparing all predictions to observations at seismic recording stations, it was found that the 3D physics-based simulations can predict ground motions with comparable bias and uncertainty to the GMM, albeit with significantly lower bias at long periods. Additionally, the explicit modelling of nonlinear site response improves predictions significantly compared to the simplified VS30-based approach for soft-soil or atypical sites that exhibit exceptionally strong site effects.
A method to account for the spatial variability of soils and wave scattering in 2D site response analyses was developed and validated against a database of vertical array sites in California. The inputs required to run the 2D analyses are nominally the same as those required for 1D analyses (except for spatial correlation parameters), enabling easier adoption in practice. The first step was to create the platform and workflow, and to perform a sensitivity study involving 5,400 2D model realizations to investigate the influence of random field input parameters on wave scattering and site response. Boundary conditions were carefully assessed to understand their effect on the modelled response and to select appropriate assumptions for use in a 2D model with lateral heterogeneities. Multiple ground-motion intensity measures (IMs) were analyzed to quantify the influence of random field input parameters and boundary conditions. It was found that this method is capable of scattering seismic waves and creating spatially-varying ground motions at the ground surface. The redistribution of ground-motion energy across wider frequency bands, and the scattering attenuation of high-frequency waves in 2D analyses, resemble features observed in empirical transfer functions (ETFs) computed in other studies. The developed 2D method was subsequently extended to more complicated multi-layer soil profiles and applied to a database of 21 vertical array sites in California to test its appropriateness for future predictions. Again, different boundary condition and input motion assumptions were explored to extend the method to the in-situ conditions of a vertical array (with a sensor embedded in the soil). ETFs were compared to theoretical transfer functions (TTFs) from conventional 1D analyses and from 2D analyses with heterogeneity. Residuals of transfer-function-based IMs, and IMs of surface ground motions, were also used as validation metrics.
The spatial variability of transfer-function-based IMs was estimated from 2D models and compared to the event-to-event variability from ETFs. This method was found capable of significantly improving predictions of median ETF amplification factors, especially for sites that display higher event-to-event variability. For sites that are well represented by 1D methods, the 2D approach can underpredict amplification factors at higher modes, suggesting that the level of heterogeneity may be over-represented by the 2D random field models used in this study.
Observations of out-of-plane (OOP) instability in the 2010 Chile earthquake and the 2011 Christchurch earthquake raised concerns about the current design provisions for structural walls. This failure mode had previously been observed in the experimental response of some wall specimens subjected to in-plane loading. Consequently, the formulations proposed for predicting the limit states corresponding to OOP instability of rectangular walls are generally based on stability analysis under in-plane loading only. These approaches address the stability of a cracked wall section under compression, treating the residual strain developed in the reinforcement as the parameter that prevents timely crack closure of the wall section and induces stability failure. The New Zealand code requirements addressing OOP instability of structural walls are based on the assumptions used in the literature and the analytical methods proposed for mathematical determination of the critical strain values. In this study, a parametric study is conducted using a numerical model capable of simulating OOP instability of rectangular walls, to evaluate the sensitivity of their OOP response to variation of the parameters identified as governing this failure mechanism. The effects of wall slenderness (unsupported height-to-thickness) ratio, longitudinal reinforcement ratio of the boundary regions, and wall length on the OOP response are evaluated. A clear trend was observed regarding the influence of these parameters on the initiation of OOP displacement, based on which simple equations are proposed for prediction of OOP instability in rectangular walls.
Background: This study examines the performance of nonlinear total-stress 1D wave-propagation site response analysis for modelling site effects in physics-based ground motion simulations of the 2010-2011 Canterbury, New Zealand earthquake sequence. This approach allows for explicit modelling of 3D ground motion phenomena at the regional scale, as well as detailed nonlinear site effects at the local scale. The approach is compared to a more commonly used empirical VS30 (30 m time-averaged shear wave velocity)-based method for computing site amplification, as proposed by Graves and Pitarka (2010, 2015), and to empirical ground motion prediction via a ground motion model (GMM).
This paper provides a brief discussion of observed strong ground motions from the 14 November 2016 Mw7.8 Kaikoura earthquake. Specific attention is given to observations in the near-source region, where several horizontal ground motions exceeding 1.0 g were recorded, as well as up to 2.7 g in the vertical direction at one location. Ground motion response spectra in the near-source, North Canterbury, Marlborough and Wellington regions are also examined and compared with design levels. Observed spectral amplitudes are also compared with predictions from empirical and physics-based ground motion modelling.
Semi-empirical models based on in-situ geotechnical tests have become the standard of practice for predicting soil liquefaction. Since the inception of the “simplified” cyclic-stress model in 1971, variants based on various in-situ tests have been developed, including the Cone Penetration Test (CPT). More recently, prediction models based solely on remotely-sensed data were developed. Similar to systems that provide automated content on earthquake impacts, these “geospatial” models aim to predict liquefaction for rapid response and loss estimation using readily-available data. These data include (i) common ground-motion intensity measures (e.g., PGA), which can either be provided in near-real-time following an earthquake or predicted for a future event; and (ii) geospatial parameters derived from digital elevation models, which are used to infer characteristics of the subsurface relevant to liquefaction. However, the predictive capabilities of geospatial and geotechnical models have not been directly compared; such a comparison could elucidate techniques for improving the geospatial models and would provide a baseline for measuring improvements. Accordingly, this study assesses the relative efficacy of liquefaction models based on geospatial vs. CPT data using 9,908 case studies from the 2010-2016 Canterbury earthquakes. While the top-performing models are CPT-based, the geospatial models perform relatively well given their simplicity and low cost. Although further research is needed (e.g., to improve upon the performance of current models), the findings of this study suggest that geospatial models have the potential to provide valuable first-order predictions of liquefaction occurrence and consequence. Towards this end, performance assessments of geospatial vs. geotechnical models are ongoing for more than 20 additional global earthquakes.
Heathcote Valley school strong motion station (HVSC) consistently recorded ground motions of higher intensity than nearby stations during the 2010-2011 Canterbury earthquakes. For example, as shown in Figure 1, during the 22 February 2011 Christchurch earthquake, peak ground acceleration at HVSC reached 1.4 g (horizontal) and 2 g (vertical), the largest ever recorded in New Zealand. Strong amplification of ground motions is expected at Heathcote Valley due to: 1) the high impedance contrast at the soil-rock interface, and 2) the interference of incident and surface waves within the valley. However, both conventional empirical ground motion prediction equations (GMPEs) and physics-based large-scale ground motion simulations (with empirical site response) are ineffective in predicting such amplification due to their respective inherent limitations.
© 2019, Springer-Verlag GmbH Germany, part of Springer Nature. Prediction of building collapse due to significant seismic motion is a principal objective of earthquake engineers, particularly after a major seismic event when the structure is damaged and decisions may need to be made rapidly concerning the safe occupation of a building or surrounding areas. Traditional model-based pushover analyses are effective, but only if the structural properties are well understood, which is not the case after an event, when that information is most useful. This paper combines hysteresis loop analysis (HLA) structural health monitoring (SHM) and incremental dynamic analysis (IDA) methods to identify and then analyse collapse capacity and the probability of collapse for a specific structure, at any time, under a range of earthquake excitations to ensure robustness. This nonlinear dynamic analysis enables continual updating of building performance predictions following an initial earthquake and any subsequent events, which can cause difficult-to-identify deterioration of structural components and their resulting capacity, all of which is far more difficult to assess using static pushover analysis. The combined methods provide near real-time updating of the collapse fragility curves as events progress, thus quantifying the change in collapse probability or seismically induced losses very soon after an earthquake for decision-making. This combination of methods thus enables a novel, higher-resolution analysis of risk that was not previously available. The methods are not computationally expensive and there is no requirement for a validated numerical model, providing a relatively simple means of assessing collapse probability immediately post-event, when such speed can provide better information for critical decision-making.
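Collapse fragility curves of the kind updated here are commonly represented as lognormal functions fitted to the collapse intensities from an IDA suite. The following is a generic sketch of that standard fitting step only (not the HLA/SHM procedure itself), using hypothetical collapse intensities:

```python
import numpy as np
from math import erf, sqrt, log

def fit_lognormal_fragility(im_collapse):
    """Fit a lognormal collapse fragility to IDA collapse intensities
    via the method of moments on log intensities."""
    ln_im = np.log(im_collapse)
    theta = float(np.exp(ln_im.mean()))  # median collapse capacity
    beta = float(ln_im.std(ddof=1))      # lognormal dispersion
    return theta, beta

def p_collapse(im, theta, beta):
    """Probability of collapse at intensity im: standard normal CDF
    evaluated at ln(im/theta)/beta."""
    z = log(im / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical collapse intensities (g) from an IDA suite
ims = np.array([0.62, 0.75, 0.81, 0.93, 1.05, 1.12, 1.30, 1.44])
theta, beta = fit_lognormal_fragility(ims)
```

As new events occur and the identified structural parameters change, refitting the fragility in this way is what allows the collapse probability estimate to be kept current.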
Finally, the results also show a clear need to extend the area of SHM toward creating improved predictive models for the analysis of subsequent events, given that the 2010–2011 Christchurch series included significant post-event aftershocks.
Probabilistic Structural Fire Engineering (PSFE) has been introduced to overcome the limitations of the conventional approaches currently used for the design of fire-exposed structures. Current structural fire design investigates worst-case fire scenarios and includes multiple thermal and structural analyses. PSFE permits buildings to be designed to a level of life safety or economic loss that may occur in future fire events with the help of a probabilistic approach. This thesis presents modifications to the adoption of the Performance-Based Earthquake Engineering (PBEE) framework in Probabilistic Structural Fire Engineering (PSFE). The probabilistic approach proceeds through a series of interrelationships between different variables; successive convolution integrals of these interrelationships yield exceedance probabilities of different measures. The process starts with the definition of a fire severity measure (FSM) that best relates fire hazard intensity to structural response, identified by satisfying the efficiency and sufficiency criteria described by the PBEE framework. The relationship between a fire hazard and the corresponding structural response is established by analysis methods. One method that has been used to quantify this relationship in PSFE is Incremental Fire Analysis (IFA). The existing IFA approach produces unrealistic fire scenarios, as fire profiles may be scaled to wide ranges of fire severity levels that do not physically represent any real fires. Two new techniques are introduced in this thesis to limit extensive scaling. In order to obtain annual rates of exceedance of fire hazard and structural response for an office building, an occurrence model and an attenuation model for office fires are generated for both Christchurch city and New Zealand. The results show that Christchurch city is 15% less likely to experience fires that have the potential to cause structural failure than New Zealand as a whole.
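The successive convolution integrals underlying this framework can be illustrated numerically: the annual rate of exceeding a structural response level is obtained by convolving a conditional response model with a discretized hazard curve for the fire severity measure. The sketch below uses a generic lognormal conditional model and entirely hypothetical hazard numbers, not the models developed in the thesis:

```python
import numpy as np
from math import erf, sqrt

def annual_rate_edp(edp, im, lam_im, med_per_im, beta):
    """Annual rate of exceeding response level edp, by summing
    P(EDP > edp | IM) over the occurrence rate in each IM bin."""
    dlam = -np.diff(lam_im)                  # occurrence rate per IM bin
    im_mid = 0.5 * (im[:-1] + im[1:])        # bin-midpoint IM values
    med = med_per_im * im_mid                # conditional median response
    z = (np.log(edp) - np.log(med)) / beta
    p_exc = np.array([0.5 * (1.0 - erf(v / sqrt(2.0))) for v in z])
    return float(np.sum(p_exc * dlam))

# Hypothetical hazard curve for a fire severity measure (arbitrary units)
im = np.linspace(0.05, 2.0, 40)
lam_im = 0.02 * np.exp(-2.5 * im)            # annual rate of exceeding each IM
rate = annual_rate_edp(edp=1.0, im=im, lam_im=lam_im, med_per_im=1.0, beta=0.4)
```

The same pattern applies one level up, convolving response with damage and loss models, which is how the PBEE chain produces annual loss exceedance rates.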
In establishing better predictive relationships between fires and structural response, cumulative incident radiation (a fire hazard property) is found to be the most appropriate fire severity measure. This research brings together existing work on various sources of uncertainty in probabilistic structural fire engineering, such as factors affecting post-flashover fire development (fuel load, ventilation, surface lining and compartment geometry), fire models, analysis methods and structural reliability. Epistemic and aleatory uncertainty are investigated in the thesis by examining the uncertainty associated with modelling and with the factors that influence post-flashover development of fires. A survey of 12 buildings in Christchurch, in combination with recent surveys in New Zealand, produced new statistical data on post-flashover development factors in New Zealand office buildings. The effects of these parameters on temperature-time profiles are evaluated. The effects of epistemic uncertainty due to fire models on the estimation of structural response are also calculated. Parametric fires are found to have large uncertainty in the prediction of post-flashover fires, while the BFD curves have large uncertainties in the prediction of structural response. These uncertainties need to be incorporated into failure probability calculations. Uncertainty in structural modelling shows that the choices made during modelling have a large influence on realistic predictions of structural response.
The 2010–2011 Canterbury earthquake sequence began with the 4 September 2010, Mw7.1 Darfield earthquake and includes up to ten events that induced liquefaction. Most notably, widespread liquefaction was induced by the Darfield and Mw6.2 Christchurch earthquakes. The combination of well-documented liquefaction response during multiple events, densely recorded ground motions for the events, and detailed subsurface characterization provides an unprecedented opportunity to add well-documented case histories to the liquefaction database. This paper presents and applies 50 high-quality cone penetration test (CPT) liquefaction case histories to evaluate three commonly used, deterministic, CPT-based simplified liquefaction evaluation procedures. While all the procedures predicted the majority of the cases correctly, the procedure proposed by Idriss and Boulanger (2008) results in the lowest error index for the case histories analyzed, thus indicating better predictions of the observed liquefaction response.
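For reference, the deterministic simplified procedures evaluated in such studies compute a factor of safety as the ratio of cyclic resistance to cyclic stress. The following is a minimal sketch of the Idriss and Boulanger (2008) CPT-based form (magnitude scaling and overburden correction factors omitted for brevity; the layer values are hypothetical):

```python
import numpy as np

def csr_simplified(amax_g, sigma_v, sigma_v_eff, depth_m, mw):
    """Cyclic stress ratio per the 'simplified' procedure, with the
    Idriss-Boulanger depth- and magnitude-dependent reduction factor rd."""
    alpha = -1.012 - 1.126 * np.sin(depth_m / 11.73 + 5.133)
    beta = 0.106 + 0.118 * np.sin(depth_m / 11.28 + 5.142)
    rd = np.exp(alpha + beta * mw)
    return 0.65 * amax_g * (sigma_v / sigma_v_eff) * rd

def crr_ib2008(qc1ncs):
    """Cyclic resistance ratio (M7.5, 1 atm) from the clean-sand-equivalent
    normalized CPT tip resistance, per Idriss and Boulanger (2008)."""
    q = qc1ncs
    return np.exp(q / 540.0 + (q / 67.0)**2 - (q / 80.0)**3
                  + (q / 114.0)**4 - 3.0)

# Hypothetical layer: total/effective stresses in kPa, depth in m
csr = csr_simplified(amax_g=0.35, sigma_v=100.0, sigma_v_eff=60.0,
                     depth_m=5.0, mw=7.5)
fs = crr_ib2008(90.0) / csr     # FS < 1 implies liquefaction triggering
```

In a case-history evaluation, an FS below one at a liquefied site (or above one at a non-liquefied site) counts as a correct prediction, and the error index aggregates the misclassifications.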
People aged 65 years and older are the fastest growing age group in New Zealand. This age group is predicted to comprise approximately one third of the population by the mid-2070s. Older people are encouraged to stay in their own homes within their community for as long as possible, with support, to extend ageing in place. Currently around 14% of those aged 75 years or older make the move into retirement villages, and this proportion is expected to increase. Retirement villages know little about the wellbeing and health of those who decide to live independently in these facilities, and predicting the need for a continuum of care is challenging. This research measured the wellbeing and health of older adults. It was situated in a critical realist paradigm, overlaid with an empathetic axiology. A focused literature review considered the impact on wellbeing of living place, age, gender, health status and the 2010/2011 Canterbury earthquakes. Longitudinal studies used the Enlightenment Scale and the interRAI Community Health Assessment (CHA) to measure the wellbeing and health of one group of residents (n=120) living independently in one retirement village in Canterbury, New Zealand. The research was extended to incorporate two cross-sectional studies when initial results for wellbeing were found to be higher than anticipated. These additional studies included participants living independently in other retirement villages (n=115) and those living independently within the community (n=354). A total of 589 participants, aged 65-97 years, completed the Enlightenment Scale across the four studies. Across the living places, wellbeing continued to improve significantly with age. The Enlightenment Scale was a useful measure of wellbeing with older adults. Participants in the longitudinal studies largely maintained a relatively good health status, showing little change over the study period of 15 months.
Predictions for the need for a move to supportive care were not able to be made using the CHA. The health status of participants did not influence their level of wellbeing. The key finding of note is that the wellbeing score of older adults increases by 1.27 points per year, using the Enlightenment Scale, irrespective of where they live.
This paper presents an examination of ground motion observations from 20 near-source strong motion stations during the 10 most significant events in the 2010-2011 Canterbury earthquake sequence, in order to identify region-specific systematic effects by relaxing the conventional ergodic assumption. On the basis of similar site-to-site residuals, surficial geology, and geographical proximity, 15 of the 20 stations are grouped into four sub-regions: the Central Business District; and the Western, Eastern, and Northern suburbs. Mean site-to-site residuals for these sub-regions then allow for the possibility of non-ergodic ground motion prediction over these sub-regions of Canterbury, rather than only at strong motion station locations. The ratio of the total non-ergodic to ergodic standard deviation is found to be, on average, consistent with previous studies; however, it is emphasized that on a site-by-site basis the non-ergodic standard deviation can easily vary by ±20%.
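Site-to-site residuals of the kind used for this grouping can be illustrated with a simple sequential partition: remove a per-event mean from the total residuals (the between-event term), then average the remainder per station. This is only an approximation to the mixed-effects regression typically used in such studies, and the residual values below are hypothetical:

```python
import numpy as np

def site_to_site_residuals(resid, event_id, station_id):
    """Approximate partition of total residuals into event terms and
    site-to-site terms via sequential averaging."""
    resid = np.asarray(resid, float)
    event_id = np.asarray(event_id)
    station_id = np.asarray(station_id)
    within = resid.copy()
    # remove the per-event mean (approximate between-event term)
    for e in np.unique(event_id):
        m = event_id == e
        within[m] -= within[m].mean()
    # site-to-site term: mean within-event residual per station
    return {s: float(within[station_id == s].mean())
            for s in np.unique(station_id)}

# Hypothetical total residuals (ln units) = event term + site term
events   = ["EQ1", "EQ1", "EQ2", "EQ2"]
stations = ["HVSC", "CBGS", "HVSC", "CBGS"]
resid    = [0.5, -0.1, 0.2, -0.4]
dS2S = site_to_site_residuals(resid, events, stations)
```

A persistently positive site term (as constructed for HVSC here) indicates systematic amplification relative to the ergodic model, which is the basis for the non-ergodic sub-region adjustments.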
The seismic performance and parameter identification of the base-isolated Christchurch Women's Hospital (CWH) building are investigated using seismic accelerations recorded during two large earthquakes in Christchurch. A four-degree-of-freedom shear model is applied to characterize the dynamic behaviour of the CWH building during these earthquakes. A modified Gauss-Newton method is employed to identify the equivalent stiffness and Rayleigh damping coefficients of the building. The identification method is first validated on a simulated example structure and then applied to the CWH building using recorded measurements from the Mw 6.0 and Mw 5.8 Christchurch earthquakes of December 23, 2011. The estimated and recorded responses for both earthquakes are compared, with cross-correlation coefficients and mean absolute percentage errors reported. The results indicate that the dynamic behaviour of the superstructure and base isolators remained essentially within the elastic range, and that the proposed linear shear model is sufficient for predicting the structural response of the CWH building during these events.
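The modified Gauss-Newton method itself is not detailed in this abstract; the following is a generic damped Gauss-Newton sketch on a single-degree-of-freedom analogue (the hospital study uses a four-degree-of-freedom model), recovering hypothetical stiffness and damping values from a noise-free synthetic "recording":

```python
import numpy as np

def sdof_response(k, c, ag, dt, m=1.0):
    """Relative displacement of a linear SDOF oscillator under base
    acceleration ag, via the average-acceleration Newmark scheme."""
    beta, gamma = 0.25, 0.5
    u = np.zeros(len(ag))
    v, a = 0.0, -ag[0] / m
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(1, len(ag)):
        p = (-m * ag[i]
             + m * (u[i-1] / (beta * dt**2) + v / (beta * dt)
                    + (0.5 / beta - 1.0) * a)
             + c * (gamma * u[i-1] / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
        u[i] = p / keff
        a_new = ((u[i] - u[i-1]) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v += dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
    return u

def gauss_newton_identify(theta0, ag, dt, u_obs, iters=40, step=0.5):
    """Damped Gauss-Newton fit of (stiffness, damping) to an observed
    response, using a finite-difference Jacobian."""
    theta = np.array(theta0, float)
    for _ in range(iters):
        u = sdof_response(theta[0], theta[1], ag, dt)
        r = u_obs - u                      # response residual
        J = np.empty((len(r), 2))
        for j in range(2):
            pert = theta.copy()
            h = 1e-6 * pert[j]
            pert[j] += h
            J[:, j] = (sdof_response(pert[0], pert[1], ag, dt) - u) / h
        theta += step * np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# Synthetic validation: recover hypothetical true parameters (k=40, c=1)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
ag = np.sin(2.0 * np.pi * 1.2 * t)          # synthetic ground acceleration
u_obs = sdof_response(40.0, 1.0, ag, dt)    # "recorded" response
k_id, c_id = gauss_newton_identify([32.0, 0.8], ag, dt, u_obs)
```

The validation-on-a-simulated-structure step mentioned in the abstract follows this pattern: known parameters generate a response, and the identifier must recover them before being trusted on real recordings.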
In this paper, we perform hybrid broadband (0-10 Hz) ground motion simulations for the ten most significant events (Mw 4.7-7.1) in the 2010-2011 Canterbury earthquake sequence. Taking advantage of repeated recordings at the same stations, we validate our simulations against both recordings and an empirically-developed ground motion prediction equation (GMPE). The simulations clearly capture the sedimentary basin amplification and rupture directivity effects. Quantitative comparisons of the simulations with both the recordings and the GMPE, as well as analyses of the total residuals (indicating model bias), show that the simulations perform better than the empirical GMPE, especially at long periods. To scrutinize the ground motion variability, we partitioned the total residuals into different components. The total residual appears to be unbiased, and the use of a 3D velocity structure reduces the long-period systematic bias, particularly for stations located close to the Banks Peninsula volcanic area.