
Research papers, University of Canterbury Library

The structure and geomorphology of active orogens evolve on time scales ranging from a single earthquake to millions of years of tectonic deformation. Analysis of crustal deformation using new and established remote sensing techniques, and integration of these data with field mapping, geochronology and the sedimentary record, create new opportunities to understand orogenic evolution over these timescales. Timor Leste (East Timor) lies on the northern collisional boundary between continental crust from the Australian Plate and the Banda volcanic arc. GPS studies have indicated that the island of Timor is actively shortening. Field mapping and fault kinematic analysis of an emergent Pliocene marine sequence identify gentle folding, overprinted by a predominance of NW-SE oriented dextral-normal faults and NE-SW oriented sinistral-normal faults that collectively bound large (5-20 km²) bedrock massifs throughout the island. These fault systems intersect at non-Andersonian conjugate angles of approximately 120° and accommodate an estimated 20 km of orogen-parallel extension. Folding of Pliocene rocks in Timor may represent an early episode of contraction, but the overall pattern of deformation is one of lateral crustal extrusion sub-parallel to the Banda Arc. Stratigraphic relationships suggest that extrusion began prior to 5.5 Ma, during and after initial uplift of the orogen. Sedimentological, geochemical and Nd isotope data indicate that the island of Timor was emergent and shedding terrigenous sediment into carbonate basins prior to 4.5 Ma. Synorogenic tectonic and sedimentary phases initiated almost synchronously across much of Timor Leste and <2 Myr before similar events in West Timor. An increase in plate coupling along this obliquely converging boundary, due to subduction of an outlying continental plateau at the Banda Trench, is proposed as a mechanism for uplift that accounts for orogen-parallel extension and early uplift of Timor Leste. Rapid bathymetric changes around Timor are likely to have played an important role in the evolution of the Indonesian Seaway.
The 2010 Mw 7.1 Darfield (Canterbury) earthquake in New Zealand was complex, involving multiple faults with strike-slip, reverse and normal displacements. Multi-temporal cadastral surveying and airborne light detection and ranging (LiDAR) surveys allowed surface deformation at the junction of three faults to be analyzed in this study in unprecedented detail. A nested, localized restraining stepover with contractional bulging was identified in an area with the overall fault structure of a releasing bend, highlighting the surface complexities that may develop in fault interaction zones during a single earthquake sequence. The earthquake also caused river avulsion and flooding in this area. Geomorphic investigations of these rivers prior to the earthquake identify plausible precursory patterns, including channel migration and narrowing. Comparison of the pre- and post-earthquake geomorphology of the fault rupture also suggests that a subtle scarp or groove was present along much of the trace prior to the Darfield earthquake. Hydrogeology and well logs support a hypothesis of an extended slip history and suggest that the Selwyn River fan may be infilling a graben that has accumulated <30 m of late Quaternary vertical slip.
Investigating fault behavior, geomorphic and sedimentary responses over a multitude of time-scales and at different study sites provides insights into fault interactions and orogenesis during single earthquakes and over millions of years of plate boundary deformation.

Research papers, The University of Auckland Library

A non-destructive hardness testing method has been developed to investigate the amount of plastic strain demand in steel elements subjected to cyclic loading. The focus of this research is on application to the active links of eccentrically braced frames (EBFs), which are a commonly used seismic-resisting system in modern steel framed buildings. The 2010/2011 Christchurch earthquake series, especially the very intense February 22 shaking, was the first earthquake worldwide to push complete EBF systems fully into their inelastic state, generating moderate to high levels of plastic strain in EBF active links in buildings ranging from 3 to 23 storeys in height. This raised two important questions: 1) what was the extent of plastic deformation in active links; and 2) what effect does that have on post-earthquake steel properties? This project involved determining a robust relationship between hardness and plastic strain in order to answer the first question and provide the necessary input into answering the second. A non-destructive Leeb (portable) hardness tester (model TH170) was used to measure hardness, in order to determine plastic strain, in hot rolled steel universal sections and steel plates. A bench-top Rockwell B tester was used to compare against and validate the hardness measured by the portable hardness tester. Hardness was measured on monotonically strained tensile test specimens to identify the relationship between hardness and plastic strain demand. Test results confirmed a good relationship between hardness and the amount of monotonically induced plastic strain. Surface roughness was identified as an important parameter in obtaining reliable hardness readings from a portable hardness tester. A suitable surface preparation method was established by using three different cleaning methods, finished with hand sanding, to achieve surface roughness low enough not to distort the results. This work showed that a test surface roughness (Ra) of not more than 1.6 micrometres (μm) is required for accurate readings from the TH170 tester. A case study on an earthquake-affected building was carried out to identify the relationship between hardness and the amount of plastic strain demand in cyclically deformed active links. Hardness testing was carried out on active links shown visually to have been the most affected during one of the major earthquake events. Onsite hardness test results were then compared with laboratory hardness test results, and good agreement was observed between the two test methods: the bench-top Rockwell B and the portable Leeb tester TH170. Manufacturing-induced plastic strain in the top and bottom of the webs of hot rolled sections was discovered in this research, an important result which explains why visual evidence of earthquake-induced active link yielding (e.g. cracked or flaking paint) was typically more prevalent over the middle half depth of the active link. The extent of this was quantified. It was also evident that the hardness readings from the portable hardness tester are influenced by the geometry, mass effects and rigidity of the links. The final experimental stage was application of the method to full-scale, cyclically tested, nominally identical active links subjected to loading regimes comprising constant and variable plastic strain demands. The links were cyclically loaded to achieve different plastic strain levels.
A novel Digital Image Correlation (DIC) technique was incorporated during these full-scale tests to confirm the level of plastic strain achieved. Tensile test specimens were water-jet cut from the cyclically deformed webs to analyse the level of plastic strain. The results show clear evidence of a good correlation between hardness and the amount of plastic strain demand in cyclically deformed structural steel elements. The DIC method was found to be a reliable and accurate means of checking the level of plastic strain within cyclically deformed structural steel elements.
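The core of the method described above is a calibration curve relating hardness readings to plastic strain, fitted from monotonically strained tensile specimens and then applied to field readings. The sketch below illustrates that workflow in a minimal way; the hardness values, strains and polynomial order are hypothetical placeholders, not the calibration reported in the thesis.

```python
# Minimal sketch of a hardness-to-plastic-strain calibration (hypothetical data).
import numpy as np

# Hypothetical calibration pairs from monotonically strained tensile specimens:
# Leeb hardness (HLD) readings and the corresponding imposed plastic strains.
hardness_cal = np.array([420.0, 445.0, 470.0, 495.0, 515.0, 530.0])
plastic_strain_cal = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])

# Fit a low-order polynomial mapping hardness -> plastic strain.
coeffs = np.polyfit(hardness_cal, plastic_strain_cal, deg=2)
calibration = np.poly1d(coeffs)

# Apply the calibration to hardness readings taken on an active link in the field.
field_readings = np.array([455.0, 462.0, 478.0])
estimated_strain = calibration(field_readings)
print(np.round(estimated_strain, 3))
```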

Research papers, The University of Auckland Library

The supply of water following disasters has always been of significant concern to communities. Failure of water systems not only causes difficulties for residents and critical users but may also affect other hard and soft infrastructure and services. The dependency of communities and other infrastructure on the availability of safe and reliable water places even more emphasis on the resilience of water supply systems. This thesis makes two major contributions. First, it proposes a framework for measuring the multifaceted resilience of water systems, focusing on the significance of the characteristics of different communities for the resilience of water supply systems. The proposed framework, known as the CARE framework, consists of eight principal activities: (1) developing a conceptual framework; (2) selecting appropriate indicators; (3) refining the indicators based on data availability; (4) correlation analysis; (5) scaling the indicators; (6) weighting the variables; (7) measuring the indicators; and (8) aggregating the indicators. This framework allows researchers to develop appropriate indicators in each dimension of resilience (i.e., technical, organisational, social, and economic), and enables decision makers to more easily participate in the process and follow the procedure for composite indicator development. Second, it identifies the significant technical, social, organisational and economic factors, and the relevant indicators for measuring these factors. The factors and indicators were gathered through a comprehensive literature review. They were then verified and ranked through a series of interviews with water supply and resilience specialists, social scientists and economists. Vulnerability, redundancy and criticality were identified as the most significant technical factors affecting water supply system robustness, and consequently resilience. These factors were tested for a scenario earthquake of Mw 7.6 in Pukerua Bay in New Zealand. Four social factors and seven indicators were identified in this study. The social factors are individual demands and capacities, individual involvement in the community, violence level in the community, and trust. The indicators are the Giving Index, homicide rate, assault rate, inverse trust in army, inverse trust in police, mean years of school, and perception of crime. These indicators were tested in Chile and New Zealand, which experienced earthquakes in 2010 and 2011 respectively. The social factors were also tested in Vanuatu following TC Pam, which hit the country in March 2015. Interestingly, the organisational dimension contributed the largest number of factors and indicators for measuring water supply resilience to disasters. The study identified six organisational factors and 17 indicators that can affect water supply resilience to disasters. The factors are: disaster precaution; pre-disaster planning; data availability, data accessibility and information sharing; staff, parts, and equipment availability; pre-disaster maintenance; and governance. The identified factors and their indicators were tested for the case of Christchurch, New Zealand, to understand how organisational capacity affected water supply resilience following the earthquake in February 2011. Governance and availability of critical staff following the earthquake were the strongest organisational factors for the Christchurch City Council, while the lack of early warning systems and emergency response planning were identified as areas that needed to be addressed.
Economic capacity and quick access to finance were found to be the main economic factors influencing the resilience of water systems. Quick access to finance is most important in the early stages following a disaster, for response and restoration, but its importance declines over time. In contrast, the economic capacity of the disaster-struck area and the water sector plays a vital role in the subsequent reconstruction phase rather than in the response and restoration period. Indicators for these factors were tested for the case of the February 2011 earthquake in Christchurch, New Zealand. Finally, a new approach to measuring water supply resilience is proposed. This approach measures the resilience of the water supply system based on actual water demand following an earthquake. The demand-based method calculates resilience from the difference between water demand and system capacity, by measuring the actual water shortage (i.e., the difference between water availability and demand) following an earthquake.
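The scaling, weighting and aggregation steps of the CARE framework (activities 5, 6 and 8 above) amount to standard composite-indicator arithmetic. The sketch below shows one plausible min-max scaling and weighted-sum aggregation; the indicator names, values, bounds and weights are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of the scaling, weighting and aggregation steps of a composite
# resilience indicator; all indicator values, bounds and weights are hypothetical.
indicators = {           # raw indicator values for one water supply system
    "redundancy": 0.35,
    "giving_index": 57.0,
    "critical_staff_availability": 0.8,
    "quick_access_to_finance": 0.6,
}
bounds = {               # assumed min/max used for min-max scaling
    "redundancy": (0.0, 1.0),
    "giving_index": (0.0, 100.0),
    "critical_staff_availability": (0.0, 1.0),
    "quick_access_to_finance": (0.0, 1.0),
}
weights = {              # assumed expert-derived weights, summing to 1
    "redundancy": 0.3,
    "giving_index": 0.2,
    "critical_staff_availability": 0.3,
    "quick_access_to_finance": 0.2,
}

scaled = {k: (v - bounds[k][0]) / (bounds[k][1] - bounds[k][0])
          for k, v in indicators.items()}
resilience_index = sum(weights[k] * scaled[k] for k in scaled)
print(round(resilience_index, 3))
```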

Research papers, University of Canterbury Library

Structural engineering is facing an extraordinarily challenging era. These challenges are driven by the increasing expectations of modern society to provide low-cost, architecturally appealing structures which can withstand large earthquakes. However, being able to avoid collapse in a large earthquake is no longer enough. A building must now be able to withstand a major seismic event with negligible damage so that it is immediately occupiable following such an event. As recent earthquakes have shown, the economic consequences of not achieving this level of performance are not acceptable. Technological solutions for low-damage structural systems are emerging. However, the goal of developing a low-damage building requires improving the performance of both the structural skeleton and the non-structural components. These non-structural components include items such as the claddings, partitions, ceilings and contents. Previous research has shown that damage to such items contributes a disproportionate amount to the overall economic losses in an earthquake. One such non-structural element that has a history of poor performance is the external cladding system, and this forms the focus of this research. Cladding systems are invariably complicated and provide a number of architectural functions. It is therefore important that, when seeking to improve their seismic performance, these functions are not neglected. The seismic vulnerability of cladding systems is determined in this research through a desktop background study, a literature review, and a post-earthquake reconnaissance survey of their performance in the 2010-2011 Canterbury earthquake sequence. This study identified that precast concrete claddings present a significant life-safety risk to pedestrians, and that the effect they have upon the primary structure is not well understood. The main objective of this research is consequently to better understand the performance of precast concrete cladding systems in earthquakes. This is achieved through an experimental campaign and numerical modelling of a range of precast concrete cladding systems. The experimental campaign consists of uni-directional, quasi-static cyclic earthquake simulation on a test frame which represents a single-storey, single-bay portion of a reinforced concrete building. The test frame is clad with various precast concrete cladding panel configurations. A major focus is placed upon the influence that the connection between the cladding panel and the structural frame has on seismic performance. A combination of experimental component testing, finite element modelling and analytical derivation is used to develop models of the cladding systems investigated. The cyclic responses of the models are compared with the experimental data to evaluate their accuracy and validity. The comparison shows that the cladding models developed provide an excellent representation of real-world cladding behaviour. The cladding models are subsequently applied to a ten-storey case-study building. The expected seismic performance is examined with and without the cladding taken into consideration. The numerical analyses of the case-study building include modal analyses, non-linear adaptive pushover analyses, and non-linear dynamic seismic response (time history) analyses at different levels of seismic hazard. The clad frame models are compared to the bare frame model to investigate the effect the cladding has upon the structural behaviour.
Both the structural performance and the cladding performance are also assessed using qualitative damage states. The results show that poor performance of precast concrete cladding systems is expected when traditional connection typologies are used. This result confirms the misalignment between structural and cladding damage observed in recent earthquake events. Consequently, this research explores the potential of an innovative cladding connection. The outcomes from this research show that the innovative cladding connection proposed here is able to achieve low-damage performance whilst also being cost comparable to a traditional cladding connection. It is also theoretically possible for the connection to contribute positively to the seismic performance of the structure by adding additional strength, stiffness and damping. Finally, the losses associated with both the traditional and innovative cladding systems are compared in terms of tangible outcomes, namely: repair costs, repair time and casualties. The results confirm that the use of innovative cladding technology can substantially reduce the overall losses that result from cladding damage.
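Assessing cladding performance with qualitative damage states, as described above, essentially means mapping a drift demand onto a set of drift thresholds. The sketch below illustrates that mapping; the threshold values and damage-state descriptions are hypothetical placeholders, not those adopted in the research.

```python
# Minimal sketch of assigning qualitative damage states to a cladding system
# from peak interstorey drift demands; the drift thresholds below are
# hypothetical placeholders, not values from the research described above.
damage_state_thresholds = [
    (0.002, "DS0: no damage"),
    (0.005, "DS1: minor damage (sealant/cosmetic repair)"),
    (0.010, "DS2: moderate damage (connection repair)"),
    (0.020, "DS3: severe damage (panel replacement)"),
]

def cladding_damage_state(peak_drift: float) -> str:
    """Return the first damage state whose drift threshold is not exceeded."""
    for threshold, state in damage_state_thresholds:
        if peak_drift <= threshold:
            return state
    return "DS4: potential panel loss (life-safety risk)"

# Example: drifts from a time-history analysis of a clad frame at three storeys.
for drift in (0.004, 0.012, 0.025):
    print(drift, "->", cladding_damage_state(drift))
```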

Research papers, University of Canterbury Library

The Canterbury Earthquake Sequence (CES) of 2010-2011 produced earthquakes of up to Mw 7.1. These large, shallow (<15 km) ruptures triggered >6,000 rockfall boulders on the Port Hills of Christchurch, many of which impacted houses and affected the livelihoods of people in the area. From these disastrous and unpredicted natural events a need arose to be able to assess the areas affected by rockfall events in the future, where it is known that a rockfall is possible from a specific source outcrop but the potential boulder runout and dynamics are not understood. The distribution of rockfall deposits is largely constrained by the physical properties and processes of the boulder and its motion, such as block density, shape and size, block velocity, bounce height, impact and rebound angle, as well as the properties of the substrate. Numerical rockfall models go some way to accounting for all the complex factors in an algorithm, commonly parameterised in a user interface where site-specific effects can be calibrated. Calibration of these algorithms requires thorough field checks and often experimental practices. The purpose of this project, which began immediately following the most destructive rupture of the CES (February 22, 2011), is to collate data to characterise boulder falls, and to use this information, supplemented by a set of anthropogenic boulder fall data, to perform an in-depth calibration of the three-dimensional numerical rockfall model RAMMS::Rockfall. The thesis covers the following topics:
• Use of field data to calibrate RAMMS. Boulder impact trails in the loess-colluvium soils at Rapaki Bay have been used to estimate ranges of boulder velocities and bounce heights. RAMMS results replicate field data closely; it is concluded that the model is appropriate for analysing the earthquake-triggered boulder trails at Rapaki Bay, and that it can be usefully applied to rockfall trajectory and hazard assessment at this and similar sites elsewhere.
• Detailed analysis of dynamic rockfall processes, interpreted from recorded boulder rolling experiments, and compared to RAMMS simulated results at the same site. Recorded rotational and translational velocities of a particular boulder show that the boulder behaves logically and dynamically on impact with different substrate types. Simulations show that seasonal changes in soil moisture alter rockfall dynamics and runout predictions within RAMMS, and adjustments are made to the calibration to reflect this, suggesting that in hazard analysis a rockfall model should be calibrated to dry rather than wet soil conditions to anticipate the most serious outcome.
• Verifying the model calibration for a separate site on the Port Hills. The results of the RAMMS simulations show the effectiveness of calibration against a real data set, as well as the effectiveness of vegetation as a rockfall barrier/retardant. The results of simulations are compared using hazard maps, where the maximum runouts match the mapped CES fallen-boulder maximum runouts well. The results of the simulations, in terms of the frequency distribution of deposit locations on the slope, are also compared with those of the CES data, using the shadow angle tool to apportion slope zones. These results also replicate real field data well. Results show that a maximum runout envelope can be mapped, as well as the frequency distribution of deposited boulders, for hazard (and thus risk) analysis purposes.
The accuracy of the rockfall runout envelope and frequency distribution can be improved by comprehensive vegetation and substrate mapping. The topics above define the scope of the project, limiting the focus to rockfall processes on the Port Hills, with implications for model calibration for the wider scientific community. The results provide a useful rockfall analysis methodology with a defensible and replicable calibration process that has the potential to be applied to other lithologies and substrates. Its applications include a method of analysis for the selection and positioning of rockfall countermeasures; site safety assessment for scaling and demolition works; and risk analysis and land planning for future construction in Christchurch.
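The shadow-angle tool mentioned above is essentially a geometric check: the angle of the line joining the base of the source outcrop to a boulder's resting point. The sketch below computes that angle and compares it against a runout-envelope angle; the coordinates and the 22° envelope value are hypothetical and are not the calibration derived in the thesis.

```python
# Minimal sketch of a rock-fall shadow-angle check (atan of height drop over
# horizontal distance from the source base to a boulder resting point).
import math

source_base = (0.0, 0.0, 250.0)         # x, y, elevation (m) at base of outcrop

boulders = [                            # mapped boulder resting points (x, y, z)
    (120.0, 30.0, 190.0),
    (310.0, -40.0, 130.0),
    (520.0, 10.0, 95.0),
]

def shadow_angle_deg(stop_xyz, base_xyz=source_base):
    dx = stop_xyz[0] - base_xyz[0]
    dy = stop_xyz[1] - base_xyz[1]
    horizontal = math.hypot(dx, dy)
    drop = base_xyz[2] - stop_xyz[2]
    return math.degrees(math.atan2(drop, horizontal))

# A boulder lying at a shadow angle shallower than the envelope angle would
# fall outside the predicted maximum-runout envelope.
envelope_angle = 22.0                   # hypothetical envelope angle (degrees)
for b in boulders:
    angle = shadow_angle_deg(b)
    print(f"{b}: shadow angle {angle:.1f} deg,",
          "inside envelope" if angle >= envelope_angle else "beyond envelope")
```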

Research papers, University of Canterbury Library

The recent Canterbury earthquake sequence of 2010-2011 highlighted a uniquely severe level of structural damage to modern buildings, while confirming the high vulnerability and life-threatening nature of unreinforced masonry and inadequately detailed reinforced concrete buildings. Although the level of damage of most buildings met the expected life-safety and collapse-prevention criteria, the structural damage to those buildings was beyond economic repair. The difficulty of post-event assessment of a concrete or steel structure and the uneconomic cost of repair are major drivers of the adoption of low-damage design. Among several low-damage technologies, post-tensioned rocking systems were developed in the 1990s with applications to precast concrete members and later extended to structural steel members. More recently the technology was extended to timber buildings (Pres-Lam system). This doctoral dissertation focuses on the experimental investigation and the analytical and numerical prediction of the lateral load response of dissipative post-tensioned rocking timber wall systems. The first experimental stages of this research consisted of component testing of both external replaceable devices and internal bars. The component testing was aimed at further investigating the response of these devices and providing significant design parameters. Post-tensioned wall subassembly testing was then carried out. Firstly, quasi-static cyclic testing of two-thirds scale post-tensioned single wall specimens with several reinforcement layouts was carried out. Then, an alternative wall configuration to limit displacement incompatibilities in the diaphragm was developed and tested. The system consisted of a Column-Wall-Column configuration, where the boundary columns can support the diaphragm with minimal uplift and also provide dissipation through coupling to the post-tensioned wall panel with dissipation devices. Both single wall and column-wall-column specimens were subjected to drifts of up to 2%, showing excellent performance and limiting the damage to the dissipating devices. One of the objectives of the experimental program was to assess the influence of construction detailing, and the dissipater connection in particular proved to have a significant influence on the wall’s response. The experimental programs on dissipaters and wall subassemblies provided exhaustive data for the validation and refinement of current analytical and numerical models. The current moment-rotation iterative procedure was refined to account for detailed response parameters identified in the initial experimental stage. The refined analytical model proved capable of fitting the experimental results with good accuracy. A further stage in this research was the validation and refinement of numerical modelling approaches, which consisted of rotational-spring and multi-spring models. Both modelling approaches were calibrated against the experimental results on post-tensioned wall subassemblies. In particular, the multi-spring model was further refined and implemented in OpenSEES to account for the full range of behavioural aspects of the systems. The multi-spring model was used in the final part of the dissertation to validate and refine current lateral force design procedures. Firstly, seismic performance factors for a Force-Based Design procedure were developed through extensive numerical analyses in accordance with the FEMA P-695 procedure.
This procedure aims to determine the seismic reduction factor and the over-strength factor accounting for the collapse probability of the building. The outcomes of this numerical analysis were also extended to other significant design codes. Alternatively, Displacement-Based Design can be used for the determination of the lateral load demand on a post-tensioned multi-storey timber building. The current DBD procedure was used for the development of a further numerical analysis which aimed to validate the procedure and identify the necessary refinements. It was concluded that the analytical and numerical models developed throughout this dissertation provide comprehensive and accurate tools for the determination of the lateral load response of post-tensioned wall systems, also allowing the provision of design parameters in accordance with current standards and lateral force design procedures.
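The moment-rotation procedure referred to above is iterative because, for an imposed base rotation, the post-tensioning force and the neutral-axis depth depend on one another. The sketch below shows that equilibrium iteration in a highly simplified form, assuming rigid-wall rocking, fully yielded dissipaters and a uniform timber bearing stress; all numerical values are placeholders and this is not the refined procedure developed in the dissertation.

```python
# Highly simplified sketch of the iterative equilibrium step in a moment-rotation
# analysis of a post-tensioned rocking timber wall. Rigid rocking and a uniform
# bearing stress are assumed; every number below is a hypothetical placeholder.
def rocking_wall_moment(theta, lw=3.0, b=0.2, f_bearing=25e6,
                        T_pt0=400e3, k_pt=1.0e7, d_pt=1.5,
                        F_diss=150e3, d_diss=2.8):
    """Return (base moment [Nm], neutral-axis depth [m]) for rotation theta [rad].

    Distances d_pt and d_diss are measured from the compressed toe; k_pt is the
    tendon axial stiffness E*A/L (N/m); F_diss is the dissipater yield force.
    """
    c = 0.1 * lw                                  # initial neutral-axis guess
    T_pt = T_pt0
    for _ in range(100):
        # Tendon force grows with the gap opening at the tendon location.
        T_pt = T_pt0 + k_pt * theta * max(d_pt - c, 0.0)
        tension = T_pt + F_diss                   # dissipaters assumed yielded
        c_new = tension / (f_bearing * b)         # uniform bearing stress block
        if abs(c_new - c) < 1e-6:
            c = c_new
            break
        c = c_new
    # Moment about the compression resultant, located at c/2 from the toe.
    M = T_pt * (d_pt - c / 2.0) + F_diss * (d_diss - c / 2.0)
    return M, c

moment, c = rocking_wall_moment(theta=0.02)
print(f"M = {moment / 1e3:.0f} kNm, neutral-axis depth = {c * 1000:.0f} mm")
```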

Research papers, University of Canterbury Library

A series of undrained cyclic direct simple shear (DSS) tests on specimens of sandy silty soils are used to evaluate the effects of fines content, fabric and layered structure on the liquefaction response of sandy soils containing non-plastic fines. Test soils originate from shallow deposits in Christchurch, New Zealand, where severe and damaging manifestations of liquefaction occurred during the 2010-2011 Canterbury earthquakes. A procedure for reconstituting specimens by water sedimentation is employed. This specimen preparation technique involves first pluviation of soil through a water column, and then application of gentle vibrations to the mould (tapping) to prepare specimens with different initial densities. This procedure is applied to prepare uniform specimens, and layered specimens with a silt layer atop a sand layer. Cyclic DSS tests are performed on water-sedimented specimens of two sands, a silt, and sand-silt mixtures with different fines contents. Through this testing program, the effects of density, time of vibration during preparation, fines content, and layered structure on cyclic behaviour and liquefaction resistance are investigated. Additional information necessary to characterise soil behaviour is provided by particle size distribution analyses, index void ratio testing, and Scanning Electron Microscope imaging. The results of cyclic DSS tests show that, for all tested soils, specimens vibrated for a longer period of time have lower void ratios, higher relative density, and greater liquefaction resistance. One of the tested sands undergoes a significant increase in relative density and liquefaction resistance following prolonged vibration. The other sand exhibits a smaller increase in relative density and in liquefaction resistance when vibrated for the same period of time. Liquefaction resistance of sand-silt mixtures prepared using this latter sand shows a correlation with relative density irrespective of fines content. In general, however, the magnitudes of changes in liquefaction resistance for given variations in vibration time, relative density, or void ratio vary depending on the soils under consideration. Characterization based on maximum and minimum void ratios indicates that the tested soils develop different structures as fines are added to their respective host sands. These structures influence initial specimen density, strains during consolidation, cyclic liquefaction resistance, and the undrained cyclic response of each soil. The different structures are the outcome of differences in particle size distributions, average particle sizes, and particle shapes of the two host sands and of the different relationships between these properties and those of the silt. Fines content alone does not provide an effective characterization of the effects of these factors. Monotonic DSS tests are also performed on specimens prepared by water sedimentation, and on specimens prepared by moist tamping, to identify the critical state lines of the tested soils. These critical state lines provide the basis for an alternative interpretation of the cyclic DSS test results within the critical state framework. It is shown that the test results imply general consistency between the observed cyclic and monotonic DSS soil responses. The effects of specimen layering are scrutinised by comparing DSS test results for uniform and layered specimens of the same soils. In this case, only a limited number of tests is performed, and the range of densities considered for the layered specimens is also limited.
Caution is therefore required in interpreting these results. The liquefaction resistance of layered specimens appears to be influenced by the bottom sand layer, irrespective of the global fines content of the specimen. The presence of a layered structure does not result in significant differences in liquefaction response with respect to uniform sand specimens. Cyclic triaxial data for Christchurch sandy silty soils available from previous studies are used to comparatively examine the behaviour observed in the tests of this study. The cyclic DSS liquefaction resistance of water-sedimented specimens is consistent with cyclic triaxial tests on undisturbed specimens performed by other investigators. The two data sets result in similar liquefaction triggering relationships for these soils. However, the stress-strain response characteristics of the two types of specimens are different, and undisturbed triaxial specimens exhibit a slower rate of increase in shear strains compared to water-sedimented DSS specimens. This could be due to the greater influence of fabric in the undisturbed specimens.
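The relative density quantity used throughout the discussion above follows the standard definition Dr = (e_max - e) / (e_max - e_min). A minimal worked example is sketched below; the index void ratios and specimen void ratios are hypothetical placeholders, not measured values from the study.

```python
# Minimal sketch of the relative-density calculation: Dr = (e_max - e)/(e_max - e_min).
def relative_density(e, e_max, e_min):
    """Relative density (fraction) from current, maximum and minimum void ratios."""
    return (e_max - e) / (e_max - e_min)

# Hypothetical example: the same sand-silt mixture after two vibration times.
e_max, e_min = 1.05, 0.55
for label, e in (("short vibration", 0.82), ("long vibration", 0.70)):
    print(f"{label}: e = {e:.2f}, Dr = {100 * relative_density(e, e_max, e_min):.0f}%")
```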

Research papers, University of Canterbury Library

The growth of the world's population located near areas prone to natural disasters has given rise to new ‘mega risks’; rebuilding after disasters will test governments’ capabilities to provide appropriate responses to protect people and businesses. In the aftermath of the Christchurch earthquakes (2010-2012) that destroyed much of the inner city, the government of New Zealand set up a new partnership between the public and private sectors to rebuild the city’s infrastructure. The new alliance, called SCIRT, used traditional risk management methods in its many construction projects and, in hindsight, this was seen as one of the causes of some of the unanticipated problems. This study investigated risk management practices in post-disaster recovery to produce a specific risk management model that can be used effectively during future post-disaster situations. The aim was to develop a risk management guideline for more integrated risk management and to fill the gap that arises when the traditional risk management framework is used in post-disaster situations. The study used the SCIRT alliance as a case study. The findings of the study are based on time and financial data from 100 rebuild projects, and on surveying and interviewing risk management professionals connected to the infrastructure recovery programme. The study focussed on post-disaster risk management in construction as a whole. It took into consideration the changes that happened to the people, the work and the environment due to the disaster. Systems thinking and system dynamics techniques were used because of the complexity of the recovery and to minimise the effect of unforeseen consequences. Based on an extensive literature review, the following methods were used to produce the model. The analytic hierarchy process and the relative importance index were used to identify the critical risks inside the recovery project. Systems theory methods and quantitative graph theory were used to investigate the dynamics of risks between the different management levels. Qualitative comparative analysis was used to explore the critical success factors. Finally, causal loop diagrams combined with a grounded theory approach were used to develop the model itself. The study identified that inexperienced staff, low management competency, poor communication, scope uncertainty, and non-alignment of the timing of strategic decisions with schedule demands were the key risk factors in recovery projects. Among the critical risk groups, it was found that at the strategic management level, financial risks attracted the highest level of interest, as the client needs to secure funding. At both the alliance-management and alliance-execution levels, safety and environmental risks were given top priority due to a combination of high levels of emotional, reputational and media stress. Risks arising from a lack of resources, combined with the high volume of work and the concern that costs could get out of control, alongside the aforementioned funding issues, encouraged the client to create the recovery alliance model with large reputable construction organisations to lock in the recovery cost at a time when the scope was still uncertain. This study found that building trust between all parties, clearer communication and a constant interactive flow of information established a more effective working environment.
Competent and clear allocation of risk management responsibilities, a cultural shift, risk prioritisation, and staff training were also crucial factors. Finally, the post-disaster risk management (PDRM) model can be described as an integrated risk management model that considers how the changes which happened to the environment, the people and their work caused them to think differently, in order to ease the complexity of the recovery projects. The model should be used as a guideline for recovery systems, especially after an earthquake, looking in detail at all the attributes and concepts which influence risk management, for more effective PDRM. The PDRM model is represented in Causal Loop Diagrams (CLD) in Figure 8.31 and is based on 10 principles (Figure 8.32) and 26 concepts (Table 8.1) with their attributes.
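One of the ranking tools mentioned above, the relative importance index, is commonly computed from survey ratings as RII = ΣW / (A × N), where W are respondents' ratings, A is the highest possible rating and N the number of respondents. The sketch below applies that formula to hypothetical survey data; the factor names and ratings are illustrative only, not the study's survey results.

```python
# Minimal sketch of the Relative Importance Index: RII = sum(W) / (A * N).
def rii(ratings, highest=5):
    return sum(ratings) / (highest * len(ratings))

# Hypothetical 1-5 ratings from seven respondents for four risk factors.
survey = {
    "inexperienced staff": [5, 4, 5, 4, 5, 5, 4],
    "scope uncertainty":   [4, 4, 5, 3, 4, 5, 4],
    "poor communication":  [4, 3, 4, 4, 5, 4, 4],
    "funding delays":      [3, 3, 4, 3, 4, 3, 3],
}

# Rank factors from most to least critical by their RII.
for factor, scores in sorted(survey.items(), key=lambda kv: rii(kv[1]), reverse=True):
    print(f"{factor}: RII = {rii(scores):.2f}")
```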

Research papers, University of Canterbury Library

The overarching goal of this dissertation is to improve predictive capabilities of geotechnical seismic site response analyses by incorporating additional salient physical phenomena that influence site effects. Specifically, multidimensional wave-propagation effects that are neglected in conventional 1D site response analyses are incorporated by: (1) combining results of 3D regional-scale simulations with 1D nonlinear wave-propagation site response analysis, and (2) modelling soil heterogeneity in 2D site response analyses using spatially-correlated random fields to perturb soil properties. A method to combine results from 3D hybrid physics-based ground motion simulations with site-specific nonlinear site response analyses was developed. The 3D simulations capture 3D ground motion phenomena on a regional scale, while the 1D nonlinear site response, which is informed by detailed site-specific soil characterization data, can capture site effects more rigorously. Simulations of 11 moderate-to-large earthquakes from the 2010-2011 Canterbury Earthquake Sequence (CES) at 20 strong motion stations (SMS) were used to validate the simulations against observed ground motions. The predictions were compared to those from an empirically-based ground motion model (GMM), and from 3D simulations with simplified VS30-based site effects modelling. By comparing all predictions to observations at seismic recording stations, it was found that the 3D physics-based simulations can predict ground motions with comparable bias and uncertainty to the GMM, albeit with significantly lower bias at long periods. Additionally, the explicit modelling of nonlinear site response improves predictions significantly compared to the simplified VS30-based approach for soft-soil or atypical sites that exhibit exceptionally strong site effects. A method to account for the spatial variability of soils and wave scattering in 2D site response analyses was developed and validated against a database of vertical array sites in California. The inputs required to run the 2D analyses are nominally the same as those required for 1D analyses (except for spatial correlation parameters), enabling easier adoption in practice. The first step was to create the platform and workflow, and to perform a sensitivity study involving 5,400 2D model realizations to investigate the influence of random field input parameters on wave scattering and site response. Boundary conditions were carefully assessed to understand their effect on the modelled response and to select appropriate assumptions for use on a 2D model with lateral heterogeneities. Multiple ground-motion intensity measures (IMs) were analyzed to quantify the influence of random field input parameters and boundary conditions. It was found that this method is capable of scattering seismic waves and creating spatially-varying ground motions at the ground surface. The redistribution of ground-motion energy across wider frequency bands, and the scattering attenuation of high-frequency waves in 2D analyses, resemble features observed in empirical transfer functions (ETFs) computed in other studies. The developed 2D method was subsequently extended to more complicated multi-layer soil profiles and applied to a database of 21 vertical array sites in California to test its appropriateness for future predictions. Again, different boundary condition and input motion assumptions were explored to extend the method to the in-situ conditions of a vertical array (with a sensor embedded in the soil).
ETFs were compared to theoretical transfer functions (TTFs) from conventional 1D analyses and 2D analyses with heterogeneity. Residuals of transfer-function-based IMs, and IMs of surface ground motions, were also used as validation metrics. The spatial variability of transfer-function-based IMs was estimated from 2D models and compared to the event-to-event variability from ETFs. This method was found capable of significantly improving predictions of median ETF amplification factors, especially for sites that display higher event-to-event variability. For sites that are well represented by 1D methods, the 2D approach can underpredict amplification factors at higher modes, suggesting that the level of heterogeneity may be over-represented by the 2D random field models used in this study.
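The spatially-correlated random fields mentioned above are commonly realised by sampling a correlated Gaussian field and applying it as a multiplicative (lognormal) perturbation to the baseline velocity model. The sketch below shows one such realisation via a Cholesky factorisation of an anisotropic exponential covariance; the grid size, correlation lengths and perturbation standard deviation are placeholder values, not those used in the study.

```python
# Minimal sketch of perturbing a shear-wave velocity field with a spatially
# correlated lognormal random field (anisotropic exponential autocorrelation).
import numpy as np

nx, nz, dx, dz = 40, 20, 2.0, 1.0          # grid dimensions and spacing (m)
theta_h, theta_v = 20.0, 4.0               # horizontal/vertical correlation lengths (m)
sigma_ln = 0.2                             # std dev of ln(Vs) perturbation

x = np.arange(nx) * dx
z = np.arange(nz) * dz
X, Z = np.meshgrid(x, z)                   # arrays of shape (nz, nx)
pts = np.column_stack([X.ravel(), Z.ravel()])

# Anisotropic exponential covariance of ln(Vs) between all grid-point pairs.
dxm = np.abs(pts[:, None, 0] - pts[None, :, 0]) / theta_h
dzm = np.abs(pts[:, None, 1] - pts[None, :, 1]) / theta_v
cov = sigma_ln**2 * np.exp(-2.0 * np.sqrt(dxm**2 + dzm**2))
cov += 1e-10 * np.eye(len(pts))            # small nugget for numerical stability

rng = np.random.default_rng(1)
field = np.linalg.cholesky(cov) @ rng.standard_normal(len(pts))
field = field.reshape(nz, nx)

vs_baseline = 200.0 + 10.0 * Z             # simple Vs gradient with depth (m/s)
vs_perturbed = vs_baseline * np.exp(field) # lognormal perturbation of Vs
print(vs_perturbed.round(1)[:3, :5])
```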

Research papers, Victoria University of Wellington

The Canterbury earthquake sequence (2010-2011) was the most devastating catastrophe in New Zealand's modern history. Fortunately, in 2011 New Zealand had a high insurance penetration ratio, with more than 95% of residences being insured for these earthquakes. This dissertation sheds light on the functions of disaster insurance schemes and their role in economic recovery after earthquakes. The first chapter describes the demand and supply for earthquake insurance and provides insights into different public-private partnership earthquake insurance schemes around the world. In the second chapter, we concentrate on three public earthquake insurance schemes in California, Japan, and New Zealand. The chapter examines what would have been the outcome had the system of insurance in Christchurch been different in the aftermath of the Canterbury earthquake sequence (CES). We focus on the California Earthquake Authority insurance program and the Japanese Earthquake Reinsurance scheme. Overall, the aggregate cost of the earthquake to the New Zealand public insurer (the Earthquake Commission) was USD 6.2 billion. If a similar-sized disaster event had occurred in Japan and California, homeowners would have received only around USD 1.6 billion and USD 0.7 billion from the Japanese and Californian schemes, respectively. We further describe the spatial and distributive aspects of these scenarios and discuss some of the policy questions that emerge from this comparison. The third chapter measures the longer-term effect of the CES on the local economy, using night-time light intensity measured from space, and focuses on the role of insurance payments for damaged residential property during the local recovery process. Uniquely for this event, more than 95% of residential housing units were covered by insurance and almost all incurred some damage. However, insurance payments were staggered over 5 years, enabling us to identify their local impact. We find that night-time luminosity can capture the process of recovery, and that insurance payments contributed significantly to the process of local economic recovery after the earthquake. Yet, delayed payments were less effective in assisting recovery, and cash settlement of claims was more effective than insurance-managed repairs. After the Christchurch earthquakes, the government declared about 8,000 houses as Red Zoned, prohibiting further development of these properties and offering to buy the owners out. The government provided two options for owners: the first was full payment for both land and dwelling at the 2007 property valuation; the second was payment for the land, with the rest to be paid by the owner's insurance. Most people chose the second option. Using data from LINZ combined with data from Stats NZ, the fourth chapter empirically investigates what led people to choose this second option, and how peer effects influenced the homeowners' choices. Due to climate change, public disclosure of coastal hazard information through maps and property reports has been used more frequently by local government. This is expected to raise awareness of disaster risks in local communities and help potential property owners make informed locational decisions. However, media outlets and the business sector argue that public hazard disclosure will have a negative effect on property values. Despite this opposition, some district councils in New Zealand have attempted to implement improved disclosure.
The Kapiti Coast district in the Wellington region serves as a case study for this research. In the fifth chapter, we utilize residential property sale data and coastal hazard maps from the local district council. This study employs a difference-in-difference hedonic property price approach to examine the effect of hazard disclosure on coastal property values. We also apply spatial hedonic regression methods, controlling for coastal amenities, as a robustness check. Our findings suggest that hazard designation has a statistically and economically insignificant impact on property values. Overall, risk perception of coastal hazards should be given more emphasis in these communities.
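The difference-in-difference hedonic specification described above can be expressed as a regression of log sale price on a hazard-zone dummy, a post-disclosure dummy, their interaction (the disclosure effect of interest) and hedonic controls. The sketch below shows that specification on simulated data; the variable names, sample and coefficients are hypothetical, not the Kapiti Coast dataset or the study's estimates.

```python
# Minimal sketch of a difference-in-differences hedonic price regression on
# simulated data; the interaction term is the disclosure-effect estimate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hazard_zone": rng.integers(0, 2, n),       # 1 = inside mapped hazard zone
    "post": rng.integers(0, 2, n),              # 1 = sold after disclosure
    "floor_area": rng.normal(150, 30, n),
    "dist_coast_km": rng.uniform(0.1, 5.0, n),
})
df["log_price"] = (13.0 + 0.004 * df["floor_area"] - 0.05 * df["dist_coast_km"]
                   + rng.normal(0, 0.15, n))    # simulated with zero true DiD effect

model = smf.ols("log_price ~ hazard_zone * post + floor_area + dist_coast_km",
                data=df).fit(cov_type="HC1")
print(model.params["hazard_zone:post"])         # DiD estimate of disclosure effect
```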

Research papers, University of Canterbury Library

This dissertation addresses several fundamental and applied aspects of ground motion selection for seismic response analyses. In particular, the following topics are addressed: the theory and application of ground motion selection for scenario earthquake ruptures; the consideration of causal parameter bounds in ground motion selection; ground motion selection in the near-fault region where directivity effects are significant; and methodologies for the consideration and propagation of epistemic uncertainty in the context of ground motion selection and seismic performance assessment. The paragraphs below outline each contribution in more detail. A scenario-based ground motion selection method is presented which considers the joint distribution of multiple intensity measure (IM) types based on the generalised conditional intensity measure (GCIM) methodology (Bradley, 2010b, 2012c). The ground motion selection algorithm is based on generating realisations of the considered IM distributions for a specific rupture scenario and then finding the prospective ground motions which best fit the realisations using an optimal amplitude scaling factor. In addition, using different rupture scenarios and site conditions, two important aspects of the GCIM methodology are scrutinised: (i) the use of different weight vectors for the various IMs considered; and (ii) quantifying the importance of replicate selections for ensembles with different numbers of desired ground motions. As an application of the developed scenario-based ground motion selection method, ground motion ensembles are selected to represent several major earthquake scenarios in New Zealand that pose a significant seismic hazard, namely the Alpine, Hope and Porters Pass ruptures for Christchurch city, and the Wellington, Ohariu, and Wairarapa ruptures for Wellington city. A rigorous basis is developed, and sensitivity analyses are performed, for the consideration of bounds on causal parameters (e.g., magnitude, source-to-site distance, and site condition) for ground motion selection. The effect of causal parameter bound selection on both the number of available prospective ground motions from an initial empirical as-recorded database, and the statistical properties of the IMs of the selected ground motions, is examined. It is also demonstrated that using causal parameter bounds is not a reliable approach to implicitly account for ground motion duration and cumulative effects when selection is based on only spectral acceleration (SA) ordinates. Specific causal parameter bounding criteria are recommended for general use as a ‘default’ bounding criterion, with possible adjustments by the analyst based on problem-specific preferences. An approach is presented to consider forward directivity effects in seismic hazard analysis which does not separate the hazard calculations for pulse-like and non-pulse-like ground motions. Also, the ability of ground motion selection methods to appropriately select records containing forward directivity pulse motions in the near-fault region is examined. Particular attention is given to ground motion selection which is explicitly based on ground motion IMs, including SA, duration, and cumulative measures, rather than on implicit parameters (i.e., distance, and pulse or non-pulse classifications) that are conventionally used to heuristically distinguish between near-fault and far-field records. No ad hoc criteria, in terms of the number of directivity ground motions and their pulse periods, are enforced for selecting pulse-like records.
Example applications are presented with different rupture characteristics, source-to-site geometries, and site conditions. It is advocated that selecting ground motions in the near-fault region based on IM properties alone is preferable to approaches in which the proportion of pulse-like motions and their pulse periods are specified a priori as strict criteria for ground motion selection. Three methods are presented to propagate the effect of seismic hazard and ground motion selection epistemic uncertainties to seismic performance metrics. These methods differ in the level of rigor with which they propagate the epistemic uncertainty in the conditional distribution of IMs utilised in ground motion selection, the selected ground motion ensembles, and the number of nonlinear response history analyses performed to obtain the distribution of engineering demand parameters. These methods are compared for an example site, where it is observed that, for seismic demand levels below the collapse limit, epistemic uncertainty in ground motion selection is a smaller contributor relative to the uncertainty in the seismic hazard itself. In contrast, uncertainty in the ground motion selection process increases the uncertainty in the seismic demand hazard for near-collapse demand levels.
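The realisation-matching idea at the heart of the selection algorithm described above can be illustrated with a stripped-down sketch: draw realisations of a target joint lognormal IM distribution, then pick for each realisation the candidate record whose ln-IM vector is closest in a weighted least-squares sense. Amplitude scaling and the full GCIM details are deliberately omitted, and the target distribution, weights and candidate database below are hypothetical.

```python
# Minimal sketch of realisation matching for multi-IM ground motion selection.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target conditional distribution of ln IMs: [ln SA(1s), ln Ds575, ln CAV].
mu = np.array([np.log(0.4), np.log(8.0), np.log(0.6)])
sigma = np.array([0.45, 0.35, 0.40])
corr = np.array([[1.0, -0.3, 0.5],
                 [-0.3, 1.0, 0.2],
                 [0.5, 0.2, 1.0]])
cov = np.outer(sigma, sigma) * corr

n_select = 20
realisations = rng.multivariate_normal(mu, cov, size=n_select)

# Hypothetical candidate database: ln-IM values of 500 as-recorded motions.
candidates = rng.multivariate_normal(mu, 4.0 * cov, size=500)

weights = np.array([0.6, 0.2, 0.2])           # relative importance of each IM
selected = []
for r in realisations:
    misfit = ((candidates - r) ** 2 * weights).sum(axis=1)
    selected.append(int(np.argmin(misfit)))
print(sorted(set(selected)))                   # indices of the selected records
```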

Research papers, University of Canterbury Library

This dissertation addresses a diverse range of topics in the area of physics-based ground motion simulation with particular focus on the Canterbury, New Zealand region. The objectives achieved provide the means to perform hybrid broadband ground motion simulation and subsequently validate the simulation methodology employed. In particular, the following topics are addressed: the development of a 3D seismic velocity model of the Canterbury region for broadband ground motion simulation; the development of a 3D geologic model of the interbedded Quaternary formations to provide insight on observed ground motions; and the investigation of systematic effects through ground motion simulation of small-to-moderate magnitude earthquakes. The paragraphs below outline each contribution in more detail. As a means to perform hybrid broadband ground motion simulation, a 3D model of the geologic structure and associated seismic velocities in the Canterbury region is developed utilising data from depth-converted seismic reflection lines, petroleum and water well logs, cone penetration tests, and implicitly guided by existing contour maps and geologic cross sections in data-sparse subregions. The model explicitly characterises five significant and regionally recognisable geologic surfaces that mark the boundaries between geologic units with distinct lithology and age, including the Banks Peninsula volcanics, which are noted to strongly influence seismic wave propagation. The Basement surface represents the base of the Canterbury sedimentary basin, where a large impedance contrast exists, resulting in basin-generated waves. Seismic velocities for the lithological units between the geologic surfaces are derived from well logs, seismic reflection surveys, root mean square stacking velocities, and empirical correlations, and benchmarked against a regional crustal model, thus providing the necessary information for a Canterbury velocity model for use in broadband seismic wave propagation. A 3D high-resolution model of the Quaternary geologic stratigraphic sequence in the Canterbury region is also developed utilising datasets of 527 high-quality water well logs and 377 near-surface cone penetration test records. The model, developed using geostatistical Kriging, represents the complex interbedded regional Quaternary geology by characterising the boundaries between significant interbedded geologic formations as 3D surfaces, including explicit modelling of the formation unconformities resulting from the Banks Peninsula volcanics. The stratigraphic layering present can result in complex wave propagation. The most prevalent trend observed in the surfaces was the downward dip from inland to the eastern coastline as a result of the dominant fluvial depositional environment of the terrestrial gravel formations. The developed model provides a beneficial contribution towards developing a comprehensive understanding of recorded ground motions in the region, and also provides the necessary information for future site characterisation and site response analyses. To highlight the practicality of the model, an example is presented illustrating the role of the model in constraining surface wave analysis-based shear wave velocity profiling, along with the calculation of transfer functions to quantify the effect of the interbedded geology on wave propagation.
Lastly, an investigation of systematic biases in the Graves and Pitarka (2010, 2015) ground motion simulation methodology and the specific inputs used for the Canterbury region is presented, considering 144 small-to-moderate magnitude earthquakes. In the simulation of these earthquakes, the 3D Canterbury Velocity Model, developed as a part of this dissertation, is used for the low-frequency simulation, and a regional 1D velocity model for the high-frequency simulation. Representative results for individual earthquake sources are first presented to highlight the characteristics of the small-to-moderate magnitude earthquake simulations through waveforms, intensity measure scaling with source-to-site distance, and spectral bias of the individual events. Subsequently, a residual decomposition is performed to examine the between- and within-event residuals between observed data, and simulated and empirical predictions. By decomposing the residuals into between- and within-event residuals, the biases in source, path and site effects, and their causes, can be inferred. The residuals are comprehensively examined considering their aggregated characteristics, dependence on predictor variables, spatial distribution, and site-specific effects. The results of the simulation are also benchmarked against empirical ground motion models, where their similarities manifest from common components in their prediction. Ultimately, suggestions to improve the predictive capability of the simulations are presented as a result of the analysis.
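The between- and within-event residual decomposition described above is typically carried out with a random-intercept mixed-effects model: the random effect per event gives the between-event residuals, and the remaining model residuals are the within-event residuals. The sketch below shows that decomposition on simulated residuals; the event counts, standard deviations and data are hypothetical, not the Canterbury simulation results.

```python
# Minimal sketch of between-/within-event residual decomposition with a
# random-intercept mixed-effects model (simulated residual data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_events, n_sites = 30, 25
event_terms = rng.normal(0.0, 0.3, n_events)          # true between-event terms

rows = []
for eq in range(n_events):
    for _ in range(n_sites):
        rows.append({"event": eq,
                     "residual": event_terms[eq] + rng.normal(0.0, 0.5)})
df = pd.DataFrame(rows)

# Random intercept per event: the fixed intercept is the overall model bias,
# the random effects are between-event residuals, and the remaining model
# residuals are the within-event residuals.
fit = smf.mixedlm("residual ~ 1", data=df, groups=df["event"]).fit()
print("overall bias:", round(fit.params["Intercept"], 3))
print("tau (between-event std):", round(float(np.sqrt(fit.cov_re.iloc[0, 0])), 3))
print("phi (within-event std):", round(float(np.sqrt(fit.scale)), 3))
```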

Research papers, University of Canterbury Library

Recent surface-rupturing earthquakes in New Zealand have highlighted significant exposure and vulnerability of the road network to fault displacement. Understanding fault displacement hazard and its impact on roads is crucial for mitigating risks and enhancing resilience. There is a need for regional-scale assessments of fault displacement to identify vulnerable areas within the road network for the purposes of planning and prioritising site-specific investigations. This thesis employs updated analysis of data from three historical surface-rupturing earthquakes (Edgecumbe 1987, Darfield 2010, and Kaikōura 2016) to develop an empirical model that addresses the gap in regional fault displacement hazard analysis. The findings contribute to understanding of:
• How to use seismic hazard model inputs for regional fault displacement hazard analysis
• How faulting type and sediment cover affect the magnitude and spatial distribution of fault displacement
• How the distribution of displacement and regional fault displacement hazard is impacted by secondary faulting
• The inherent uncertainties and limitations associated with employing an empirical approach at a regional scale
• Which sections of New Zealand’s roading network are most susceptible to fault displacement hazard and warrant site-specific investigations
• Which regions should prioritise updating emergency management plans to account for post-event disruptions to roading
I used displacement data from the aforementioned historical ruptures to generate displacement versus distance-to-fault curves for various displacement components, fault types, and geological characteristics. Using those relationships and established relationships for along-strike displacement, displacement contours were generated surrounding active faults within the NZ Community Fault Model. Next, I calculated a new measure of 1D strain along roads as well as relative hazard, which integrated 1D strain and normalised slip rate data. Summing these values at the regional level identified areas of heightened relative hazard across New Zealand, and permitted an assessment of the susceptibility of road networks using geomorphon land classes as proxies for vulnerability. The results reveal that fault-parallel displacements tend to localise near the fault plane, while vertical and fault-perpendicular displacements are sustained over extended distances. Notably, no significant disparities were observed in off-fault displacement between the hanging wall and footwall sides of the fault, or among different surface geology types, potentially attributable to dataset biases. The presence of secondary faulting in the dataset contributes to increased levels of tectonic displacement farther from the fault, highlighting its significance in hazard assessments. Furthermore, fault displacement contours delineate broader zones around dip-slip faults compared to strike-slip faults, with correlations identified between fault length and displacement width. Road ‘strain’ values are higher around dip-slip faults, with notable examples observed in the Westland and Buller Districts. As expected, relative hazard analysis revealed elevated values along faults with high slip rates, notably along the Alpine Fault. A regional-scale analysis of hazard and exposure reveals heightened relative hazard in specific regions, including Wellington, Southern Hawke’s Bay, Central Bay of Plenty, Central West Coast, inland Canterbury, and the Wairau Valley of Marlborough.
Notably, the Central West Coast exhibits the highest summed relative hazard value, attributed to the fast-slipping Alpine Fault. The South Island generally experiences greater relative hazard than the North Island, despite having fewer roads, because of its larger and faster-slipping faults. Central regions of New Zealand face heightened risk compared to southern or northern regions. Critical road links intersecting high-slip-rate faults, such as State Highways 6, 73, 1, and 2, should be prioritised for site-specific assessments, emergency management planning and targeted mitigation strategies. Roads intersecting the Alpine Fault are prone to large fault-parallel displacements, requiring post-earthquake repair efforts. Mitigation strategies include routing future roads away from active faults, modifying road fill and surface material, and acknowledging the inherent risk so that repair of critical roads can be prioritised after an earthquake. Implementing these strategies enhances emergency response by improving accessibility to isolated regions following a major surface-rupturing event, facilitating faster supply delivery and evacuation assistance. This thesis contributes to the advancement of understanding fault displacement hazard by introducing a novel regional, empirical approach. The methods and findings highlight the importance of further developing such analyses and extending them to other critical infrastructure types exposed to fault displacement hazard in New Zealand. Enhancing our comprehension of the risks associated with fault displacement hazard offers valuable insights into mitigation strategies for roading infrastructure and informs emergency response planning, thereby strengthening both national and global infrastructure resilience against geological hazards.
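The following is a minimal, hypothetical sketch of how a relative hazard measure combining 1D road strain and normalised slip rate might be aggregated by region; the column names, values and the simple product used to combine the two quantities are assumptions for illustration and do not reproduce the thesis's exact formulation.

```python
import pandas as pd

# Hypothetical road-segment table: displacement change along each segment (m),
# segment length (m), normalised fault slip rate, and region name.
segments = pd.DataFrame({
    "region": ["Central West Coast", "Central West Coast", "Wellington"],
    "delta_displacement_m": [0.8, 0.3, 0.2],
    "segment_length_m": [200.0, 150.0, 100.0],
    "normalised_slip_rate": [1.0, 1.0, 0.4],
})

# 1D strain: displacement change per unit road length (dimensionless).
segments["strain_1d"] = segments["delta_displacement_m"] / segments["segment_length_m"]

# Relative hazard: here taken as strain weighted by normalised slip rate
# (an assumed combination for illustration only).
segments["relative_hazard"] = segments["strain_1d"] * segments["normalised_slip_rate"]

# Summing per region highlights areas of heightened relative hazard.
print(segments.groupby("region")["relative_hazard"].sum().sort_values(ascending=False))
```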

Research papers, University of Canterbury Library

One of the most controversial issues highlighted by the 2010-2011 Christchurch earthquake series, and more recently by the 2016 Kaikōura earthquake, has been the evident difficulty of, and lack of knowledge and guidelines for: a) evaluating the residual capacity of damaged buildings to sustain future aftershocks; b) selecting and implementing a series of reliable repair techniques to bring the structure back to a condition substantially the same as prior to the earthquake; and c) predicting the cost (or cost-effectiveness) of such repair interventions when compared to full replacement costs, while accounting for potential aftershocks in the near future. As a result of such complexity and uncertainty (i.e., risk), in combination with the possibility (unique to New Zealand when compared to most seismic-prone countries) of relying on financial support from insurance companies, many modern buildings, in numbers exceeding typical expectations from past international experience, have ended up being demolished. This has resulted in additional time and indirect losses prior to full reconstruction, as well as increased uncertainty about the actual relocation of the investment. This research project provides the main end-users and stakeholders (practising engineers, owners, local and government authorities, insurers, and regulatory agencies) with comprehensive evidence-based information to assess the residual capacity of damaged reinforced concrete buildings, and to evaluate the feasibility of repair techniques, in order to support their delicate decision-making process of repair versus demolition or replacement. A literature review on the effectiveness of epoxy injection repairs, as well as experimental tests on full-scale beam-column joints, shows that repaired specimens have a reduced initial stiffness compared with the undamaged specimen, with no apparent strength reduction, sometimes exhibiting higher displacement ductility capacities. Although the bond between the steel and concrete is only partially restored, it still allows the repaired specimen to dissipate at least the same amount of hysteretic energy. Experimental tests on buildings subjected to earthquake loading demonstrate that, even for severe damage levels, the ability of epoxy injection to restore the initial stiffness of the structure is significant. A literature review on damage assessment and repair guidelines suggests that there is consensus within the international community that concrete elements with cracks less than 0.2 mm wide require only cosmetic repairs; that epoxy injection of cracks less than 2.0 mm wide and concrete patching of spalled cover concrete (i.e., minor to moderate damage) is an appropriate repair strategy; and that for severely damaged components (e.g., cracks greater than 2.0 mm wide, crushing of the concrete core, buckling of the longitudinal reinforcement) local replacement of steel and/or concrete, in addition to epoxy crack injection, is more appropriate. In terms of expected cracking patterns, non-linear finite element investigations on well-designed reinforced concrete beam-to-column joints have shown that a smaller number of cracks, but with wider openings, is expected for larger compressive concrete strength, f’c, and lower reinforcement content, ρs. It was also observed that the tensile concrete strength, ft, strongly affects the expected cracking pattern in beam-column joints, the cracking being more uniformly distributed for lower ft values.
Strain rate effects do not seem to play an important role in the cracking pattern. However, small variations in the cracking pattern were observed for low reinforcement content as it approaches the minimum required by NZS 3101:2006. Simple equations are proposed in this research project to relate the maximum and residual crack widths to the steel strain at peak displacement, with or without axial load. A literature review on fracture of reinforcing steel due to low-cycle fatigue, including recent research using steel manufactured to New Zealand standards, is also presented. Experimental results describing the influence of cyclic loading on the ultimate strain capacity of the steel are also discussed, and preliminary equations to account for that effect are proposed. A literature review on current practice for assessing the seismic residual capacity of structures is also presented. The various factors affecting the residual fatigue life at a component level (i.e., plastic hinge) of well-designed reinforced concrete frames are discussed, and equations to quantify each of them are proposed, as well as a methodology to incorporate them into a full displacement-based procedure for pre-earthquake and post-earthquake seismic assessment.
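For general background on the low-cycle fatigue discussion above (and not as one of the preliminary equations proposed in this thesis), the Coffin-Manson relation is the usual starting point in the literature for linking plastic strain amplitude to fatigue life:

$$ \varepsilon_a = \varepsilon_f' \,(2N_f)^{c} $$

where $\varepsilon_a$ is the plastic strain amplitude, $\varepsilon_f'$ the fatigue ductility coefficient, $2N_f$ the number of reversals to failure, and $c$ the fatigue ductility exponent.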

Research papers, University of Canterbury Library

In September 2010 and February 2011, the Canterbury region experienced devastating earthquakes with an estimated economic cost of over NZ$40 billion (Parker and Steenkamp, 2012; Timar et al., 2014; Potter et al., 2015). The insurance market played an important role in rebuilding the Canterbury region after the earthquakes. Homeowners, insurance and reinsurance markets and New Zealand government agencies faced a difficult task in managing the rebuild process. From an empirical and theoretical research viewpoint, the Christchurch disaster calls for an assessment of how the insurance market deals with such disasters in the future. Previous studies have investigated market responses to losses in global catastrophes by focusing on the insurance supply side. This study investigates both demand-side and supply-side insurance market responses to the Christchurch earthquakes. Despite the fact that New Zealand is prone to seismic activity, there are scant previous studies in the area of earthquake insurance. This study therefore offers a unique opportunity to examine and document the New Zealand insurance market’s response to catastrophe risk, providing results critical for understanding market responses after major loss events in general. A review of previous studies shows that higher premiums suppress demand, but how higher premiums and a higher probability of risk jointly affect demand is still largely unknown. According to previous studies, the supply of disaster coverage is curtailed unless the market is subsidised; however, there is still unsettled discussion on why demand decreases with time from the previous disaster even when the supply of coverage is subsidised by the government. Natural disaster risks pose a set of challenges for insurance market players because of the substantial ambiguity associated with the probability of such events occurring and the high spatial correlation of catastrophe losses. Private insurance market inefficiencies due to high premiums and spatially concentrated risks call for government intervention in the provision of natural disaster insurance to avert noninsurance and underinsurance. Political economy considerations make it more likely for government support to be called for if many people are uninsured than if few people are uninsured. However, emergency assistance for property owners after catastrophe events can encourage property owners not to buy insurance against natural disasters and to develop adverse selection behaviour, generating larger future risks for homeowners and governments. On the demand side, this study has developed an intertemporal model to examine how demand for insurance changes post-catastrophe, and how to model it theoretically. In this intertemporal model, insurance can be sought in two sequential periods; at the second period, it is known whether or not a loss event happened in period one. The results show that period-one demand for insurance increases relative to the standard single-period model when the second period is taken into consideration, and that period-two insurance demand is higher post-loss than both the period-one demand and the period-two demand without a period-one loss. To investigate policyholders’ experience from the demand-side perspective, a total of 1,600 survey questionnaires were administered, and responses from 254 participants were received, representing a 16 percent response rate. Survey data were gathered from four institutions in Canterbury and are probably not representative of the entire population.
The results of the survey show that the change from full replacement value policies to nominated replacement value policies is a key determinant of the direction of change in the level of insurance coverage after the earthquakes. The earthquakes also highlighted the plight of those who were underinsured, prompting policyholders to update their insurance coverage to reflect the estimated cost of rebuilding their property. The survey adds further evidence to the existing literature, for example that those who have had a recent experience of disaster loss report an increased perception of the risk of a similar event happening in future, with females reporting higher risk perception than males. Of the demographic variables, only gender has a relationship with changes in household cover. On the supply side, this study has built a risk-based pricing model suitable for generating a competitive premium rate for natural disaster insurance cover. Using illustrative data from the Christchurch red-zone suburbs, the model generates competitive premium rates for catastrophe risk. When the proposed model incorporates the new RMS high-definition New Zealand Earthquake Model, for example, insurers can use it to identify losses at a granular level and so calculate the competitive premium. This study observes that the keys to the success of the New Zealand dual insurance system, despite the high prevalence of catastrophe losses, are: firstly, the EQC’s flat-rate pricing structure, which keeps private insurance premiums affordable and supports very high nationwide homeowner take-up rates of natural disaster insurance; and secondly, the elaborate reinsurance arrangements that private insurers and the EQC have in place. By efficiently transferring risk to reinsurers, the cost of writing primary insurance is considerably reduced, ultimately expanding primary insurance capacity and the supply of insurance coverage.
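As a generic illustration of risk-based pricing of the kind described above (not the thesis's actual model), the sketch below computes a premium rate from a set of modelled earthquake events; the event rates, losses and loading factors are assumed values.

```python
import numpy as np

def risk_based_premium_rate(event_rates, event_losses, sum_insured,
                            expense_loading=0.25, risk_loading=0.15):
    """Illustrative risk-based premium rate for catastrophe cover.

    A generic actuarial sketch: the pure premium is the annual expected loss
    over a set of modelled events, to which risk and expense loadings are added."""
    event_rates = np.asarray(event_rates)    # annual occurrence rate of each event
    event_losses = np.asarray(event_losses)  # loss to this property in each event
    pure_premium = np.sum(event_rates * event_losses)                # expected annual loss
    loss_volatility = np.sqrt(np.sum(event_rates * event_losses**2)) # crude spread measure
    premium = pure_premium * (1 + expense_loading) + risk_loading * loss_volatility
    return premium / sum_insured             # premium rate per dollar insured

# Example: three modelled events affecting a NZ$500,000 dwelling.
print(risk_based_premium_rate([0.01, 0.002, 0.0005],
                              [20_000, 150_000, 400_000],
                              sum_insured=500_000))
```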

Research papers, The University of Auckland Library

The recent instances of seismic activity in Canterbury (2010/11) and Kaikōura (2016) in New Zealand have exposed an unexpected level of damage to non-structural components, such as buried pipelines and building envelope systems. The cost of broken buried infrastructure, such as pipeline systems, to the Christchurch Council was excessive, as was the cost of repairing building envelopes to building owners in both Christchurch and Wellington (due to the Kaikōura earthquake), which indicates there are problems with the compliance pathways for both of these systems. Councils rely on product testing and robust engineering design practices to provide compliance certification of the suitability of product systems, while asset and building owners rely on that compliance as proof of an acceptable design. In addition, forensic engineers rely on the same product testing and design techniques to analyse earthquake-related failures, and lifeline analysts rely on them to predict future outcomes pre-earthquake. The aim of this research was to record the actual field-observed seismic damage to buried pipeline and building envelope systems from the Canterbury and Kaikōura earthquakes, develop suitable testing protocols for assessing the systems’ seismic resilience, and produce prediction design tools that reflect the collected field observations with better accuracy than the tools presently used by forensic engineers and lifeline analysts. The main research chapters of this thesis comprise four publications that describe the gathering of seismic damage data for pipes (Publication 1 of 4) and building envelopes (Publication 2 of 4). Experimental testing and the development of prediction design tools for both systems are described in Publications 3 and 4. The field observations (discussed in Publication 1 of 4) revealed that segmented pipe joints, such as those used in thick-walled PVC pipes, performed particularly poorly with respect to seismic resilience. Once a joint was damaged, silt and other deleterious material were able to penetrate the pipeline, causing blockages and the shutdown of key infrastructure services. At present, the governing standards for PVC pipes are AS/NZS 1477 (pressure systems) and AS/NZS 1260 (gravity systems), neither of which includes a protocol for evaluating PVC pipe joints for seismic resilience. Testing methodologies were designed to test a PVC pipe joint under various simultaneously applied axial and transverse loads (discussed in Publication 3 of 4). The goal of the laboratory experiment was to establish an easy-to-apply testing protocol that could fill the void in the aforementioned standards and produce boundary data for developing a design tool able to predict the observed failures given the site-specific conditions surrounding the pipe. A substantial amount of building envelope glazing system damage was recorded in the CBDs of both Christchurch and Wellington, including gasket dislodgement, cracked glazing, and dislodged glazing. The observational research (Publication 2 of 4) concluded that glazing systems are a good indicator of building envelope damage, as the glazing had consistent breaking characteristics, like a ballistic fuse used in forensic blast analysis.
The compliance testing protocol recognised in the New Zealand Building Code, Verification Method E2/VM1, relies on the testing method from Standard AS/NZS 4284 and stipulates that typical penetrations, such as glazing systems, be included in the test specimen. Some of the building envelope systems that failed in the recent New Zealand earthquakes had been assessed with glazing systems using either the AS/NZS 4284 or E2/VM1 methods and still failed unexpectedly, which suggests that improvements to the testing protocols are required. An experiment was designed to mimic the observed earthquake damage using bi-directional loading (discussed in Publication 4 of 4) and to identify improvements to the current testing protocol. In a similar way to the pipes, the observational and test data were then used to develop a design prediction tool. For both pipes (Publication 3 of 4) and glazing systems (Publication 4 of 4), experimentation suggests that modifying the existing testing standards would yield more realistic earthquake damage results. The research indicates that including a specific joint testing regime for pipes, and positioning the glazing system in a specific location in the specimen, would improve the relevant standards with respect to the seismic resilience of these systems. Improving seismic resilience in pipe joints and glazing systems would improve existing council compliance pathways, which would potentially reduce the liability for damage claims against the government after an earthquake event. The developed design prediction tools, for both pipe and glazing systems, use local data specific to the system being scrutinised, such as local geology, dimensional characteristics of the system, actual or predicted peak ground accelerations (both vertical and horizontal) and the results of product-specific bi-directional testing. The design prediction tools would improve the accuracy of existing techniques used by forensic engineers examining the cause of failure after an earthquake and by lifeline analysts examining predictive earthquake damage scenarios.

Research papers, University of Canterbury Library

As a consequence of the 2010–2011 Canterbury earthquake sequence, Christchurch experienced widespread liquefaction, vertical settlement and lateral spreading. These geological processes caused extensive damage to both housing and infrastructure, and substantially increased the need for geotechnical investigation. Cone Penetration Testing (CPT) has become the most common method for liquefaction assessment in Christchurch, and issues have been identified with the soil behaviour type, liquefaction potential and vertical settlement estimates, particularly in the north-western suburbs of Christchurch where soils consist mostly of silts, clayey silts and silty clays. The CPT soil behaviour type often appears to over-estimate the fines content within a soil, while the calculated liquefaction potential and vertical settlement are often higher than those measured after the Canterbury earthquake sequence. To investigate these issues, laboratory work was carried out on three adjacent CPT/borehole pairs from the Groynes Park subdivision in northern Christchurch. Boreholes were logged according to NZGS standards, separated into stratigraphic layers, and laboratory tests were conducted on representative samples. Comparison of these results with the CPT soil behaviour types provided valuable information: on average, 62% of soils at the Groynes Park subdivision were classified by the CPT as finer than what was actually present, 20% were classified as coarser than what was actually present, and only 18% were correctly classified. Hence the CPT soil behaviour type does not accurately describe the stratigraphic profile at the Groynes Park subdivision, and it is understood that this is also the case in much of northwest Christchurch where similar soils are found. The computer software CLiq, by GeoLogismiki, uses assessment parameter constants which can be adjusted for each CPT file in an attempt to make the analysis more accurate. These parameter changes can in some cases substantially alter the results of the liquefaction analysis. The sensitivity of the results to raising and lowering the water table, lowering the soil behaviour type index (Ic) liquefaction cutoff value, the layer detection option, and the weighting factor option was analysed by comparison with a set of ‘base settings’. The investigation confirmed that liquefaction analysis results can be very sensitive to the parameters selected, and demonstrated the dependency of the soil behaviour type on the soil behaviour type index, as the tested assessment parameters made very little to no change to the soil behaviour type plots. The soil behaviour type index, Ic, developed by Robertson and Wride (1998), is used to define a soil’s behaviour type according to a set of numerical boundaries. In addition, the liquefaction cutoff point is defined as Ic > 2.6, whereby it is assumed that any soil with an Ic value above this will not liquefy due to clay-like tendencies (Robertson and Wride, 1998). This method has been identified in this thesis as potentially unsuitable for some areas of Christchurch, as it was developed for mostly sandy soils.
An alternative methodology involving adjustment of the Robertson and Wride (1998) soil behaviour type boundaries is proposed as follows:
• Ic < 1.31 – Gravelly sand to dense sand
• 1.31 < Ic < 1.90 – Sands: clean sand to silty sand
• 1.90 < Ic < 2.50 – Sand mixtures: silty sand to sandy silt
• 2.50 < Ic < 3.20 – Silt mixtures: clayey silt to silty clay
• 3.20 < Ic < 3.60 – Clays: silty clay to clay
• Ic > 3.60 – Organic soils: peats.
When these soil behaviour type boundary changes were applied to 15 test sites throughout Christchurch, 67% showed an improved soil behaviour type classification, while the remaining 33% remained unchanged because they consisted almost entirely of sand. Within these boundary changes, the liquefaction cutoff point was moved from Ic > 2.6 to Ic > 2.5, which altered the liquefaction potential and vertical settlement to more realistic values. This confirmed that the overall soil behaviour type boundary changes appear to resolve the soil behaviour type issues and reduce the overestimation of liquefaction potential and vertical settlement. This thesis acts as a starting point for researching the issues discussed. In particular, useful future work includes investigation of the CLiq assessment parameter adjustments, and of which adjustments are most suitable for use in clay-rich soils such as those in Christchurch, including consideration of how the water table can be better assessed when perched layers of water exist, given the limitation that only one elevation can be entered into CLiq. Additionally, a useful investigation would be a comparison of the known liquefaction and settlements from the Canterbury earthquake sequence with the liquefaction and settlement potentials calculated in CLiq for equivalent shaking conditions. This would enable the difference between the two to be accurately defined and a suitable adjustment applied. Finally, inconsistencies between the Laser-Sizer and Hydrometer should be investigated, as the Laser-Sizer under-estimated the fines content by up to one third of the Hydrometer values.
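The proposed boundary adjustments translate directly into a simple classification routine; the sketch below implements the boundaries and the revised Ic > 2.5 liquefaction cutoff listed above (the function names are illustrative).

```python
def classify_soil_behaviour(ic):
    """Soil behaviour type from the CPT soil behaviour type index Ic,
    using the adjusted boundaries proposed in this thesis."""
    if ic < 1.31:
        return "Gravelly sand to dense sand"
    elif ic < 1.90:
        return "Sands: clean sand to silty sand"
    elif ic < 2.50:
        return "Sand mixtures: silty sand to sandy silt"
    elif ic < 3.20:
        return "Silt mixtures: clayey silt to silty clay"
    elif ic < 3.60:
        return "Clays: silty clay to clay"
    else:
        return "Organic soils: peats"

def is_liquefiable(ic):
    """Adjusted cutoff: soils with Ic > 2.5 are assumed too clay-like to
    liquefy (originally Ic > 2.6 in Robertson and Wride, 1998)."""
    return ic <= 2.5

print(classify_soil_behaviour(2.7), is_liquefiable(2.7))
```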

Research papers, University of Canterbury Library

Liquefaction of sandy soil has been observed to cause significant damage to infrastructure during major earthquakes. Historical cases of liquefaction have typically occurred in sands containing some portion of fines particles, which are defined as 75 μm or smaller in diameter. The effects of fines on the undrained behaviour of sand are, however, not fully understood, and this study therefore attempts to quantify these effects through undrained testing of sand mixed with non-plastic fines sourced from Christchurch, New Zealand. The experimental program carried out during this study consisted of undrained monotonic and cyclic triaxial tests performed on three different mixtures of sand and fines: the Fitzgerald Bridge mixture (FBM), and two Pinnacles Sand mixtures (PSM1 and PSM2). The fines content of each host sand was systematically varied up to a maximum of 30%, with all test specimens being reconstituted using moist tamping deposition. The undrained test results from the FBM soils were interpreted using a range of different measures of initial state. When using void ratio and relative density, the addition of fines to the FBM sand caused more contractive behaviour for both monotonic and cyclic loadings. This resulted in lower strengths at the steady state of deformation, and lower liquefaction resistances. When the intergranular void ratio was used for the interpretation, the effect of additional fines was to cause a less contractive response in the sand. The state parameter and state index were also used to interpret the undrained cyclic test results; these measures suggested that additional fines caused less contractive sand behaviour, the opposite of that observed when using the void ratio. This highlighted the dependency of the inferred effect of fines on the parameter chosen as a basis for the response comparison, and pointed to the need to identify a measure that normalizes such effects. Based on the FBM undrained test results and interpretations, the equivalent granular void ratio, e*, was identified from the literature as a measure of initial state that normalizes the effects of fines on the undrained behaviour of sand up to a fines content of 30%. This is done through a parameter within the e* definition termed the fines influence factor, b, which quantifies the effects of fines from a value of zero (no effect) to one (same effect as sand particles). The value of b was also determined to be different when interpreting the steady state lines (bSSL) and cyclic resistance curves (bCR) respectively for a given mixture of sand and fines. The steady state lines and cyclic resistance curves of the FBM soils and a number of other sand-fines mixtures sourced from the literature were subsequently interpreted using the equivalent granular void ratio concept, with bSSL and bCR values being back-calculated from the respective test data sets. Based on these interpretations, it was concluded that e* was conceptually a useful parameter for characterizing and quantifying the effects of fines on the undrained behaviour of sand, assuming the fines influence factor value could be derived. To allow prediction of the fines influence factor values, bSSL and bCR were correlated with material and depositional properties of the presented sand-fines mixtures. It was found that as the size of the fines particles relative to the sand particles became smaller, the values of bSSL and bCR reduced, indicating a smaller effect of fines.
The same trend was also observed as the angularity of the sand particles increased. The depositional method was found to influence the value of bCR, due to the sensitivity of cyclic loading to the initial soil fabric. This led to bSSL being used as a reference for the effect of fines, with specimens prepared by moist tamping having bCR > bSSL, and specimens prepared by slurry deposition having bCR < bSSL. Finally, the correlations of the fines influence factor values with material and depositional properties were used to define the simplified estimation method – a procedure capable of predicting the approximate steady state lines and cyclic resistance curves of a sand as the non-plastic fines content is increased up to 30%. The method was critically reviewed based on the undrained test results of the PSM1 and PSM2 soils. This review suggested that the method could accurately predict undrained response curves as the fines content was raised, based on the PSM1 test results. However, it also identified some key issues with the method, such as the inability to accurately predict the responses of highly non-uniform soils, a lack of consideration of the entire particle size distribution of a soil, and the fact that errors in the prediction of bSSL carry through into the prediction of bCR. Lastly, some areas of further investigation relating to the method were highlighted, including the need to verify the method through testing of sandy soils sourced from outside the Christchurch area, and the need to correlate the value of bCR with additional soil fabrics / depositional methods.
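For reference, the equivalent granular void ratio used in this study is commonly defined in the literature (after Thevanayagam and co-workers) as

$$ e^{*} = \frac{e + (1-b)\,f_c}{1 - (1-b)\,f_c} $$

where $e$ is the global void ratio, $f_c$ the fines content expressed as a fraction, and $b$ the fines influence factor; the exact formulation adopted in the thesis should be checked against the original text.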

Research papers, University of Canterbury Library

This thesis studies the behaviour of diaphragms in multi-storey timber buildings by providing methods for estimating the diaphragm force demand, developing an Equivalent Truss Method for the analysis of timber diaphragms, experimentally investigating the effects of displacement incompatibilities between the diaphragm and the lateral load resisting system, and developing methods for their mitigation. The need to better understand the behaviour of diaphragms in timber buildings was highlighted by the recent 2010-2011 Canterbury earthquake series, where a number of diaphragms in traditional concrete buildings performed poorly, compromising the lateral load resistance of the structure. Although shortcomings in the estimation of force demand, and in the analysis and design of concrete floor diaphragms, have already been partially addressed by other researchers, the behaviour of diaphragms in modern multi-storey timber buildings in general, and in low damage Pres-Lam buildings (consisting of post-tensioned timber members) in particular, is still unknown. The recent demand for mid-rise commercial timber buildings of ten storeys and beyond has further highlighted the lack of appropriate methods to analyse timber diaphragms with irregular floor geometries and large spans made of both light timber framing and massive timber panels. Due to the lower stiffness of timber lateral load resisting systems compared with traditional construction materials, and the addition of in-plane flexible diaphragms, the effect of higher modes on the global dynamic behaviour of a structure becomes more critical. The results from a parametric non-linear time-history analysis on a series of timber frame and wall structures showed increased storey shear and moment demands, even for four storey structures, when compared to simplistic equivalent static analysis. This effect could be successfully predicted with methods available in the literature. The presence of diaphragm flexibility increased diaphragm inter-storey drifts and the peak diaphragm demand in stiff wall structures, but had less influence on the storey shears and moments. Diaphragm force demands proved to be significantly higher than the forces derived from equivalent static analysis, leading to potentially unsafe designs. It is suggested that all diaphragms be designed for the same peak demand; a simplified approach to estimate these diaphragm forces is proposed for both frame and wall structures. Modern architecture often requires complex floor geometries with long spans, leading to stress concentrations, high force demands and potentially large deformations in the diaphragms. There is a lack of guidance and regulation regarding the analysis and design of timber diaphragms, and a practical alternative to the simplistic equivalent deep beam analysis or costly finite element modelling is required. An Equivalent Truss Method for the analysis of both light timber framed and massive timber diaphragms is proposed, based on analytical formulations and verified against finite element models. With this method the panel unit shear forces (shear flow), and therefore the fastener demand, chord forces and reaction forces, can be evaluated. Because the panel stiffness and fastener stiffness are accounted for, diaphragm deflection, torsional effects and transfer forces can also be assessed. The proposed analysis method is intuitive and can be used with basic analysis software. If required, it can easily be adapted for use with diaphragms working in the non-linear range.
Damage to floor diaphragms resulting from displacement incompatibilities due to frame elongation or out-of-plane deformation of walls can compromise the transfer of inertial forces to the lateral load resisting system as well as the stability of other structural elements. Two post-tensioned timber frame structures, under quasi-static cyclic and dynamic load respectively, were tested with different diaphragm panel layouts and connections to investigate their ability to accommodate frame elongations. Additionally, a post-tensioned timber wall was loaded under horizontal cyclic loads through two pairs of collector beams. Several different connection details between the wall and the beams were tested, and no damage to the collector beams or connections was observed in any of the tests. To evaluate the increased strength and stiffness due to the wall-beam interaction, an analytical procedure is presented. Finally, a timber staircase core was tested under bi-directional loading. Different connection details were used to study the effect of displacement incompatibilities between the orthogonal collector beams. These experiments showed that floor damage due to displacement incompatibilities can be prevented, even at high levels of lateral drift, by the flexibility of well-designed connections and of the timber elements. It can be concluded that the flexibility of timber members and of their connections plays a major role in the behaviour of timber buildings in general, and of diaphragms specifically, under seismic loads. The increased flexibility enhances higher mode effects and alters the diaphragm force demand. Simple methods are provided to account for this effect on the storey shear, moment and drift demands as well as the diaphragm force demands. Light timber framed and massive timber diaphragms can be successfully analysed with the Equivalent Truss Method, which is calibrated by accounting for the panel shear and fastener stiffnesses. Finally, displacement incompatibilities in frame and wall structures can be accommodated by the flexibility of the diaphragm panels and their connections. A design recommendations chapter summarises all findings and allows a designer to estimate diaphragm forces, analyse the force path in timber diaphragms and detail the connections to allow for displacement incompatibilities in multi-storey timber buildings.

Research papers, University of Canterbury Library

As a global phenomenon, many cities are undergoing urban renewal to accommodate rapid growth in urban population. However, urban renewal can struggle to balance social, economic, and environmental outcomes, with economic outcomes often the primary consideration of developers. This has important implications for urban forests, which have previously been shown to be negatively affected by development activities. Urban forests provide ecosystem services and are thus beneficial to human wellbeing. Better understanding the effect of urban renewal on city trees may help improve urban forest outcomes via effective management and policy strategies, thereby maximising ecosystem service provision and human wellbeing. Though the relationship between certain aspects of development and urban forests has received consideration in previous literature, little research has focused on how the complete property redevelopment cycle affects urban forest dynamics over time. This research provides an opportunity to gain a comprehensive understanding of the effect of residential property redevelopment on urban forest dynamics, at a range of spatial scales, in Christchurch, New Zealand, following the series of major earthquakes that occurred in 2010–2011. One consequence of the earthquakes has been the redevelopment of thousands of properties over a relatively short time-frame. The research quantifies changes in canopy cover city-wide, as well as tree removal, retention, and planting on individual residential properties. Moreover, the research identifies the underlying reasons for these dynamics by exploring the roles of socio-economic and demographic factors, the spatial relationships between trees and other infrastructure, and the attitudes of residential property owners. To quantify the effect of property redevelopment on canopy cover change in Christchurch, this research delineated tree canopy cover city-wide in 2011 and again in 2015. An object-based image analysis (OBIA) technique was applied to aerial imagery and LiDAR data acquired at both time steps in order to estimate city-wide canopy cover for 2011 and 2015. Changes in tree canopy cover between 2011 and 2015 were then spatially quantified. Tree canopy cover change was also calculated for all meshblocks (a relatively fine-scale geographic boundary) in Christchurch. The results show a relatively small magnitude of tree canopy cover loss city-wide, from 10.8% to 10.3% between 2011 and 2015, but a statistically significant change in mean tree canopy cover across all meshblocks. Tree canopy cover losses were more likely to occur in meshblocks containing properties that underwent a complete redevelopment cycle, but the loss was insensitive to the density of redevelopment within meshblocks. To explore property-scale individual tree dynamics, a mixed-methods approach was used, combining questionnaire data and remote sensing analysis. A mail-based questionnaire was delivered to residential properties to collect resident and household data; 450 residential properties (321 redeveloped, 129 non-redeveloped) returned valid questionnaires and were identified as analysis subjects. Subsequently, 2,422 tree removals and 4,544 tree retentions were identified within the 450 properties; this was done by manually delineating individual tree crowns, based on aerial imagery and LiDAR data, and visually comparing the presence or absence of these trees between 2011 and 2015.
The tree removal rate on redeveloped properties (44.0%) was over three times greater than on non-redeveloped properties (13.5%), and the average canopy cover loss on redeveloped properties (52.2%) was significantly greater than on non-redeveloped properties (18.8%). A classification tree (CT) analysis was used to model individual tree dynamics (i.e. tree removal, tree retention) against candidate explanatory variables (i.e. resident and household, economic, land cover, and spatial variables). The results indicate that the model including land cover, spatial, and economic variables had the best ability to predict individual tree dynamics (accuracy = 73.4%). Relatively small trees were more likely to be removed, while trees with large crowns were more likely to be retained. Trees were most likely to be removed from redeveloped properties with capital values lower than NZ$1,060,000 if they were within 1.4 m of the boundary of a redeveloped building. Conversely, trees were most likely to be retained if they were on a property that was not redeveloped. The analysis suggested that the resident and household factors included as potential explanatory variables did not influence tree removal or retention. To further explore the relationship between resident attitudes and actions towards trees on redeveloped versus non-redeveloped properties, this research also asked the landowners of the 450 properties that returned mail questionnaires to indicate their attitudes towards tree management (i.e. tree removal, tree retention, and tree planting) on their properties. The results show that residents of redeveloped properties were more likely to remove and/or plant trees, while residents of non-redeveloped properties were more likely to retain existing trees. A principal component analysis (PCA) was used to explore resident attitudes towards tree management. The results of the PCA show that residents identified ecosystem disservices (e.g. leaf litter, root damage to infrastructure) as common reasons for tree removal; however, they also noted ecosystem services as important reasons for both tree planting and tree retention on their properties. Moreover, the reasons for tree removal and tree planting varied depending on whether a resident’s property had been redeveloped. Most tree removal occurred on redeveloped properties because trees were in conflict with redevelopment, but on non-redeveloped properties because of perceived poor tree health. Residents of redeveloped properties were more likely to plant trees for aesthetic reasons or to replace trees removed during redevelopment. Overall, this research adds to, and complements, the existing literature on the effects of residential property redevelopment on urban forest dynamics. The findings provide empirical support for developing specific legislation or policies on urban forest management during residential property redevelopment. The results also imply that urban foresters should enhance public education on the ecosystem services provided by urban forests and thus minimise the potential for tree removal during property redevelopment.
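A minimal sketch of the classification-tree modelling described above is given below using scikit-learn; the predictor names and the synthetic records are hypothetical stand-ins for the land cover, spatial and economic variables analysed in the thesis.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Synthetic, illustrative records of individual trees (not thesis data).
trees = pd.DataFrame({
    "crown_area_m2":          [4, 60, 8, 45, 12, 70, 6, 30],
    "distance_to_building_m": [1.0, 9.5, 0.8, 6.0, 1.2, 12.0, 0.5, 4.0],
    "capital_value_nzd":      [450e3, 1.2e6, 500e3, 900e3, 600e3, 1.5e6, 480e3, 750e3],
    "property_redeveloped":   [1, 0, 1, 0, 1, 0, 1, 0],
    "removed":                [1, 0, 1, 0, 1, 0, 1, 0],  # 1 = removed between 2011 and 2015
})

X, y = trees.drop(columns="removed"), trees["removed"]
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Predict whether a small tree close to a new building on a redeveloped,
# lower-value property would be removed.
print(model.predict(pd.DataFrame([{
    "crown_area_m2": 5, "distance_to_building_m": 1.0,
    "capital_value_nzd": 700e3, "property_redeveloped": 1}])))
```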

Research papers, The University of Auckland Library

In September 2010 and February 2011 the Canterbury region of New Zealand was struck by two powerful earthquakes, registering magnitude 7.1 and 6.3 respectively on the Richter scale. The second earthquake was centred 10 kilometres south-east of the centre of Christchurch (the region’s capital and New Zealand’s third most populous urban area, with approximately 360,000 residents) at a depth of five kilometres. 185 people were killed, making it the second deadliest natural disaster in New Zealand’s history. (66 people were killed in the collapse of one building alone, the six-storey Canterbury Television building.) The earthquake occurred during the lunch hour, increasing the number of people killed on footpaths and in buses and cars by falling debris. In addition to the loss of life, the earthquake caused catastrophic damage to both land and buildings in Christchurch, particularly in the central business district. Many commercial and residential buildings collapsed in the tremors; others were damaged through soil liquefaction and surface flooding. Over 1,000 buildings in the central business district were eventually demolished because of safety concerns, and an estimated 70,000 people had to leave the city after the earthquakes because their homes were uninhabitable. The New Zealand Government declared a state of national emergency, which stayed in force for ten weeks. In 2014 the Government estimated that the rebuild process would cost NZ$40 billion (approximately US$27.3 billion, a cost equivalent to 17% of New Zealand’s annual GDP). Economists now estimate it could take the New Zealand economy between 50 and 100 years to recover. The earthquakes generated tens of thousands of insurance claims, both against private home insurance companies and against the New Zealand Earthquake Commission, a government-owned statutory body which provides primary natural disaster insurance to residential property owners in New Zealand. These ranged from claims for hundreds of millions of dollars concerning the local port and university to much smaller claims in respect of the thousands of residential homes damaged. Many of these insurance claims resulted in civil proceedings, caused by disputes about policy cover, the extent of the damage and the cost and/or methodology of repairs, as well as failures in communication and delays caused by the overwhelming number of claims. Disputes were complicated by the fact that the Earthquake Commission provides primary insurance cover up to a monetary cap, with any additional costs to be met by the property owner’s private insurer. Litigation funders and non-lawyer claims advocates who took a percentage of any insurance proceeds also soon became involved. These two factors increased the number of parties involved in any given claim and introduced further obstacles to resolution. Resolving these disputes both efficiently and fairly was (and remains) central to the rebuild process. This created an unprecedented challenge for the justice system in Christchurch (and New Zealand), exacerbated by the fact that the Christchurch High Court building was itself damaged in the earthquakes, with the Court having to relocate to temporary premises. (The High Court hears civil claims exceeding NZ$200,000 in value (approximately US$140,000) or those involving particularly complex issues. Most of the claims fell into this category.) 
This paper will examine the response of the Christchurch High Court to this extraordinary situation as a case study in innovative judging practices, and from a jurisprudential perspective. In 2011, following the earthquakes, the High Court made a commitment that earthquake-related civil claims would be dealt with as swiftly as the Court's resources permitted. In May 2012, it commenced a special “Earthquake List” to manage these cases. The list (which is ongoing) seeks to streamline the trial process, quickly resolve claims with precedent value or involving acute personal hardship or large numbers of people, facilitate settlement, and generally work proactively and innovatively with local lawyers, technical experts and other stakeholders. For example, the Court maintains a public list (in spreadsheet format, available online) with details of all active cases before the Court, listing the parties and their lawyers, summarising the facts and identifying the legal issues raised. It identifies cases in which issues of general importance have been or will be decided, with the express purpose of assisting earthquake litigants and those contemplating litigation, and of facilitating communication among parties and lawyers. This paper will posit the Earthquake List as an attempt to implement innovative judging techniques to provide efficient yet just legal processes, which can be examined from a variety of jurisprudential perspectives. One of these is as a case study in the well-established debate about the dialogic relationship between public decisions and private settlement in the rule of law. Drawing on the work of scholars such as Hazel Genn, Owen Fiss, David Luban, Carrie Menkel-Meadow and Judith Resnik, it will explore the tension between the need to develop the law through the doctrine of precedent and the need to resolve civil disputes fairly, affordably and expeditiously. It will also be informed by the presenter’s personal experience of the interplay between reported decisions and private settlement in post-earthquake Christchurch through her work mediating insurance disputes. From a methodological perspective, this research project itself gives rise to issues suitable for discussion at the Law and Society Annual Meeting. These include the challenges of empirical study of judges, of working with data collected by the courts, and of statistical analysis of the legal process in relation to settlement. September 2015 marked the five-year anniversary of the first Christchurch earthquake. There remains widespread dissatisfaction amongst Christchurch residents with the ongoing delays in resolving claims, particularly by insurers, and with the rebuild process. There will continue to be challenges in Christchurch for years to come, both from as-yet unresolved claims and because of the possibility of a new wave of claims arising from poor quality repairs. Thus, a final purpose of presenting this paper at the 2016 Meeting is to gain the benefit of other scholarly perspectives and experiences of innovative judging best practice, with a view to strengthening and improving the judicial processes in Christchurch. 
This Annual Meeting of the Law and Society Association in New Orleans is a particularly appropriate forum for this paper, given the recent ten year anniversary of Hurricane Katrina and the plenary session theme of “Natural and Unnatural Disasters – human crises and law’s response.” The presenter has a personal connection with this theme, as she was a Fulbright scholar from New Zealand at New York University in 2005/2006 and participated in the student volunteer cleanup effort in New Orleans following Katrina. http://www.lawandsociety.org/NewOrleans2016/docs/2016_Program.pdf

Research papers, University of Canterbury Library

Environmental stress and disturbance can affect the structure and functioning of marine ecosystems by altering their physical, chemical and biological features. In estuaries, benthic invertebrate communities play important roles in structuring sediments, influencing primary production and biogeochemical flux, and occupying key food web positions. Stress and disturbance can reduce species diversity, richness and abundance, with ecological theory predicting that biodiversity will be at its lowest soon after a disturbance, with assemblages dominated by opportunistic species. The Avon-Heathcote Estuary in Christchurch, New Zealand, has provided a novel opportunity to examine the effects of stress, in the form of eutrophication, and disturbance, in the form of cataclysmic earthquake events, on the structure and functioning of an estuarine ecosystem. For more than 50 years, large quantities (up to 500,000 m3/day) of treated wastewater were released into this estuary, but in March 2010 this discharge was diverted to an ocean outfall, thereby reducing the nutrient loading to the estuary by around 90%. This study was therefore initially focussed on the reversal of eutrophication and the consequent effects on food web structure in the estuary as it responded to lower nutrients. In 2011, however, Christchurch was struck by a series of large earthquakes that greatly changed the estuary. Massive amounts of liquefied sediment, covering up to 65% of the estuary floor, were forced up from deep below the estuary; the estuary was tilted, with up to a 50 cm rise on one side and a corresponding drop on the other; and large quantities of raw sewage from broken wastewater infrastructure entered the estuary for up to nine months. This study therefore became a test of the potentially synergistic effects of nutrient reduction and earthquake disturbance on invertebrate communities, associated habitats and food web dynamics. Because there was considerable site-to-site heterogeneity in the estuary, the study sites were selected to represent a eutrophication gradient from relatively “clean” (where the influence of tidal flows was high) to highly impacted (near the historical discharge site). The study was structured around these sites, with components before the wastewater diversion, after the diversion but before the earthquakes, and after the earthquakes. The eutrophication gradient was reflected in the composition and isotopic chemistry of primary producer and invertebrate communities and in the characteristics of sediments across the sample sites. Sites closest to the former wastewater discharge pipe were the most eutrophic and had cohesive, organic-rich, fine sediments and relatively depauperate communities dominated by the opportunistic taxon Capitellidae. The less-impacted sites had coarser, sandier sediments with fewer pollutants and far less organic matter than the eutrophic sites, relatively high diversity, and lower abundances of micro- and macro-algae. Sewage-derived nitrogen had become incorporated into the estuarine food web at the eutrophic sites, starting at the base of the food chain with benthic microalgae (BMA), which were found to use mostly sediment-derived nitrogen. Stable isotope analysis showed that δ13C and δ15N values of most food sources and consumers varied spatially, temporally and in relation to the diversion of wastewater, whereas the earthquakes did not appear to affect the overall estuarine food web structure.
This was seen particularly at the most eutrophic site, where isotopic signatures became more similar to those of the cleaner sites over the two-and-a-half years after the diversion. New sediments (liquefaction) produced by the earthquakes were found to be coarser, with lower concentrations of heavy metals and less organic matter than the old (existing) sediments. They also had fewer macroinvertebrate inhabitants initially after the earthquakes, but most areas recovered to pre-earthquake abundance and diversity within two years. Field experiments showed higher amounts of primary production and lower amounts of nutrient efflux from new sediments at the eutrophic sites after the earthquakes. Primary production was highest in new sediments due to the increased photosynthetic efficiency of BMA, resulting from the increased permeability of new sediments allowing greater light penetration, enhanced vertical migration of BMA, and enhanced transport of oxygen and nutrients. The reduced efflux of NH4-N from new sediments indicated that the capping of a large portion of the eutrophic old sediments with new sediments had reduced the release of legacy nutrients (originating from the historical discharge) from the sediments to the overlying water. Laboratory experiments using an array of species and old and new sediments showed that invertebrates altered levels of primary production and nutrient flux, but effects varied among species. The mud snail Amphibola crenata and mud crab Austrohelice crassa were found to reduce primary production and BMA biomass through consumption of BMA (both species) and its burial from bioturbation and the construction of burrows (Austrohelice). In contrast, the cockle Austrovenus stutchburyi did not significantly affect primary production or BMA biomass. These results show that changes in the structure of invertebrate communities resulting from disturbances can also have consequences for the functioning of the system. The major conclusions of this study were that the wastewater diversion had a major effect on food web dynamics and that the large quantities of clean, unpolluted new sediments introduced to the estuary during the earthquakes altered the recovery trajectory of the estuary, accelerating it at least throughout the duration of this study. This was largely through the ‘capping’ effect of the new liquefied, coarser-grained sediments as they dissipated across the estuary and covered much of the old organic-rich eutrophic sediments. For all aspects of this study, the largest changes occurred at the most eutrophic sites; however, the surrounding habitats were important as they provided the context for recovery of the estuary, particularly because of the very strong influence of sediments, their biogeochemistry, and microalgal and macroalgal dynamics. There have been few studies documenting system-level responses to eutrophication amelioration, and to the best of my knowledge there are no other published studies examining the impacts of large earthquakes on benthic communities in an estuarine ecosystem. This research gives valuable insight into, and advances scientific understanding of, the effects that eutrophication recovery and large-scale disturbances can have on the ecology of a soft-sediment ecosystem.

Research papers, University of Canterbury Library

A buckling-restrained braced frame (BRBF) is a structural bracing system that provides lateral strength and stiffness to buildings and bridges. BRBFs were first developed in Japan in the 1970s (Watanabe et al. 1973, Kimura et al. 1976) and gained rapid acceptance in the United States after the Northridge earthquake in 1994 (Bruneau et al. 2011). However, it was not until the Canterbury earthquakes of 2010/2011 that the New Zealand construction market saw a significant uptake in the use of buckling-restrained braces (BRBs) in commercial buildings (MacRae et al. 2015). In New Zealand there is not yet any documented guidance or specific provision in regulatory standards for the design of BRBFs. This makes it difficult for engineers to anticipate all the possible stability and strength issues within a BRBF system and actively mitigate them in each design. To help ensure BRBF designs perform as intended, a peer review with physical testing is needed to gain building compliance in New Zealand. Physical testing should check the manufacturing and design of each BRB (prequalification testing), and the global strength and stability of each BRB and its frame (subassemblage testing). However, the financial pressures inherent in commercial projects have led to prequalification testing (BRB-only testing) being favoured without adequate design-specific subassemblage testing. This means peer reviewers have to rely on BRB suppliers for assurances. This low-regulation environment allows a variety of BRBF designs to be constructed without being tested or well understood. The concern is that there may be designs that pose risk and that issues are being overlooked in design and review. To improve the safety and design of BRBFs in New Zealand, this dissertation studies the behaviour of BRBs and how they interact with other frame components. Presented are the experimental test process and results for five commercially available BRB designs (Chapter 2). This discussion covers the manufacturing process, testing conditions and limitations of the observable information. It also emphasises that, even though subassemblage testing is impractical, uniaxial testing of the BRB alone is not enough, as this does not check global strength or stability. As an alternative to physical testing, this research uses computer simulation to model BRB behaviour. To overcome the traditional challenges of detailed BRB modelling, a strategy to simulate the performance of generic BRB designs was developed (Chapter 3). The development of nonlinear material and contact models is an important aspect of this strategy. The Chaboche method is employed, using a minimum of six backstress curves to characterise the combined isotropic and kinematic hardening exhibited by the steel core. A simplified approach, adequate for modelling the contact interaction between the restrainer and the core, was found. The models also capture important frictional dissipation as well as the lateral motion and bending associated with high-order constrained buckling of the core. The experimental data from Chapter 2 were used to validate this strategy. As BRBs resist high compressive loading, the global stability of the BRB and the gusseted connection zone needs to be considered. A separate study investigated the yielding and buckling strength of gusset plates (Chapter 4). The stress distribution through a gusset plate is complex and difficult to predict because the cross-sectional area of a gusset plate is not uniform and each gusset plate design is unique in shape and size.
As BRBs resist high compressive loading, the global stability of the BRB and the gusseted connection zone needs to be considered. A separate study investigated the yielding and buckling strength of gusset plates (Chapter 4). The stress distribution through a gusset plate is complex and difficult to predict because the cross-sectional area of a gusset plate is not uniform, and each gusset plate design is unique in shape and size. This has motivated design methods that approximate the yielding of gusset plates. Finite element modelling was used to study the development of yielding, buckling and plastic collapse of a brace end bolted to a series of corner gusset plates. In total, 184 variations of gusset plate geometry were modelled in Abaqus®. The FEA models applied a monotonic uniaxial load with an initial imperfection. Upon comparing the results to current gusset plate design methods, it was found that the Whitmore width method for calculating the yield load of a gusset (the conventional check is sketched below) is generally unconservative. To improve accuracy and safety in the design of gusset plates, modifications to current design methods for calculating the yield area and compressive strength of gusset plates are proposed.

Bolted connections are a popular and common connection type in BRBF design. Global out-of-plane stability tends to govern the design of this connection type, with numerous studies highlighting the risk of instability initiated by inelasticity in the gussets, the neck of the BRB end and/or the restrainer ends. Subassemblage testing is the traditional method for evaluating global stability. However, physical testing of every BRBF variation is cost prohibitive. As such, Japan has developed an analytical approach to evaluate the out-of-plane stability of BRBFs and incorporated it in its design codes. This analytical approach evaluates the different BRB components under possible collapse mechanisms, focusing on the moment transfer between the restrainer and the end of the BRB. The approach has led to strict criteria for BRBF design in Japan. Structural building design codes in New Zealand, Europe and the United States do not yet provide analytical methods to assess BRB and connection stability, with prototype/subassemblage testing still required as the primary means of accreditation. It is therefore of interest to investigate the capability of this method to evaluate the stability of BRB and gusset plate designs used in New Zealand (including unstiffened gusset connection zones).

Chapter 5 demonstrates the capability of FEA to study the performance of a subassemblage test under cyclic loading, resembling that of a diagonal ground-storey BRBF with bolted connections. A series of detailed models was developed using the strategy presented in Chapter 3. The geometric features of BRB 6.5a (Chapter 2) were used as a basis for the BRBs modelled. To capture the different failure mechanisms identified in Takeuchi et al. (2017), the models varied the length over which the cruciform (non-yielding) section inserts into the restrainer. Results indicate that gusset plates designed according to New Zealand's Steel Structures Standard (NZS 3404) limit BRBF performance. Increasing the thickness of the gusset plates according to the modifications discussed in Chapter 4 improved the overall performance of all variants (except when Lin/Bcruc = 0.5). Bi-directional loading was not found to notably affect out-of-plane stability. Results were compared against predictions made by the analytical method used in Japan (the Takeuchi method), which was found to be generally conservative in predicting the out-of-plane stability of each BRBF model. Recommendations to improve the accuracy of Takeuchi's method are also provided. The outcomes of this thesis should be helpful to BRB manufacturers and researchers, and in the development of further design guidance for BRBFs.
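For context on the conventional yield check referred to above, the following is a minimal sketch of the Whitmore-width calculation, assuming the usual 30° load-dispersion angle. The dimensions in the example are hypothetical, and the modified yield-area method proposed in Chapter 4 is not reproduced here.

import math

def whitmore_yield_load(f_y, t, bolt_line_spacing, bolt_group_length, dispersion_deg=30.0):
    # Effective (Whitmore) width spreads from the first bolt row at the dispersion angle;
    # yield load = F_y * t * b_w.  With MPa and mm inputs the result is in N.
    b_w = bolt_line_spacing + 2.0 * bolt_group_length * math.tan(math.radians(dispersion_deg))
    return f_y * t * b_w

# Hypothetical gusset: 300 MPa plate, 20 mm thick, 100 mm between outer bolt lines,
# 240 mm long bolt group -> a yield load of roughly 2.3 MN.
print(whitmore_yield_load(300.0, 20.0, 100.0, 240.0))

Chapter 4's finding is that this conventional check can overestimate the true yield capacity, which is why modified yield area and compressive strength provisions are proposed.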

Research papers, University of Canterbury Library

Non-structural elements (NSEs) have frequently proven to contribute to significant losses sustained from earthquakes in the form of damage, downtime, injury and death. In New Zealand (NZ), the 2010 and 2011 Canterbury Earthquake Sequence (CES), the 2013 Seddon and Cook Strait earthquake sequence and the 2016 Kaikoura earthquake were major milestones in this regard, as significant damage to building NSEs both highlighted and further reinforced the importance of NSE seismic performance to the resilience of urban centres. Extensive damage to suspended ceilings, partition walls, façades and building services following the CES was reported to be partly due to erroneous seismic design or installation, or to be caused by intervening elements. Moreover, the low-damage solutions developed for structural systems sometimes allow for relatively large inter-storey drifts, compared to conventional designs, which may not have been considered in the seismic design of NSEs. Having observed these shortcomings, this study on suspended ceilings was carried out with five main goals: i) understanding the seismic performance of the system commonly used in NZ; ii) understanding the transfer of seismic design actions through different suspended ceiling components; iii) investigating potential low-damage solutions; iv) evaluating the compatibility of the current ceiling system with other low-damage NSEs; and v) investigating the application of numerical analysis to simulate the response of ceiling systems.

The first phase of the study followed joint research between the University of Canterbury (UC) in NZ and the Politecnico di Milano in Italy. The experimental ceiling component fragility curves obtained in that study were employed to produce analytical fragility curves for a perimeter-fixed ceiling of a given size and weight, with grid acceleration as the intensity measure (the generic lognormal form of such curves is sketched below). The validity of the method was demonstrated through comparisons of this proposed analytical approach with the procedures recommended in proprietary product design guidelines, as well as with experimental fragility curves from other studies. For application to engineering design practice, and using fragility curves for a range of ceiling lengths and weights, design curves were produced for estimating the allowable grid lengths for a given demand level.
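The fragility curves referred to above are not reproduced here; as a generic illustration only, the sketch below evaluates the lognormal fragility form commonly used for ceiling components, with grid acceleration as the intensity measure. The median and dispersion values are hypothetical placeholders rather than fitted parameters from the study.

from math import log
from statistics import NormalDist

def fragility(im, median, beta):
    # P(damage state reached | intensity measure = im) = Phi( ln(im / median) / beta )
    return NormalDist().cdf(log(im / median) / beta)

# Hypothetical grid-connection fragility: median capacity 1.2 g, dispersion 0.4.
print(fragility(1.0, 1.2, 0.4))   # probability of exceedance at a grid acceleration of 1.0 g (~0.32)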
In the second phase of this study, three specimens of perimeter-fixed ceilings were tested on a shake table under both sinusoidal and random floor motion input. The experiments considered the relationship between the floor acceleration, the acceleration of the ceiling grid and the axial force induced in the grid members, as well as the effect of boundary conditions on the transfer of these axial forces. A direct correlation was observed between the axial force (recorded via load cells) and the horizontal acceleration measured on the ceiling grid. Moreover, the amplification of floor acceleration, as transferred through ceiling components, was examined and found (in several tests) to be greater than the factor recommended for the design of ceilings in the NZ earthquake loadings standard NZS1170.5. However, this amplification was influenced by the pounding interactions between the ceiling grid members and the tiles, and it diminished considerably when the high-frequency content was filtered out of the output time histories. The experiments ended with damage to the ceiling grid connections at an axial force similar to the capacity of these joints previously measured through static tests in phase one.

The observation of common forms of ceiling damage in earthquakes motivated the monotonic experiments carried out in the third phase of this research, with the objective of investigating a simple and easily applicable mitigation strategy for existing or new suspended ceilings. The tests focused on the possibility of using proprietary cross-shaped clip elements, ordinarily used to provide a seismic gap, as a strengthening solution for the weak components of a ceiling. The results showed that the solution was effective under both tension and compression loads, increasing the load-bearing capacity and ductility of grid connections.

The feasibility of a novel type of suspended ceiling, called a fully-floating ceiling system, was investigated through shake table tests in the next phase of this study, with the main goal of isolating the ceiling from the surrounding structure and thereby arresting the transfer of associated seismic forces from the structure to the ceiling. The fully-floating ceiling specimen was freely hung from the floor above, without any lateral bracing or connection to the perimeter. Throughout the different tests, satisfactory agreement between the fully-floating ceiling response and simple pendulum theory was demonstrated. The addition of isolation material in the perimeter gaps was found effective in providing extra damping and protecting the ceiling from pounding impact, resulting in much reduced ceiling displacements and accelerations. The only form of damage observed throughout the random floor motion tests and the sinusoidal tests was a panel dislodgement in one test, caused by successive pounding between the ceiling specimen and the surrounding beams at resonant frequencies.
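To make the pendulum analogy above concrete: the natural period of a freely hung ceiling depends only on the hanger length, as in the short calculation below. The 1 m length is an assumed, illustrative value rather than the specimen's actual hanger dimension.

import math

g = 9.81          # gravitational acceleration, m/s^2
L = 1.0           # hanger length, m (illustrative assumption)
T = 2.0 * math.pi * math.sqrt(L / g)
print(round(T, 2))   # ~2.01 s small-amplitude period of the floating ceiling swinging as a pendulum

A pendulum period that is long relative to typical floor-motion frequencies is what the isolation concept relies on.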
Partition walls, as the NSE in most direct interaction with ceilings, were the topic of the final experimental phase. Low-damage drywall partitions proposed in a previous study at the UC were tested with two common forms of suspended ceiling: braced and perimeter-fixed. The experiments investigated the in-plane and out-of-plane performance of the low-damage drywall partitions, as well as the displacement compatibility between these walls and the suspended ceilings. In the braced ceiling experiment, where no connection was made between the ceiling grids and the surrounding walls, no damage to the grid system or partitions was observed. However, at high drift values panel dislodgement occurred at corners of the ceiling where the free ends of grids were not restrained against spreading. This could be prevented by framing the grid ends using a perimeter angle that is riveted only to the grid members while keeping sufficient clearance from the perimeter walls. In the next set of tests, with the perimeter-fixed ceiling, no damage was observed in the ceiling system or the drywalls. Based on the results of these experiments it was concluded that the tested ceiling had enough flexibility to accommodate the relative displacement between two perpendicular walls up to the inter-storey drifts achieved.

The experiments on perimeter-fixed ceilings were followed by numerical simulation of their performance using a finite element model developed in the structural analysis software SAP2000. This model was relatively simple and easy to develop and was able to replicate the experimental results to a reasonable degree. Filtering was applied to the experimental output to exclude the effect of high-frequency noise and tile-grid impact (one common filtering approach is sketched below). The model generally simulated the acceleration responses well but underestimated the peak ceiling grid accelerations, possibly because the peak values in the time histories were affected by impacts occurring over very short periods. The model overestimated the axial forces in the ceiling grids, which was attributed to the initial assumptions made about the tributary area or the constant acceleration associated with each grid line in the direction of excitation. Otherwise, the overall success of the numerical modelling in replicating the experimental results implies that conventional structural analysis software could be used in engineering practice to analyse alternative ceiling geometries proposed for application to varying structural systems. This, however, needs to be confirmed through similar analyses of other ceiling examples from existing instrumented buildings during real earthquakes.

As the concluding part of this research, the final phase addressed the issues raised following the review of existing ceiling standards and guidelines. The applicability of the research findings to current practice and their implications were discussed. Finally, an example was provided for the design of a suspended ceiling utilising the new knowledge acquired in this research.
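The filtering referred to above is not specified in detail in this abstract; as a generic illustration, the sketch below low-pass filters an acceleration record with a zero-phase Butterworth filter in SciPy. The sampling rate, cutoff frequency and filter order are illustrative assumptions, not the values used in the study.

import numpy as np
from scipy import signal

def lowpass(acc, fs_hz, cutoff_hz=20.0, order=4):
    # Zero-phase Butterworth low-pass filter of an acceleration time history.
    # cutoff_hz and order are illustrative assumptions.
    b, a = signal.butter(order, cutoff_hz / (0.5 * fs_hz), btype="low")
    return signal.filtfilt(b, a, acc)

# Example: suppress spiky high-frequency (impact-like) content in a synthetic 200 Hz record.
t = np.arange(0.0, 10.0, 1.0 / 200.0)
raw = np.sin(2.0 * np.pi * 1.5 * t) + 0.2 * np.random.randn(t.size)
filtered = lowpass(raw, fs_hz=200.0)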