The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including an evaluation of the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) is first used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects in significant events of the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse, due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative ‘manual’ approach is proposed to ensure 'correct' pulse extraction, with the classification process also guided by examination of horizontal velocity trajectory plots and source-to-site geometry. Based on this analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the Baker (2007) algorithm developed by Shahi (2013) is subsequently utilised, but no notable improvement in the pulse classification results is observed. A series of three chapters is dedicated to assessing the capabilities of empirical models to predict: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquake sequence.
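As a rough illustration of the matched-wavelet idea underlying this style of classifier, the sketch below extracts the single largest wavelet from a velocity trace. It is only a conceptual analogue, not the Baker (2007) algorithm itself: that method uses fourth-order Daubechies wavelets and a calibrated pulse indicator score, whereas this sketch uses a Ricker wavelet and a brute-force search over assumed candidate periods and positions.

```python
import numpy as np

def ricker(t, tp):
    """Ricker (Mexican-hat) wavelet with characteristic period tp (s)."""
    a = tp / (2.0 * np.pi)          # scale parameter
    x = t / a
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def extract_largest_pulse(vel, dt, periods):
    """Brute-force matched-wavelet extraction: find the unit-energy Ricker
    wavelet (over candidate periods and centre positions) with the largest
    coefficient, and subtract it from the velocity trace."""
    n = len(vel)
    samples = np.arange(n)
    best_coef, best_tp, best_wave = 0.0, periods[0], np.zeros(n)
    for tp in periods:
        for k in range(n):
            w = ricker((samples - k) * dt, tp)
            norm = np.sqrt(np.sum(w**2))
            if norm == 0.0:
                continue
            w = w / norm
            c = float(np.dot(vel, w))        # wavelet coefficient
            if abs(c) > abs(best_coef):
                best_coef, best_tp, best_wave = c, tp, w
    pulse = best_coef * best_wave
    return best_tp, pulse, vel - pulse
```

In a real classifier the extracted pulse's period and the energy it removes would then feed a score that decides pulse-like versus non-pulse-like; here the sketch stops at extraction.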
Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the best predictions of the models considered, improving on its predecessors. Pulse probability contour maps are developed to compare observations of pulses/non-pulses against predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias exhibited by the residuals of all models at longer vibration periods (in the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and nonlinear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred to result from both nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw6.0) and December (Mw5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km. At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed.
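Pulse-occurrence models of this kind are logistic regressions on source-to-site geometry. The sketch below shows only the general functional form, with a single hypothetical predictor (rupture distance) and coefficients fitted to a made-up synthetic catalogue; it does not use the published Shahi (2013) predictors or coefficients.

```python
import numpy as np

def fit_pulse_logit(X, y, lr=0.1, iters=5000):
    """Fit P(pulse | x) = 1 / (1 + exp(-(b0 + b.x))) by gradient ascent
    on the log-likelihood (plain logistic regression)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)    # score-function step
    return beta

def pulse_probability(beta, x):
    return 1.0 / (1.0 + np.exp(-(beta[0] + np.dot(beta[1:], x))))

# Synthetic catalogue (hypothetical): pulses observed at short rupture distances
r = np.array([2.0, 3.0, 5.0, 8.0, 15.0, 20.0, 30.0, 40.0])   # km
y = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])       # 1 = pulse-like
x = (r - r.mean()) / r.std()        # standardise for a stable fit
beta = fit_pulse_logit(x[:, None], y)
```

Contour maps like those described above follow by evaluating `pulse_probability` over a grid of the geometric predictors.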
The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
In most design codes, infill walls are considered non-structural elements and are therefore typically neglected in the design process. Observations made after major earthquakes (Duzce 1999, L’Aquila 2009, Christchurch 2011) have shown that even though infill walls are considered non-structural elements, they interact with the structural system during seismic actions. In the case of heavy infill walls (i.e. clay brick infill walls), the whole behaviour of the structure may be affected by this interaction (i.e. local or global structural failures such as a soft storey mechanism). In the case of light infill walls (i.e. non-structural drywalls), this interaction may cause significant economic losses. Considering the interaction of the structural system with the ‘non-structural’ infill walls at the design stage may not be a practical approach due to the complexity of the infill wall behaviour. Therefore, the purpose of the reported research is to develop innovative technological solutions and design recommendations for low damage non-structural wall systems under seismic actions by making use of alternative approaches. Light (steel/timber framed drywalls) and heavy (unreinforced clay brick) non-structural infill wall systems were studied in an experimental/numerical research programme. Quasi-static reverse cyclic tests were carried out utilizing a specially designed full scale reinforced concrete frame, which served as a re-usable bare frame. In this frame, two RC beams and two RC columns were connected by two un-bonded post-tensioning bars, emulating a jointed ductile frame system (PRESSS technology). Due to the rocking behaviour at the beam-column joint interfaces, this frame was itself a low damage structural solution, with the post-tensioning guaranteeing linear elastic behaviour. Therefore, the frame could be used repeatedly in all of the tests, with only the infill walls changed between tests.
Due to the linear elastic behaviour of this structural bare frame, it was possible to extract the exact behaviour of the infill walls from the global results. In other words, the only parameter that affected the global results was the infill walls. For the test specimens, the existing practice of construction (as built) for both light and heavy non-structural walls was implemented. In light of the observations made during these tests, modified low damage construction practices were proposed and tested. In total, seven tests were carried out:
1) Bare frame, in order to confirm its linear elastic behaviour
2) As built steel framed drywall specimen FIF1-STFD (Light)
3) As built timber framed drywall specimen FIF2-TBFD (Light)
4) As built unreinforced clay brick infill wall specimen FIF3-UCBI (Heavy)
5) Low damage steel framed drywall specimen MIF1-STFD (Light)
6) Low damage timber framed drywall specimen MIF2-TBFD (Light)
7) Low damage unreinforced clay brick infill wall specimen MIF5-UCBI (Heavy)
The tests of the as built practices showed that both drywalls and unreinforced clay brick infill walls have a low serviceability inter-storey drift limit (0.2-0.3%). Based on the observations, simple modifications and details were proposed for the low damage specimens. The details proved effective in lowering the damage and increasing the serviceability drift limits. For drywalls, the proposed low damage solutions introduce no additional cost, material or labour, and they are easily applicable in real buildings. For unreinforced clay brick infill walls, a light steel sub-frame system was suggested that divides the infill panel zone into smaller individual panels, which requires additional labour and some cost. However, both systems can be engineered for seismic actions and their behaviour can be controlled by implementing the proposed details.
The performance of the developed details was also confirmed by numerical case study analyses carried out using Ruaumoko 2D on a reinforced concrete building model designed according to the NZ codes/standards. The results confirmed that implementation of the proposed low damage solutions can be expected to significantly reduce non-structural infill wall damage throughout a building.
Reinforced concrete structures designed before the 1970s are vulnerable in earthquakes due to a lack of the seismic detailing needed to provide adequate ductility. Typical deficiencies of pre-1970s reinforced concrete structures are (a) the use of plain bars as longitudinal reinforcement, (b) inadequate anchorage of beam longitudinal reinforcement in the column (particularly at exterior columns), (c) little or no joint transverse reinforcement, (d) lapped splices located just above the joint, and (e) low concrete strength. Furthermore, the role of infill walls is a controversial issue: on the positive side they provide additional stiffness to the structure, while on the negative side they can increase the possibility of a soft-storey mechanism if distributed irregularly. Experimental research to investigate the likely seismic behaviour of pre-1970s reinforced concrete structures has been carried out in the past. However, there is still an absence of experimental tests on the 3-D response of existing beam-column joints, such as corner joints, under bi-directional cyclic loading. As part of the research work presented herein, a series of experimental tests on beam-column subassemblies with typical pre-1970s detailing has been carried out to investigate the behaviour of existing reinforced concrete structures. Six two-third scale plane frame exterior beam-column joint subassemblies were constructed and tested under quasi-static cyclic loading in the Structural Laboratory of the University of Canterbury. The reinforcement detailing and beam dimensions were varied to investigate their effect on seismic behaviour. Four specimens were conventional deep beam-column joints: two used deformed longitudinal bars with beam bars bent into the joint, and two used plain round longitudinal bars with beam bars anchored by end hooks.
The other two specimens were shallow beam-column joints, one with deformed longitudinal bars and beam bars bent into the joint, the other with plain round longitudinal bars and beam bars with end hooks. All units had one transverse reinforcement in the joint. The results of the experimental tests indicated that conventional exterior beam-column joints with typical pre-1970s detailing would experience serious diagonal tension cracking in the joint panel under earthquake loading. The use of plain round bars with end hooks for beam longitudinal reinforcement results in more severe damage in the joint core than the use of deformed beam bars bent into the joint, due to the combination of bar slip and concrete crushing. One interesting outcome is that the use of a shallow beam in the exterior beam-column joint could avoid joint cracking, owing to the beam size, although the strength provided is lower than that of a deep beam with equal moment capacity. Therefore, taking into account the lower strength and stiffness, shallow beams can be reintroduced as an alternative solution in the design process. In addition, the presence of a single transverse reinforcement in the joint core can provide additional confinement after the first crack occurs, thus delaying the strength degradation of the structure. Three two-third scale space frame corner beam-column joint subassemblies were also constructed to investigate the effect of biaxial loading. Two were deep-deep beam-corner column joint specimens and one was a deep-shallow beam-corner column joint specimen. One deep-deep beam-corner column joint specimen had no transverse reinforcement in the joint core, while the other two specimens had one transverse reinforcement in the joint core. Plain round longitudinal bars were used for all units, with hook anchorage for the beam bars.
Results from the tests confirmed the evidence from earthquake damage observations that exterior 3-D (corner) beam-column joints subjected to biaxial loading have less strength and suffer greater damage in the joint area under earthquake loading. Furthermore, the joint shear relation in the two directions was calibrated from the results to provide better analysis. An analytical model was used to simulate the seismic behaviour of the joints with the help of the Ruaumoko software. Alternative strength degradation curves corresponding to the different reinforcement detailing of the beam-column joint units were proposed based on the test results.
The Lake Coleridge Rock Avalanche Deposits (LCRADs) are located on Ryton Station in the middle Rakaia Valley, approximately 80 km west of Christchurch. Torlesse Supergroup greywacke is the basement material and has been significantly influenced by both active tectonics and glaciation. Both glacial and post-glacial processes have produced large volumes of material which blanket the bedrock on slopes and in the valley floors. The LCRADs were part of a regional study of rock avalanches by Whitehouse (1981, 1983) and Whitehouse and Griffiths (1983), in which a single rock avalanche event was recognised with a weathering rind age of 120 years B.P., later modified to 150 ± 40 years B.P. The present study has refined details of both the age and the sequence of events at the site by identifying three separate rock avalanche deposits (termed the LCRA1, LCRA2 and LCRA3 deposits), all sourced from near the summit of Carriage Drive. The LCRA1 deposit is lobate in shape and had an estimated original deposit volume of 12.5 x 10⁶ m³, although erosion by the Ryton River has reduced the present-day debris volume to 5.1 x 10⁶ m³. An optically stimulated luminescence date from sandy loess immediately beneath the LCRA1 deposit provided a maximum age for the rock avalanche event of 9,720 ± 750 years B.P., which is believed to be realistic given that this is shortly after the retreat of Acheron 3 ice from this part of the valley. Emplacement of rock avalanche material into an ancestral Ryton riverbed created a natural dam with a ~17 M m³ lake upstream. The river is thought to have created a natural spillway over the dam structure at ~557 m (a.s.l.), which existed for a number of years before any significant downcutting occurred. Although the triggering mechanism for the LCRA1 event is poorly constrained, it is thought that stress rebound after glacial ice removal may have initiated failure.
Because the event occurred c. 10,000 years ago, a possible earthquake trigger could not be defined, though the possibility is clear. The LCRA2 event had an original deposit volume of 0.66 x 10⁶ m³ and was constrained to the low-lying area adjacent to the Ryton River that had been created by river erosion of the LCRA1 deposit. Further erosion by the Ryton River has reduced the deposit volume to 0.4 x 10⁶ m³. A radiocarbon date from a piece of mānuka found within the LCRA2 deposit provided an age of 668 ± 36 years B.P., and this is thought to reliably date the event. The LCRA2 event also dammed the Ryton River, and the preservation of dam-break outwash terraces downstream from the deposit provides clear evidence of rapid dam erosion and flooding after overtopping and breaching by the Ryton River. Based on the mean annual flow of the Ryton River, the LCRA2 lake would have taken approximately two weeks to fill, assuming that there were no preferred breach paths and that the dam material was relatively impermeable. The LCRA2 event is thought to have been coseismic with a fault rupture along the western segment of the PPAFZ, which has been dated at 600 ± 100 years B.P. by Smith (2003). The small LCRA3 event could not be dated, but it is believed to have failed shortly after the LCRA2 event; it may in fact be a lag deposit of the second rock avalanche, possibly triggered by an aftershock. The deposit is only visible at one locality within the cliffs that line the Ryton River, and its lack of geomorphic expression is attributed to its emplacement soon after the LCRA2 event, while the Ryton River was still dammed by the second rock avalanche. A wedge-block of some 35,000 m³ of source material for a future rock avalanche was identified at the summit of Carriage Drive.
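The two-week fill estimate follows from simple storage arithmetic: lake volume divided by mean inflow, assuming every cubic metre of inflow is stored. The LCRA2 impoundment volume and the Ryton River's mean flow are not given in this summary, so the figures below are placeholder assumptions chosen only to show the shape of the calculation.

```python
def fill_time_days(lake_volume_m3, mean_flow_m3_per_s):
    """Days for an impounded lake to fill, assuming all inflow is stored
    (no preferred breach paths, relatively impermeable dam material)."""
    return lake_volume_m3 / mean_flow_m3_per_s / 86400.0

# Placeholder values: a 6 M m^3 impoundment filled by a 5 m^3/s mean flow
# gives a fill time on the order of two weeks.
t_fill = fill_time_days(6.0e6, 5.0)   # ~13.9 days
```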
The dilation of the rock mass, combined with unfavourably oriented sub-vertical bedding in the Torlesse Supergroup bedrock, has allowed toppling-style failure on both of the main ridge lines around the source area for the LCRADs. An emergency response plan has been developed to provide a staged response in the event of a future rock avalanche occurring within the Ryton riverbed, especially in relation to the camping ground located at the mouth of the Ryton River. A long-term management plan has also been developed covering mitigation measures for the Ryton riverbed and adjacent floodplain areas downstream of a future rock avalanche at the LCRAD site.
In September 2010 and February 2011, the Canterbury region experienced devastating earthquakes with an estimated economic cost of over NZ$40 billion (Parker and Steenkamp, 2012; Timar et al., 2014; Potter et al., 2015). The insurance market played an important role in rebuilding the Canterbury region after the earthquakes. Homeowners, insurance and reinsurance markets, and New Zealand government agencies faced a difficult task in managing the rebuild process. From an empirical and theoretical research viewpoint, the Christchurch disaster calls for an assessment of how the insurance market will deal with such disasters in the future. Previous studies have investigated market responses to losses in global catastrophes by focusing on the insurance supply side. This study investigates both demand-side and supply-side insurance market responses to the Christchurch earthquakes. Despite the fact that New Zealand is prone to seismic activity, there are scant previous studies in the area of earthquake insurance. This study therefore offers a unique opportunity to examine and document the New Zealand insurance market's response to catastrophe risk, providing results critical for understanding market responses after major loss events in general. A review of previous studies shows that higher premiums suppress demand, but how higher premiums combined with a higher perceived probability of loss affect demand is still largely unknown. According to previous studies, the supply of disaster coverage is curtailed unless the market is subsidised; however, there is still unsettled discussion on why demand decreases with time since the previous disaster even when the supply of coverage is subsidised by the government. Natural disaster risks pose a set of challenges for insurance market players because of the substantial ambiguity associated with the probability of such events occurring and the high spatial correlation of catastrophe losses.
Private insurance market inefficiencies due to high premiums and spatially concentrated risks call for government intervention in the provision of natural disaster insurance to avert situations of non-insurance and underinsurance. Political economy considerations make it more likely for government support to be called for if many people are uninsured than if few people are uninsured. However, emergency assistance for property owners after catastrophe events can encourage many property owners not to buy insurance against natural disasters and to develop adverse selection behaviour, generating larger future risks for homeowners and governments. On the demand side, this study develops an intertemporal model to examine how demand for insurance changes post-catastrophe. In this intertemporal model, insurance can be sought in two sequential periods, and at the second period it is known whether or not a loss event happened in period one. The results show that period one demand for insurance increases relative to the standard single-period model when the second period is taken into consideration, and that period two demand is higher post-loss: higher than both the period one demand and the period two demand without a period one loss. To investigate policyholders' experience from the demand-side perspective, a total of 1600 survey questionnaires were administered, and responses from 254 participants were received, representing a 16 percent response rate. Survey data were gathered from four institutions in Canterbury and are probably not representative of the entire population. The results of the survey show that the change from full replacement value policies to nominated replacement value policies is a key determinant of the direction of change in the level of insurance coverage after the earthquakes.
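The post-loss demand result can be illustrated numerically. The sketch below is not the thesis's model: it is a minimal two-period stand-in in which a period-one loss raises the perceived loss probability while the priced rate is unchanged, and the optimal coverage fraction is found by grid search. The log utility, the 20% premium loading and all parameter values are assumptions.

```python
import numpy as np

def optimal_coverage(p_perceived, p_priced, wealth=100.0, loss=50.0,
                     loading=1.2):
    """Coverage fraction a in [0, 1] maximising expected log utility for one
    period, with a loaded premium pi(a) = loading * p_priced * loss * a."""
    best_a, best_eu = 0.0, -np.inf
    for a in np.linspace(0.0, 1.0, 1001):
        prem = loading * p_priced * loss * a
        eu = (p_perceived * np.log(wealth - prem - loss + a * loss)
              + (1.0 - p_perceived) * np.log(wealth - prem))
        if eu > best_eu:
            best_a, best_eu = a, eu
    return best_a

# Period two, priced at an unchanged 5% annual rate: a loss in period one
# raises the perceived probability (here from 5% to an assumed 10%)
a_no_loss = optimal_coverage(p_perceived=0.05, p_priced=0.05)
a_post_loss = optimal_coverage(p_perceived=0.10, p_priced=0.05)
```

Under these assumptions the no-loss agent buys partial cover (the loading makes the price exceed the perceived risk), while the post-loss agent's higher perceived probability pushes demand to full cover, mirroring the qualitative result described above.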
The earthquakes also highlighted the plight of those who were underinsured, prompting policyholders to update their insurance coverage to reflect the estimated cost of rebuilding their property. The survey adds further evidence to the existing literature, for example that those who have recently experienced disaster loss report an increased perception of the risk of a similar event happening in future, with females reporting higher risk perception than males. Of the demographic variables, only gender has a relationship with changes in household cover. On the supply side, this study builds a risk-based pricing model suitable for generating a competitive premium rate for natural disaster insurance cover. Using illustrative data from the Christchurch Red-zone suburbs, the model generates competitive premium rates for catastrophe risk. When the proposed model incorporates the new RMS high-definition New Zealand Earthquake Model, for example, insurers can use it to identify losses at a granular level and so calculate competitive premiums. This study observes that the key to the success of the New Zealand dual insurance system, despite the high prevalence of catastrophe losses, is twofold. First, the EQC's flat-rate pricing structure keeps private insurance premiums affordable and supports very high nationwide homeowner take-up rates of natural disaster insurance. Second, private insurers and the EQC have an elaborate reinsurance arrangement in place: by efficiently transferring risk to reinsurers, the cost of writing primary insurance is considerably reduced, ultimately expanding primary insurance capacity and the supply of insurance coverage.
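The risk-based pricing idea reduces to an expected-annual-loss calculation with loadings, as sketched below. The event table, risk loading and expense ratio are all illustrative assumptions, not the thesis's calibrated model or output from the RMS earthquake model.

```python
def risk_based_premium(event_rates_losses, expense_ratio=0.25, risk_load=0.15):
    """Gross annual premium from an event-loss table.
    event_rates_losses: (annual_rate, expected_loss) pairs, e.g. the output
    of a catastrophe model at a single site.  All loadings illustrative."""
    pure = sum(rate * loss for rate, loss in event_rates_losses)  # expected annual loss
    loaded = pure * (1.0 + risk_load)         # capital/volatility loading
    return loaded / (1.0 - expense_ratio)     # gross up for expenses

# Hypothetical example: three modelled earthquake scenarios for one dwelling
events = [(0.01, 50_000), (0.002, 200_000), (0.0005, 400_000)]
premium = risk_based_premium(events)
```

A granular catastrophe model changes only the event table: more, finer-grained (rate, loss) pairs per location, with the same premium arithmetic on top.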
Deconstruction at the end of the useful life of a building produces a considerable amount of material which must be disposed of, recycled or reused. At present in New Zealand, most timber construction and demolition (C&D) material, particularly treated timber, is simply waste and is placed in landfills. For both technical and economic reasons (and despite the increasing cost of landfills), this position is unlikely to change in the next 10-15 years unless legislation dictates otherwise. Careful deconstruction, as opposed to demolition, can provide timber materials which can be immediately re-used (e.g. doors and windows), further processed into other components (e.g. beams or walls), or recycled (‘cascaded’) into other timber or composite products (e.g. fibre-board). This reusing/recycling of materials is being driven slowly in NZ by legislation, the ‘greening’ of the construction industry and public pressure. However, the recovery of useful material can be expensive and uneconomic compared with land-filling. In NZ, there are few facilities able to sort and separate timber materials from other waste, although the soon-to-be-commissioned Burwood Resource Recovery Park in Christchurch will attempt to deal with significant quantities of demolition waste from the recent earthquakes. The success (or otherwise) of this operation should provide good information on how future C&D waste will be managed in NZ. In NZ, there are only a few small-scale facilities able to burn waste wood for energy recovery (e.g. timber mills), and none are known to be able to handle large quantities of treated timber. Such facilities, with constantly improving technology, are being commissioned in Europe (often with government subsidies), and this indicates that similar bio-energy (co)generation will be established in NZ in the future.
However, at present the NZ Government provides little assistance to the bio-energy industry, and the emergence worldwide of shale-gas reserves is likely to push the economic viability of bio-energy further into the future. The behaviour of timber materials placed in landfills is complex and poorly understood. Degrading timber in landfills has the potential to generate methane, a potent greenhouse gas, which can escape to the atmosphere and cancel out the significant benefits of carbon sequestration during tree growth. Improving the security of landfills and more effective and efficient collection and utilisation of methane from landfills in NZ will significantly reduce the potential for leakage of methane to the atmosphere, acting as an offset to the continuing use of underground fossil fuels. Life cycle assessment (LCA), an increasingly important methodology for quantifying the environmental impacts of building materials (particularly energy and global warming potential (GWP)), will soon be incorporated into the NZ Green Building Council Greenstar rating tools. Such LCA studies must provide a level playing field for all building materials and consider the whole life cycle. Whilst the end-of-life treatment of timber by LCA may establish a present-day base scenario, any analysis must also present a realistic end-of-life scenario for the future deconstruction of any new building, as any building built today will be deconstructed many years in the future, when very different technologies will be available to deal with construction waste. At present, LCA practitioners in NZ and Australia place much value on a single research document on the degradation of timber in landfills (Ximenes et al., 2008). This leads to an end-of-life base scenario for timber which many in the industry consider to be an overestimation of the potential negative effects of methane generation.
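The competing end-of-life effects for landfilled timber (carbon permanently stored versus methane escaping) can be made concrete with a simple carbon balance. Every parameter below (decay fraction, methane split, gas capture rate, wood carbon content) is an illustrative assumption rather than a value from the literature cited above; the sketch only shows why the assumed decay fraction and landfill gas capture dominate the sign of the result.

```python
def net_gwp_landfilled_wood(decay_fraction=0.10, ch4_fraction_of_c=0.5,
                            capture_rate=0.5, gwp_ch4=28.0,
                            carbon_content=0.5):
    """Net warming effect of landfilling 1 kg of dry wood, in kg CO2e.
    Negative = net carbon storage.  Biogenic CO2 is treated as neutral, so
    only never-decayed carbon (storage) and escaped CH4 (forcing) count."""
    c = carbon_content                       # kg C in 1 kg dry wood
    c_decayed = c * decay_fraction
    c_stored = c - c_decayed                 # carbon that never decays
    ch4 = c_decayed * ch4_fraction_of_c * (16.0 / 12.0)   # kg CH4 generated
    ch4_escaped = ch4 * (1.0 - capture_rate)
    return ch4_escaped * gwp_ch4 - c_stored * (44.0 / 12.0)

# Low decay with gas capture: net storage.  High decay, no capture: net source.
low_decay = net_gwp_landfilled_wood()
high_decay = net_gwp_landfilled_wood(decay_fraction=0.5, capture_rate=0.0)
```

The sign flip between the two cases is the point at issue in the sensitivity analyses recommended for NZ LCA studies.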
In Europe, the base scenario for wood disposal is cascading timber products and then burning for energy recovery, which normally significantly reduces any negative effects of the end-of-life for timber. LCA studies in NZ should always provide a sensitivity analysis for the end-of-life of timber and strongly and confidently argue that alternative future scenarios are realistic disposal options for buildings deconstructed in the future. Data-sets for environmental impacts (such as GWP) of building materials in NZ are limited and based on few research studies. The compilation of comprehensive data-sets with country-specific information for all building materials is considered a priority, preferably accounting for end-of-life options. The NZ timber industry should continue to ‘champion’ the environmental credentials of timber, over and above those of the other major building materials (concrete and steel). End-of-life should not be considered the ‘Achilles heel’ of the timber story.
Liquefaction-induced lateral spreading in large seismic events often results in pervasive and costly damage to engineering structures and lifelines, making it a critical component of engineering design. However, the complex nature of this phenomenon makes designing for such a hazard extremely challenging, and there is a clear need for improved understanding and prediction of liquefaction-induced lateral spreading. The 2010-2011 Canterbury (New Zealand) earthquakes triggered severe liquefaction-induced lateral spreading along the streams and rivers of the Christchurch region, causing extensive damage to roads, bridges, lifelines, and structures in the vicinity. The devastation induced by lateral spreading in these events also provided a rare opportunity to gain an improved understanding of lateral spreading displacements specific to the Christchurch region. As part of this thesis, ground surveying was employed following the 4 September 2010 Darfield (Mw 7.1) and 22 February 2011 Christchurch (Mw 6.2) earthquakes at 126 locations (19 repeated) throughout Christchurch and surrounding suburbs. The method involved measuring and summing crack widths along a specific alignment (transect) running approximately perpendicular to the waterway, typically indicating maximum lateral displacement at the bank and a reduction in displacement with distance from the river. Rigorous data processing and comparisons with alternative measurements of lateral spreading were performed to verify the field observations and validate the ground surveying method, as well as to highlight the complex nature of lateral spreading displacements. The well-documented field data were scrutinized to gain an understanding of the typical magnitudes and distribution patterns (displacement versus distance) of lateral spreading observed in the Christchurch area.
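The transect summation described above reduces to simple bookkeeping: ground riverward of each crack has displaced by the sum of that crack's width and every crack farther from the river. The sketch below implements that bookkeeping; the transect measurements in the usage example are invented for illustration.

```python
import numpy as np

def spreading_profile(distances_m, crack_widths_m):
    """Cumulative lateral displacement along a transect perpendicular to a
    waterway.  Displacement at a crack's position is that crack's width plus
    all crack widths farther from the river (the bank moves the most)."""
    order = np.argsort(distances_m)
    d = np.asarray(distances_m, dtype=float)[order]
    w = np.asarray(crack_widths_m, dtype=float)[order]
    disp = w[::-1].cumsum()[::-1]      # reverse cumulative sum
    return d, disp

# Invented transect: four cracks at 2, 5, 10 and 20 m from the river bank
d, disp = spreading_profile([2.0, 5.0, 10.0, 20.0], [0.5, 0.3, 0.15, 0.05])
```

The first profile value is the total bank displacement, and the profile decays monotonically with distance from the river, matching the typical pattern reported above.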
Maximum displacements ranging from less than 10 cm to over 3.5 m were encountered at the sites surveyed, and the area affected by spreading ranged from less than 20 m to over 200 m from the river. Despite the highly non-uniform displacements, four characteristic distribution patterns were identified: large, distributed ground displacements; block-type movements; large and localized ground displacements; and areas of little to no displacement. Available geotechnical, seismic, and topographic data were collated at the ground surveying sites for subsequent analysis of the field measurements. Two widely used empirical models (Zhang et al., 2004; Youd et al., 2002) were scrutinized and applied at locations in the vicinity of the field measurements for comparison with model predictions. The results indicated generally poor correlation (outside a factor of two) with empirical predictions at most locations and further demonstrated the need for an improved, analysis-based method of predicting lateral displacements that considers the many factors involved on a site-specific basis. In addition, the development of appropriate model input parameters for the Youd et al. (2002) model led to a site-specific correlation between soil behavior type index, Ic, and fines content, FC, for sites along the Avon River in Christchurch that agrees well with existing Ic-FC relationships commonly used in current practice. Lastly, a rigorous analysis was performed for 25 selected ground surveying locations along the Avon River where ground slopes are mild (-1 to 2%) and channel heights range from about 2-4.5 m. The field data were divided into categories based on the observed distribution pattern of ground displacements: large and distributed, moderate and distributed, small to negligible, and large and localized.
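For reference, the existing Ic-FC relationships mentioned are typically of the Robertson & Wride (1998) power-law form, sketched below. The thesis's site-specific correlation for the Avon River sites would shift these values; the coefficients here are the commonly cited generic ones and should be treated as indicative only.

```python
def apparent_fines_content(ic):
    """Apparent fines content FC (%) from CPT soil behavior type index Ic,
    in the commonly cited Robertson & Wride (1998) form:
        FC = 1.75 * Ic**3.25 - 3.7    for 1.26 <= Ic <= 3.5,
    with FC = 0 below Ic = 1.26 (clean sand) and FC = 100 above Ic = 3.5."""
    if ic < 1.26:
        return 0.0
    if ic > 3.5:
        return 100.0
    return min(100.0, 1.75 * ic ** 3.25 - 3.7)
```

For the critical-layer Ic range of 1.9-2.1 reported below, this generic form gives apparent fines contents of roughly 10-15%, i.e. fine to silty sand, consistent with the material descriptions.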
A systematic approach was applied to determine the potential critical layers contributing to the observed displacement patterns, which led to the development of characteristic profiles for each category considered. The results of these analyses outline an alternative approach to the evaluation of lateral spreading in which a detailed geotechnical analysis is used to identify the potential for large spreading displacements and the likely spatial distribution patterns of spreading. Key factors affecting the observed magnitude and distribution of spreading included the thickness of the critical layer, relative density, soil type and layer continuity. It was found that large and distributed ground displacements were associated with a thick (1.5-2.5 m) deposit of loose, fine to silty sand (qc1 ~4-7 MPa, Ic 1.9-2.1, qc1n_cs ~50-70) that was continuous along the bank and with distance from the river. In contrast, small to negligible displacements were characterized by an absent or relatively thin (< 1 m), discontinuous critical layer. Characteristic features of the moderate and distributed displacements fell somewhere between these two extremes. The large and localized displacements showed a characteristic critical layer similar to that observed at the large and distributed sites, but one that was not continuous, hence leading to the localized zone of displacement. The findings presented in this thesis illustrate the highly complex nature of lateral displacements, which cannot be captured in simplified models but require a robust geotechnical analysis similar to that performed for this research.
Coastal margins are exposed to rising sea levels that present challenging circumstances for natural resource management. This study investigates a rare example of tectonic displacement caused by earthquakes that generated rapid sea-level change in a tidal lagoon system typical of many worldwide. This thesis begins by evaluating the coastal squeeze effects caused by interactions between relative sea-level (RSL) rise and the built environment of Christchurch, New Zealand, along with examples of release from similar effects in areas of uplift where land reclamations were already present. Quantification of area gains and losses demonstrated the importance of natural lagoon expansion into areas of suitable elevation under conditions of RSL rise, and showed that such expansion may be necessary to offset coastal squeeze losses experienced elsewhere. Implications of these spatial effects include the need to provide accommodation space for natural ecosystems under RSL rise, yet other land-uses are likely to be present in the areas required. Consequently, the resilience of these environments depends on facilitating transitions between human land-uses, either proactively or in response to disaster events. Principles illustrated by co-seismic sea-level change are generally applicable to climate change adaptation because of the similarity of inundation effects. Furthermore, they highlight the potential role of non-climatic factors in determining the overall trajectory of change. Chapter 2 quantifies impacts on riparian wetland ecosystems over an eight-year period post-quake. Coastal wetlands were overwhelmed by RSL rise and recovery trajectories were surprisingly slow. Four risk factors were identified from the observed changes: 1) the encroachment of anthropogenic land-uses, 2) connectivity losses between areas of suitable elevation, 3) the disproportionate vulnerability of larger wetlands, and 4) the need to protect new areas to address the future movement of ecosystems.
Chapter 3 evaluates the unique context of shoreline management on a barrier sandspit under sea-level rise. A linked scenario approach was used to evaluate changes on the open-coast and estuarine shorelines simultaneously and to consider combined effects. The results show dune loss across a third of the study area under a sea-level rise scenario of 1 m over 100 years with continuation of current land-uses. Increased exposure to natural hazards and accompanying demand for seawalls is a likely consequence unless natural alternatives can be progressed. In contrast, an example of managed retreat following earthquake-induced subsidence of the backshore presents a new opportunity to restart saltmarsh accretion processes seaward of coastal defences, with the potential to reverse decades of degradation and build sea-level rise resilience. Considering both shorelines simultaneously highlights the existence of pinch-points from opposing forces that leave small land volumes above the tidal range. Societal adaptation is delicately poised between the paradigms of resisting or accommodating nature and is challenged by the long perimeter and confined nature of the sandspit feature. The remaining chapters address the potential for salinity effects caused by tidal prism changes, with a focus on the conservation of īnanga (Galaxias maculatus), a culturally important fish that supports New Zealand's whitebait fishery. Methodologies were developed to test the hypothesis that RSL changes would drive a shift in the distribution of spawning sites, with implications for their management. Chapter 4 describes a new practical methodology for quantifying the total productivity and spatiotemporal variability of spawning sites at catchment scale. Chapter 5 describes the novel use of artificial habitats as detection tools to help overcome field survey limitations in degraded environments where egg mortality can be high.
The results showed that RSL changes produced major shifts in spawning locations, and these were associated with new patterns of vulnerability due to the continuation of pre-disturbance land-uses. Unexpected findings include an improved understanding of the spatial relationship between salinity and spawning habitat, and the identification of an invasive plant species as important spawning habitat, both with practical management implications. To conclude, the design of legal protection mechanisms was evaluated in relation to the observed habitat shifts, with a focus on two new planning initiatives that identified relatively large protected areas (PAs) in the lower river corridors. Although the larger PAs were better able to accommodate the observed habitat shifts, inefficiencies were also apparent due to spatial disparities between PA boundaries and the values requiring protection. To reduce unnecessary trade-offs with other land-uses, a network of smaller PAs, each of sufficient size to cover the observable spatiotemporal variability and coupled with adaptive capacity to address future change, may offer higher effectiveness. The latter may be informed by both monitoring and modelling of future shifts, which are expected to include upstream habitat migration driven by the identified salinity relationships and eustatic sea-level rise. The thesis concludes with a summary of the knowledge gained from this research that can assist the development of a new paradigm of environmental sustainability incorporating conservation and climate change adaptation. Several promising directions for future research identified within this project are also discussed.
As a consequence of the 2010–2011 Canterbury earthquake sequence, Christchurch experienced widespread liquefaction, vertical settlement and lateral spreading. These geological processes caused extensive damage to both housing and infrastructure, and substantially increased the need for geotechnical investigation. Cone Penetration Testing (CPT) has become the most common method for liquefaction assessment in Christchurch, and issues have been identified with the soil behaviour type, liquefaction potential and vertical settlement estimates, particularly in the north-western suburbs of Christchurch where soils consist mostly of silts, clayey silts and silty clays. The CPT soil behaviour type often appears to over-estimate the fines content within a soil, while the liquefaction potential and vertical settlement are often calculated to be higher than those measured after the Canterbury earthquake sequence. To investigate these issues, laboratory work was carried out on three adjacent CPT/borehole pairs from the Groynes Park subdivision in northern Christchurch. Boreholes were logged according to NZGS standards, separated into stratigraphic layers, and laboratory tests were conducted on representative samples. Comparison of these results with the CPT soil behaviour types provided valuable information: on average, 62% of soils at the Groynes Park subdivision were classified by the CPT as finer than what was actually present, 20% as coarser than what was actually present, and only 18% were correctly classified. Hence the CPT soil behaviour type does not accurately describe the stratigraphic profile at the Groynes Park subdivision, and it is understood that this is also the case in much of northwest Christchurch where similar soils are found.
The computer software CLiq, by GeoLogismiki, uses assessment parameter constants which can be adjusted for each CPT file in an attempt to make each analysis more accurate. These parameter changes can in some cases substantially alter the results of a liquefaction analysis. The sensitivity of the analysis to the choice of overall assessment method, raising and lowering of the water table, lowering of the soil behaviour type index (Ic) liquefaction cutoff value, the layer detection option, and the weighting factor option was analysed by comparison with a set of ‘base settings’. The investigation confirmed that liquefaction analysis results can be very sensitive to the parameters selected, and demonstrated the dependency of the soil behaviour type on the soil behaviour type index, as the tested assessment parameters made little to no change to the soil behaviour type plots. The soil behaviour type index, Ic, developed by Robertson and Wride (1998), has been used to define a soil's behaviour type according to a set of numerical boundaries. In addition, the liquefaction cutoff point is defined as Ic > 2.6, whereby it is assumed that any soil with an Ic value above this will not liquefy due to clay-like tendencies (Robertson and Wride, 1998). This method is identified in this thesis as potentially unsuitable for some areas of Christchurch, as it was developed for mostly sandy soils. An alternative methodology involving adjustment of the Robertson and Wride (1998) soil behaviour type boundaries is proposed as follows:
Ic < 1.31 – Gravelly sand to dense sand
1.31 < Ic < 1.90 – Sands: clean sand to silty sand
1.90 < Ic < 2.50 – Sand mixtures: silty sand to sandy silt
2.50 < Ic < 3.20 – Silt mixtures: clayey silt to silty clay
3.20 < Ic < 3.60 – Clays: silty clay to clay
Ic > 3.60 – Organic soils: peats
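The proposed boundary scheme amounts to a simple lookup on Ic, together with a relocated liquefaction cutoff. A minimal sketch (the function names are illustrative, not from the thesis):

```python
def soil_behaviour_type(ic):
    """Map the soil behaviour type index Ic to the proposed descriptive
    class, using the adjusted Robertson and Wride (1998) boundaries."""
    if ic < 1.31:
        return "Gravelly sand to dense sand"
    if ic < 1.90:
        return "Sands: clean sand to silty sand"
    if ic < 2.50:
        return "Sand mixtures: silty sand to sandy silt"
    if ic < 3.20:
        return "Silt mixtures: clayey silt to silty clay"
    if ic < 3.60:
        return "Clays: silty clay to clay"
    return "Organic soils: peats"

def may_liquefy(ic, cutoff=2.5):
    """Liquefaction screening: soils with Ic above the cutoff are treated
    as clay-like and assumed non-liquefiable. The proposed scheme moves
    the cutoff from 2.6 to 2.5."""
    return ic <= cutoff
```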
When the soil behaviour type boundary changes were applied to 15 test sites throughout Christchurch, 67% showed an improved soil behaviour type classification, while the remaining 33% were unchanged because they consisted almost entirely of sand. As part of these boundary changes, the liquefaction cutoff point was moved from Ic > 2.6 to Ic > 2.5, which altered the liquefaction potential and vertical settlement to more realistic values. This confirmed that the overall soil behaviour type boundary changes appear both to solve the soil behaviour type issues and to reduce the over-estimation of liquefaction potential and vertical settlement. This thesis acts as a starting point for research into the issues discussed. Future work which would be useful includes investigation of the CLiq assessment parameter adjustments, and of those which would be most suitable for use in clay-rich soils such as those in Christchurch; in particular, consideration of how the water table can be better assessed when perched layers of water exist, given the limitation that only one elevation can be entered into CLiq. Additionally, a useful investigation would be a comparison of the known liquefaction and settlements from the Canterbury earthquake sequence with the liquefaction and settlement potentials calculated in CLiq for equivalent shaking conditions. This would enable the difference between the two to be accurately defined, and a suitable adjustment applied. Finally, inconsistencies between the Laser-Sizer and Hydrometer should be investigated, as the Laser-Sizer under-estimated the fines content by up to one third of the Hydrometer values.
In the last century, seismic design has undergone significant advancements. Starting from the initial concept of designing structures to perform elastically during an earthquake, the modern seismic design philosophy allows structures to respond to ground excitations in an inelastic manner, thereby accepting damage in earthquakes that are significantly less intense than the largest possible ground motion at the site of the structure. Current performance-based multi-objective seismic design methods aim to ensure life-safety in large and rare earthquakes, and to limit structural damage in frequent and moderate earthquakes. As a result, not many recently built buildings have collapsed and very few people have been killed in 21st-century buildings, even in large earthquakes. Nevertheless, the financial losses to the community arising from damage and downtime in these earthquakes have been unacceptably high (for example, reported to be in excess of 40 billion dollars in the recent Canterbury earthquakes). In the aftermath of the huge financial losses incurred in recent earthquakes, the public has clearly shown its dissatisfaction with the seismic performance of the built infrastructure. As the current capacity-design-based seismic design approach relies on inelastic response (i.e. ductility) in pre-identified plastic hinges, it encourages structures to sustain damage (and thereby to incur loss in the form of repair costs and downtime). It is now widely accepted that while designing ductile structural systems according to the modern seismic design concept can largely ensure life-safety during earthquakes, it also causes buildings to undergo substantial damage (and significant financial loss) in moderate earthquakes. In a quest to match seismic design objectives with public expectations, researchers are exploring how financial loss can be brought into the decision-making process of seismic design.
This has facilitated the conceptual development of loss optimisation seismic design (LOSD), which involves estimating likely financial losses in design-level earthquakes and comparing them against acceptable levels of loss to make design decisions (Dhakal 2010a). Adoption of a loss-based approach in seismic design standards would be a big paradigm shift in earthquake engineering, but it remains a long-term goal, as quantifying the interrelationships between earthquake intensity, engineering demand parameters, damage measures and different forms of losses for different types of buildings (and, more importantly, simplifying these interrelationships into design-friendly forms) will take a long time. Dissecting the cost of modern buildings suggests that the structural components constitute only a minor portion of the total building cost (Taghavi and Miranda 2003). Moreover, recent research on seismic loss assessment has shown that damage to non-structural elements and building contents contributes dominantly to the total building loss (Bradley et al. 2009). In an earthquake, buildings can incur losses of three different forms (damage, downtime, and death/injury, commonly referred to as the 3Ds), but all three forms of seismic loss can be expressed in terms of dollars. It is also obvious that the latter two loss forms (i.e. downtime and death/injury) are related to the extent of damage, which, in a building, is not constrained to the load-bearing (i.e. structural) elements. As observed in recent earthquakes, even secondary building components (such as ceilings, partitions, facades, windows, parapets, chimneys, canopies) and contents can undergo substantial damage, which can lead to all three forms of loss (Dhakal 2010b). Hence, if financial losses are to be minimised during earthquakes, not only the structural systems but also the non-structural elements (such as partitions, ceilings, glazing, windows etc.)
should be designed for earthquake resistance, and valuable contents should be protected against damage during earthquakes. Several innovative building technologies have been (and are being) developed to reduce building damage during earthquakes (Buchanan et al. 2011). Most of these developments aim to reduce damage to buildings' structural systems without due attention to their effects on non-structural systems and building contents. For example, the PRESSS system, or the Damage Avoidance Design concept, aims to enable a building's structural system to meet the required displacement demand by rocking, without the structural elements having to deform inelastically, thereby avoiding damage to these elements. However, as this concept does not necessarily reduce the inter-storey drift or floor acceleration demands, the damage to non-structural elements and contents can still be high. Similarly, the concept of externally bracing/damping building frames reduces the drift demand (and consequently reduces the structural damage and drift-sensitive non-structural damage). Nevertheless, the acceleration-sensitive non-structural elements and contents will still be very vulnerable to damage, as the floor accelerations are not reduced (and are arguably increased). Therefore, these concepts may not be able to substantially reduce the total financial losses in all types of buildings. Among the emerging building technologies, base isolation looks very promising, as it reduces both inter-storey drifts and floor accelerations, thereby reducing the damage to the structural and non-structural components of a building and its contents. Undoubtedly, a base-isolated building will incur substantially reduced losses of all three forms (dollars, downtime, death/injury), even during severe earthquakes. However, base isolating a building, or applying any other beneficial technology, may incur additional initial costs.
In order to provide incentives for builders/owners to adopt these loss-minimising technologies, real-estate and insurance industries will have to acknowledge the reduced risk posed by (and enhanced resilience of) such buildings in setting their rental/sale prices and insurance premiums.
The nonlinear dynamic soil-foundation-structure interaction (SFSI) can significantly affect the seismic response of buildings, causing additional deformation modes, damage and repair costs. Because of nonlinear foundation behaviour and interactions, the seismic demand on the superstructure may change considerably, and permanent deformations may occur at the foundation level. Although SFSI effects may be beneficial to the superstructure performance, any advantage would be of little structural value unless the phenomenon can be reliably controlled and exploited. Detrimental SFSI effects may also occur, including acceleration and displacement response amplification and differential settlements, which would be unconservative to neglect. The lack of proper understanding of the phenomenon and the limited available simplified tools accounting for SFSI have been major obstacles to the implementation of integrated design and assessment procedures into everyday practice. In this study, concepts, ideas and practical tools (inelastic spectra) for the seismic design and assessment of integrated foundation-superstructure systems are presented, with the aim of explicitly considering the impact of nonlinearities occurring at the soil-foundation interface on the building response within an integrated approach, where the foundation soil and the superstructure are considered as parts of an integrated system when evaluating the seismic response, working synergistically towards a target global performance. A conceptual performance-based framework for the seismic design and assessment of integrated foundation-superstructure systems is developed. The framework is based on the use of peak and residual response parameters for both the superstructure and the foundation, which are then combined to produce the system performance matrix. Each performance matrix allows for worsening of the performance when different contributions are combined.
An attempt is made to test the framework using case histories from the 2011 Christchurch earthquake, which were previously shown to have been severely affected by nonlinear SFSI. The application highlights the framework's sensitivity to the adopted performance limit states, which must be realistic for a reliable evaluation of the system performance. Constant-ductility and constant-strength inelastic spectra are generated for nonlinear SFSI systems (an SDOF nonlinear superstructure and a 3DOF foundation allowing for uplift and soil yielding), representing multi-storey RC buildings with shallow rigid foundations supported by cohesive soils. Different ductilities/strengths, hysteretic rules (bilinear, Takeda and Flag-Shape), soil stiffnesses and strengths, and bearing capacity factors are considered. Footing and raft foundations are investigated, characterized respectively by constant (3 and 8) and typically large bearing capacity factors. It is confirmed that when SFSI is considered, the superstructure yielding force needed to satisfy a target ductility for a new building changes, and that, similarly, for an existing building of given strength the ductility demand varies. The extent of change of the seismic response with respect to fixed-base (FB) conditions depends on the class of soils considered and on the bearing capacity factor (SF). For SF equal to 3, the stiffer soils enhance the nonlinear rotational foundation behaviour and are associated with reduced settlement, while the softer soils are associated with an increased settlement response but no significant rotational behaviour. On average, for the simplified models considered, SFSI is found to be beneficial to the superstructure performance in terms of acceleration and superstructure displacement demand, although exceptions are recorded due to ground motion variability. Conversely, in terms of total displacement, a significant response increase is observed.
The larger the bearing capacity factor, the more the SFSI response approaches that of the FB system. For raft foundation buildings, characterized by large bearing capacity factors, the impact of the foundation response is mostly elastic, and the system on average approaches FB conditions. Well-defined displacement participation factors to the peak total lateral displacement are observed for the different contributions (i.e. peak foundation rotation and translation and superstructure displacement). While the superstructure and foundation rotation show compensating trends, the foundation translation contribution varies as a function of the moment-to-shear ratio, becoming negligible in the medium-to-long periods. The longer the superstructure FB period, the less significant the foundation response. The larger the excitation level and the less ductile the superstructure, the larger the foundation contribution to the total lateral displacement, and the smaller the superstructure contribution. In terms of hysteretic behaviour, its impact is larger when the superstructure response is more significant, i.e. for the softer/weaker soils and larger ductilities. In particular, for the Flag-Shape rule, larger superstructure displacement participation factors and smaller foundation contributions are recorded. In terms of residual displacements, the total residual-to-maximum ratios are similar in amplitude and trend to the corresponding FB system responses, with the foundation and superstructure contributions showing complementary trends. The impact of nonlinear SFSI is especially important for the Flag-Shape hysteresis rule, which would not otherwise suffer any permanent deformations. By using the generated peak and residual inelastic spectra (i.e.
inelastic acceleration/displacement modification factor spectra, and/or participation factor and residual spectra), conceptual simplified procedures for the seismic design and assessment of integrated foundation-superstructure systems are presented. The residual displacements at both the superstructure and foundation levels are explicitly considered. Both force- and displacement-based approaches are explored. The procedures are defined to be complementary to the previously proposed integrated performance-based framework. The use of participation factor spectra allows the designer to easily visualize the response of the system components, and could assist the decision-making process in both the design and assessment of SFSI systems. The presented numerical results have been obtained using simplified models, assuming rigid foundation behaviour and neglecting P-Delta effects. The consideration of more complex systems, including asymmetry in stiffness, mass, axial load and ground conditions with a flexible foundation layout, would highlight detrimental SFSI effects related to induced differential settlements, while accounting for P-Delta effects would further amplify the displacement response. Also, the adopted acceleration records were selected and scaled to match conventional design spectra, and thus do not represent any response amplification in the medium-to-long period range, which could also cause detrimental SFSI effects. While these limitations should be the subject of further research, this study takes a step forward in the understanding of the SFSI phenomenon and its incorporation into performance-based design/assessment considerations.
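The constant-strength side of such inelastic spectra rests on computing, record by record, the ductility demand of a yielding single-degree-of-freedom oscillator. The following is a minimal fixed-base sketch, not the 3DOF SFSI models of this study: an elastic-perfectly-plastic oscillator integrated with a simple semi-implicit Euler scheme, with all parameter values illustrative.

```python
import math

def epp_sdof_ductility(ag, dt, period=0.5, zeta=0.05, fy_per_mass=2.0):
    """Ductility demand of an elastic-perfectly-plastic SDOF oscillator
    (unit mass) under ground acceleration ag [m/s^2] sampled at step dt.
    Illustrative sketch only: semi-implicit Euler integration."""
    wn = 2.0 * math.pi / period        # natural circular frequency [rad/s]
    k = wn ** 2                        # stiffness per unit mass
    c = 2.0 * zeta * wn                # damping per unit mass
    uy = fy_per_mass / k               # yield displacement
    u = v = fs = 0.0                   # displacement, velocity, restoring force
    umax = 0.0
    for a in ag:
        acc = -a - c * v - fs          # relative motion: u'' = -ag - c*u' - fs
        v += acc * dt
        u += v * dt
        fs += k * v * dt               # elastic force increment...
        fs = max(-fy_per_mass, min(fy_per_mass, fs))  # ...capped at yield (EPP)
        umax = max(umax, abs(u))
    return umax / uy                   # ductility demand mu = |u|max / uy
```

A constant-strength spectrum repeats this computation over a range of periods; constant-ductility spectra invert it by iterating on the yield strength until a target ductility is reached.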