Search

found 7 results

Research Papers, Lincoln University

Mitigating the cascade of environmental damage caused by the movement of excess reactive nitrogen (N) from land to sea is currently limited by difficulties in precisely and accurately measuring N fluxes due to variable rates of attenuation (denitrification) during transport. This thesis develops the use of the natural abundance isotopic composition of nitrate (δ15N and δ18O of NO₃⁻) to integrate the spatial-temporal variability inherent to denitrification, creating an empirical framework for evaluating attenuation during land-to-water NO₃⁻ transfers. This technique is based on the knowledge that denitrifiers kinetically discriminate against 'heavy' forms of both N and oxygen (O), creating a parallel enrichment in isotopes of both species as the reaction progresses. This discrimination can be quantitatively related to NO₃⁻ attenuation by isotopic enrichment factors (εdenit). However, while these principles are understood, use of NO₃⁻ isotopes to quantify denitrification fluxes in non-marine environments has been limited by: 1) poor understanding of εdenit variability, and 2) difficulty in distinguishing the extent of mixing of isotopically distinct sources from the imprint of denitrification. Through a combination of critical literature analysis, mathematical modelling, mesocosm- to field-scale experiments, and empirical studies on two river systems over distance and time, these shortcomings are parametrised and a template for future NO₃⁻ isotope-based attenuation measurements is outlined. Published εdenit values (n = 169) are collated in the literature analysis presented in Chapter 2. By evaluating these values in the context of known controllers on the denitrification process, it is found that the magnitude of εdenit, for both δ15N and δ18O, is controlled by: 1) biology; 2) mode of transport through the denitrifying zone (diffusion v. advection); and 3) nitrification (spatial-temporal distance between nitrification and denitrification). Based on the outcomes of this synthesis, the impact of the three factors identified as controlling εdenit is quantified in the context of freshwater systems by combining simple mathematical modelling and lab incubation studies (comparison of natural variation in biological versus physical expression). Biologically-defined εdenit, measured in sediments collected from four sites along a temperate stream and from three tropical submerged paddy fields, varied from -3‰ to -28‰ depending on the site's antecedent carbon content. Following diffusive transport to aerobic surface water, εdenit was found to become more homogeneous, but also lower, with the strength of the effect controlled primarily by diffusive distance and the rate of denitrification in the sediments. I conclude that, given the variability in fractionation dynamics at all levels, applying a range of εdenit from -2‰ to -10‰ provides more accurate measurements of attenuation than attempting to establish a site-specific value. Applying this understanding of denitrification's fractionation dynamics, four field studies were conducted to measure denitrification/NO₃⁻ attenuation across diverse terrestrial → freshwater systems. The development of NO₃⁻ isotopic signatures (i.e., the impact of nitrification, biological N fixation, and ammonia volatilisation on the isotopic 'imprint' of denitrification) was evaluated within two key agricultural regions: New Zealand grazed pastures (Chapter 4) and Philippine lowland submerged rice production (Chapter 5).
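The quantitative link between isotopic enrichment and attenuation described above follows Rayleigh-type fractionation. As a minimal sketch (not the thesis's own code), assuming a closed-system Rayleigh model, the fraction of NO₃⁻ remaining can be back-calculated from the shift in δ15N and an assumed εdenit; the function names and example values below are illustrative only:

```python
import numpy as np

def fraction_remaining(delta_initial, delta_residual, eps_denit):
    """Closed-system Rayleigh model: delta_residual ≈ delta_initial + eps * ln(f),
    so f = exp((delta_residual - delta_initial) / eps). All values in permil (‰)."""
    return np.exp((delta_residual - delta_initial) / eps_denit)

def attenuation(delta_initial, delta_residual, eps_denit):
    """NO3- attenuation (fraction denitrified) implied by the isotopic shift."""
    return 1.0 - fraction_remaining(delta_initial, delta_residual, eps_denit)

# Illustrative example: source NO3- at +5 permil, residual NO3- measured at +12 permil,
# evaluated across the -2 to -10 permil eps_denit range recommended in the abstract.
for eps in (-2.0, -5.0, -10.0):
    print(f"eps_denit = {eps:5.1f} permil -> attenuation = {attenuation(5.0, 12.0, eps):.0%}")
```

The spread of estimates across this εdenit range illustrates why the abstract argues for applying a range of εdenit rather than a single site-specific value.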
By measuring the isotopic composition of soil ammonium, NO₃⁻ and volatilised ammonia following bovine urine deposition, it was determined that the isotopic composition of NO₃⁻ leached from grazed pastures is defined by the balance between nitrification and denitrification, not ammonia volatilisation. Consequently, the isotopic composition of NO₃⁻ created within pasture systems was predicted to range from +10‰ (δ15N) and -0.9‰ (δ18O) for non-fertilised fields (N limited) to -3‰ (δ15N) and +2‰ (δ18O) for grazed fertilised fields (N saturated). Denitrification was also the dominant determinant of NO₃⁻ signatures in the Philippine rice paddy. Using a site-specific εdenit for the paddy, N inputs versus attenuation could be calculated, revealing that >50% of available N in the top 10 cm of soil was denitrified during land preparation, and >80% of available N by two weeks post-transplanting. Intriguingly, this denitrification was driven by rapid NO₃⁻ production via nitrification of newly mineralised N during land preparation activities. Building on the relevant range of εdenit established in Chapters 2 and 3, as well as the soil-zone confirmation that denitrification was the primary determinant of NO₃⁻ isotopic composition, two long-term longitudinal river studies were conducted to assess attenuation during transport. In Chapter 6, impact and recovery dynamics in an urban stream were assessed over six months along a longitudinal impact gradient using measurements of NO₃⁻ dual isotopes, biological populations, and stream chemistry. Within 10 days of the catastrophic Christchurch earthquake, dissolved oxygen in the lowest reaches was <1 mg l⁻¹, in-stream denitrification accelerated (attenuating 40-80% of sewage N), microbial biofilm communities changed, and several benthic invertebrate taxa disappeared. To test the strength of this method for tackling the diffuse, chronic N loading of streams in agricultural regions, two years of longitudinal measurements of NO₃⁻ isotopes were collected. Attenuation was negatively correlated with NO₃⁻ concentration, and was highly dependent on rainfall: 93% of calculated attenuation (20 kg NO₃⁻-N ha⁻¹ y⁻¹) occurred within 48 h of rainfall. The results of these studies demonstrate the power of intensive NO₃⁻ stable isotope measurements for distinguishing temporal and spatial trends in NO₃⁻ loss pathways, and potentially allow for improved catchment-scale management of agricultural intensification. Overall, this work provides a more cohesive foundation for expanding the use of NO₃⁻ isotope measurements to generate an accurate understanding of the controls on N losses. This information is becoming increasingly important to predict ecosystem response to future changes, such as the increasing agricultural intensity needed to meet global food demand, which is occurring in concert with unpredictable global climate change.

Research papers, University of Canterbury Library

This study investigates the uncertainty of simulated earthquake ground motions for small-magnitude events (Mw 3.5–5) in Canterbury, New Zealand. 148 events were simulated with specified uncertainties in: event magnitude, hypocentre location, focal mechanism, high-frequency rupture velocity, Brune stress parameter, the site 30-m time-averaged shear wave velocity (Vs30), anelastic attenuation (Q) and high-frequency path duration. In order to capture these uncertainties, 25 realisations for each event were generated using the Graves and Pitarka (2015) hybrid broadband simulation approach, with Monte Carlo realisations drawn from distributions for each uncertainty to generate a suite of simulation realisations for each event and site. The fit of the multiple simulation realisations to observations was assessed using linear mixed-effects regression to generate the systematic source, path and site effects components across all ground motion intensity measure residuals. Findings show that additional uncertainties are required in each of the three source, path, and site components; however, the level of output uncertainty is promising considering the input uncertainties included.
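As an illustration of the Monte Carlo treatment of input uncertainties described above, here is a minimal sketch (not the study's code) that draws perturbed realisations of a few of the listed parameters; the distribution families and standard deviations are assumptions for illustration, not the values used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N_REALISATIONS = 25  # per event, as in the abstract

def sample_event_realisations(mw_catalogue, stress_param_median_bar, vs30_measured_ms):
    """Draw one suite of perturbed inputs for a single event/site.
    Distribution choices and sigmas are illustrative assumptions only."""
    return {
        # magnitude: normal perturbation about the catalogue value
        "Mw": rng.normal(mw_catalogue, 0.1, N_REALISATIONS),
        # Brune stress parameter: lognormal scatter about the median
        "stress_bar": stress_param_median_bar * rng.lognormal(0.0, 0.3, N_REALISATIONS),
        # Vs30: lognormal scatter about the measured/estimated value
        "vs30_ms": vs30_measured_ms * rng.lognormal(0.0, 0.2, N_REALISATIONS),
        # hypocentre depth perturbation (km)
        "dz_km": rng.normal(0.0, 2.0, N_REALISATIONS),
    }

suite = sample_event_realisations(mw_catalogue=4.2, stress_param_median_bar=50.0,
                                  vs30_measured_ms=300.0)
print({k: np.round(v[:3], 2) for k, v in suite.items()})
```

Each realisation would then be passed through the hybrid broadband simulation, and the resulting intensity-measure residuals partitioned into systematic source, path and site terms via mixed-effects regression, as described above.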

Research papers, University of Canterbury Library

The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including evaluating the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) (B07) is firstly used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects from significant events in the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative 'manual' approach is proposed to ensure 'correct' pulse extraction, and the classification process is also guided by examination of the horizontal velocity trajectory plots and source-to-site geometry. Based on the above analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the B07 algorithm developed by Shahi (2013) is also subsequently utilised, but without any notable improvement in the pulse classification results. A series of three chapters is dedicated to assessing the capability of empirical models to predict: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquakes. Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the most improved predictions in comparison to its predecessors. Pulse probability contour maps are developed to compare observations of pulses/non-pulses with predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias demonstrated by the residuals associated with all models at longer vibration periods (in the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and non-linear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred as being a result of both the effect of nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw6.0) and December (Mw5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km.
At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed. The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
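To illustrate the kind of logistic regression analysis referred to above (used here to assess pulse probability models), the following is a minimal, hypothetical sketch fitting pulse occurrence against simple source-to-site predictors; the predictor choice, variable names and synthetic data are assumptions for illustration, not the thesis's actual model or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic records: rupture distance (km) and a directivity geometry length term (km).
n = 200
r_rup = rng.uniform(1.0, 60.0, n)
s_geom = rng.uniform(0.0, 30.0, n)

# Synthetic pulse labels: pulses more likely at short distance and large geometry term.
logit_true = 1.5 - 0.10 * r_rup + 0.08 * s_geom
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))

X = np.column_stack([r_rup, s_geom])
model = LogisticRegression().fit(X, y)

# Predicted pulse probability for a near-fault site (5 km) with a long rupture segment
# oriented towards the site (20 km geometry term).
p = model.predict_proba([[5.0, 20.0]])[0, 1]
print(f"predicted pulse probability: {p:.2f}")
```

In the thesis, the corresponding regressions are fitted to observed Canterbury records and used to compare competing published pulse probability models rather than synthetic data.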

Research papers, University of Canterbury Library

This study contains an evaluation of the seismic hazard associated with the Springbank Fault, a blind structure discovered in 1998 close to Christchurch. The assessment of the seismic hazard is approached as a deterministic process in which it is necessary to establish: 1) fault characteristics; 2) the maximum earthquake that the fault is capable of producing; and 3) ground motion estimates. Due to the blind nature of the fault, conventional techniques used to establish the basic fault characteristics for seismic hazard assessments could not be applied. Alternative methods are used, including global positioning system (GPS) surveys, morphometric analyses along rivers, shallow seismic reflection surveys and computer modelling. These were supplemented by multiple empirical equations relating fault attributes to earthquake magnitude, and attenuation relationships to estimate ground motions in the near-fault zone. The analyses indicated that the Springbank Fault is a reverse structure located approximately 30 km to the northwest of Christchurch, with a strike length of approximately 16 km between the Eyre and Ashley Rivers. The fault does not reach the surface, but it is associated with a broad anticline whose maximum topographic expression occurs close to the mid-length of the fault. Two other reverse faults, the Eyrewell and Sefton Faults, are inferred in the study area. These faults, together with the Springbank and Hororata Faults, are interpreted as part of a system of thrust/reverse faults propagating from a décollement located at mid-crustal depths of approximately 14 km beneath the Canterbury Plains. Within this fault system, the Springbank Fault is considered to behave in a seismically independent way, with a fault slip rate of ~0.2 mm/yr and the capacity to produce a reverse-slip earthquake of moment magnitude ~6.4, with an earthquake recurrence interval of 3,000 years. An earthquake of the above characteristics represents a significant seismic hazard for various urban centres in the near-fault zone, including Christchurch, Rangiora, Oxford, Amberley, Kaiapoi, Darfield, Rolleston and Cust. Estimated peak ground accelerations for these towns range from 0.14 g to 0.5 g.
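As a rough illustration of the empirical scaling approach mentioned above (relating fault attributes to magnitude, and slip rate to recurrence), the sketch below uses the Wells and Coppersmith (1994) surface-rupture-length relation for reverse faults as commonly quoted; the coefficients and the assumed single-event displacement are not taken from the thesis and should be checked against the original sources:

```python
import math

def mw_from_rupture_length_reverse(length_km):
    """Wells & Coppersmith (1994) reverse-fault relation, Mw = 5.00 + 1.22*log10(SRL);
    coefficients quoted from memory -- verify against the published regression."""
    return 5.00 + 1.22 * math.log10(length_km)

def recurrence_interval_yr(single_event_slip_m, slip_rate_mm_per_yr):
    """Characteristic-earthquake recurrence = slip per event / long-term slip rate."""
    return single_event_slip_m * 1000.0 / slip_rate_mm_per_yr

mw = mw_from_rupture_length_reverse(16.0)   # ~16 km strike length (from the abstract)
t_r = recurrence_interval_yr(0.6, 0.2)      # assumed ~0.6 m/event, 0.2 mm/yr slip rate

print(f"Mw ≈ {mw:.1f}, recurrence ≈ {t_r:,.0f} years")
```

With these illustrative inputs the relation returns Mw ≈ 6.5 and a recurrence of roughly 3,000 years, the same order as the values reported above.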

Research papers, University of Canterbury Library

The Amuri Earthquake of September 1, 1888 (magnitude M = 6.5 to 6.8) occurred on the Hope River Segment of the Hope Fault west of Hanmer Plains. The earthquake was felt strongly in North Canterbury and North Westland and caused considerable property damage and landsliding in the Lower Hope Valley. However, damage reports and the spatial distribution of felt intensities emphasize extreme variations in seismic effects over short distances, probably due to topographic focusing and local ground conditions. Significant variations in lateral fault displacement occurred at secondary fault segment boundaries (side-steps and bends in the fault trace) during the 1888 earthquake. This historical spatial variation in lateral slip is matched by the Late Quaternary geomorphic distribution of slip on the Hope River Segment of the Hope Fault. Trenching studies at two sites on the Hope Fault have also identified evidence for five pre-historic earthquakes of similar magnitude to the 1888 earthquake and an average recurrence interval of 134 ± 27 years between events. Magnitude estimates for the 1888 earthquake are combined with a strong ground motion attenuation expression to provide an estimate of potential ground accelerations in Amuri District during future earthquakes on the Hope River Segment of the Hope Fault. The predicted acceleration response on bedrock sites within 20 km of the epicentral region is between 0.23 g and 0.34 g. The close match between the historic, inferred pre-historic and geomorphic distributions of lateral slip indicates that secondary fault segmentation exerts a strong structural control on rupture propagation and the expression of fault displacement at the surface. In basement rocks at depth, the spatial variations in slip are inferred to be distributed within zones of pervasive cataclastic shear on either side of the fault segment boundaries. The large variations in surface displacement across fault segment boundaries mean that the geometry of the fault must be known in order to evaluate slip-rates calculated from individual locations. The average Late Quaternary slip-rate on the Hope Fault at Glynn Wye Station is between 15.5 mm/yr and 18.25 mm/yr, and the rate on the subsidiary Kakapo Fault is between 5.0 mm/yr and 7.5 mm/yr. These rates have been determined from sites which are relatively free of structural complication.
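The abstract combines a magnitude estimate with an attenuation expression to predict near-source accelerations. As a hedged illustration of that step only (the thesis's actual attenuation expression is not given in the abstract), the sketch below uses a generic rock-site functional form with purely hypothetical coefficients, chosen so the output is of the same order as the reported 0.23-0.34 g range:

```python
import math

def pga_g(mw, r_km, c1=-3.5, c2=1.0, c3=1.3, h_km=10.0):
    """Generic attenuation form ln(PGA[g]) = c1 + c2*Mw - c3*ln(R + h).
    Coefficients are hypothetical placeholders, not the expression used in the thesis."""
    return math.exp(c1 + c2 * mw - c3 * math.log(r_km + h_km))

for mw in (6.5, 6.8):   # 1888 Amuri magnitude range quoted in the abstract
    print(f"Mw {mw}: PGA at 20 km ≈ {pga_g(mw, 20.0):.2f} g")
```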

Research papers, University of Canterbury Library

Probabilistic Structural Fire Engineering (PSFE) has been introduced to overcome the limitations of current conventional approaches used for the design of fire-exposed structures. Current structural fire design investigates worst-case fire scenarios and includes multiple thermal and structural analyses. PSFE permits buildings to be designed, with the help of a probabilistic approach, to a level of life safety or economic loss that may occur in future fire events. This thesis presents modifications to the adoption of a Performance-Based Earthquake Engineering (PBEE) framework in PSFE. The probabilistic approach runs through a series of interrelationships between different variables, and successive convolution integrals of these interrelationships result in probabilities of different measures. The process starts with the definition of a fire severity measure (FSM), which best relates fire hazard intensity with structural response. It is identified by satisfying efficiency and sufficiency criteria as described by the PBEE framework. The relationship between a fire hazard and the corresponding structural response is established by analysis methods. One method that has been used to quantify this relationship in PSFE is Incremental Fire Analysis (IFA). The existing IFA approach produces unrealistic fire scenarios, as fire profiles may be scaled to wide ranges of fire severity levels which may not physically represent any real fires. Two new techniques are introduced in this thesis to limit extensive scaling. In order to obtain an annual rate of exceedance of fire hazard and structural response for an office building, an occurrence model and an attenuation model for office fires are generated for both Christchurch city and New Zealand. The results show that Christchurch city is 15% less likely to experience fires that have the potential to cause structural failures in comparison to all of New Zealand. In establishing better predictive relationships between fires and structural response, cumulative incident radiation (a fire hazard property) is found to be the most appropriate fire severity measure. This research brings together existing research on various sources of uncertainty in probabilistic structural fire engineering, such as the factors affecting post-flashover fire development (fuel load, ventilation, surface lining and compartment geometry), fire models, analysis methods and structural reliability. Epistemic uncertainty and aleatory uncertainty are investigated in the thesis by examining the uncertainty associated with modelling and the factors that influence post-flashover development of fires. A survey of 12 buildings in Christchurch, in combination with recent surveys in New Zealand, produced new statistical data on post-flashover development factors in New Zealand office buildings. The effects of these parameters on temperature-time profiles are evaluated. The effects of epistemic uncertainty due to fire models in the estimation of structural response are also calculated. Parametric fires are found to have large uncertainty in the prediction of post-flashover fires, while the BFD curves have large uncertainties in the prediction of structural response. These uncertainties need to be incorporated into failure probability calculations. Uncertainty in structural modelling shows that the choices made during modelling have a large influence on realistic predictions of structural response.
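The "successive convolution integrals" mentioned above follow the PBEE-style framing λ(EDP > y) = ∫ P(EDP > y | FSM = x) |dλ(FSM > x)|. A minimal numerical sketch of that single convolution step is given below; the hazard curve, response model and variable names are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative annual hazard curve for a fire severity measure (FSM), lambda(FSM > x).
fsm = np.linspace(50.0, 1500.0, 300)      # e.g. cumulative incident radiation units
hazard = 1e-3 * np.exp(-fsm / 300.0)      # assumed exponential decay (placeholder)

def rate_response_exceeds(y, median_response_ratio=0.8, beta=0.4):
    """lambda(EDP > y) by convolving P(EDP > y | FSM) with the FSM hazard curve.
    The conditional response model is an assumed lognormal in FSM (placeholder)."""
    # P(EDP > y | FSM = x): median response grows with FSM; lognormal dispersion beta
    p_exceed = 1.0 - lognorm.cdf(y, s=beta, scale=median_response_ratio * fsm)
    # |d lambda| over each FSM increment, then integrate (numerical convolution)
    d_lambda = -np.gradient(hazard, fsm)
    return np.trapz(p_exceed * d_lambda, fsm)

print(f"annual rate of exceeding EDP threshold 600: {rate_response_exceeds(600.0):.2e}")
```

In the thesis this chain is driven by fire occurrence and attenuation models for office fires, so that the end result is an annual rate of exceedance of fire hazard and structural response for an office building.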

Research papers, University of Canterbury Library

The overarching goal of this dissertation is to improve predictive capabilities of geotechnical seismic site response analyses by incorporating additional salient physical phenomena that influence site effects. Specifically, multidimensional wave-propagation effects that are neglected in conventional 1D site response analyses are incorporated by: (1) combining results of 3D regional-scale simulations with 1D nonlinear wave-propagation site response analysis, and (2) modelling soil heterogeneity in 2D site response analyses using spatially-correlated random fields to perturb soil properties. A method to combine results from 3D hybrid physics-based ground motion simulations with site-specific nonlinear site response analyses was developed. The 3D simulations capture 3D ground motion phenomena on a regional scale, while the 1D nonlinear site response, which is informed by detailed site-specific soil characterization data, can capture site effects more rigorously. Simulations of 11 moderate-to-large earthquakes from the 2010-2011 Canterbury Earthquake Sequence (CES) at 20 strong motion stations (SMS) were validated against observed ground motions. The predictions were compared to those from an empirically-based ground motion model (GMM), and from 3D simulations with simplified VS30-based site effects modelling. By comparing all predictions to observations at seismic recording stations, it was found that the 3D physics-based simulations can predict ground motions with bias and uncertainty comparable to the GMM, albeit with significantly lower bias at long periods. Additionally, the explicit modelling of nonlinear site response improves predictions significantly compared to the simplified VS30-based approach for soft-soil or atypical sites that exhibit exceptionally strong site effects. A method to account for the spatial variability of soils and wave scattering in 2D site response analyses was developed and validated against a database of vertical array sites in California. The inputs required to run the 2D analyses are nominally the same as those required for 1D analyses (except for spatial correlation parameters), enabling easier adoption in practice. The first step was to create the platform and workflow, and to perform a sensitivity study involving 5,400 2D model realizations to investigate the influence of random field input parameters on wave scattering and site response. Boundary conditions were carefully assessed to understand their effect on the modelled response and to select appropriate assumptions for use on a 2D model with lateral heterogeneities. Multiple ground-motion intensity measures (IMs) were analyzed to quantify the influence from random field input parameters and boundary conditions. It was found that this method is capable of scattering seismic waves and creating spatially-varying ground motions at the ground surface. The redistribution of ground-motion energy across wider frequency bands, and the scattering attenuation of high-frequency waves in 2D analyses, resemble features observed in empirical transfer functions (ETFs) computed in other studies. The developed 2D method was subsequently extended to more complicated multi-layer soil profiles and applied to a database of 21 vertical array sites in California to test its appropriateness for future predictions. Again, different boundary condition and input motion assumptions were explored to extend the method to the in-situ conditions of a vertical array (with a sensor embedded in the soil).
ETFs were compared to theoretical transfer functions (TTFs) from conventional 1D analyses and 2D analyses with heterogeneity. Residuals of transfer-function-based IMs, and IMs of surface ground motions, were also used as validation metrics. The spatial variability of transfer-function-based IMs was estimated from 2D models and compared to the event-to-event variability from ETFs. This method was found capable of significantly improving predictions of median ETF amplification factors, especially for sites that display higher event-to-event variability. For sites that are well represented by 1D methods, the 2D approach can underpredict amplification factors at higher modes, suggesting that the level of heterogeneity may be over-represented by the 2D random field models used in this study.
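As an illustration of the spatially-correlated random field perturbation described above, the following minimal sketch (an assumption-laden example, not the dissertation's implementation) generates a 2D Gaussian field with a separable exponential autocorrelation via a covariance-matrix Cholesky factorisation and applies it as a multiplicative lognormal perturbation to a baseline shear-wave velocity profile; the grid size, correlation lengths and coefficient of variation are illustrative choices:

```python
import numpy as np

def correlated_vs_field(vs_profile, dx=5.0, dz=2.0, nx=40, corr_x=20.0, corr_z=5.0,
                        cov=0.15, seed=0):
    """Perturb a 1D Vs profile (one value per depth layer) into a 2D section using a
    zero-mean, unit-variance Gaussian random field with separable exponential
    autocorrelation, applied as a multiplicative lognormal perturbation.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    nz = len(vs_profile)
    # grid coordinates, flattened to build the full covariance matrix (small grids only)
    x = np.arange(nx) * dx
    z = np.arange(nz) * dz
    X, Z = np.meshgrid(x, z, indexing="ij")
    dx_mat = np.abs(X.ravel()[:, None] - X.ravel()[None, :])
    dz_mat = np.abs(Z.ravel()[:, None] - Z.ravel()[None, :])
    corr = np.exp(-dx_mat / corr_x - dz_mat / corr_z)       # separable exponential model
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(corr.shape[0]))
    field = (L @ rng.standard_normal(corr.shape[0])).reshape(nx, nz)
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))                # lognormal sigma from CoV
    return np.asarray(vs_profile)[None, :] * np.exp(sigma_ln * field)

vs_2d = correlated_vs_field(vs_profile=[180.0, 220.0, 300.0, 450.0, 760.0])
print(vs_2d.shape, vs_2d.min().round(1), vs_2d.max().round(1))
```

A spectral (FFT-based) generator would scale better to the grid sizes used in actual 2D site response models; the dense-covariance approach here is simply the most transparent way to show the correlation structure being imposed.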