Research papers, University of Canterbury Library

In the last century, seismic design has undergone significant advancements. Starting from the initial concept of designing structures to perform elastically during an earthquake, the modern seismic design philosophy allows structures to respond to ground excitations inelastically, thereby accepting damage even in earthquakes significantly less intense than the largest possible ground motion at the site of the structure. Current performance-based, multi-objective seismic design methods aim to ensure life safety in large and rare earthquakes, and to limit structural damage in frequent and moderate earthquakes. As a result, few recently built buildings have collapsed and very few people have been killed in 21st-century buildings, even in large earthquakes. Nevertheless, the financial losses to the community arising from damage and downtime in these earthquakes have been unacceptably high (for example, reported to be in excess of 40 billion dollars in the recent Canterbury earthquakes). In the aftermath of the huge financial losses incurred in recent earthquakes, the public has openly expressed dissatisfaction with the seismic performance of the built infrastructure. Because the current capacity-design-based seismic approach relies on inelastic response (i.e. ductility) in pre-identified plastic hinges, it encourages structures to sustain damage (and, inadvertently, to incur loss in the form of repair and downtime). It is now widely accepted that, while designing ductile structural systems according to the modern seismic design concept can largely ensure life safety during earthquakes, it also causes buildings to undergo substantial damage (and significant financial loss) in moderate earthquakes.

In a quest to match seismic design objectives with public expectations, researchers are exploring how financial loss can be brought into the decision-making process of seismic design. This has facilitated the conceptual development of loss optimisation seismic design (LOSD), which involves estimating likely financial losses in design-level earthquakes and comparing them against acceptable levels of loss to make design decisions (Dhakal 2010a). Adoption of a loss-based approach in seismic design standards would be a major paradigm shift in earthquake engineering, but it remains a long-term goal, as quantifying the interrelationships between earthquake intensity, engineering demand parameters, damage measures, and the different forms of losses for different types of buildings (and, more importantly, simplifying those interrelationships into design-friendly forms) will take a long time.

Dissecting the cost of modern buildings suggests that the structural components constitute only a minor portion of the total building cost (Taghavi and Miranda 2003). Moreover, recent research on seismic loss assessment has shown that damage to non-structural elements and building contents contributes dominantly to the total building loss (Bradley et al. 2009). In an earthquake, buildings can incur losses of three different forms (damage, downtime, and death/injury, commonly referred to as the 3Ds), but all three forms of seismic loss can be expressed in terms of dollars. It is also obvious that the latter two loss forms (i.e. downtime and death/injury) are related to the extent of damage, which, in a building, is not confined to the load-bearing (i.e. structural) elements.
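To make the chain described above concrete, the sketch below runs a deliberately simplified Monte Carlo pass from earthquake intensity, through an engineering demand parameter and a damage state, to a dollar loss, and compares the expected loss against an acceptable level, which is the kind of check LOSD envisages. It is only an illustrative Python sketch: the hazard, demand, fragility, and loss numbers are invented placeholders, not values from Dhakal (2010a) or any of the cited studies.

```python
# A minimal, hedged sketch (not from the cited work) of the intensity -> demand ->
# damage -> loss chain that a loss-based design check has to quantify.
# Every number below (medians, dispersions, loss ratios, acceptable-loss limit)
# is a hypothetical placeholder, not a calibrated model.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 50_000  # Monte Carlo realisations of one design-level event

# 1. Intensity measure (IM): spectral acceleration with record-to-record variability.
sa = rng.lognormal(mean=np.log(0.4), sigma=0.4, size=n)              # [g]

# 2. Engineering demand parameter (EDP): peak inter-storey drift from a
#    hypothetical power-law IM->EDP relation with lognormal scatter.
drift = 0.02 * sa**0.9 * rng.lognormal(mean=0.0, sigma=0.3, size=n)  # [rad]

# 3. Damage measure (DM): damage occurs when drift exceeds a random (lognormal)
#    drift capacity, i.e. a fragility curve handled by simulation.
capacity = rng.lognormal(mean=np.log(0.015), sigma=0.5, size=n)
damaged = drift > capacity

# 4. Loss (dollars, expressed as a fraction of building replacement value).
loss_ratio = np.where(damaged, 0.25, 0.02)

expected_loss = loss_ratio.mean()
acceptable_loss = 0.10  # owner/code-defined tolerance for this hazard level
verdict = "acceptable" if expected_loss <= acceptable_loss else "redesign required"
print(f"Expected loss ratio at the design-level event: {expected_loss:.2f} ({verdict})")
```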
As observed in recent earthquakes, even the secondary building components (such as ceilings, partitions, facades, windows, parapets, chimneys, and canopies) and contents can undergo substantial damage, which can lead to all three forms of loss (Dhakal 2010b). Hence, if financial losses are to be minimised during earthquakes, not only the structural systems but also the non-structural elements (such as partitions, ceilings, glazing, windows, etc.) should be designed for earthquake resistance, and valuable contents should be protected against damage during earthquakes.

Several innovative building technologies have been (and are being) developed to reduce building damage during earthquakes (Buchanan et al. 2011). Most of these developments aim at reducing damage to the buildings' structural systems without due attention to their effects on non-structural systems and building contents. For example, the PRESSS system or the Damage Avoidance Design concept aims to enable a building's structural system to meet the required displacement demand by rocking, without the structural elements having to deform inelastically, thereby avoiding damage to these elements. However, as this concept does not necessarily reduce the inter-storey drift or floor acceleration demands, the damage to non-structural elements and contents can still be high. Similarly, the concept of externally bracing/damping building frames reduces the drift demand (and consequently the structural damage and the drift-sensitive non-structural damage). Nevertheless, acceleration-sensitive non-structural elements and contents remain very vulnerable to damage because the floor accelerations are not reduced (and are arguably increased). Therefore, these concepts may not be able to substantially reduce the total financial losses in all types of buildings.

Among the emerging building technologies, base isolation looks very promising, as it seems to reduce both inter-storey drifts and floor accelerations, thereby reducing the damage to a building's structural and non-structural components and its contents. Undoubtedly, a base-isolated building will incur substantially reduced loss of all three forms (dollars, downtime, death/injury), even during severe earthquakes. However, base isolating a building, or applying any other beneficial technology, may incur additional initial costs. In order to provide incentives for builders/owners to adopt these loss-minimising technologies, the real-estate and insurance industries will have to acknowledge the reduced risk posed by (and enhanced resilience of) such buildings when setting rental/sale prices and insurance premiums.
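The point that drift-only reductions leave acceleration-sensitive components and contents exposed can be illustrated with a small, hedged comparison. The demand pairs and loss fractions below are hypothetical round numbers chosen only to show the trade-off between the schemes discussed above; they are not results from the cited work.

```python
# Hedged sketch: why reducing drift alone need not reduce total loss much.
# Hypothetical demands and loss ratios; not taken from the cited studies.

# (peak inter-storey drift [rad], peak floor acceleration [g]) for three schemes
schemes = {
    "conventional ductile frame": (0.020, 0.60),
    "braced/damped frame":        (0.008, 0.70),  # drift down, acceleration similar or higher
    "base-isolated building":     (0.006, 0.25),  # both demands reduced
}

def total_loss_ratio(drift, accel):
    """Very rough loss model: each loss group scales with demand up to a cap."""
    drift_sensitive = min(drift / 0.025, 1.0) * 0.30  # structure, partitions, facades
    accel_sensitive = min(accel / 0.80, 1.0) * 0.45   # ceilings, services, contents
    return drift_sensitive + accel_sensitive          # fraction of building value

for name, (d, a) in schemes.items():
    print(f"{name:28s} expected loss ratio ~ {total_loss_ratio(d, a):.2f}")
```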

Research papers, Lincoln University

Mitigating the cascade of environmental damage caused by the movement of excess reactive nitrogen (N) from land to sea is currently limited by difficulties in precisely and accurately measuring N fluxes, owing to variable rates of attenuation (denitrification) during transport. This thesis develops the use of the natural-abundance isotopic composition of nitrate (δ15N and δ18O of NO₃⁻) to integrate the spatial-temporal variability inherent to denitrification, creating an empirical framework for evaluating attenuation during land-to-water NO₃⁻ transfers. The technique is based on the knowledge that denitrifiers kinetically discriminate against 'heavy' forms of both N and oxygen (O), creating a parallel enrichment in the isotopes of both species as the reaction progresses. This discrimination can be quantitatively related to NO₃⁻ attenuation through isotopic enrichment factors (εdenit). However, while these principles are understood, the use of NO₃⁻ isotopes to quantify denitrification fluxes in non-marine environments has been limited by (1) poor understanding of εdenit variability and (2) difficulty in distinguishing the extent of mixing of isotopically distinct sources from the imprint of denitrification. Through a combination of critical literature analysis, mathematical modelling, mesocosm- to field-scale experiments, and empirical studies on two river systems over distance and time, these shortcomings are parametrised and a template for future NO₃⁻ isotope-based attenuation measurements is outlined.

Published εdenit values (n = 169) are collated in the literature analysis presented in Chapter 2. By evaluating these values in the context of known controllers of the denitrification process, it is found that the magnitude of εdenit, for both δ15N and δ18O, is controlled by (1) biology, (2) the mode of transport through the denitrifying zone (diffusion vs. advection), and (3) nitrification (the spatial-temporal distance between nitrification and denitrification). Based on the outcomes of this synthesis, the impact of the three factors identified as controlling εdenit is quantified in the context of freshwater systems by combining simple mathematical modelling and laboratory incubation studies (a comparison of natural variation in biological versus physical expression). Biologically defined εdenit, measured in sediments collected from four sites along a temperate stream and from three tropical submerged paddy fields, varied from -3‰ to -28‰ depending on each site's antecedent carbon content. Following diffusive transport to aerobic surface water, εdenit was found to become more homogeneous, but also lower, with the strength of the effect controlled primarily by the diffusive distance and the rate of denitrification in the sediments. I conclude that, given the variability in fractionation dynamics at all levels, applying a range of εdenit from -2‰ to -10‰ provides more accurate measurements of attenuation than attempting to establish a site-specific value.

Applying this understanding of denitrification's fractionation dynamics, four field studies were conducted to measure denitrification/NO₃⁻ attenuation across diverse terrestrial-to-freshwater systems. The development of NO₃⁻ isotopic signatures (i.e., the impact of nitrification, biological N fixation, and ammonia volatilisation on the isotopic 'imprint' of denitrification) was evaluated within two key agricultural regions: New Zealand grazed pastures (Chapter 4) and Philippine lowland submerged rice production (Chapter 5).
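The quantitative link between isotopic enrichment and attenuation described above is commonly expressed with the closed-system Rayleigh relation, δ ≈ δ₀ + εdenit·ln(f), where f is the fraction of NO₃⁻ remaining. The sketch below applies the bracketing range of εdenit recommended above (-2‰ to -10‰) to a pair of hypothetical δ15N values; the source and downstream values are invented for illustration only.

```python
# Hedged sketch: bracketing NO3- attenuation from 15N enrichment using the
# closed-system Rayleigh relation  delta ~ delta_0 + eps_denit * ln(f),
# where f is the fraction of NO3- remaining. The delta values are hypothetical;
# the -2 to -10 permil eps_denit range follows the recommendation above.
import math

delta15N_source   = 6.0   # permil, hypothetical NO3- entering the reach
delta15N_observed = 14.0  # permil, hypothetical NO3- measured downstream

for eps_denit in (-2.0, -10.0):                  # recommended bracketing range
    f = math.exp((delta15N_observed - delta15N_source) / eps_denit)
    attenuated = 1.0 - f                         # fraction removed by denitrification
    print(f"eps_denit = {eps_denit:5.1f} permil -> "
          f"~{attenuated:.0%} of the NO3- load attenuated")
```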
By measuring the isotopic composition of soil ammonium, NO₃⁻, and volatilised ammonia following bovine urine deposition, it was determined that the isotopic composition of NO₃⁻ leached from grazed pastures is defined by the balance between nitrification and denitrification, not by ammonia volatilisation. Consequently, NO₃⁻ created within pasture systems was predicted to range from +10‰ (δ15N) and -0.9‰ (δ18O) for non-fertilised fields (N limited) to -3‰ (δ15N) and +2‰ (δ18O) for grazed fertilised fields (N saturated). Denitrification was also the dominant determinant of NO₃⁻ signatures in the Philippine rice paddy. Using a site-specific εdenit for the paddy, N inputs versus attenuation could be calculated, revealing that >50% of the available N in the top 10 cm of soil was denitrified during land preparation, and >80% of the available N by two weeks post-transplanting. Intriguingly, this denitrification was driven by rapid NO₃⁻ production via nitrification of newly mineralised N during land preparation activities.

Building on the relevant range of εdenit established in Chapters 2 and 3, as well as the soil-zone confirmation that denitrification was the primary determinant of NO₃⁻ isotopic composition, two long-term longitudinal river studies were conducted to assess attenuation during transport. In Chapter 6, impact and recovery dynamics in an urban stream were assessed over six months along a longitudinal impact gradient using measurements of NO₃⁻ dual isotopes, biological populations, and stream chemistry. Within 10 days of the catastrophic Christchurch earthquake, dissolved oxygen in the lowest reaches was <1 mg l⁻¹, in-stream denitrification accelerated (attenuating 40-80% of sewage N), microbial biofilm communities changed, and several benthic invertebrate taxa disappeared. To test the strength of this method for tackling the diffuse, chronic N loading of streams in agricultural regions, two years of longitudinal measurements of NO₃⁻ isotopes were collected. Attenuation was negatively correlated with NO₃⁻ concentration and was highly dependent on rainfall: 93% of the calculated attenuation (20 kg NO₃⁻-N ha⁻¹ y⁻¹) occurred within 48 h of rainfall.

The results of these studies demonstrate the power of intensive measurements of NO₃⁻ stable isotopes for distinguishing temporal and spatial trends in NO₃⁻ loss pathways, and they potentially allow for improved catchment-scale management of agricultural intensification. Overall, this work provides a more cohesive framework for expanding the use of NO₃⁻ isotope measurements to generate an accurate understanding of the controls on N losses. This information is becoming increasingly important for predicting ecosystem responses to future changes, such as the increasing agricultural intensity needed to meet global food demand, which is occurring synergistically with unpredictable global climate change.
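As a closing illustration, the same Rayleigh relation can be applied as a time series with a single site-specific εdenit to track cumulative denitrification of a soil NO₃⁻ pool, in the spirit of the paddy N budget reported for Chapter 5 above. The εdenit and δ15N values below are hypothetical, not thesis data; they only reproduce the style of calculation.

```python
# Hedged sketch: applying a hypothetical site-specific eps_denit to a time series
# of soil NO3- d15N to track cumulative denitrification, mirroring the style of
# the paddy N budget described above. All numbers are illustrative placeholders.
import math

eps_denit = -15.0        # permil, hypothetical site-specific enrichment factor
delta15N_initial = 3.0   # permil, hypothetical NO3- at the start of land preparation

observations = [         # (sampling time, hypothetical d15N of the residual NO3- pool)
    ("end of land preparation",       14.0),
    ("two weeks post-transplanting",  28.0),
]

for label, delta in observations:
    f = math.exp((delta - delta15N_initial) / eps_denit)  # fraction of NO3- remaining
    print(f"{label:30s} ~{1 - f:.0%} of the initial NO3- pool denitrified")
```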