Christchurch Press 22 August 2014: Section B, Page 6 (Christchurch Edition)
Articles, UC QuakeStudies
Page 6 of Section B of the Christchurch edition of the Christchurch Press, published on Friday 22 August 2014.
Page 2 of Section B of the Christchurch edition of the Christchurch Press, published on Tuesday 1 July 2014.
Page 10 of Section B of the Christchurch edition of the Christchurch Press, published on Friday 17 October 2014.
Page 6 of Section B of the South Island edition of the Christchurch Press, published on Friday 22 August 2014.
Page 5 of Section B of the South Island edition of the Christchurch Press, published on Monday 6 October 2014.
Page 10 of Section B of the South Island edition of the Christchurch Press, published on Friday 17 October 2014.
Page 10 of Section B of the South Island edition of the Christchurch Press, published on Thursday 9 October 2014.
Page 12 of Section B of the South Island edition of the Christchurch Press, published on Friday 1 August 2014.
As a result of the Christchurch earthquake of 22 February 2011 and the resultant loss of life and widespread damage, a Royal Commission of Inquiry was convened in April 2011. The Royal Commission recommended a number of significant changes to the regulation of earthquake-prone buildings in New Zealand. Earthquake-prone buildings are buildings deemed to be of insufficient strength to perform adequately in a moderate earthquake. In response to the Royal Commission's recommendations, the New Zealand Government carried out a consultative process before announcing proposed changes to the building regulations in August 2013. One of the most significant changes is the imposition of mandatory strengthening requirements for earthquake-prone buildings on a national basis. This will have a significant impact on the urban fabric of most New Zealand towns and cities. The type of traditional cost-benefit study carried out to date fails to measure these impacts, and this paper proposes an alternative methodology based on the analysis of land use data and rating valuations. The methodology was developed and applied to a small provincial town in the form of a case study. The results of this case study and the methodology used are discussed in this paper.
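The paper's methodology is not detailed in the abstract, but its core idea (combining land use data with rating valuations) can be illustrated with a minimal sketch. The field names and sample figures below are entirely hypothetical, not the paper's data.

```python
# Illustrative sketch (not the paper's actual method): aggregating rating
# valuations by land-use category to gauge the share of rateable building
# value affected by mandatory strengthening. Field names are hypothetical.
from collections import defaultdict

def impact_by_land_use(parcels):
    """parcels: iterable of dicts with 'land_use', 'capital_value' (NZD),
    and 'earthquake_prone' (bool) keys."""
    totals = defaultdict(float)
    at_risk = defaultdict(float)
    for p in parcels:
        totals[p["land_use"]] += p["capital_value"]
        if p["earthquake_prone"]:
            at_risk[p["land_use"]] += p["capital_value"]
    # Fraction of rateable value that is earthquake prone, per land use
    return {use: at_risk[use] / totals[use] for use in totals}

sample = [
    {"land_use": "commercial", "capital_value": 450_000, "earthquake_prone": True},
    {"land_use": "commercial", "capital_value": 550_000, "earthquake_prone": False},
    {"land_use": "residential", "capital_value": 300_000, "earthquake_prone": False},
]
print(impact_by_land_use(sample))
```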
A PDF copy of a newsletter sent by All Right? to their mailing list in September 2014.
A PDF copy of a newsletter sent by All Right? to their mailing list in June 2014.
The 2010–2011 Canterbury earthquake sequence began with the 4 September 2010, Mw7.1 Darfield earthquake and includes up to ten events that induced liquefaction. Most notably, widespread liquefaction was induced by the Darfield and Mw6.2 Christchurch earthquakes. The combination of well-documented liquefaction response during multiple events, densely recorded ground motions for the events, and detailed subsurface characterization provides an unprecedented opportunity to add well-documented case histories to the liquefaction database. This paper presents and applies 50 high-quality cone penetration test (CPT) liquefaction case histories to evaluate three commonly used, deterministic, CPT-based simplified liquefaction evaluation procedures. While all the procedures predicted the majority of the cases correctly, the procedure proposed by Idriss and Boulanger (2008) results in the lowest error index for the case histories analyzed, thus indicating better predictions of the observed liquefaction response.
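All of the simplified procedures compared in the paper share the same skeleton: a cyclic stress ratio (CSR) imposed by the earthquake is compared with the soil's cyclic resistance ratio (CRR). The sketch below uses the common Seed-Idriss form with the Liao and Whitman depth-reduction factor; it is a generic illustration, not the exact Idriss and Boulanger (2008) relations.

```python
# Generic skeleton of the CPT-based "simplified procedure": compare the
# earthquake-imposed cyclic stress ratio (CSR) against the soil's cyclic
# resistance ratio (CRR). Coefficients follow the common Seed-Idriss form,
# NOT the exact Idriss & Boulanger (2008) implementation.

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * rd."""
    # Liao & Whitman (1986) depth-reduction factor, valid for z <= ~9 m
    rd = 1.0 - 0.00765 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr):
    """FS < 1 indicates liquefaction triggering is predicted."""
    return crr / csr

# Example: 0.3 g shaking, total/effective stresses of 95/55 kPa at 5 m depth
csr = cyclic_stress_ratio(a_max_g=0.3, sigma_v=95.0, sigma_v_eff=55.0, depth_m=5.0)
print(round(csr, 3), factor_of_safety(0.25, csr) < 1.0)
```

Each of the procedures the paper evaluates differs mainly in how CRR is correlated to CPT tip resistance and fines content, while this CSR side stays broadly similar.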
Following the 2010-2011 earthquakes in Canterbury, New Zealand, the University of Canterbury (UC) was faced with the need to respond to major challenges in its teaching and learning environment. With the recognition of education as a key component to the recovery of the Canterbury region, UC developed a plan for the transformation and renewal of the campus. Central to this renewal is human capital – graduates who are distinctly resilient and broadly skilled, owing in part to their living and rebuilding through a disaster. Six desired graduate attributes have been articulated through this process: knowledge and skills of a recognized subject, critical thinking skills, the ability to interpret information from a range of sources, the ability to self-direct learning, cultural competence, and the recognition of global connections through social, ethical, and environmental values. All of these attributes may readily be identified in undergraduate geoscience field education and graduate field-based studies, and this is particularly important to highlight in a climate where the logistical and financial requirements of fieldwork are becoming a barrier to its inclusion in undergraduate curricula. Fieldwork develops discipline-specific knowledge and skills and fosters independent and critical thought. It encourages students to recognize and elaborate upon relevant information, plan ways to solve complicated problems, execute and re-evaluate these plans. These decisions are largely made by the learners, who often direct their own field experience. The latter two key graduate attributes, cultural competence and global recognition of socio-environmental values, have been explicitly addressed in field education elsewhere and there is potential to do so within the New Zealand context. These concepts are inherent to the sense of place of geoscience undergraduates and are particularly important when the field experience is viewed through the lens of landscape heritage. 
This work highlights the need to understand how geoscience students interact with field places, with unique implications for their cultural and socio-environmental awareness as global citizens, as well as the influence that field pedagogy has on these factors.
Using case studies from the 2010-2011 Canterbury, New Zealand earthquake sequence, this study assesses the accuracies of paleoliquefaction back-analysis methods and explores the challenges, techniques, and uncertainties associated with their application. While liquefaction-based back-analyses have been widely used to estimate the magnitudes of paleoearthquakes, their uncertain efficacies continue to significantly affect the computed seismic hazard in regions where they are relied upon. Accordingly, their performance is evaluated herein using liquefaction data from modern earthquakes with known magnitudes. It is shown that when the earthquake source location and mechanism are known, back-analysis methods are capable of accurately deriving seismic parameters from liquefaction evidence. However, because the source location and mechanism are often unknown in paleoseismic studies, and because accurate interpretation is shown to be more difficult in such cases, new analysis techniques are proposed herein. An objective parameter is proposed to geospatially assess the likelihood of any provisional source location, enabling an analyst to more accurately estimate the magnitude of a liquefaction-inducing paleoearthquake. This study demonstrates the application of back-analysis methods, provides insight into their potential accuracies, and provides a framework for performing paleoliquefaction analyses worldwide.
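The geospatial search described in the abstract can be caricatured in a few lines: sweep candidate source locations and, for each, find the smallest magnitude whose liquefaction "reach" covers every liquefied site, then prefer the most parsimonious candidate. The distance-bound coefficients below are placeholders, not a published relation, and the objective is far simpler than the parameter the study actually proposes.

```python
# Illustrative back-analysis sketch: grid-search candidate source locations
# and, for each, find the smallest magnitude whose liquefaction reach
# (placeholder bound: log10 R_max = A*Mw - B) covers every liquefied site.
import math

A, B = 0.5, 1.0  # placeholder coefficients of the magnitude-bound curve

def required_magnitude(source, liquefied_sites):
    """Minimum Mw (per the placeholder bound) that reaches all sites."""
    def mw_for(site):
        r = math.dist(source, site)  # km, in a projected coordinate system
        return (math.log10(max(r, 0.1)) + B) / A
    return max(mw_for(s) for s in liquefied_sites)

def best_source(candidates, liquefied_sites):
    """Prefer the candidate needing the smallest magnitude (parsimony)."""
    return min(candidates, key=lambda c: required_magnitude(c, liquefied_sites))

sites = [(0.0, 0.0), (8.0, 3.0), (5.0, -4.0)]  # invented liquefaction sites
grid = [(x, y) for x in range(0, 11, 5) for y in range(-5, 6, 5)]
src = best_source(grid, sites)
print(src, round(required_magnitude(src, sites), 2))
```

A real paleoliquefaction study would also weigh sites where liquefaction was absent, which bounds the magnitude from above as well as below.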
Active faults capable of generating highly damaging earthquakes may not cause surface rupture (i.e., blind faults) or may cause surface ruptures that evade detection due to subsequent burial or erosion by surface processes. Fault populations and earthquake frequency-magnitude distributions adhere to power laws, implying that faults too small to cause surface rupture but large enough to cause localized strong ground shaking densely populate continental crust. The rupture of blind, previously undetected faults beneath Christchurch, New Zealand in a suite of earthquakes in 2010 and 2011, including the fatal 22 February 2011 moment magnitude (Mw) 6.2 Christchurch earthquake and other large aftershocks, caused a variety of environmental impacts, including major rockfall, severe liquefaction, and differential surface uplift and subsidence. All of these effects occurred where geologic evidence for penultimate effects of the same nature existed. To what extent could the geologic record have been used to infer the presence of proximal, blind and/or unidentified faults near Christchurch? In this instance, we argue that phenomena induced by high-intensity shaking, such as rock fragmentation and rockfall, revealed the presence of proximal active faults in the Christchurch area prior to the recent earthquake sequence. Development of robust earthquake shaking proxy datasets should become a higher scientific priority, particularly in populated regions.
This poster provides a summary of the development of a 3D shallow (z < 40 m) shear wave velocity (Vs) model for the urban Christchurch, New Zealand region. The model is based on a recently developed Christchurch-specific empirical correlation between Vs and cone penetration test (CPT) data (McGann et al. 2014a,b) and the large high-density database of CPT logs in the greater Christchurch urban area (>15,000 logs as of 01/01/2014). In particular, the 3D model provides shear wave velocities for the surficial Springston Formation, Christchurch Formation, and Riccarton gravel layers which generally comprise the upper 40 m in the Christchurch urban area. Point estimates are provided on a 200 m by 200 m grid from which interpolation to other locations can be performed. This model has applications for future site characterization and numerical modelling efforts via maps of time-averaged Vs over specific depths (e.g. Vs30, Vs10) and via the identification of typical Vs profiles for different regions and soil behaviour types within Christchurch. In addition, the Vs model can be used to constrain the near-surface velocities for the 3D seismic velocity model of the Canterbury basin (Lee et al. 2014) currently being developed for the purpose of broadband ground motion simulation.
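The time-averaged Vs metrics mentioned above (Vs30, Vs10) are defined as total depth divided by total vertical shear-wave travel time through the layers. A minimal sketch, with an invented layered profile standing in for the model's point estimates:

```python
def time_averaged_vs(layers, z_max=30.0):
    """Time-averaged shear wave velocity over the top z_max metres.

    layers: list of (thickness_m, vs_m_per_s) tuples, top-down.
    Vs,z = z_max / sum(h_i / Vs_i), i.e. total depth over total travel time.
    """
    depth, travel_time = 0.0, 0.0
    for h, vs in layers:
        h = min(h, z_max - depth)  # clip the layer that crosses z_max
        travel_time += h / vs
        depth += h
        if depth >= z_max:
            break
    if depth < z_max:
        raise ValueError("profile shallower than z_max")
    return z_max / travel_time

# Hypothetical Christchurch-like profile: soft surficial sands over gravel
profile = [(5.0, 140.0), (15.0, 180.0), (20.0, 300.0)]
print(round(time_averaged_vs(profile), 1))  # Vs30; pass z_max=10.0 for Vs10
```

Note the harmonic (travel-time) averaging: slow near-surface layers dominate the result, which is why shallow CPT-based Vs estimates are so valuable for site classification.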
A non-destructive hardness testing method has been developed to investigate the amount of plastic strain demand in steel elements subjected to cyclic loading. The focus of this research is on application to the active links of eccentrically braced frames (EBFs), which are a commonly used seismic-resisting system in modern steel framed buildings. The 2010/2011 Christchurch earthquake series, especially the very intense 22 February shaking, was the first earthquake worldwide to push complete EBF systems fully into their inelastic state, generating a moderate to high level of plastic strain in EBF active links for a range of buildings from 3 to 23 storeys in height. This raised two important questions: 1) what was the extent of plastic deformation in active links; and 2) what effect does that have on post-earthquake steel properties? This project comprised determining a robust relationship between hardness and plastic strain in order to answer the first question and provide the necessary input for answering the second. A non-destructive Leeb (portable) hardness tester (model TH170) was used to measure hardness, in order to determine plastic strain, in hot rolled steel universal sections and steel plates. A bench-top Rockwell B tester was used to compare with and validate the hardness measured by the portable hardness tester. Hardness was measured on monotonically strained tensile test specimens to identify the relationship between hardness and plastic strain demand. Test results confirmed a good relationship between hardness and the amount of monotonically induced plastic strain. Surface roughness was identified as an important parameter in obtaining reliable hardness readings from a portable hardness tester. A proper surface preparation method was established by trialling three different cleaning methods, finished with hand sanding, to achieve surface roughness coefficients sufficiently low not to distort the results.
This work showed that a test surface roughness (Ra) of no more than 1.6 micrometres (μm) is required for accurate readings from the TH170 tester. A case study on an earthquake-affected building was carried out to identify the relationship between hardness and the amount of plastic strain demand in cyclically deformed active links. Hardness testing was carried out on the active links shown visually to have been the most affected during one of the major earthquake events. Onsite hardness test results were then compared with laboratory hardness test results. A good relationship was observed between the two test methods: the bench-top Rockwell B and the portable Leeb tester TH170. Manufacturing-induced plastic strain in the top and bottom of the webs of hot rolled sections was discovered in this research, an important result which explains why visual effects of earthquake-induced active link yielding (e.g. cracked or flaking paint) were typically more prevalent over the middle half-depth of the active link. The extent of this was quantified. It was also evident that the hardness readings from the portable hardness tester are influenced by the geometry, mass effects and rigidity of the links. The final experimental stage was application of the method to full-scale, cyclically tested, nominally identical active links subjected to loading regimes comprising constant and variable plastic strain demands. The links were cyclically loaded to achieve different plastic strain levels. A novel Digital Image Correlation (DIC) technique was incorporated during tests at this scale to confirm the level of plastic strain achieved. Tensile test specimens were water-jet cut from the cyclically deformed webs to analyse the level of plastic strain. Test results show clear evidence that cyclically deformed structural steel elements exhibit a good correlation between hardness and the amount of plastic strain demand.
The DIC method was found to be reliable and accurate for checking the level of plastic strain within cyclically deformed structural steel elements.
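Conceptually, the calibration step above amounts to fitting hardness against known plastic strain from tensile coupons, then inverting that fit to estimate strain in an earthquake-affected link. The sketch below uses a simple linear fit and invented data points; the thesis's actual calibration data and functional form may differ.

```python
# Sketch of calibrating a hardness -> plastic strain relationship from
# monotonic tensile coupons. Data points are invented for illustration;
# a linear fit is an assumption (the true relation may be nonlinear).

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# (plastic strain %, Leeb-type hardness) -- hypothetical calibration data
strain = [0.0, 2.0, 4.0, 6.0, 8.0]
hardness = [400.0, 412.0, 425.0, 436.0, 449.0]

m, c = linear_fit(strain, hardness)

def strain_from_hardness(h):
    """Invert the calibration to estimate plastic strain in a link web."""
    return (h - c) / m

print(round(strain_from_hardness(430.0), 2))
```

In practice the inversion would only be applied to readings taken with the surface preparation and geometry corrections the thesis establishes.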
Since the early 1980s, seismic hazard assessment in New Zealand has been based on Probabilistic Seismic Hazard Analysis (PSHA). The most recent version of the New Zealand National Seismic Hazard Model, a PSHA model, was published by Stirling et al. in 2012. This model follows standard PSHA principles and combines a nation-wide model of active faults with a gridded point-source model based on the earthquake catalogue since 1840. These models are coupled with the ground-motion prediction equation of McVerry et al. (2006). Additionally, we have developed a time-dependent, clustering-based PSHA model for the Canterbury region (Gerstenberger et al., 2014) in response to the Canterbury earthquake sequence. We are now in the process of revising the national model. In this process we are investigating several of the fundamental assumptions of traditional PSHA and of how we have modelled hazard in the past. This project has three main focuses: 1) how do we design an optimal combination of multiple sources of information to produce the best forecast of earthquake rates in the next 50 years: can we improve upon a simple hybrid of fault sources and background sources, and can we better handle the uncertainties in the data and models (e.g., fault segmentation, frequency-magnitude distributions, time-dependence and clustering, low strain-rate areas, and subduction zone modelling)? 2) developing revised and new ground-motion prediction models that better capture epistemic uncertainty – a key focus in this work is developing a new strong ground motion catalogue for model development; and 3) how can we best quantify whether changes we have made in our modelling are truly improvements? Throughout this process we are working toward incorporating numerical modelling results from physics-based synthetic seismicity and ground-motion models.
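The hybrid of fault and background sources discussed above rests on the standard PSHA kernel: the annual rate of exceeding a ground-motion level is the sum over sources of rupture rate times the probability that the resulting motion exceeds the level. A minimal sketch with a lognormal ground-motion model; the source rates and medians are invented, not National Seismic Hazard Model values.

```python
# Minimal PSHA sketch: annual exceedance rate = sum over sources of
# (rupture rate) x P(IM > im | rupture), with a lognormal ground-motion
# model. Source rates and median PGAs are invented illustration numbers.
import math

def p_exceed(im, median, sigma_ln):
    """P(IM > im) for a lognormal IM distribution (complementary CDF)."""
    z = (math.log(im) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Each source: (annual rate of ruptures, median PGA in g at the site).
# A fault source is typically rare but strong; background sources are
# frequent but weak -- the hybrid simply sums both kinds.
sources = [(0.01, 0.40), (0.10, 0.12), (1.00, 0.03)]

def annual_exceedance_rate(im, sigma_ln=0.6):
    return sum(rate * p_exceed(im, med, sigma_ln) for rate, med in sources)

# Hazard curve: exceedance rate falls as the PGA level rises
for pga in (0.1, 0.2, 0.4):
    print(pga, annual_exceedance_rate(pga))
```

Time-dependent clustering models replace the constant rupture rates with rates conditioned on recent seismicity, but the summation structure is unchanged.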
Recent experiences from the Darfield and Canterbury, New Zealand earthquakes have shown that the soft soil condition of saturated liquefiable sand has a profound effect on the seismic response of buildings, bridges and other lifeline infrastructure. For detailed evaluation of seismic response, a three-dimensional integrated analysis comprising structure, foundation and soil is required; such an integrated analysis is referred to in the literature as Soil Foundation Structure Interaction (SFSI). SFSI is a three-dimensional problem for three primary reasons: first, foundation systems are three-dimensional in form and geometry; second, ground motions are three-dimensional, producing complex multiaxial stresses in soils, foundations and structures; and third, soils in particular are sensitive to complex stress states because the heterogeneity of soils leads to highly anisotropic constitutive behaviour. In the literature, the majority of seismic response analyses are limited to the plane-strain configuration because of the lack of adequate constitutive models, both for soils and structures, and because of computational limitations. Such two-dimensional analyses do not represent a complete view of the problem, for the three reasons noted above. In this context, the present research aims to develop a three-dimensional mathematical formulation of an existing plane-strain elasto-plastic constitutive model of sand developed by Cubrinovski and Ishihara (1998b). This model has been specially formulated to simulate the liquefaction behaviour of sand under earthquake-induced cyclic loading, and has been well validated and widely implemented in the verification of shake table and centrifuge tests, as well as in conventional ground response analysis and the evaluation of case histories. The approach adopted herein is based entirely on the mathematical theory of plasticity and utilises some unique features of the bounding surface plasticity formalised by Dafalias (1986).
The principal constitutive parameters, equations, assumptions and empiricism of the existing plane-strain model are adopted in their exact form in the three-dimensional version. Therefore, the original two-dimensional model can be considered a true subset of the three-dimensional form; the original model is retrieved when the tensorial quantities of the three-dimensional version are reduced to those of the plane-strain configuration. An anisotropic Drucker-Prager-type failure surface has been adopted for the three-dimensional version to accommodate triaxial stress paths. Accordingly, a new mixed hardening rule based on Mroz's approach of homogeneous surfaces (Mroz, 1967) has been introduced for the virgin loading surface. The three-dimensional version is validated against experimental data for cyclic torsional and triaxial stress paths.
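For orientation, a Drucker-Prager-type surface is written in terms of the stress invariants I1 (mean pressure) and J2 (deviatoric intensity). The sketch below evaluates the simple isotropic form f = sqrt(J2) - alpha*I1 - k; it omits the anisotropy and mixed hardening of the actual model, and alpha and k are illustrative numbers.

```python
# Sketch of a Drucker-Prager-type yield check in stress invariants
# (f = sqrt(J2) - alpha*I1 - k; f <= 0 means inside the surface).
# Isotropic form only; alpha and k values are illustrative.
import math

def invariants(sig):
    """I1 and J2 of a 3x3 (symmetric) stress tensor, compression-positive."""
    i1 = sig[0][0] + sig[1][1] + sig[2][2]
    mean = i1 / 3.0
    dev = [[sig[i][j] - (mean if i == j else 0.0) for j in range(3)]
           for i in range(3)]
    j2 = 0.5 * sum(dev[i][j] ** 2 for i in range(3) for j in range(3))
    return i1, j2

def drucker_prager(sig, alpha=0.2, k=5.0):
    """Yield function value; > 0 means the stress state is outside the surface."""
    i1, j2 = invariants(sig)
    return math.sqrt(j2) - alpha * i1 - k

# Isotropic compression (100 kPa all round): zero deviator, well inside
iso = [[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 100.0]]
print(drucker_prager(iso))
```

The pressure-dependent I1 term is what distinguishes this family of surfaces from pressure-independent (von Mises) metal plasticity, and is why it suits sands.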
In the last century, seismic design has undergone significant advancements. Starting from the initial concept of designing structures to perform elastically during an earthquake, the modern seismic design philosophy allows structures to respond to ground excitations in an inelastic manner, thereby allowing damage in earthquakes that are significantly less intense than the largest possible ground motion at the site of the structure. Current performance-based multi-objective seismic design methods aim to ensure life-safety in large and rare earthquakes, and to limit structural damage in frequent and moderate earthquakes. As a result, not many recently built buildings have collapsed, and very few people have been killed in 21st-century buildings, even in large earthquakes. Nevertheless, the financial losses to the community arising from damage and downtime in these earthquakes have been unacceptably high (for example, reported to be in excess of 40 billion dollars in the recent Canterbury earthquakes). In the aftermath of the huge financial losses incurred in recent earthquakes, the public has unabashedly shown its dissatisfaction with the seismic performance of the built infrastructure. As the current capacity-design-based seismic design approach relies on inelastic response (i.e. ductility) in pre-identified plastic hinges, it encourages structures to sustain damage (and inadvertently to incur loss in the form of repair and downtime). It is now widely accepted that while designing ductile structural systems according to the modern seismic design concept can largely ensure life-safety during earthquakes, it also causes buildings to undergo substantial damage (and significant financial loss) in moderate earthquakes. In a quest to match the seismic design objectives with public expectations, researchers are exploring how financial loss can be brought into the decision-making process of seismic design.
This has facilitated the conceptual development of loss optimisation seismic design (LOSD), which involves estimating likely financial losses in design-level earthquakes and comparing them against acceptable levels of loss to make design decisions (Dhakal 2010a). Adoption of a loss-based approach in seismic design standards will be a big paradigm shift in earthquake engineering, but it is still a long-term dream, as the quantification of the interrelationships between earthquake intensity, engineering demand parameters, damage measures, and different forms of losses for different types of buildings (and, more importantly, the simplification of these interrelationships into design-friendly forms) will require a long time. Dissecting the cost of modern buildings suggests that the structural components constitute only a minor portion of the total building cost (Taghavi and Miranda 2003). Moreover, recent research on seismic loss assessment has shown that damage to non-structural elements and building contents contributes dominantly to the total building loss (Bradley et al. 2009). In an earthquake, buildings can incur losses of three different forms (damage, downtime, and death/injury, commonly referred to as the 3Ds); but all three forms of seismic loss can be expressed in terms of dollars. It is also obvious that the latter two loss forms (i.e. downtime and death/injury) are related to the extent of damage, which, in a building, will not be constrained just to the load-bearing (i.e. structural) elements. As observed in recent earthquakes, even secondary building components (such as ceilings, partitions, facades, windows, parapets, chimneys, canopies) and contents can undergo substantial damage, which can lead to all three forms of loss (Dhakal 2010b). Hence, if financial losses are to be minimised during earthquakes, not only the structural systems but also the non-structural elements (such as partitions, ceilings, glazing, windows etc.)
should be designed for earthquake resistance, and valuable contents should be protected against damage during earthquakes. Several innovative building technologies have been (and are being) developed to reduce building damage during earthquakes (Buchanan et al. 2011). Most of these developments are aimed at reducing damage to buildings' structural systems without due attention to their effects on non-structural systems and building contents. For example, the PRESSS system or Damage Avoidance Design concept aims to enable a building's structural system to meet the required displacement demand by rocking, without the structural elements having to deform inelastically, thereby avoiding damage to these elements. However, as this concept does not necessarily reduce the inter-storey drift or floor acceleration demands, the damage to non-structural elements and contents can still be high. Similarly, the concept of externally bracing/damping building frames reduces the drift demand (and consequently reduces the structural damage and drift-sensitive non-structural damage). Nevertheless, the acceleration-sensitive non-structural elements and contents will still be very vulnerable to damage, as the floor accelerations are not reduced (arguably increased). Therefore, these concepts may not be able to substantially reduce the total financial losses in all types of buildings. Among the emerging building technologies, base isolation looks very promising, as it seems to reduce both inter-storey drifts and floor accelerations, thereby reducing the damage to the structural and non-structural components of a building and its contents. Undoubtedly, a base-isolated building will incur substantially reduced losses of all three forms (dollars, downtime, death/injury), even during severe earthquakes. However, base isolating a building or applying any other beneficial technology may incur additional initial costs.
In order to provide incentives for builders/owners to adopt these loss-minimising technologies, real-estate and insurance industries will have to acknowledge the reduced risk posed by (and enhanced resilience of) such buildings in setting their rental/sale prices and insurance premiums.
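The trade-off described above (extra initial cost versus reduced losses) is usually assessed through an expected annual loss (EAL) calculation: the loss ratio at each shaking level weighted by how often that level occurs, i.e. the negative increment of the hazard curve. The hazard and vulnerability numbers below are invented for illustration.

```python
# Sketch of the loss-estimation kernel behind LOSD: expected annual loss
# as loss ratio per shaking level times the annual rate of motions at
# that level (negative hazard-curve increment). All numbers invented.

# Hazard curve: (PGA in g, annual rate of exceedance), ordered by PGA
hazard = [(0.1, 0.050), (0.2, 0.010), (0.4, 0.002), (0.6, 0.0005)]

def loss_ratio(pga):
    """Hypothetical vulnerability: fraction of building value lost."""
    return min(1.0, (pga / 0.8) ** 2)

def expected_annual_loss(building_value):
    eal = 0.0
    for (pga_lo, rate_lo), (pga_hi, rate_hi) in zip(hazard, hazard[1:]):
        occurrence = rate_lo - rate_hi           # rate of motions in this bin
        mid_loss = loss_ratio(0.5 * (pga_lo + pga_hi))
        eal += occurrence * mid_loss * building_value
    # Motions beyond the last tabulated level
    eal += hazard[-1][1] * loss_ratio(hazard[-1][0]) * building_value
    return eal

print(round(expected_annual_loss(1_000_000)))
```

A damage-reducing technology flattens the vulnerability function, and comparing the resulting drop in EAL against the technology's amortised initial cost (and any insurance-premium reduction) is exactly the decision the passage argues the real-estate and insurance industries should support.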