found 342 results

Research papers, University of Canterbury Library

Research on human behaviour during earthquake shaking has identified three main influences on behaviour: (1) the environment in which the individual is located immediately before and during the earthquake, in terms of where the individual is and who they are with at the time; (2) individual characteristics, such as age, gender, and previous earthquake experience; and (3) the intensity and duration of earthquake shaking. However, little research to date has systematically analysed immediate, observable human responses to earthquake shaking, mostly due to data constraints and/or ethical considerations. Research on human behaviour during earthquakes has instead relied on simulations or on post-event, reflective interviews and questionnaire studies, often performed weeks to months, or even years, after the event. Such studies are therefore subject to limitations such as the quality of participants' memories or the (perceived) realism of a simulation. The aim of this research was to develop a robust coding scheme for analysing human behaviour during earthquake shaking using video footage captured during an earthquake event. This allows systematic analysis of individuals during real earthquakes using a previously unutilised data source, thus helping to develop guidance on appropriate protective actions. The coding scheme was developed in a two-part process combining deductive and inductive approaches. Previous studies of human behavioural response during earthquake shaking provided the basis for the coding scheme, which was then iteratively refined by applying it to a broad range of video footage of people exposed to strong shaking during the Canterbury earthquake sequence. The aim was to optimise the coding scheme's content and application across a broad range of scenarios, and to increase inter-coder reliability.
The coding methodology will support objective observation of video footage, allowing cross-event analysis and the exploration of (among other factors) reaction time, patterns of behaviour, and the social, environmental and situational influences on behaviour. This can provide guidance for building configuration and design, and evidence-based recommendations for public education about injury-preventing behavioural responses during earthquake shaking.
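Inter-coder reliability, as mentioned above, is usually quantified with a chance-corrected agreement statistic. The abstract does not name one, but Cohen's kappa is a common choice for two coders applying a categorical coding scheme; the sketch below uses made-up behaviour codes and segment labels purely for illustration:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' label sequences."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders labelling the same eight video segments (hypothetical codes)
coder1 = ["hold", "hold", "move", "freeze", "move", "hold", "freeze", "move"]
coder2 = ["hold", "move", "move", "freeze", "move", "hold", "freeze", "hold"]
print(round(cohens_kappa(coder1, coder2), 3))  # 0.619
```

Iterative refinement of a coding scheme can then be tracked by recomputing kappa after each revision, with higher values indicating that coders apply the scheme more consistently.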

Research papers, The University of Auckland Library

The Screw Driving Sounding (SDS) method developed in Japan is a relatively new in situ testing technique for characterising soft shallow sites, typically those required for residential house construction. An SDS machine drills a rod into the ground in several loading steps while the rod is continuously rotated. Several parameters, such as torque, load and speed of penetration, are recorded at every rotation of the rod. The SDS method has been introduced in New Zealand, and the results of its application for characterising local sites are discussed in this study. A total of 164 SDS tests were conducted in Christchurch, Wellington and Auckland to validate and adjust the methodologies originally developed on the basis of Japanese practice. Most of the tests were conducted at sites where cone penetration tests (CPT), standard penetration tests (SPT) and borehole logs were available; the comparison of SDS results with existing information showed that the SDS method has great potential as an in situ testing method for classifying soils. By compiling the SDS data from the three cities and comparing them with the borehole logs, a soil classification chart was generated for identifying soil type from SDS parameters. In addition, a correlation between fines content and SDS parameters was developed, and a procedure for estimating the angle of internal friction of sand from SDS parameters was investigated. Furthermore, a correlation was established between the tip resistance of the CPT and the SDS data for different percentages of fines content, and a relationship between the SPT N value and an SDS parameter was also proposed. This thesis also presents a methodology for identifying liquefiable soil layers using SDS data. SDS tests were performed in both liquefied and non-liquefied areas in Christchurch to find a representative parameter and relationship for predicting the liquefaction potential of soil.
Plots were drawn of the cyclic shear stress ratios (CSR) induced by the earthquakes against the corresponding energy of penetration during SDS tests. By identifying liquefied and non-liquefied layers using three popular CPT-based methods, boundary lines corresponding to various probabilities of liquefaction were developed for different ranges of fines content using logistic regression analysis; these can then be used to estimate the liquefaction potential of soil directly from SDS data. Finally, the drilling process involved in screw driving sounding was simulated using Abaqus software. The analysis results showed that the model successfully captured the drilling process of the SDS machine in sand. In addition, a chart for predicting the peak friction angle of sandy sites from measured SDS parameters at various vertical effective stresses was formulated. As a simple, fast and economical test, the SDS method can be a reliable alternative in situ test for soil and site characterisation, especially for residential house construction.
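The boundary-line approach described above can be illustrated in miniature. The sketch below fits a logistic model of liquefaction probability to hypothetical (not the study's) CSR and penetration-energy values using plain batch gradient descent; the 50% probability boundary is where the fitted linear term equals zero:

```python
import math

def fit_logistic(csr, energy, liq, lr=0.5, steps=10000):
    """Fit P(liquefaction) = sigmoid(b0 + b1*CSR + b2*energy) by gradient descent."""
    b0 = b1 = b2 = 0.0
    n = len(liq)
    for _ in range(steps):
        g0 = g1 = g2 = 0.0
        for c, en, t in zip(csr, energy, liq):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * c + b2 * en)))
            err = p - t
            g0 += err
            g1 += err * c
            g2 += err * en
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
        b2 -= lr * g2 / n
    return b0, b1, b2

# Hypothetical case histories: CSR, normalised penetration energy, liquefied (1) or not (0)
csr    = [0.10, 0.15, 0.22, 0.30, 0.12, 0.28, 0.35, 0.18]
energy = [0.90, 0.80, 0.40, 0.30, 0.70, 0.35, 0.20, 0.60]
liq    = [0,    0,    1,    1,    0,    1,    1,    0]

b0, b1, b2 = fit_logistic(csr, energy, liq)
# The 50% probability boundary is the line b0 + b1*CSR + b2*energy = 0
```

Boundary lines for other probabilities (say 20% or 80%) follow the same form with the sigmoid inverted at that probability, which is how a family of probability contours can be drawn over the CSR-energy plane.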

Research papers, University of Canterbury Library

The UC CEISMIC Canterbury Earthquakes Digital Archive was built following the devastating earthquakes that hit the Canterbury region in the South Island of New Zealand from 2010 to 2012. 185 people were killed in the magnitude 6.3 earthquake of 22 February 2011, thousands of homes and businesses were destroyed, and the local community endured over 10,000 aftershocks. The programme aims to document and protect the social, cultural, and intellectual legacy of the Canterbury community for the purposes of memorialisation and enabling research. The nationally federated archive currently stores 75,000 items, ranging from audio and video interviews to images and official reports; tens of thousands more items await ingestion. Significant lessons have been learned about data integration in post-disaster contexts, including but not limited to technical architecture, governance, ingestion processes, and human ethics. The archive represents a model for future resilience-oriented data integration and preservation projects.

Research papers, University of Canterbury Library

Abstract: This study provides a simplified methodology for pre-event data collection to support faster and more accurate seismic loss estimation. Existing pre-event data collection frameworks are reviewed. Data gathered after the Canterbury earthquake sequence are analysed to evaluate the relative importance of different sources of building damage. The conclusions drawn are used to explore new approaches to pre-event building assessment.

Research papers, University of Canterbury Library

Decision making on the reinstatement of the Christchurch sewer system after the Canterbury (New Zealand) earthquake sequence in 2010–2011 relied strongly on damage data, in particular closed circuit television (CCTV). This paper documents that process and considers how data can influence decision making. Data are analyzed on 33,000 pipes and 13,000 repairs and renewals. The primary findings are that (1) there should be a threshold of damage per pipe set to make efficient use of CCTV; (2) for those who are estimating potential damage, care must be taken in direct use of repair data without an understanding of the actual damage modes; and (3) a strong correlation was found between the ratio of faults to repairs per pipe and the estimated peak ground velocity. Taken together, the results provide evidence of the extra benefit that damage data can provide over repair data for wastewater networks and may help guide others in the development of appropriate strategies for data collection and wastewater pipe decisions after disasters.
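The reported relationship between the faults-to-repairs ratio per pipe and estimated peak ground velocity can be quantified with a simple correlation coefficient; Pearson's r is one common choice (the paper does not specify which statistic was used, and the per-zone values below are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-zone values: estimated PGV (cm/s) vs faults-to-repairs ratio
pgv   = [20, 35, 50, 65, 80]
ratio = [1.1, 1.4, 2.0, 2.6, 3.1]
print(round(pearson(pgv, ratio), 3))
```

A value near 1 would indicate the kind of strong positive association the paper describes, i.e. more faults per repair in zones that shook harder.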

Research papers, University of Canterbury Library

Geospatial liquefaction models aim to predict liquefaction using data that is free and readily available. This data includes (i) common ground-motion intensity measures; and (ii) geospatial parameters (e.g., among many, distance to rivers, distance to coast, and Vs30 estimated from topography), which are used to infer characteristics of the subsurface without in-situ testing. Since their recent inception, such models have been used to predict geohazard impacts throughout New Zealand (e.g., in conjunction with regional ground-motion simulations). While past studies have demonstrated that geospatial liquefaction models show great promise, the resolution and accuracy of the geospatial data underlying these models is notably poor. As an example, mapped rivers and coastlines often plot hundreds of meters from their actual locations. This stems from the fact that geospatial models aim to rapidly predict liquefaction anywhere in the world and thus utilize the lowest common denominator of available geospatial data, even though higher-quality data is often available (e.g., in New Zealand). Accordingly, this study investigates whether the performance of geospatial models can be improved using higher-quality input data. This analysis is performed using (i) 15,101 liquefaction case studies compiled from the 2010-2016 Canterbury earthquakes; and (ii) geospatial data readily available in New Zealand. In particular, we utilize alternative, higher-quality data to estimate: locations of rivers and streams; location of the coastline; depth to ground water; Vs30; and PGV. Most notably, a region-specific Vs30 model improves performance (Figs. 3-4), while the other data variants generally have little to no effect, even when the "standard" and "high-quality" values differ significantly (Fig. 2). This finding is consistent with the greater sensitivity of geospatial models to Vs30, relative to any other input (Fig. 5), and has implications for modeling in locales worldwide where high-quality geospatial data is available.
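The sensitivity to Vs30 can be illustrated with a generic logistic-style geospatial model of the kind described above. The coefficients below are invented for illustration (only their signs are meaningful: stiffer ground and a deeper water table lower the predicted probability, stronger shaking raises it); swapping a topography-based Vs30 for a lower, region-specific estimate shifts the prediction while every other input stays fixed:

```python
import math

def liquefaction_prob(vs30, dist_river_m, gwt_m, pgv_cms, w):
    """Generic geospatial-style logistic model: P = sigmoid(w0 + sum(wi * feature_i))."""
    z = (w[0] + w[1] * math.log(vs30) + w[2] * math.log(dist_river_m + 1)
         + w[3] * gwt_m + w[4] * math.log(pgv_cms))
    return 1.0 / (1.0 + math.exp(-z))

# Invented coefficients; only the sign pattern is meaningful
w = [6.0, -1.5, -0.3, -0.4, 1.2]

# Same site, same shaking: only the Vs30 input changes
p_lowres   = liquefaction_prob(vs30=300, dist_river_m=400, gwt_m=2.0, pgv_cms=30, w=w)
p_regional = liquefaction_prob(vs30=180, dist_river_m=400, gwt_m=2.0, pgv_cms=30, w=w)
```

Because Vs30 enters through a relatively large coefficient, even a modest change in its value moves the predicted probability noticeably, which mirrors the study's finding that the region-specific Vs30 model matters more than refinements to the other inputs.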

Research papers, The University of Auckland Library

This thesis presents the application of data science techniques, especially machine learning (ML), to the development of seismic damage and loss prediction models for residential buildings. Current post-earthquake building damage evaluation forms are developed with a particular country in mind, and this lack of consistency hinders the comparison of building damage between different regions. A new paper form has been developed to address the need for a globally universal methodology for post-earthquake building damage assessment. The form was successfully trialled in the street 'La Morena' in Mexico City following the 2017 Puebla earthquake. Aside from developing a framework for better input data for performance-based earthquake engineering, this project also extended current techniques for deriving insights from post-earthquake observations. ML was applied to seismic damage data for residential buildings in Mexico City following the 2017 Puebla earthquake and in Christchurch following the 2010-2011 Canterbury earthquake sequence (CES). The experience showed that it is readily possible to develop purely data-driven empirical models that can identify key damage drivers and hidden underlying correlations without prior engineering knowledge. With adequate maintenance, such models can be rapidly and easily updated to improve damage and loss prediction accuracy and generalisability. Of the ML models developed for the key events of the CES, the model trained using data from the 22 February 2011 event generalised best for loss prediction. This is thought to be because of the large number of instances available for this event and the relatively limited class imbalance between the categories of the target attribute.
For the CES, ML highlighted peak ground acceleration (PGA), building age, building size, liquefaction occurrence, and soil conditions as the main factors affecting losses in residential buildings in Christchurch. ML also highlighted the influence of liquefaction on building losses in the 22 February 2011 event. Beyond the model development, the application of post-hoc methodologies was shown to be an effective way to derive insights from ML algorithms that are not intrinsically interpretable. Overall, these methods provide a basis for the development of 'greybox' ML models.
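Permutation importance is one such post-hoc method for models that are not intrinsically interpretable: shuffle one feature column, re-score the model, and treat the drop in accuracy as that feature's importance. A self-contained sketch with a toy damage rule and invented feature values (not the thesis's models or data):

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, col, seed=0):
    """Drop in accuracy when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return base - accuracy(model, X_perm, y)

# Toy damage rule over features [pga_g, building_age_yr, liquefaction_flag]
def model(row):
    pga, age, liq = row
    return 1 if (pga > 0.3 and liq == 1) else 0  # 1 = major loss

X = [[0.20, 40, 0], [0.40, 60, 1], [0.50, 30, 1], [0.10, 80, 0],
     [0.45, 55, 1], [0.25, 20, 0], [0.60, 70, 1], [0.15, 45, 0]]
y = [model(row) for row in X]  # labels consistent with the toy rule

imp_pga = permutation_importance(model, X, y, col=0)
imp_age = permutation_importance(model, X, y, col=1)  # age is unused here, so 0.0
```

The same recipe applies to any black-box classifier, which is what makes it useful for turning an opaque model into the kind of 'greybox' model the thesis describes.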

Research papers, University of Canterbury Library

Smart cities utilise new and innovative technology to improve the function of the city for governments, citizens and businesses. This thesis offers an in-depth discussion of the concept of the smart city and sets the context of smart cities internationally. It also examines how to improve a smart city through public engagement, as well as how to implement participatory research in a smart city project to improve the engagement of citizens in the planning and implementation of smart projects. This thesis shows how to incentivise behaviour change with smart city technology and projects by increasing participation in the planning and implementation of smart technology in a city. Meaningful data is created through this process of participation: by engaging citizens in the creation of the data, the information created through a smart city project is created by and for the citizens themselves. To improve engagement, a city must understand its specific context and its residents. Using Christchurch, New Zealand, and the Christchurch City Council (CCC) Smart City Project as a case study, this research engages CCC stakeholders in the Smart City Project through a series of interviews, and citizens in Christchurch through a survey and focus groups. A thorough literature review illuminates the differing definitions of the smart city in academia, business and government and how these definitions vary from one another, provides details of a carefully selected set of relevant smart cities internationally, and discusses how the Christchurch earthquake sequence of 2010 and 2011 affected the CCC Smart City Project. The research process, alongside the literature review, shows that diverse groups of citizens in the city should be acknowledged in this process. The concept of the smart city is redefined to incorporate the context of Christchurch, its citizens and communities.
Community perceptions of smart cities in Christchurch are shaped by the post-disaster environment, and the earthquakes and subsequent rebuild process should be a focus of the smart city project. The research identified that the CCC needs to focus on participatory approaches in the planning and implementation of smart projects, that community organisations in Christchurch offer an opportunity to understand community perspectives on new smart technology, and that projects internationally should consider how the context of the city will affect the participation of its residents. This project offers ideas to influence behaviour change among citizens through a smart city project. Further research should consider other stakeholders, for instance innovation- and technology-focused businesses in the city. To fully engage citizens, future research must continue the process of participatory engagement and target diverse groups in the city, including but not limited to minority groups, older and younger generations, and those with physical and mental disabilities.

Research papers, University of Canterbury Library

SeisFinder is an open-source web service developed by QuakeCoRE and the University of Canterbury, focused on enabling the extraction of output data from computationally intensive earthquake resilience calculations. Currently, SeisFinder allows users to select historical or future events and retrieve ground motion simulation outputs for requested geographical locations. This data can be used as input for other resilience calculations, such as dynamic response history analysis. SeisFinder was developed using Django, a high-level Python web framework, and uses a PostgreSQL database. Because the large-scale, computationally intensive numerical ground motion simulations produce big data, the actual data is stored in file systems, while the metadata is stored in the database.
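The metadata-in-database, data-on-filesystem split described above is a common pattern for big scientific data. A minimal stand-in sketch (using the stdlib sqlite3 module in place of PostgreSQL, with a hypothetical event name, station code, and file layout, none of which are taken from SeisFinder itself):

```python
import os
import sqlite3
import tempfile

# Metadata lives in a relational store; the large simulation output stays on disk.
workdir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE simulation (event_name TEXT, station TEXT, data_path TEXT)")

# Write a tiny stand-in for a ground-motion time series to the file system
path = os.path.join(workdir, "darfield_REHS.csv")
with open(path, "w") as f:
    f.write("t,accel\n0.00,0.001\n0.01,0.004\n")

# Store only the metadata and the file location in the database
db.execute("INSERT INTO simulation VALUES (?, ?, ?)", ("Darfield", "REHS", path))

# A metadata query returns the path; the big data is then streamed from disk
(found,) = db.execute(
    "SELECT data_path FROM simulation WHERE event_name = 'Darfield'").fetchone()
```

This keeps the database small and queryable while the bulk simulation outputs can live on high-capacity file systems and be served on demand.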