
found 6 results

Research papers, University of Canterbury Library

Natural catastrophes are increasing worldwide: they are becoming more frequent, but also more severe and more damaging to our built environment, leading to extensive damage and losses. Earthquakes account for the smallest share of natural events; nevertheless, seismic damage caused the most fatalities and significant losses over the period 1981-2016 (Munich Re). Damage prediction is helpful for emergency management and for the development of earthquake risk mitigation projects. Recent design efforts have focused on the application of performance-based earthquake engineering, where damage estimation methodologies use fragility and vulnerability functions. However, this approach does not explicitly specify the essential criteria leading to economic losses. There is thus a need for an improved methodology that identifies the critical building elements related to significant losses. The methodology presented here uses data science techniques to identify the key building features that contribute to the bulk of losses. It uses empirical data collected on site during earthquake reconnaissance missions to train a machine learning model that can then be used for post-earthquake building damage estimation. The first model is developed for Christchurch. Empirical building damage data from the 2010-2011 earthquake events is analysed to find the building features that contributed the most to damage. Once processed, the data is used to train a machine-learning model that can be applied to estimate losses in future earthquake events.
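As a rough illustration of the kind of pipeline this abstract describes (not the authors' actual implementation), the sketch below fits a tree-based model to a hypothetical field-collected damage table and ranks building features by importance; the file and column names are assumptions.

```python
# Hypothetical sketch: rank the building features most associated with loss.
# The CSV path and column names are illustrative, not from the study.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

survey = pd.read_csv("christchurch_reconnaissance_2010_2011.csv")   # assumed field dataset
X = pd.get_dummies(survey.drop(columns=["observed_loss"]))          # building features
y = survey["observed_loss"]                                         # recorded loss per building

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))  # features driving the bulk of losses
```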

Research papers, University of Canterbury Library

Unreinforced masonry (URM) structures comprise a majority of the global built heritage. The masonry heritage of New Zealand is comparatively younger than its European counterparts. In a country facing frequent earthquakes, URM buildings are prone to extensive damage and collapse. The Canterbury earthquake sequence proved this, causing damage to over _% of buildings. The ability to assess the severity of building damage is essential for emergency response and recovery. Following the Canterbury earthquakes, the damaged buildings were categorized into various damage states using the EMS-98 scale. This article investigates machine learning techniques such as k-nearest neighbors, decision trees, and random forests to rapidly assess earthquake-induced building damage. The damage data from the Canterbury earthquake sequence is used to obtain the forecast model, and the performance of each machine learning technique is evaluated using the remaining (test) data. Having achieved high accuracy, the model is then run on a building database collected for Dunedin to predict the expected damage from a rupture of the Akatore Fault.
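A minimal sketch of the classifier comparison named in this abstract, assuming a tabular damage-state dataset; the file and column names are hypothetical and this is not the authors' code.

```python
# Hypothetical sketch: compare k-NN, decision tree, and random forest on an
# EMS-98 damage-state table with a held-out test split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("urm_damage_survey.csv")                        # assumed surveyed-building table
X = pd.get_dummies(df.drop(columns=["ems98_damage_grade"]))      # building attributes
y = df["ems98_damage_grade"]                                     # EMS-98 damage state label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(max_depth=8, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))   # accuracy on the test data
```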

Research papers, University of Canterbury Library

After a high-intensity seismic event, inspections of structural damage need to be carried out as soon as possible in order to optimize emergency management and improve recovery time. In current practice, damage inspections are performed by an experienced engineer who physically inspects the structures. This approach not only requires a significant amount of time and highly skilled human resources, but also raises concerns about the inspector’s safety. A promising alternative is the use of new technologies, such as drones and artificial intelligence, which can perform part of the damage classification task. Drones can safely access high-hazard components of structures, for instance bridge piers or abutments, and perform the reconnaissance using high-resolution cameras. Furthermore, the images can be automatically processed by machine learning algorithms and the damage detected. In this paper, the possibility of applying such technologies to inspecting New Zealand bridges is explored. Firstly, a machine-learning model for damage detection through image analysis is presented. Specifically, the algorithm was trained to recognize cracks in concrete members. A sensitivity analysis was carried out to evaluate the algorithm accuracy using database images. Depending on the confidence level desired, i.e. by allowing manual classification where the algorithm confidence is below a specific tolerance, the accuracy was found to reach up to 84.7%. In the second part, the model is applied to detect the damage observed on the Anzac Bridge (GPS coordinates -43.500865, 172.701138) in Christchurch through a drone reconnaissance. Results show that the accuracy of the damage detection was 88% and 63% for cracking and spalling, respectively.
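The confidence-tolerance idea mentioned above (deferring low-confidence images to manual classification) could look roughly like the following sketch; the model outputs are simulated and the threshold values are assumptions, not values from the paper.

```python
# Hypothetical sketch: route images with low predicted-class probability to a
# human inspector and report accuracy only on the automatically classified rest.
import numpy as np

def triage(probabilities, labels, tolerance=0.8):
    """probabilities: (n, n_classes) softmax outputs; labels: (n,) true classes."""
    confidence = probabilities.max(axis=1)
    predictions = probabilities.argmax(axis=1)
    auto = confidence >= tolerance                 # handled by the algorithm
    manual = ~auto                                 # deferred to manual classification
    accuracy = (predictions[auto] == labels[auto]).mean() if auto.any() else float("nan")
    return accuracy, int(manual.sum())

# toy usage with random scores standing in for a crack-detection model
rng = np.random.default_rng(0)
probs = rng.dirichlet([2, 1], size=100)            # fake two-class probabilities
labels = rng.integers(0, 2, size=100)              # fake ground truth
acc, n_manual = triage(probs, labels, tolerance=0.8)
print(f"automatic accuracy: {acc:.3f}, images sent to manual review: {n_manual}")
```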

Research papers, University of Canterbury Library

The Canterbury Earthquake Sequence (CES) induced extensive damage in residential buildings and led to over NZ$40 billion in total economic losses. Due to the unique insurance setting in New Zealand, up to 80% of the financial losses were insured. Over the CES, the Earthquake Commission (EQC) received more than 412,000 insurance claims for residential buildings. The 4 September 2010 earthquake is the event for which the most claims were lodged, with more than 138,000 residential claims for this event alone. This research project uses the EQC claim database to develop a seismic loss prediction model for residential buildings in Christchurch. It uses machine learning to create a procedure capable of highlighting the critical features that most affected building losses. Further study of those features enables the generation of insights that can be used by various stakeholders, for example, to better understand the influence of a structural system on building loss or to select appropriate risk mitigation measures. Prior to training the machine learning model, the claim dataset was supplemented with additional data sourced from private and open-access databases, giving complementary information related to building characteristics, seismic demand, liquefaction occurrence and soil conditions. This poster presents the results of a machine learning model trained on a merged dataset using residential claims from the 4 September 2010 event.
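A hedged sketch of the data-supplementation step described above: claims are merged with building, hazard, and soil tables before a loss model is trained. The identifiers, file names, and model choice are assumptions for illustration only.

```python
# Hypothetical sketch: join claim records with complementary data sources on an
# assumed property identifier, then fit a loss prediction model.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

claims = pd.read_csv("eqc_claims_2010_09_04.csv")          # residential claims for the event
buildings = pd.read_csv("building_characteristics.csv")    # age, floor area, construction type
hazard = pd.read_csv("seismic_demand_and_soils.csv")       # PGA, liquefaction flag, soil class

merged = (claims
          .merge(buildings, on="property_id", how="left")
          .merge(hazard, on="property_id", how="left"))

features = pd.get_dummies(merged.drop(columns=["property_id", "claim_loss_nzd"]))
target = merged["claim_loss_nzd"]

model = GradientBoostingRegressor(random_state=0)
model.fit(features, target)                                 # seismic loss prediction model
```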

Research papers, The University of Auckland Library

This thesis presents the application of data science techniques, especially machine learning, to the development of seismic damage and loss prediction models for residential buildings. Current post-earthquake building damage evaluation forms are developed with a particular country in mind. The lack of consistency hinders the comparison of building damage between different regions. A new paper form has been developed to address the need for a globally universal methodology for post-earthquake building damage assessment. The form was successfully trialled on the street ‘La Morena’ in Mexico City following the 2017 Puebla earthquake. Aside from developing a framework for better input data for performance-based earthquake engineering, this project also extended current techniques to derive insights from post-earthquake observations. Machine learning (ML) was applied to seismic damage data of residential buildings in Mexico City following the 2017 Puebla earthquake and in Christchurch following the 2010-2011 Canterbury earthquake sequence (CES). The experience showed that it is readily possible to develop purely empirical, data-driven models that can successfully identify key damage drivers and hidden underlying correlations without prior engineering knowledge. With adequate maintenance, such models have the potential to be rapidly and easily updated, allowing improved damage and loss prediction accuracy and a greater ability for the models to be generalised. Of the ML models developed for the key events of the CES, the model trained using data from the 22 February 2011 event generalised the best for loss prediction. This is thought to be because of the large number of instances available for this event and the relatively limited class imbalance between the categories of the target attribute. For the CES, ML highlighted the importance of peak ground acceleration (PGA), building age, building size, liquefaction occurrence, and soil conditions as the main factors that affected the losses in residential buildings in Christchurch. ML also highlighted the influence of liquefaction on the building losses related to the 22 February 2011 event. Further to the ML model development, the application of post-hoc methodologies was shown to be an effective way to derive insights from ML algorithms that are not intrinsically interpretable. Overall, these provide a basis for the development of ‘greybox’ ML models.
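One example of a post-hoc interpretability method of the kind the thesis refers to is permutation importance; the sketch below shows how such a ranking of damage drivers might be computed, with the dataset and column names assumed rather than taken from the thesis.

```python
# Hypothetical sketch: rank damage drivers with permutation importance on a
# fitted model. Feature names mirror those listed above; the data is assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("ces_2011_02_22_building_damage.csv")      # assumed per-building table
features = ["pga", "building_age", "floor_area", "liquefaction", "soil_class"]
X, y = pd.get_dummies(df[features]), df["damage_state"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)       # larger values indicate a stronger influence on predicted damage
```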

Research papers, University of Canterbury Library

High-quality ground motion records are required for engineering applications including response history analysis, seismic hazard development, and validation of physics-based ground motion simulations. However, determining whether a ground motion record is high-quality is poorly handled by automation with mathematical functions and can become prohibitive if done manually. Machine learning applications are well suited to this problem, and a feed-forward neural network was previously developed (Bellagamba et al. 2019) to identify high-quality records from small crustal events in the Canterbury and Wellington regions for simulation validation. This prior work was, however, limited by the omission of moderate-to-large magnitude events and those from other tectonic environments, as well as by a lack of explicit determination of the minimum usable frequency of the ground motion. To address these shortcomings, an updated neural network was developed to predict the quality of ground motion records for all magnitudes and all tectonic sources (active shallow crustal, subduction intraslab, and subduction interface) in New Zealand. The updated neural network matches the predictive performance of the previous feed-forward neural network in the domain of small crustal records and extends this level of performance to all source magnitudes and types in New Zealand, making it applicable to global ground motion databases. Furthermore, the neural network provides quality and minimum usable frequency predictions for each of the three orthogonal components of a record, which may then be mapped into a binary quality decision or otherwise applied as desired. This framework provides flexibility for the end user to predict high-quality records with various acceptability thresholds, allowing this neural network to be used in a range of applications.
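As an illustration of how per-component quality and minimum-usable-frequency predictions could be mapped to a binary accept/reject decision with user-chosen thresholds, the following sketch defines a simple rule; the thresholds and data structure are assumptions and not part of the published neural network.

```python
# Hypothetical sketch: combine per-component predictions into a record-level
# quality decision. Quality scores are assumed to lie in [0, 1].
from dataclasses import dataclass

@dataclass
class ComponentPrediction:
    quality: float      # predicted quality score for one orthogonal component
    f_min: float        # predicted minimum usable frequency (Hz)

def record_is_usable(components, quality_threshold=0.5, f_min_limit=0.2):
    """Accept a record only if every component clears both acceptability criteria."""
    return all(c.quality >= quality_threshold and c.f_min <= f_min_limit
               for c in components)

# toy usage for a three-component record
record = [ComponentPrediction(0.92, 0.08),
          ComponentPrediction(0.87, 0.12),
          ComponentPrediction(0.64, 0.30)]
print(record_is_usable(record))                     # False: third component fails the frequency check
print(record_is_usable(record, f_min_limit=0.35))   # True under a looser acceptability threshold
```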