Research papers, The University of Auckland Library

The rapid classification of building damage states or placards after an earthquake is vital for enabling an efficient emergency response and informed decision-making for rehabilitation and recovery purposes. Traditional methods rely heavily on inspector-led on-site surveys, which are often time-consuming, resource-intensive, and susceptible to human error. This study introduces a machine learning-supported surrogate model designed to streamline the assessment of building damage, focusing on the automated assignment of damage placards within the context of New Zealand's post-earthquake evaluation frameworks. The study evaluates two key safety evaluation protocols—Rapid Building Assessment (RBA) and Detailed Damage Evaluation (DDE)—and integrates corresponding databases derived from the 2010–2011 Canterbury Earthquake Sequence (CES) in Christchurch. Six ML classifiers—Multilayer Perceptron (MLP), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbours (KNN), Gradient Boosting Classifier (GBC), and Gradient Bagging (GBag)—were rigorously tested across both databases. The results indicate that the RF-based surrogate model outperforms the other classifiers across both RBA and DDE protocols. Two distinct sets of critical predictors have been further identified for each protocol, allowing for the rapid retrieval of essential data for future on-site surveys, while retaining the RF model's predictive accuracy. The developed surrogate model provides a pragmatic tool for practising engineers to rapidly assign placards to damaged structures and for policymakers and building owners to make informed recovery decisions for earthquake-affected buildings.
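The classifier comparison and critical-predictor identification described above can be sketched with scikit-learn. Everything below is a hypothetical illustration: synthetic data stands in for the RBA/DDE databases, default hyperparameters stand in for the study's tuned models, and `BaggingClassifier` stands in for the "Gradient Bagging" variant, whose exact configuration the abstract does not specify.

```python
# Illustrative sketch only: cross-validate the six classifier families named in
# the abstract on synthetic placard-like data (3 classes ~ green/yellow/red).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              BaggingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rank_classifiers(X, y, cv=5):
    """Return mean cross-validated accuracy for each candidate model."""
    models = {
        "MLP": MLPClassifier(max_iter=500, random_state=0),
        "RF": RandomForestClassifier(random_state=0),
        "SVM": SVC(),
        "KNN": KNeighborsClassifier(),
        "GBC": GradientBoostingClassifier(random_state=0),
        "GBag": BaggingClassifier(random_state=0),  # stand-in for "Gradient Bagging"
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean()
            for name, m in models.items()}

# Synthetic stand-in for survey features (the real predictors are not public here).
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
scores = rank_classifiers(X, y)

# Critical-predictor identification: rank features by RF importance and keep
# the top few, mimicking the reduced feature sets retained in the study.
rf = RandomForestClassifier(random_state=0).fit(X, y)
top_predictors = np.argsort(rf.feature_importances_)[::-1][:3]
```

In practice the retained predictor subset would be re-validated by refitting the RF model on those columns alone and checking that accuracy is preserved, which is the criterion the abstract describes.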

Research papers, University of Canterbury Library

Semi-empirical models based on in-situ geotechnical tests have become the standard of practice for predicting soil liquefaction. Since the inception of the “simplified” cyclic-stress model in 1971, variants based on various in-situ tests have been developed, including the Cone Penetration Test (CPT). More recently, prediction models based solely on remotely sensed data have been developed. Similar to systems that provide automated content on earthquake impacts, these “geospatial” models aim to predict liquefaction for rapid response and loss estimation using readily available data. This data includes (i) common ground-motion intensity measures (e.g., PGA), which can either be provided in near-real-time following an earthquake or predicted for a future event; and (ii) geospatial parameters derived from digital elevation models, which are used to infer characteristics of the subsurface relevant to liquefaction. However, the predictive capabilities of geospatial and geotechnical models have not been directly compared; such a comparison could elucidate techniques for improving the geospatial models and would provide a baseline for measuring improvements. Accordingly, this study assesses the relative efficacy of liquefaction models based on geospatial vs. CPT data using 9,908 case studies from the 2010–2016 Canterbury earthquakes. While the top-performing models are CPT-based, the geospatial models perform relatively well given their simplicity and low cost. Although further research is needed (e.g., to improve upon the performance of current models), the findings of this study suggest that geospatial models have the potential to provide valuable first-order predictions of liquefaction occurrence and consequence. Towards this end, performance assessments of geospatial vs. geotechnical models are ongoing for more than 20 additional global earthquakes.
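A geospatial model of the kind described reduces, in its simplest form, to a logistic function of a ground-motion intensity measure and DEM-derived subsurface proxies. The sketch below is illustrative only: the predictors chosen (PGA, Vs30, water-table depth) and every coefficient are hypothetical, not the values of any published model; only the functional form (log-linear predictors inside a logistic link) reflects how such models are typically structured.

```python
# Illustrative geospatial-style liquefaction model (hypothetical coefficients).
import math

def liquefaction_probability(pga_g, vs30_mps, water_table_m, coeffs=None):
    """Logistic probability of surficial liquefaction from readily available
    inputs: probability rises with shaking intensity (PGA, in g), falls with
    stiffer near-surface soils (higher Vs30, m/s) and a deeper water table (m).
    Coefficients are placeholders for illustration, not calibrated values."""
    if coeffs is None:
        coeffs = {"b0": 4.0, "b_lnpga": 2.0, "b_lnvs30": -1.5, "b_wtd": -0.3}
    x = (coeffs["b0"]
         + coeffs["b_lnpga"] * math.log(pga_g)
         + coeffs["b_lnvs30"] * math.log(vs30_mps)
         + coeffs["b_wtd"] * water_table_m)
    return 1.0 / (1.0 + math.exp(-x))

# Example: soft-soil site vs. stiff site under the same shaking.
p_soft = liquefaction_probability(pga_g=0.4, vs30_mps=180, water_table_m=1.0)
p_stiff = liquefaction_probability(pga_g=0.4, vs30_mps=500, water_table_m=1.0)
```

The appeal noted in the abstract is that all three inputs can be obtained remotely (ShakeMap-style intensity grids and DEM derivatives), so predictions are available within hours of an event, whereas CPT-based methods require in-situ soundings at each site.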