
Research papers, University of Canterbury Library

Geospatial liquefaction models aim to predict liquefaction using data that is free and readily available. This data includes (i) common ground-motion intensity measures; and (ii) geospatial parameters (e.g., among many, distance to rivers, distance to coast, and Vs30 estimated from topography) which are used to infer characteristics of the subsurface without in-situ testing. Since their recent inception, such models have been used to predict geohazard impacts throughout New Zealand (e.g., in conjunction with regional ground-motion simulations). While past studies have demonstrated that geospatial liquefaction models show great promise, the resolution and accuracy of the geospatial data underlying these models are notably poor. As an example, mapped rivers and coastlines often plot hundreds of meters from their actual locations. This stems from the fact that geospatial models aim to rapidly predict liquefaction anywhere in the world and thus utilize the lowest common denominator of available geospatial data, even though higher-quality data is often available (e.g., in New Zealand). Accordingly, this study investigates whether the performance of geospatial models can be improved using higher-quality input data. This analysis is performed using (i) 15,101 liquefaction case studies compiled from the 2010-2016 Canterbury Earthquakes; and (ii) geospatial data readily available in New Zealand. In particular, we utilize alternative, higher-quality data to estimate: locations of rivers and streams; location of the coastline; depth to ground water; Vs30; and PGV. Most notably, a region-specific Vs30 model improves performance (Figs. 3-4), while other data variants generally have little-to-no effect, even when the “standard” and “high-quality” values differ significantly (Fig. 2). This finding is consistent with the greater sensitivity of geospatial models to Vs30, relative to any other input (Fig. 5), and has implications for modeling in locales worldwide where high-quality geospatial data is available.
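
The abstract does not reproduce the model equations, so the sketch below is an illustration only: it shows the logistic-regression structure commonly used for geospatial liquefaction models, with placeholder coefficients (not the published or region-specific values) and a hypothetical function liquefaction_probability. Its purpose is to show how a swapped-in, region-specific Vs30 estimate would propagate through such a model.

```python
import numpy as np

def liquefaction_probability(pgv, vs30, dist_to_river_km, dist_to_coast_km,
                             gw_depth_m, coeffs=None):
    """Illustrative geospatial liquefaction model of the logistic-regression
    form common in the literature. Coefficients are placeholders, NOT the
    published values and NOT the values developed in this study."""
    if coeffs is None:
        coeffs = {
            "intercept": 8.0,
            "ln_pgv": 0.30,      # ground-motion intensity term (PGV, cm/s)
            "ln_vs30": -2.0,     # stiffness proxy (topographic or regional Vs30, m/s)
            "dist_river": -0.05, # distance to rivers/streams (km)
            "dist_coast": -0.01, # distance to coast (km)
            "gw_depth": -0.20,   # depth to ground water (m)
        }
    x = (coeffs["intercept"]
         + coeffs["ln_pgv"] * np.log(pgv)
         + coeffs["ln_vs30"] * np.log(vs30)
         + coeffs["dist_river"] * dist_to_river_km
         + coeffs["dist_coast"] * dist_to_coast_km
         + coeffs["gw_depth"] * gw_depth_m)
    # Logistic link: probability of surface liquefaction manifestation
    return 1.0 / (1.0 + np.exp(-x))

# Example: same site and shaking, but two different Vs30 inputs
# (a coarse topography-based estimate vs. a hypothetical region-specific one)
p_topo = liquefaction_probability(pgv=20.0, vs30=300.0, dist_to_river_km=0.5,
                                  dist_to_coast_km=5.0, gw_depth_m=2.0)
p_regional = liquefaction_probability(pgv=20.0, vs30=180.0, dist_to_river_km=0.5,
                                      dist_to_coast_km=5.0, gw_depth_m=2.0)
print(f"P(liquefaction), topographic Vs30:     {p_topo:.2f}")
print(f"P(liquefaction), region-specific Vs30: {p_regional:.2f}")
```

Because Vs30 enters through a relatively large (in magnitude) coefficient in models of this form, changing that one input shifts the predicted probability more than comparable changes to the distance or water-table terms, which is consistent with the sensitivity result described above.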

Research papers, University of Canterbury Library

Natural catastrophes are increasing worldwide: they are becoming more frequent, but also more severe in their impact on the built environment, leading to extensive damage and losses. Earthquakes account for the smallest share of natural events; nevertheless, seismic damage caused the most fatalities and significant losses over the period 1981-2016 (Munich Re). Damage prediction is helpful for emergency management and the development of earthquake risk mitigation projects. Recent design efforts have focused on the application of performance-based design, in which damage estimation methodologies use fragility and vulnerability functions. However, this approach does not explicitly identify the essential criteria leading to economic losses. There is thus a need for an improved methodology that identifies the critical building elements associated with significant losses. The methodology presented here uses data science techniques to identify the key building features that contribute to the bulk of losses. It uses empirical data collected on site during earthquake reconnaissance missions to train a machine-learning model that can then be used to estimate building damage post-earthquake. The first model is developed for Christchurch. Empirical building damage data from the 2010-2011 earthquake events is analysed to find the building features that contributed the most to damage. Once processed, the data is used to train a machine-learning model that can be applied to estimate losses in future earthquake events.
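
The abstract does not state which learning algorithm or feature set is used, so the sketch below is only a minimal illustration of the general workflow it describes: it assumes a random-forest classifier, a hypothetical input file building_damage_survey.csv, and made-up feature and label names standing in for the real reconnaissance data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical schema standing in for empirical reconnaissance data;
# the actual Christchurch dataset and its fields are not reproduced here.
df = pd.read_csv("building_damage_survey.csv")  # placeholder file name
features = ["construction_type", "storeys", "year_built",
            "foundation_type", "roof_type", "pga_at_site"]  # assumed features
target = "damage_grade"                                      # assumed label

# One-hot encode categorical building attributes
X = pd.get_dummies(df[features],
                   columns=["construction_type", "foundation_type", "roof_type"])
y = df[target]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train a classifier to predict the damage grade from building features
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Hold-out performance as a basic check of predictive skill
print(classification_report(y_test, model.predict(X_test)))

# Feature importances indicate which building attributes drive predicted
# damage, analogous to identifying the elements contributing most to losses.
importances = pd.Series(model.feature_importances_,
                        index=X.columns).sort_values(ascending=False)
print(importances.head(10))
```

A tree-ensemble model is used here only because it handles mixed categorical and numeric building attributes with little preprocessing and exposes feature importances; the study's actual model choice may differ.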