found 3 results

Research papers, University of Canterbury Library

The use of post-earthquake cordons as a tool to support emergency managers after an event has been documented around the world. However, there is limited research that attempts to understand the use, effectiveness, inherent complexities, impacts and subsequent consequences of cordoning once applied. This research aims to fill that gap by providing a detailed understanding of, first, cordons and their associated processes and, second, their implications in a post-earthquake scenario. We use a qualitative method to understand cordons through case studies of two New Zealand cities where they were used at different temporal and spatial scales: Christchurch (2011) and Wellington (Kaikōura earthquake, 2016). Data were collected through 21 expert interviews, with participants identified via purposive and snowball sampling of key informants who were directly or indirectly involved in a decision-making role and/or had influence over the cordoning process. The participants came from varying backgrounds and roles, i.e., emergency managers, council members, business representatives, insurance representatives, police and communication managers. The data were transcribed, coded in NVivo, grouped by underlying themes and concepts, and then analyzed inductively. We find that cordons are used primarily as a tool to control access for the purposes of life safety and security, but they can also be adapted to support recovery. Broadly, cordoning can be synthesized and viewed in terms of two key aspects, ‘decision-making’ and ‘operations and management’, which overlap and interact as part of a complex system. The underlying complexity arises in large part from the multitude of sectors cordoning transcends, such as housing, socio-cultural requirements, economics, law, governance, insurance, evacuation and available resources. The complexity increases further as the duration of the cordon is extended.

Research papers, University of Canterbury Library

The University of Canterbury is known internationally for the Origins of New Zealand English (ONZE) corpus (see Gordon et al. 2004). ONZE is a large collection of recordings from people born between 1851 and 1984, and it has been widely utilised for linguistic and sociolinguistic research on New Zealand English. The ONZE data is varied. The recordings from the Mobile Unit (MU) are interviews and were collected by members of the NZ Broadcasting Service shortly after the Second World War, with the aim of recording stories from New Zealanders outside the main city centres. These were supplemented by interview recordings carried out mainly in the 1990s and now contained in the Intermediate Archive (IA). The final ONZE collection, the Canterbury Corpus, is a set of interviews and word-list recordings carried out by students at the University of Canterbury. Across the ONZE corpora, there are different interviewers, different interview styles and a myriad of different topics discussed. In this paper, we introduce a new corpus – the QuakeBox – where these contexts are much more consistent and comparable across speakers. The QuakeBox is a corpus which consists largely of audio and video recordings of monologues about the 2010-2011 Canterbury earthquakes. As such, it represents Canterbury speakers’ very recent ‘danger of death’ experiences (see Labov 2013). In this paper, we outline the creation and structure of the corpus, including the practical issues involved in storing the data and gaining speakers’ informed consent for their audio and video data to be included.

Research papers, University of Canterbury Library

Geospatial liquefaction models aim to predict liquefaction using data that is free and readily available. This data includes (i) common ground-motion intensity measures; and (ii) geospatial parameters (e.g., among many, distance to rivers, distance to coast, and Vs30 estimated from topography) which are used to infer characteristics of the subsurface without in-situ testing. Since their recent inception, such models have been used to predict geohazard impacts throughout New Zealand (e.g., in conjunction with regional ground-motion simulations). While past studies have demonstrated that geospatial liquefaction models show great promise, the resolution and accuracy of the geospatial data underlying these models is notably poor. As an example, mapped rivers and coastlines often plot hundreds of meters from their actual locations. This stems from the fact that geospatial models aim to rapidly predict liquefaction anywhere in the world and thus utilize the lowest common denominator of available geospatial data, even though higher-quality data is often available (e.g., in New Zealand). Accordingly, this study investigates whether the performance of geospatial models can be improved using higher-quality input data. This analysis is performed using (i) 15,101 liquefaction case studies compiled from the 2010-2016 Canterbury earthquakes; and (ii) geospatial data readily available in New Zealand. In particular, we utilize alternative, higher-quality data to estimate: locations of rivers and streams; location of coastline; depth to ground water; Vs30; and PGV. Most notably, a region-specific Vs30 model improves performance (Figs. 3-4), while other data variants generally have little-to-no effect, even when the “standard” and “high-quality” values differ significantly (Fig. 2). This finding is consistent with the greater sensitivity of geospatial models to Vs30, relative to any other input (Fig. 5), and has implications for modeling in locales worldwide where high-quality geospatial data is available.
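To illustrate the class of model this abstract describes, the sketch below shows a logistic-style geospatial liquefaction model: a probability is computed from a linear combination of ground-motion intensity (PGV) and geospatial proxies (Vs30, distance to water, ground-water depth). This is a minimal illustrative sketch only; the functional form is typical of geospatial liquefaction models in this literature, but every coefficient here is a hypothetical placeholder, not a value from the study or from any published model.

```python
import math

# Hypothetical coefficients for illustration only. In practice these would
# be fit to case-history data such as the 15,101 Canterbury case studies
# described in the abstract.
COEF = {
    "intercept": 8.0,
    "ln_pgv": 0.3,        # ground-motion intensity: PGV in cm/s
    "ln_vs30": -2.0,      # stiffness proxy, e.g. Vs30 estimated from topography
    "dist_water_km": -0.05,  # distance to nearest river or coastline (km)
    "gw_depth_m": -0.1,   # depth to ground water (m)
}

def liquefaction_probability(pgv, vs30, dist_water_km, gw_depth_m):
    """Logistic model: P = 1 / (1 + exp(-z)), where z is a linear
    combination of the geospatial predictors above."""
    z = (COEF["intercept"]
         + COEF["ln_pgv"] * math.log(pgv)
         + COEF["ln_vs30"] * math.log(vs30)
         + COEF["dist_water_km"] * dist_water_km
         + COEF["gw_depth_m"] * gw_depth_m)
    return 1.0 / (1.0 + math.exp(-z))

# With these placeholder coefficients, softer soil (lower Vs30) yields a
# higher predicted probability, consistent with the Vs30 sensitivity the
# abstract highlights.
soft = liquefaction_probability(pgv=30, vs30=180, dist_water_km=0.2, gw_depth_m=1.5)
stiff = liquefaction_probability(pgv=30, vs30=600, dist_water_km=0.2, gw_depth_m=1.5)
assert soft > stiff
```

Because Vs30 enters through a log term with a comparatively large coefficient, the predicted probability is far more sensitive to the Vs30 input than to the distance or depth terms, which mirrors the abstract's finding that a region-specific Vs30 model is the data upgrade that matters most.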