| Title | Developing emergency preparedness indices for local governments |
| Publication Type | thesis |
| School or College | College of Social & Behavioral Science |
| Department | Geography |
| Author | Smith, Kathryn L |
| Date | 2010 |
| Description | Emergency preparedness refers to steps taken prior to the onset of an emergency in an effort to improve the ability to respond to and recover from potential disasters. Local governments need a means of assessing their emergency preparedness in order to better focus their future efforts. Indices have been used in other aspects of hazard research as a means of summarizing complex information into an easily understood value, but have not yet been applied to emergency preparedness. The process to develop an emergency preparedness index is discussed, and three possible indices are presented: fire stations per thousand, a composite index, and a scale-adjusted regression approach. A sample analysis of each method was conducted using data from fire stations for the 29 counties in Utah to illustrate the feasibility of each method and the usefulness of the results. Strengths and weaknesses of each method are discussed in regards to which method is most appropriate to aid local governments in evaluating and monitoring their emergency preparedness. The scale-adjusted regression approach was found to be the most effective for comparing communities with widely diverging populations and resources (e.g., urban versus rural), as well as being the most practical for local officials to implement and interpret. |
| Type | Text |
| Publisher | University of Utah |
| Subject | Emergency; Index; Preparedness |
| Dissertation Institution | University of Utah |
| Dissertation Name | MS |
| Language | eng |
| Rights Management | ©Kathryn L. Smith |
| Format | application/pdf |
| Format Medium | application/pdf |
| Format Extent | 2,319,704 bytes |
| Source | original in Marriott Library Special Collections ; HV15.5 2010 .S55 |
| ARK | ark:/87278/s6z03pn4 |
| DOI | https://doi.org/10.26053/0H-KHAB-ZB00 |
| Setname | ir_etd |
| ID | 192689 |
| OCR Text | DEVELOPING EMERGENCY PREPAREDNESS INDICES FOR LOCAL GOVERNMENTS

by Kathryn L. Smith

A thesis submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Master of Science

Department of Geography

The University of Utah

August 2010

Copyright © Kathryn L. Smith 2010. All Rights Reserved.

The University of Utah Graduate School

STATEMENT OF THESIS APPROVAL

The thesis of Kathryn L. Smith has been approved by the following supervisory committee members: Thomas J. Cova, Chair (approved May 5, 2010); George F. Hepner, Member (approved May 5, 2010); Tom M. Kontuly, Member (approved May 21, 2010); and by Harvey J. Miller, Chair of the Department of Geography, and by Charles A. Wight, Dean of The Graduate School.

ABSTRACT

Emergency preparedness refers to steps taken prior to the onset of an emergency in an effort to improve the ability to respond to and recover from potential disasters. Local governments need a means of assessing their emergency preparedness in order to better focus their future efforts. Indices have been used in other aspects of hazard research as a means of summarizing complex information into an easily understood value, but have not yet been applied to emergency preparedness. The process to develop an emergency preparedness index is discussed, and three possible indices are presented: fire stations per thousand, a composite index, and a scale-adjusted regression approach. A sample analysis of each method was conducted using fire station data for the 29 counties in Utah to illustrate the feasibility of each method and the usefulness of the results. Strengths and weaknesses of each method are discussed with regard to which method is most appropriate to aid local governments in evaluating and monitoring their emergency preparedness.
The scale-adjusted regression approach was found to be the most effective for comparing communities with widely diverging populations and resources (e.g., urban versus rural), as well as being the most practical for local officials to implement and interpret.

TABLE OF CONTENTS

ABSTRACT ... iii
LIST OF TABLES ... v
LIST OF FIGURES ... vi
1. INTRODUCTION ... 1
   1.1 Background ... 1
   1.2 Indices ... 2
   1.3 Research Objectives ... 3
2. LITERATURE REVIEW ... 5
   2.1 Historical Use of Indices ... 5
   2.2 Indices for Hazards Research ... 6
3. METHODOLOGY ... 11
   3.1 Study Area ... 11
   3.2 Data Collection ... 13
   3.3 Index Design and Development ... 14
4. ANALYSIS ... 22
   4.1 Stations Per Thousand ... 22
   4.2 Composite Index ... 28
   4.3 Scale-Adjusted Regression ... 30
5. DISCUSSION ... 42
   5.1 Limitations ... 42
   5.2 Stations Per Thousand ... 44
   5.3 Composite Index ... 45
   5.4 Scale-Adjusted Regression ... 47
   5.5 Utah Preparedness Assessment ... 49
6. CONCLUSION ... 56
REFERENCES ... 57

LIST OF TABLES

Table / Page
4.1 SPT results sorted by county name and by ranked score ... 23
4.2 Resources correlation matrix ... 27
4.3 Composite Index results sorted by county name and by ranked score ... 29
4.4 Scale-Adjusted Regression Index results ... 32
4.5 SPT Index versus Scale-Adjusted Regression Index ranked results ... 34
4.6 Scale-Adjusted Regression by state name and ranked score ... 41

LIST OF FIGURES

Figure / Page
3.1 Study area ... 12
3.2 Composite Index framework ... 16
4.1 Stations Per Thousand Index chart comparisons ... 24
4.2 County resource comparisons ... 26
4.3 Stations versus population for rural Utah counties ... 27
4.4 Stations versus population for urban Utah counties ... 27
4.5 Scale-Adjusted Regression results scatterplot ... 31
4.6 Scale-Adjusted Regression residuals map ... 33
4.7 SPT versus Scale-Adjusted Regression scatterplot ... 35
4.8 SPT versus Scale-Adjusted Regression line graph ... 36
4.9 Scale-Adjusted Regression Moran's I ... 37
4.10 SPT Moran's I ... 38
4.11 Scale-Adjusted Regression results by state scatterplot ... 40
4.12 Scale-Adjusted Regression State residuals map ... 40
1. INTRODUCTION

1.1 Background

Every community is confronted with a unique set of potential threats, including natural, technological, and social hazards, which have the potential to damage property, cause injury, or result in loss of life (Burton et al. 1993). The impact of these hazards can result in a state of emergency when the demands of the event exceed a community's capacity to respond. Actions taken prior to such a disaster can lessen the effects of emergencies that do occur, as well as provide for a more efficient response and recovery. Preparedness, the ability of communities to respond to and recover from disasters, varies geographically.

Local governments, such as cities and counties, are the first line of defense and are primarily responsible for responding to emergencies resulting from the impact of these threats (Lindell et al. 2007). They provide most of the immediate emergency response resources and have the authority to enact policies to minimize potential threats prior to an emergency. They also facilitate proper community recovery following an emergency. However, not all governments are equally prepared to fulfill these responsibilities.

Although local governments are responsible for their community's emergency preparedness, they frequently lack an adequate means by which to measure their state of readiness or level of preparedness. Preparedness is difficult, if not impossible, to define and measure in absolute terms, such as a specific numerical value. It is not realistic to expect a community to achieve a state in which it can declare itself "prepared" for any type of emergency event. However, there is a need for local governments, emergency managers, and emergency responders to assess their level of preparedness in relative or comparative terms based on representative characteristics.
A relative measure of preparedness will help identify needed improvements to preparedness efforts, allow comparisons between communities, and provide a means to monitor the progress of emergency preparedness programs over time (Simpson and Katirai 2006).

1.2 Indices

Indices are a common means of summarizing large amounts of complex information into a simplified value. They are commonly based on a limited number of indicators, or variables, that serve as a proxy or simplification for phenomena that are not directly measurable or that change over time (Cobb and Rixford 1998; Dwyer et al. 2004). Indices have been used in many contexts as a means of monitoring various aspects of society, including the economy, quality of life, and other public trends. Indices have also been used in hazards research, but only to a limited extent with regard to emergency preparedness, because it is a new research topic.

A preparedness index has the potential to provide a means of assessing emergency capabilities and monitoring progress over time and space. A relative assessment of preparedness would allow local governments to identify gaps or weaknesses in preparedness efforts where future effort and funding can be focused. Results from indices are often fairly simple to understand, and can therefore be used by a variety of parties, including emergency responders, emergency managers, and government officials. Because emergency preparedness efforts do not often produce tangible results, it is particularly useful to gain support from political officials by providing measurable results of progress.

1.3 Research Objectives

The three research objectives for this thesis are as follows:

Develop a set of indices composed of key variables that can be used by local government emergency planners to assess and monitor their level of emergency preparedness.
Evaluation of emergency preparedness programs through an index, or suite of indices, may improve understanding of which factors contribute most to preparedness and provide a means of identifying shortcomings in preparedness efforts. This information can be used to inform responders, the public, and policy makers about conditions that need to be improved to better provide for the safety of the community. This knowledge may lead to improvements or modifications in preparedness activities. This thesis will discuss the process by which each index is created and provide the reasoning for the indicators selected to represent overall preparedness. The intended result is a collection of indices that represent an initial assessment of preparedness that is easily interpreted and applicable across different geographic areas and over time. The preparedness assets measured will largely be based on a community-level emergency, or one in which the response is handled entirely by local resources. However, these measures are also relevant to more extreme disaster events, as they represent the response capabilities that the community would rely on in the initial hours after the event, prior to the arrival of state or federal aid.

Conduct a case study that applies the proposed indices.

A case study is necessary to demonstrate the ability of the proposed indices to effectively identify geographic variation in emergency preparedness. This portion of the thesis examines variations in preparedness in different geographic areas. The baseline established by this study allows for the assessment of changes over time in a future research study. Furthermore, this discussion of the case study provides an example of how the index can be applied, the responses analyzed, and the results presented in a manner that could motivate positive change in local emergency preparedness strategies.
Identify indicators that account for variations in preparedness scores and suggest measures for improvement in areas with low preparedness scores.

Common trends in the effectiveness of preparedness efforts may be identified by analyzing the results of the case study. Similarities may be found among various types of communities that commonly contribute to higher or lower preparedness scores. For example, smaller communities may have access to fewer resources and therefore score lower in preparedness. Analyzing the inputs to the index may provide insight into which factors contribute to either a high or a low preparedness score. This information can then be utilized by local governments to determine where to focus their preparedness efforts. Those factors that contribute to a relatively high preparedness score should be made a priority in areas where they have not been addressed previously. Low preparedness scores may be the result of neglect of preparedness resources or tasks, which also need to be given higher priority. Because emergency preparedness programs often have limited resources, this knowledge will help focus efforts where they will lead to the greatest amount of improvement.

2. LITERATURE REVIEW

This section will establish the foundation for this research by providing a review of the historical uses of indices as a research tool. The review will first discuss several early uses of indices and describe the progression of index-based research into other disciplines, leading up to their use in hazards research. The second section will describe the most common forms of indices in hazards research: risk indices, which measure the threat or magnitude of impact of a particular hazard, and vulnerability indices, which measure the social characteristics that contribute to a community's vulnerability to hazards. Finally, the need for additional research specifically relating to measuring preparedness will be established.
2.1 Historical Use of Indices

Indices have been used for a variety of purposes for many years. It is suggested that the first documented use of indicators was in the 1830s in Europe and the United States, as physicians and statisticians began to use social and demographic data to try to link disease with poverty and other social conditions (Cobb and Rixford 1998). Cobb and Rixford (1998) summarize how subsequent studies sought to identify links between individual characteristics and various societal concerns, including crime, poor labor conditions, education, health, and general social conditions. A shift in focus occurred in the 1930s, spurred by the huge social challenges brought on by the Great Depression, and research centered on the economy, business cycles, unemployment, and living conditions. A shortcoming of many of these early attempts was that results were often strictly descriptive and offered little insight into underlying problems, and thus did little to inspire change (Cobb and Rixford 1998).

Despite initial research challenges, economic indices grew increasingly successful in influencing policymaking by the 1960s, and the need for adequate measures to monitor rapidly changing social conditions inspired a movement in social indicator research from the 1960s to the early 1970s. This period was also a time of debate over research procedures and theories. Some researchers believed that the theoretical framework behind social indices was premature and underdeveloped, and that basic research and better data were needed before indices could be used to guide public policy. The implementation of an inductive approach, gathering descriptive information followed by the development of meaningful generalizations and analysis of social trends, was also considered necessary. Another change in this era was the inclusion of subjective measurements, such as personal interpretations, in indices (Cobb and Rixford 1998).
The social indicators movement stalled during the 1980s, due to criticism that indicators were of limited usefulness, as well as other changes in political trends. However, other agencies continued to pursue the use of indices, including the Organization for Economic Co-operation and Development's (OECD) publication of Living Conditions in OECD Countries, the United Nations Human Development Index, the World Health Organization's assessments in public health, as well as environmental monitoring by the Environmental Protection Agency and the Council on Environmental Quality (Cobb and Rixford 1998).

2.2 Indices for Hazards Research

2.2.1 Risk Indices

Indices were not utilized in hazards research until relatively recently. Most focus on: 1) assessing the level of risk a community faces from a particular hazard, 2) social characteristics that contribute to vulnerability, or 3) a combination of risk and vulnerability. For example, the Earthquake Disaster Risk Index (EDRI) (Davidson and Shah 1997) identifies factors that contribute to earthquake risk to allow comparison of the overall earthquake risk of cities worldwide. The index addressed five key factors: the geophysical hazard level, exposure, vulnerability, external context such as economic and political impacts, and emergency response and recovery capabilities. The EDRI demonstrates that although the index may show that two cities both have high levels of risk, the factors responsible for that risk may vary, such as differences in response capabilities, the frequency of occurrence of earthquakes, or the number of people or structures exposed. A similar approach was taken in creating the Hurricane Disaster Risk Index (HDRI) (Davidson and Lambert 2001).

The Munich Re Group (2003) developed a natural hazard index to measure the loss potential for the world's major metropolitan centers.
These areas have high concentrations of people and property values, and therefore a high potential for hazard-related losses that could have widespread economic impacts. In contrast to the EDRI and HDRI, this study considered several natural hazards at once, including earthquakes, floods, and windstorms. Furthermore, whereas the EDRI was a relative comparison of risk, this study attempted to include the order of magnitude of the absolute loss potential. The study measured average annual losses and probable maximum losses, vulnerability in terms of the quality and density of construction, and other measures of exposure such as the average value of households, gross domestic product, and the global economic significance of the city. The authors' findings show that the index is more heavily influenced by the value of what is exposed to the hazard than by actual hazard vulnerability. In other words, high concentrations of people and property values have a higher potential for losses (Munich Re 2003).

Cardona (2005) proposed a collection of four composite indicators rather than a single index. The composite indicators address a country's financial ability to cope with a disaster, the impact of small-scale recurrent or chronic events on local areas, socially and economically vulnerable populations, as well as risk management. Rather than an overall comparison of disaster risk, comparisons can be made on specific aspects.

The United Nations Development Program (UNDP) (2004) investigated the effects of natural hazards on developing countries. Disaster losses can interact with, or possibly aggravate, other financial, political, health, or environmental challenges that afflict developing countries. The study calls attention to these potential complications, which require increased focus on disaster risk reduction in development policy and planning.
2.2.2 Vulnerability Indices

Other studies have turned focus away from geophysical risks and have instead taken an in-depth look at human vulnerability to hazards. For example, Cutter et al. (2003) present social inequalities that influence human susceptibility to harm and the ability to respond to an event. These traits include gender, age, race, socioeconomic status, and housing and lifeline quality. The index scores were mapped to demonstrate geographic differences in social vulnerability, with general trends showing higher vulnerability scores in areas with greater ethnic and racial inequalities and/or rapid population growth. Dwyer et al. (2004) conducted a study to analyze similar human characteristics based on individual perceptions of what makes a person vulnerable to a hazard, rather than direct data on socioeconomic characteristics. Through the use of decision tree analysis, it was determined that one variable alone typically did not determine vulnerability; rather, two or more variables combined led to greater vulnerability. Adger et al. (2004) propose an additional methodology for examining vulnerability and adaptive capacity within the context of risks resulting from climate change.

A different perspective on hazard vulnerability was presented by the South Pacific Applied Geoscience Commission (Kaly et al. 1999). The Environmental Vulnerability Index (EVI) looks at the effects of natural hazards on the environment in Small Island Developing States (SIDS), with the assumption that risks to the environment will eventually translate into risks to humans through their dependence on natural resources. Similar to Cardona's (2005) risk index, the EVI is composed of three subindices which are later aggregated to give an overall measure of vulnerability based on indicators of risk, resilience, and environmental integrity or degradation.
2.2.3 Preparedness Indices

Most indices place little emphasis on preparedness efforts taken prior to the onset of a disaster. Although emergency or risk management programs have been included as contributing factors in some indices (Davidson and Shah 1997; Davidson and Lambert 2001; Cardona 2005), they are rarely the focus of the study. A notable exception was one of the earlier uses of an index in hazards research: the State Capabilities Assessment for Readiness (CAR), a pilot study undertaken jointly by the Federal Emergency Management Agency (FEMA) and the National Emergency Management Association (NEMA) in 1997. The CAR was designed as a self-assessment of performance for state emergency management agencies. States ranked their abilities in characteristics grouped into 13 categories, or Emergency Management Functions (EMFs), which provided a profile of their capabilities. A summary report identified national priority emphasis areas for improvement, as well as national emergency management strengths. A 100 percent return rate for the CAR assessment indicated substantial support for the project. However, the project was not continued after 2000. Different versions of assessments have been utilized in subsequent years; however, because of a lack of consistency in methodology and in the capabilities being assessed, it is difficult to compare results over time.

The Risk Management Index (RMI) proposed by Carreno et al. (2007) is another assessment with a specific focus on the effectiveness or performance of risk management programs. Similar in concept to Cardona (2005), this index considers the perception and identification of risk, risk reduction efforts, response and recovery capabilities, and governance and financial protection practices to compare the performance of risk management agencies.
Another example is the Disaster Resiliency Index (DRi) proposed by Simpson and Katirai (2006), in which multiple measures of preparedness are evaluated against the vulnerability of the community. This draws attention to the concept that a measure of preparedness is incomplete without some understanding of the risks and vulnerabilities a community faces. By addressing preparedness in the context of vulnerability, the authors demonstrate that greater disaster resilience could be a result of either greater capacity (preparedness) or lower exposure (vulnerability).

Although these efforts show strides towards better preparedness assessments, there is still much room for improvement. Indices of preparedness remain a relatively untapped approach, lacking a broad, robust research foundation. The need for a methodology that provides a geographic assessment of preparedness and that is administered consistently to allow for comparisons over time and space has not yet been met. In addition, previous indices have largely focused at the state or national level and have not addressed the more immediate preparedness needs of local governments. Given these deficiencies, the development of preparedness indices warrants further research.

3. METHODOLOGY

This chapter outlines the procedures used to develop three possible preparedness indices. The basic framework for each method will be identified. The study area and data sources used to test and evaluate the proposed indices will be described, as well as the criteria for selecting indicators to represent preparedness. For the proposed composite index, additional steps for the mathematical combination of multiple variables into a single index value will be discussed.

3.1 Study Area

The State of Utah, located in the Rocky Mountains of the western United States, is composed of 29 counties that will serve as the case study for developing the proposed indices.
Counties are an appropriate assessment level for this research, as many preparedness activities, such as emergency management, are administered at this level of government, particularly in the rural counties of Utah. There are significant differences in topography, climate, and underlying geology across the state, which contribute to a wide variety of potential natural hazards that communities face. There are also variations in the risk of technological hazards. Salt Lake County and the other urban counties along the Wasatch Front are believed to face a higher terrorist threat than rural counties such as Kane or Garfield. However, all counties are likely susceptible to technological hazards such as a hazardous material release along a highway or railroad. These counties also exhibit a variety of social characteristics and community types, from major urban centers to small, isolated rural towns. County populations range from one thousand to over one million (Figure 3.1), and counties are further distinguished by the varying availability of preparedness resources. These counties provide a useful context to assess whether the proposed indices can be effectively applied over a variety of geographic areas, and particularly whether they can handle the challenge of comparing widely varying populations. This initial series of index results can also serve as a baseline for future assessments to monitor change in preparedness over time.

Figure 3.1 Study area

3.2 Data Collection

An initial step in designing an index is determining which variables, or indicators, will be included. A review of the literature provided numerous criteria for identifying robust indicators that were considered when developing the proposed preparedness indices (Dwyer et al. 2004; Carreno et al. 2007; Malczewski 2000). Of primary importance is the need for each indicator to adequately address the research objectives and accurately represent local government preparedness.
The indicators should attempt to capture the important elements of preparedness while avoiding an excessive number of measurements. In order to meet the objectives identified for this research, the indicators must be applicable at different scales and sensitive to change over time. Furthermore, indicators should be independent (i.e., not measure the same thing) as well as quantitative or measurable by a readily understood method. Finally, they should be easy to understand and interpret while adequately reflecting the complexity of the concepts represented. To accomplish this, the appropriate data must be available and of good quality. The methods for developing the indices selected for this study were driven primarily by the research questions but were also influenced by data availability.

Effective emergency preparedness depends on multiple parties, including fire, police, emergency medical services, emergency management planning, and other administrative and governmental functions, including budgeting and the legal context. Data on many of these elements are not readily available. Although primary data collection could be utilized to acquire information on all emergency preparedness elements, this study focuses on available secondary data sources to reduce the cost of the proposed approach.

The National Fire Station Database Survey, conducted by Explore Information Services, LLC in the spring of 2009, represents a valuable source for assessing the effectiveness of the proposed preparedness indices. The survey achieved a 70 percent response rate nationally, and 72 percent in Utah. The responses from all participating fire stations are aggregated to the county level by Explore Information Services prior to providing the data, so as to avoid sharing personally identifying information about an individual fire station or its personnel.
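The independence criterion above (indicators should not measure the same thing) can be checked empirically with a pairwise correlation matrix, as the analysis later does for the resource variables (Table 4.2). A minimal sketch in Python; the county counts and the 0.9 cutoff are illustrative assumptions, not the survey data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def redundant_pairs(indicators, threshold=0.9):
    """Return indicator pairs whose |r| exceeds the threshold, i.e.
    candidates that largely measure the same thing."""
    names = list(indicators)
    flagged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(indicators[a], indicators[b])
            if abs(r) > threshold:
                flagged.append((a, b, round(r, 3)))
    return flagged

# Hypothetical county-level counts (illustrative only, not the survey data).
candidates = {
    "stations":  [3, 5, 8, 12, 40],
    "vehicles":  [7, 11, 17, 25, 85],
    "personnel": [20, 15, 60, 90, 400],
}
```

With these made-up counts, stations and vehicles track each other almost perfectly, so `redundant_pairs` would flag them; a designer might then keep only one of the two as an indicator.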
These data can be used to represent a subset of elements that would contribute to an overall measure of a community's preparedness in order to demonstrate the proposed methodology. The survey results contain information on the number of fire stations, the number and type of response vehicles, and the number of personnel at each station. Additional personnel information includes whether they are full-time or volunteers, as well as their level of training in specialized skills such as emergency medical services and hazardous material response. Other details include the type of dispatch facility, participation in mutual-aid or first alarm aid agreements, presence of backup power to critical systems, and other equipment such as two-way radios. These represent many of the fundamental elements of a good emergency preparedness program identified in the literature review and provide a valuable information source for testing the proposed indices. Although the focus of this study is fire and paramedic response, it could easily be adapted to include other emergency preparedness elements. These data were sufficient to demonstrate the proposed methodology.

3.3 Index Design and Development

Three index methodologies are developed and compared to determine which best meets the research objectives. The first is stations per thousand (SPT), a simple measure of a key response resource relative to the population served. The second is a composite index based on several subindices, each comprised of multiple variables. In addition to a description of the multiple variables incorporated in this index, options for scaling and weighting the variables will also be addressed. Third, a variation of the SPT method is presented that relies on a scale-adjusted regression comparison of stations and population.
3.3.1 Fire Stations Per Thousand

One method of creating a preparedness index is to derive a formula that describes the relationship between a population and its preparedness resources. In theory, a larger population should have more resources to serve its needs to be considered equally prepared, and two administrative units of the same population should have a similar level of resources. U.S. Census Bureau population estimates for 2008 can be used in conjunction with the fire station resources to derive two such formulas. The first and simpler method is to calculate a stations per thousand (SPT) value. This SPT index allows local governments to observe how many preparedness resources they have available for the population they serve and compare that ratio with other counties. Their resource ratio should be similar to counties with similar population sizes. Counties with larger populations are expected to have a proportionately higher number of resources. This index can be computed for three key resources (stations, response vehicles, and personnel) as follows:

    Resources Per Thousand = (resource count / county population) x 1,000    (1)

3.3.2 Composite Index

The Stations Per Thousand index is dependent on a single proxy measure representing preparedness. An alternative method is to construct a composite index based on multiple indicators, similar to the process used for suitability mapping (Hepner 1984). A preliminary framework for the composite index design will be adapted from that used by Davidson and Shah (1997). As shown in Figure 3.2, it is proposed that an index be composed of three subindices based on primary contributors to preparedness, or the ability to respond to an emergency: resources, personnel, and sustainability. Resources are physical assets such as vehicles and communication equipment. The personnel subindex measures the presence of trained staff.
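The per-thousand normalization in Equation 1, which also underlies several of the composite indicators described here, reduces to a one-line computation. A minimal sketch (the county figures below are hypothetical, not survey data):

```python
def per_thousand(resource_count: int, population: int) -> float:
    """Resource units (stations, vehicles, or personnel) per 1,000 residents."""
    return resource_count / population * 1000

# Hypothetical county: 4 stations serving 5,000 residents.
spt = per_thousand(4, 5000)
print(round(spt, 2))  # prints 0.8
```

The same helper applies unchanged to vehicles or personnel, which is why a single formula covers all three key resources.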
Sustainability refers to additional measures or capabilities for extraordinary response needs, such as backup power, adequate water supply for sustained fire suppression, and agreements with other agencies for additional response assistance. Several indicators will be normalized by population to consider preparedness efforts in context with the vulnerability of the community.

[Figure 3.2 Composite Index framework. The Preparedness index is built from three subindices: Resources (R), with the indicators stations/1,000 population, vehicles/1,000 population, radios, and dispatch center; Personnel (P), with personnel/1,000 population, EMS training, HAZMAT training, and staffing type; and Sustainability (S), with mutual aid agreements, first alarm aid agreements, gallons per minute/2 hrs, dispatch backup, radio repeater backup power, and station backup power.]

A series of scaling techniques will be utilized to assign each station a score between 0 and 1 for each indicator. The average of the station scores will be calculated for each county to determine an overall county score for each indicator. The sum of these indicator scores will then be calculated for each subindex. A final composite index score will represent the sum of the three subindices, or an overall preparedness score.

3.3.2.1 Scaling considerations. Indices composed of multiple variables, such as the proposed composite index, must address issues relating to the wide variety of data types and formats. In order to summarize the data in an index, they must be standardized into common values or made "unitless." The fire station data contained several different levels of measurement, including binary (a particular resource or attribute is present or is not), ordinal, and interval values. A variety of scaling approaches have been used in other indices, including linear, percentage, and statistical methods (Simpson and Katirai 2006).
Malczewski (1999, 2000) discusses several multicriteria decision analysis techniques that can also be utilized for aggregating geographical data into unidimensional values, including linear scale transformations such as the score range procedure, the midvalue method of value function curves, probability, and fuzzy set membership. The score range procedure (Malczewski 1999, Kaly et al. 1999) is expected to be an appropriate scaling measure for this study. This method is relatively simple and meets the requirement for allowing data to be expressed in similar magnitudes. This is accomplished by utilizing the maximum and minimum observed values and deriving a value between 0 and 1 according to Equation 2:

    X'ij = (Xij - Xj_min) / Rj    (2)

where X'ij is the standardized score for the ith county and the jth indicator, Xij is the raw score for indicator j in county i, Xj_min is the minimum score for the jth indicator, and Rj is the range of values for that indicator over all counties. The higher (or closer to 1) the value is, the more favorable it is. For indicators with binary values, such as presence or absence of mutual aid agreements, a 0 is assigned to all "no" responses and a 1 for each "yes." The Stations Per Thousand index proposed previously will also be considered as a component within the composite index. This will be used as a means of putting preparedness efforts in the context of the needs of the community, as counties with larger populations require a greater number of resources. As shown in Equation 1, the values for number of fire stations, personnel, and response vehicles will be divided by the total population in their respective county to derive a "resource per thousand" value. These per thousand values will then be transformed onto the 0 to 1 scale as described above, and then included in their respective subindex total.

3.3.2.2 Weighting considerations. It is common in index design to assign weights to the indicators or subindices (Davidson and Shah 1997; Carreno et al. 2006).
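The score range procedure can be sketched as a small helper; the raw indicator scores below are hypothetical:

```python
def score_range(values):
    """Score range procedure: rescale raw indicator values to [0, 1]
    via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # no spread across counties: nothing to scale
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

raw = [2.0, 5.0, 11.0]                 # hypothetical raw indicator scores
print(score_range(raw))
```

The minimum maps to 0 and the maximum to 1, so scores from indicators with very different units become directly comparable.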
A weight is a value assigned to an evaluation criterion that indicates its importance relative to other criteria under consideration (Malczewski 1999). Weights allow great flexibility by enabling the index to account for variations in the amount a certain indicator is believed to contribute to the overall observed phenomenon. Weights are typically designed to sum to 1, with larger weights indicating a higher level of importance and greater contribution to the outcome (Malczewski 1999). Weights are inherently subjective because an analyst must decide which variables will be assigned various weights, often with input from decision makers. A variety of methods have been developed to ensure structured subjectivity in weighting, such that appropriate justification can be made for the weights selected. These include several ranking and rating methods in both linear and nonlinear mathematical combinations, pairwise comparisons through the analytic hierarchy process (AHP), and trade-off analysis methods such as the swing weights technique (Hopkins 1977, Malczewski 1999, Malczewski 2000). Comparisons of these methods presented by Hopkins (1977) and Malczewski (1999) elaborate on the circumstances in which each method is most suitable. A researcher may also appeal to additional expert opinions to validate weighting choices (Davidson and Shah 1997). Quantitative methods such as regression analysis or factor analysis can also be used to minimize the researcher's bias (Simpson and Katirai 2006, Dwyer et al. 2004). Some researchers choose not to use weights in order to observe the equally weighted outcomes of their model, or to avoid these subjective decisions in the absence of a defensible method for assigning weights (Cutter et al. 2003). In this study, weighting will only be introduced in the composite scaled index.
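However the weights are chosen, they typically feed a weighted linear combination of the scaled indicator scores. A minimal sketch, with hypothetical scores and weights that sum to 1:

```python
def weighted_score(scores, weights):
    """Weighted linear combination of scaled indicator scores.
    Weights are expected to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical scaled scores for three indicators and their weights.
scores = [0.8, 0.5, 1.0]
weights = [0.5, 0.3, 0.2]
print(round(weighted_score(scores, weights), 2))  # prints 0.75
```

With equal weights this reduces to a simple average, which corresponds to the unweighted option some researchers prefer.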
Several indicators from the fire station survey are ordinal in nature, such as personnel training levels in emergency medical services (EMS) or hazardous material (HAZMAT) response. Four sequential levels of training were reported for both EMS and HAZMAT. There is an inherent ranking to the training certification levels because each successive level requires additional skill, knowledge, and responsibility. However, this ranking has not been defined on a specific numerical scale. In order to consider these indicators on the same 0 to 1 scale as the remainder of the composite index, a weighting system will be introduced. Four training levels could easily break down into equal interval categories that demonstrate the increasing value of higher levels of training:

    Emergency Medical Services Training      HAZMAT Training
    0     No training                        0     No training
    0.25  EMS First Responder                0.25  Awareness
    0.50  EMS Basic                          0.50  Operations
    0.75  Advanced Life Support (ALS)        0.75  Technician
    1     ALS Paramedic                      1     Technician w/ WMD training

However, expert opinions from Utah training officials can be consulted to determine whether the benefits of additional training do in fact increase in equal increments, or whether certain levels are more rigorous and believed to provide more benefit. Based on the information provided by the training officials, the number of hours required for each level of training provides an objective reference for refining the training scale. A combination of the hours of training and the expert opinions of the value of each level resulted in the following scales:

    EMS Training                             HAZMAT Training
    0     No training                        0     No training
    0.1   EMS First Responder                0.1   Awareness
    0.3   EMS Basic                          0.4   Operations
    0.5   Advanced Life Support              0.8   Technician
    1     ALS Paramedic                      1     Technician w/ WMD training

3.3.3 Scale-Adjusted Regression

An alternative population-based index, similar to the SPT, is a scale-adjusted regression method.
This method similarly seeks to describe the relationship between population and fire station resources. However, the scale-adjusted regression index is introduced to address potential issues arising from widely ranging population values between urban and rural counties in Utah. Based on the framework utilized by Sutton (2002) in developing a scale-free urban sprawl index, log-log regression can be utilized to provide a means for comparing communities with widely diverging populations on a similar scale, based on Equation 4:

    ln(Stations) = a + b x ln(Population) + e    (4)

Although this approach has not previously been utilized in hazards research, it is anticipated that this will be an appropriate and useful application of Sutton's methodology. Officials will be able to assess their relative preparedness based on their position above or below the regression line.

This chapter introduced the study area, data, and proposed methods for developing three preparedness indices. The proposed methods to be compared are the SPT, the composite index, and the scale-adjusted regression index. The next chapter will describe the results as each proposed method is carried out.

4. ANALYSIS

The stations per thousand (SPT), composite, and scale-adjusted regression indices were implemented according to the methodology described in the previous section. The first section displays the results from the SPT index, the next shows the composite index results, and finally the scale-adjusted regression index results are presented. The final section also includes a comparison between the SPT and scale-adjusted regression results, a brief discussion of spatial statistics for the regression index, and an evaluation of Utah resources in context with national fire station resources.

4.1 Stations Per Thousand (SPT)

The first index to be considered was a comparison of key resources, such as stations, response vehicles, and personnel, to the population they serve.
Stations per thousand (SPT), for example, is a simple assessment of a key emergency response resource that provides coarse yet meaningful information about a county's preparedness. Counties with a higher ratio of stations for their population would be considered better prepared because they are better equipped to respond to an emergency. The results of this index are presented in Table 4.1. These resource variables were also considered in a series of scatter plots to evaluate trends in the number of resources relative to each county's population. However, as shown in Figure 4.1, these charts were strongly influenced by the outlying values of the six most populous counties: Salt Lake, Utah, Davis, Weber, Washington, and Cache.

Table 4.1 SPT results sorted by county name and by ranked score

    Sorted by county name        Ranked by SPT score
    County        SPT            County        SPT
    BEAVER        0.77           WAYNE         2.28
    BOX ELDER     0.29           DAGGETT       2.08
    CACHE         0.12           PIUTE         2.07
    CARBON        0.25           GARFIELD      1.98
    DAGGETT       2.08           RICH          1.76
    DAVIS         0.05           KANE          0.90
    DUCHESNE      0.36           MILLARD       0.89
    EMERY         0.75           BEAVER        0.77
    GARFIELD      1.98           EMERY         0.75
    GRAND         0.64           JUAB          0.70
    IRON          0.24           GRAND         0.64
    JUAB          0.70           SAN JUAN      0.53
    KANE          0.90           SEVIER        0.39
    MILLARD       0.89           SANPETE       0.37
    MORGAN        0.21           DUCHESNE      0.36
    PIUTE         2.07           SUMMIT        0.33
    RICH          1.76           BOX ELDER     0.29
    SALT LAKE     0.06           TOOELE        0.26
    SAN JUAN      0.53           CARBON        0.25
    SANPETE       0.37           IRON          0.24
    SEVIER        0.39           MORGAN        0.21
    SUMMIT        0.33           UINTAH        0.20
    TOOELE        0.26           WASHINGTON    0.19
    UINTAH        0.20           WASATCH       0.18
    UTAH          0.06           CACHE         0.12
    WASATCH       0.18           WEBER         0.09
    WASHINGTON    0.19           UTAH          0.06
    WAYNE         2.28           SALT LAKE     0.06
    WEBER         0.09           DAVIS         0.05
[Figure 4.1 Stations Per Thousand Index chart comparison: a) stations versus population, b) personnel versus population, and c) vehicles versus population]

Not only should the amount of resources be appropriate for the size of a county's population, but cross-referencing them with each other helps determine if staffing, vehicle needs, and stations are all reasonably proportionate. It was assumed that counties with more stations should have similarly larger numbers of personnel and vehicles to operate the stations, which Figure 4.2 shows is generally the case. Due to the outlying population values, the six most populous counties were excluded in these results, with the exception of Weber County. This was to draw attention to the observation that Weber County has a noticeably lower number of vehicles and personnel compared to its number of stations, and a low number of vehicles for the number of personnel. Because of the similarity in the results between stations, personnel, and vehicles, a correlation matrix was utilized to determine whether each variable contributed additional insight into each county's preparedness or if they are simply a reflection or reinforcement of each other. There is typically little variation in the number of personnel it takes to staff a fire station and operate its respective vehicles. The correlation matrix results in Table 4.2 indicate a strong relationship between each of the variables, which suggests that there may not be a significant benefit in considering these indicators independently.
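A correlation check of the kind summarized in Table 4.2 can be sketched with a small Pearson correlation helper; the station and personnel counts below are hypothetical, not the Utah survey data:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical county counts of stations and personnel.
stations = [2, 5, 9, 14, 60]
personnel = [20, 55, 90, 150, 700]
print(round(pearson(stations, personnel), 2))  # close to 1: strongly correlated
```

Coefficients near 1, as in Table 4.2, are the signal that two indicators largely duplicate each other.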
Based on this relationship, the Personnel Per Thousand and Vehicles Per Thousand indices were excluded from further analysis, and Stations Per Thousand will be the only index using this method referenced in the remainder of this study. The stations versus population chart was reconstructed to create two new charts: one showing only the rural counties (Figure 4.3), and another showing the six most populous counties (Figure 4.4). The rural counties chart shows a much more even distribution, with a general trend for counties with larger populations to have a higher number of resources. A trendline was added to each graph for further aid in interpreting these results.

[Figure 4.2 County resource comparisons: a) personnel per station, b) vehicles per station, and c) personnel per vehicle]

Table 4.2 Resources correlation matrix

              Station   Crew    Trucks   Pop
    Station   1
    Crew      0.89      1
    Trucks    0.90      0.85    1
    Pop       0.94      0.82    0.85     1

[Figure 4.3 Stations versus population for rural Utah counties]

[Figure 4.4 Stations versus population for urban Utah counties]

This trendline is a means of facilitating the comparison of each county's resources relative to the other counties. In Figure 4.3, Millard County's position above the line shows that it has a comparatively high number of stations for its population. Conversely, Uintah County's position below the line shows that it has fewer resources relative to its population.
Millard County could be considered resource rich in terms of preparedness, whereas Uintah County could demonstrate that it is more in need of additional resources to be brought up to the mean in Utah. For the urban counties, Washington County has a high number of stations for its population, while Davis County has comparatively few.

4.2 Composite Index

As described in Section 3, a series of scaling procedures were executed to bring the data into comparable units of measurement, which were then summarized into a composite preparedness score. Note that Daggett County was excluded from this index due to a lack of data for most of the indicators. Table 4.3 shows the results of the subindices and the composite index. This allows for a comparison of how counties performed in each of the three categories. For example, many of the rural counties score very well in the resources index, but less well in the personnel index, due either to lower training scores or to lower-scoring staffing types for stations with volunteer rather than career firefighters. In general, the most populous counties fall in the upper half of the ranked scores. However, there are several rural counties that also fall in the upper ranks. For example, Garfield and Wayne counties both rank high in preparedness as defined here, largely due to high scores in the Resources and Sustainability subindices. Several of the lower ranking counties, such as Emery and San Juan, have particularly low values in the Personnel index. This indicates a need to focus future efforts on staffing and training needs in those counties.
This index demonstrates the usefulness of considering multiple preparedness indicators, because it allows officials to isolate program elements that contribute to their overall preparedness, as well as identify which may be most in need of improvement relative to similar jurisdictions.

Table 4.3 Composite Index results sorted by county name and by ranked score

    County       Resources   Personnel   Sustainability   Composite      Ranked
    BEAVER       2.44        0.72        3.00             6.16           SALT LAKE     10.61
    BOX ELDER    2.21        1.23        3.65             7.08           GARFIELD       9.54
    CACHE        2.08        1.66        4.87             8.61           WEBER          8.89
    CARBON       1.89        1.64        4.75             8.28           DAVIS          8.78
    DAVIS        2.02        0.19        0.00             1.10           WAYNE          8.68
    DUCHESNE     2.32        2.09        4.67             8.78           TOOELE         8.62
    EMERY        2.53        1.42        2.67             6.40           CACHE          8.61
    GARFIELD     3.19        0.70        2.75             5.97           GRAND          8.55
    GRAND        2.82        1.44        4.92             9.54           CARBON         8.28
    IRON         2.19        1.24        4.50             8.55           WASATCH        8.03
    JUAB         2.05        1.45        3.89             7.53           RICH           7.98
    KANE         2.65        1.48        2.17             5.70           UTAH           7.73
    MILLARD      2.78        1.00        3.00             6.65           SUMMIT         7.70
    MORGAN       2.25        1.39        2.83             7.01           MORGAN         7.62
    PIUTE        2.92        1.24        4.50             7.98           WASHINGTON     7.59
    RICH         3.60        0.84        3.00             6.76           IRON           7.53
    SALT LAKE    2.00        1.14        3.00             7.73           SANPETE        7.28
    SAN JUAN     2.21        2.82        5.80             10.61          BOX ELDER      7.08
    SANPETE      2.34        0.62        2.25             5.08           SEVIER         7.01
    SEVIER       2.21        1.27        3.67             7.28           PIUTE          6.88
    SUMMIT       2.28        1.34        3.33             6.88           MILLARD        6.76
    TOOELE       2.15        1.85        3.48             7.62           KANE           6.65
    UINTAH       2.20        1.87        4.60             8.62           BEAVER         6.40
    UTAH         1.92        1.67        2.33             6.20           DUCHESNE       6.20
    WASATCH      2.31        1.86        3.93             7.70           EMERY          6.16
    WASHINGTON   2.08        2.03        3.69             8.03           UINTAH         5.97
    WAYNE        3.20        1.84        3.67             7.59           JUAB           5.70
    WEBER        1.98        2.15        3.33             8.68           SAN JUAN       5.08

An issue that arose during the construction of the composite index was that the Sustainability subindex tended to have much larger scores than the other two subindices. This is partially because this subindex contained a larger number of indicators. In addition, many of these indicators were binary, so there were more values of 1 being summed.
In an attempt to balance the weight of the three subindices more evenly, the composite index was re-evaluated by introducing weights into the other two subindices according to Equation 5:

    Preparedness = 2R + 2P + S    (5)

This equation resulted in similar ranges of values for each subindex, so that each subindex would carry approximately equal weight in the final index total. Although this did change the final preparedness scores, there was not a significant difference in the rankings of the counties with this method. Several counties shifted up or down one or two places, but in general they fall in the same area as the rankings without weights.

4.3 Scale-Adjusted Regression

The impact of outliers on the SPT Index emphasized the need for an index methodology that would be appropriate for a variety of population sizes. The scale-adjusted regression index utilized a log-log regression to allow for comparison of urban and rural areas on the same scale. Similar to the SPT Index, the number of stations per thousand was selected as a proxy measure of preparedness. The regression line on the scatterplot (Figure 4.5) represents the relationship between population and the number of stations in Utah. The adjusted R-squared value of 0.73 (Table 4.4) indicates that this relationship is relatively strong. The regression line can be thought of as a "Preparedness Line," or an expected level of resources for a given population size. Points above the Preparedness Line have higher than expected resources for their population, which indicates a greater level of preparedness. Points below the line have fewer resources than expected per thousand. A choropleth map (Figure 4.6) of the regression residuals allows for geographic comparison of preparedness levels.
The blue tones emphasize counties that have a higher number of stations than expected based on their population, while red tones show counties with lower than expected stations per thousand. The lower resource counties are concentrated more in the northeast portion of the state, while the higher resource counties are in the northwest and south-central parts of the state. The neutral counties, those that are close to 0, or near the expected number of stations, are mainly in the south. Note that urban and rural counties both fall into a mix of high and low resource areas. This indicates that urban counties do not necessarily have a greater likelihood of having more resources or better preparedness than the rural counties, or vice versa.

[Figure 4.5 Scale-Adjusted Regression results scatterplot]

The ranked results from the stations per thousand analysis are shown in a side-by-side comparison with the ranked residuals from the regression analysis (Table 4.5). The results are quite different between the two methods, which illustrates the impact of utilizing the scale-adjusted regression index versus the nonscaled SPT Index. Figure 4.7 is a scatterplot with the SPT results on the horizontal axis compared against the Scale-Adjusted Index residuals on the vertical axis. The middle horizontal line represents the 0 on the Regression Index. Equivalent to the "Preparedness Line" described previously, counties above the line have a relatively high number of stations, and counties below have comparatively low numbers of stations.
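A least squares fit of the log-log relationship reported in Table 4.4 can be sketched as follows; the populations and station counts here are hypothetical stand-ins for the county data, not the survey figures:

```python
import math

def loglog_fit(populations, stations):
    """Ordinary least squares fit of ln(stations) = a + b*ln(population).
    Returns (a, b, residuals); a positive residual means more stations
    than the fitted "Preparedness Line" predicts for that population."""
    xs = [math.log(p) for p in populations]
    ys = [math.log(s) for s in stations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    return a, b, residuals

# Hypothetical counties: populations and station counts.
a, b, res = loglog_fit([1000, 25000, 400000, 1000000], [2, 6, 25, 45])
```

Ranking the residuals (or their standardized versions) is what produces comparisons like Table 4.5.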
Table 4.4 Scale-Adjusted Regression Index results

    Regression statistics
    Multiple R            0.86
    R Square              0.74
    Adjusted R Square     0.73
    Standard Error        0.40
    Observations          29

                    Coefficients   Standard Error   t Stat   P-value   Lower 95%   Upper 95%
    Intercept       -1.80          0.45             -3.98    0.00      -2.73       -0.87
    X Variable 1     0.39          0.04              8.84    0.00       0.30        0.49

    County        Predicted Y   Residual   Standard Residual
    Beaver        1.66          -0.05      -0.12
    Box Elder     2.45           0.19       0.48
    Cache         2.78          -0.21      -0.54
    Carbon        2.10          -0.49      -1.24
    Daggett       0.90          -0.21      -0.54
    Davis         3.17          -0.40      -1.01
    Duchesne      2.03          -0.24      -0.61
    Emery         1.85           0.23       0.58
    Garfield      1.56           0.75       1.89
    Grand         1.80          -0.01      -0.02
    Iron          2.43          -0.03      -0.08
    Juab          1.83           0.12       0.30
    Kane          1.67           0.13       0.32
    Millard       1.95           0.54       1.37
    Morgan        1.81          -1.12      -2.84
    Piute         1.06           0.03       0.09
    Rich          1.24           0.14       0.36
    Salt Lake     3.65           0.47       1.20
    San Juan      1.99           0.09       0.22
    Sanpete       2.22           0.09       0.22
    Sevier        2.11          -0.03      -0.08
    Summit        2.37           0.19       0.49
    Tooele        2.52           0.19       0.48
    Uintah        2.27          -0.47      -1.20
    Utah          3.38           0.08       0.21
    Wasatch       2.15          -0.77      -1.95
    Washington    2.88           0.42       1.06
    Wayne         1.30           0.49       1.25
    Weber         3.05          -0.11      -0.28

[Figure 4.6 Scale-Adjusted Regression residuals map]

Table 4.5 SPT Index versus Scale-Adjusted Regression Index ranked results

    Stations Per Thousand        Scale-Adjusted Regression
    WAYNE         2.28           GARFIELD       1.89
    DAGGETT       2.07           MILLARD        1.37
    PIUTE         2.07           WAYNE          1.25
    GARFIELD      1.98           SALT LAKE      1.20
    RICH          1.76           WASHINGTON     1.06
    KANE          0.90           EMERY          0.58
    MILLARD       0.89           SUMMIT         0.49
    BEAVER        0.77           BOX ELDER      0.48
    EMERY         0.75           TOOELE         0.48
    JUAB          0.70           RICH           0.36
    GRAND         0.64           KANE           0.32
    SAN JUAN      0.53           JUAB           0.30
    SEVIER        0.39           SAN JUAN       0.22
    SANPETE       0.37           SANPETE        0.22
    DUCHESNE      0.36           UTAH           0.21
    SUMMIT        0.33           PIUTE          0.09
    BOX ELDER     0.29           GRAND         -0.02
    TOOELE        0.26           SEVIER        -0.08
    CARBON        0.25           IRON          -0.08
    IRON          0.24           BEAVER        -0.12
    MORGAN        0.21           WEBER         -0.28
    UINTAH        0.20           DAGGETT       -0.54
    WASHINGTON    0.19           CACHE         -0.54
    WASATCH       0.18           DUCHESNE      -0.61
    CACHE         0.12           DAVIS         -1.01
    WEBER         0.08           UINTAH        -1.20
    UTAH          0.06           CARBON        -1.24
    SALT LAKE     0.06           WASATCH       -1.95
    DAVIS         0.05           MORGAN        -2.84

[Figure 4.7 SPT versus Scale-Adjusted Regression scatterplot]

The vertical dashed line designates the mean of the SPT values, which increase from left to right. Counties farther to the right have a higher ratio of stations per thousand. The two lines in the chart create something akin to four quadrants, from which generalizations can be made about the difference in results between the two index methods for each county. Generally, counties in the upper-right quadrant performed well on both indices. The counties in the lower-left quadrant ranked relatively low in both indices. The upper-left and lower-right quadrants ranked well in one index, but not as favorably in the other. For example, Garfield County is well above the "Preparedness Line" on the Regression Index, and is also far to the right, which shows it scored high on the SPT Index as well. Salt Lake County is far to the left, indicating a low SPT score. However, Salt Lake is also well above the regression "Preparedness Line," indicating that the regression index scored Salt Lake County very positively. This extreme change in rank is one of the most dramatic examples of the impact of using a scale-adjusted measure rather than the SPT Index. An alternative visualization tool is provided in Figure 4.8. In this figure, a line graph shows the value of both indices for each county. This chart can also be awkward to interpret, as the SPT Index is composed of only positive values, while the Scale-Adjusted Index has both positive and negative values.
Each index has a horizontal line that represents the expected value for that county based on its population, which allows for interpretation of whether a county scored above or below the expected result for that index. For example, Beaver County is near the line for both indices, which implies that it scored about average on each index, or rather that its resources are well suited to its population. Several counties rank similarly in both indices. Garfield County scored high in both indices, whereas Morgan County was comparatively low in both. However, this chart also illustrates that some counties experienced improved scores using the scale-adjusted index, including Box Elder, Salt Lake, Summit, Tooele, and Washington counties. In contrast, Daggett County performed well in the SPT Index, but ranks much lower on the Scale-Adjusted Index. The counties with improved scores are among the most populous counties in the state, whereas Daggett County is the least populous county. This demonstrates the importance of the scale-adjusted methodology in locations with large extremes in population values. Because the scale-adjusted index utilizes a statistical regression analysis, it is useful to evaluate the presence of any spatial correlation in the index values.
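One common measure of such spatial correlation, univariate Moran's I with row-standardized contiguity weights, can be sketched as follows. The four-unit chain map here is hypothetical; GeoDa performs the equivalent computation on the actual county boundaries:

```python
def morans_i(values, neighbors):
    """Univariate Moran's I with row-standardized binary contiguity weights.
    `neighbors` maps each unit index to a list of its neighbor indices."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num, w_sum = 0.0, 0.0
    for i, nbrs in neighbors.items():
        if not nbrs:
            continue                   # islands contribute nothing
        w = 1.0 / len(nbrs)            # row-standardized weight
        for j in nbrs:
            num += w * dev[i] * dev[j]
            w_sum += w
    denom = sum(d * d for d in dev)
    return (n / w_sum) * (num / denom)

# Hypothetical map of four units in a chain: 0-1-2-3.
w = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(morans_i([1.0, 2.0, 3.0, 4.0], w))  # 0.4: positive spatial autocorrelation
```

Values near zero, like those reported in Figures 4.9 and 4.10, indicate little clustering of similar index scores among neighboring units.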
[Figure 4.8 SPT versus Scale-Adjusted Regression line graph]

The GeoDa program (Anselin 2004) was used to create weights for each of the counties in Utah based on a first-order Queen contiguity, and also with a nearest-neighbor contiguity based on six neighbors. Univariate Moran's I was calculated based on the scale-adjusted index results (the regression residuals), and the results for both the Queen and nearest-neighbor contiguity weights are shown in Figure 4.9. These results do not indicate a strong spatial correlation in these values. These steps were repeated for the SPT Index results, and the results are displayed in Figure 4.10. The SPT results indicate a slightly stronger presence of spatial correlation.

[Figure 4.9 Scale-Adjusted Regression Moran's I: a) Queen 1st order, Moran's I = 0.02; b) Nearest Neighbor, Moran's I = 0.13]

[Figure 4.10 SPT Moran's I: a) Queen 1st order, Moran's I = 0.10; b) Nearest Neighbor, Moran's I = 0.19]

Additional insight can be gained by evaluating how the results for Utah compare in a larger context. An additional analysis was undertaken to assess the fire station resource capabilities of other states. The United States Fire Administration provides data on the number of fire stations in each state. Using results from the National Fire Department Census Database and U.S. Census Bureau 2009 population estimates, the scale-adjusted regression methodology was used to generate a nationwide measure of fire station resources. Figure 4.11 shows a scatterplot of the resulting residuals and Preparedness Line.
An R-squared value of 0.66 shows a fairly good relationship between stations and population at the national level. Table 4.6 presents the index results sorted by state and by ranked score. Based on these rankings, officials could determine that Utah overall has fewer resources for its population than much of the country, which justifies investment in future resources. The choropleth map in Figure 4.12 allows for a geographic comparison of these results. Many of the southern, central, and northern intermountain states have a high number of station resources. Neutral states, or those having a well-balanced number of stations for their population, are mostly found on the west coast and in the north-central and north-eastern states, as well as Texas and Florida. Areas that would most benefit from additional resources include several north-eastern states and western states such as Utah, Nevada, and Arizona.

This chapter presented the results of the SPT, composite, and scale-adjusted regression indices. A comparison of these results provides the opportunity to evaluate the strengths and weaknesses of each method. The advantages and disadvantages of each method will be discussed in the following chapter.

[Figure 4.11 Scale-Adjusted Regression results by state scatterplot: ln(Stations) versus ln(Population), y = 0.51x - 2.45, R² = 0.66]

[Figure 4.12 Scale-Adjusted Regression state residuals map]

Table 4.6 Scale-Adjusted Regression by state name and ranked score

State               Score    State (ranked)      Score
Alaska              -0.20    Georgia              1.73
Alabama              0.57    Missouri             1.36
Arkansas             1.20    Tennessee            1.27
Arizona             -1.56    Arkansas             1.20
California           0.17    Louisiana            1.18
Colorado             0.17    Kansas               1.05
Connecticut         -1.35    Virginia             0.97
Dist. of Columbia   -2.21    Kentucky             0.88
Delaware            -2.94    Mississippi          0.85
Florida              0.33    Oklahoma             0.70
Georgia              1.73    Idaho                0.69
Hawaii              -1.24    Wyoming              0.66
Iowa                 0.66    Iowa                 0.66
Idaho                0.69    North Dakota         0.63
Illinois            -0.88    North Carolina       0.59
Indiana              0.58    Indiana              0.58
Kansas               1.05    Alabama              0.57
Kentucky             0.88    Nebraska             0.54
Louisiana            1.18    South Carolina       0.53
Massachusetts       -1.15    Montana              0.53
Maryland            -0.86    South Dakota         0.49
Maine               -0.21    New Mexico           0.35
Michigan            -0.21    Texas                0.33
Minnesota           -0.05    Florida              0.33
Missouri             1.36    West Virginia        0.32
Mississippi          0.85    Wisconsin            0.20
Montana              0.53    California           0.17
North Carolina       0.59    Colorado             0.17
North Dakota         0.63    Ohio                 0.17
Nebraska             0.54    Oregon               0.17
New Hampshire       -1.14    Washington           0.01
New Jersey          -0.68    Pennsylvania        -0.04
New Mexico           0.35    New York            -0.04
Nevada              -1.22    Minnesota           -0.05
New York            -0.04    Alaska              -0.20
Ohio                 0.17    Maine               -0.21
Oklahoma             0.70    Michigan            -0.21
Oregon               0.17    Vermont             -0.53
Pennsylvania        -0.04    New Jersey          -0.68
Rhode Island        -2.30    Maryland            -0.86
South Carolina       0.53    Illinois            -0.88
South Dakota         0.49    Utah                -1.07
Tennessee            1.27    New Hampshire       -1.14
Texas                0.33    Massachusetts       -1.15
Utah                -1.07    Nevada              -1.22
Virginia             0.97    Hawaii              -1.24
Vermont             -0.53    Connecticut         -1.35
Washington           0.01    Arizona             -1.56
Wisconsin            0.20    Dist. of Columbia   -2.21
West Virginia        0.32    Rhode Island        -2.30
Wyoming              0.66    Delaware            -2.94

5. DISCUSSION

The SPT, composite index, and scale-adjusted regression are three potential indices local governments could use to assess their emergency preparedness programs. Limitations of this research, as well as possible future research considerations, are presented in this chapter. The evaluation of the strengths and weaknesses of each index, identified through the application of the three indices to Utah fire stations, can aid local governments in choosing the approach that will best suit their needs.
The final results of each method, a ranking of counties by preparedness level, differed markedly, which calls attention to the need to carefully consider which methodology is most appropriate. These results also allow conclusions to be drawn regarding the level of preparedness achieved in the case study area, from which suggestions for improvement can be made.

5.1 Limitations

There are several limitations inherent in the use of indices (Simpson and Katirai, 2006). Because they are composed of a finite number of indicators that serve as proxies for the true phenomenon, an index cannot completely describe the phenomenon being measured. In addition, there can be significant challenges in acquiring the desired data. This creates a risk that the index value will be too heavily influenced by the availability of datasets rather than by the desired proxy measures. Also, as mentioned previously, a large amount of subjectivity can be introduced through the indicator selection process, as well as by assigning weights to indicators as in the composite index. The results of an index can also be difficult to scientifically validate. Many of these potential pitfalls can be minimized with a carefully structured research design. Despite the aforementioned challenges, the value of summarizing complex phenomena in understandable terms based on key contributing factors allows for a simplified geographic comparison, which makes indices an extremely useful approach.

An element not evaluated in these indices is the age or condition of preparedness resources. A county may rank high in the number of stations for its population, but some facilities may be in need of repairs or renovation in order to maintain a quality standard of care. Local officials would need to keep these additional considerations in mind while evaluating their resource needs.
An aspect of stations per thousand not included in this study is the physical distribution of stations relative to the population they serve, due to the unavailability of data regarding the geographic location of the stations. A simple measure would be to determine what percentage of a county's population is located within a one-mile radius of a station. A more sophisticated method could also consider what percentage of the population (or actual structures, if data is available) is located within a certain driving distance of a station via the road network. Despite the lack of data to attempt these methods, it is reasonable to assume that stations would most likely be located in population centers, particularly in rural areas.

Another dimension for future consideration is the actual risk faced by these counties. Counties may not all face the same types of hazards, and therefore may require different types of response resources. The method presented here is most appropriate for assessing preparedness to respond to structural fires and other emergencies handled by local fire stations, such as traffic accidents and emergency medical calls. However, these resources are also relevant to more extreme events, because communities will rely on these initial response capabilities for the first few hours to days after an event, before state or federal aid can be supplied. Furthermore, some areas are more prone to either larger magnitude or more frequent emergencies than others, depending on their local hazard conditions. A county may receive a relatively low preparedness score from the proposed indices but also face a relatively low level of hazard risk, and therefore not require extensive preparedness efforts. This could be addressed in a future study by comparing a risk index for a particular hazard against the proposed preparedness indices in a scatterplot.
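The simple one-mile coverage measure suggested above could be sketched as follows; the helper names and the block and station coordinates are hypothetical, with census block centroids standing in for the true population distribution.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pct_within(blocks, stations, radius_miles=1.0):
    """Percent of county population living within radius_miles of any station.
    blocks: list of (lat, lon, population) tuples, e.g. census block centroids;
    stations: list of (lat, lon) station coordinates."""
    total = sum(pop for _, _, pop in blocks)
    covered = sum(
        pop for lat, lon, pop in blocks
        if any(haversine_miles(lat, lon, s_lat, s_lon) <= radius_miles
               for s_lat, s_lon in stations)
    )
    return 100.0 * covered / total

# Hypothetical data: two population blocks near a station, one remote block.
blocks = [(40.760, -111.890, 5000), (40.765, -111.900, 3000), (40.900, -112.300, 2000)]
stations = [(40.762, -111.895)]
print(pct_within(blocks, stations))  # -> 80.0
```

Straight-line distance understates true travel distance; the road-network variant mentioned above would replace `haversine_miles` with a network routing query.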
Comparing risk and preparedness indices in this way would provide an assessment of whether a county's preparedness level is appropriate given the nature of the hazards it faces.

5.2 Stations Per Thousand

Calculating fire stations per thousand is the simplest method for assessing overall preparedness. Though at first glance it could be seen as an over-simplification to consider only a single variable, the strong correlation between stations and other preparedness indicators, such as personnel and vehicles, indicates that this simple approach may on its own capture much of the variation in preparedness across counties. The primary advantages to local government officials are that it is extremely easy to compute and interpret, and the data are also relatively easy to collect. A simple scatterplot can show where a county lies relative to other counties. This method can be used to garner support to procure additional resources if a county can easily demonstrate that it is in greater need of those resources than other counties.

The primary concern with this method is that the results are strongly affected by outliers, in this case a few major population centers, which reduces the usefulness of this tool for assessments across urban and rural areas. One possible solution would be to group counties using U.S. Census Bureau statistical areas or a similar designation, and only compare counties within each category, but this does not address the problem of comparing the preparedness of urban and rural counties.

It is important to note in interpreting the results of this index that an SPT below the mean does not necessarily imply that a county has poor emergency response. This index does not rate counties on an absolute scale of preparedness; it only shows how a county compares with others in the assessment. It is possible for all of the counties to have excellent response capabilities, yet half of them would still fall below the mean.
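The SPT computation described in this section amounts to a single ratio; a minimal sketch with made-up county figures (not the thesis data):

```python
# Stations per thousand (SPT): number of fire stations per 1,000 residents.
# The county figures below are illustrative only, not the thesis data.
def spt(stations, population):
    """Fire stations per thousand residents."""
    return stations / population * 1000

counties = {
    "Example Rural County": (3, 2_100),      # (stations, population)
    "Example Urban County": (45, 510_000),
}
for name, (s, p) in counties.items():
    print(f"{name}: SPT = {spt(s, p):.3f}")
# The rural county's SPT comes out far higher, illustrating how a handful
# of stations serving a tiny population dominates this simple ratio.
```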
The index is intended to allow officials to easily determine the most appropriate direction in which to focus their resources and attention.

5.3 Composite Index

The composite index allows multiple preparedness program elements to be assessed collectively, which may best represent the many factors that contribute toward a better state of preparedness. In addition, it allows for some interpretation of the quality of the services provided, as demonstrated with the scoring of the different training certification levels. This method best meets the research objective of providing local governments the means to identify very specific aspects of their program on which to focus efforts, whether it is a need for additional training, procurement of additional physical resources, improved planning, and so forth.

However, an assessment of this detail requires considerably more effort in collecting and preparing the data. Very little of the data the researchers hoped to evaluate using this index method was readily available. The case study examples focused solely on fire response resources because information on other response and emergency management organizations was not available. The assessment was also limited to indicators available from that particular dataset, as opposed to the wider range of preparedness indicators proposed in the original study methodology. This can increase the temptation to shape the index to fit the data, as opposed to analyzing data that supports a well-designed index.

One disadvantage of this method is that the range between the minimum and maximum observed values will differ across time periods and samples, which can affect the consistency of comparisons over time. However, it is not anticipated that this will affect the validity of the proposed index. It must be remembered that the final index is intended to be a relative comparison of preparedness between local governments, rather than an absolute preparedness score.
Although the range values may vary over time, if a consistent methodology is used each time, a relative comparison between counties will still be accurate in showing how a particular county compares to others in the study. Other scaling methods were considered, but they had limitations in their suitability for binary indicators and in the complexity of calculating the index and interpreting the results. By scaling all values on a 0 to 1 scale, it is straightforward for users to determine the magnitude of differences between county results. Therefore, in order to provide an easily interpreted result, and to ensure that the method is appropriate for the data types identified for this study, the score range method was the preferred option.

Scaling data to a common range of values takes additional effort and requires additional decisions on how data should be scored. As shown in this study, much of the data was ordinal, and it was necessary to determine how to convert these values to numeric scores by implementing a weighting scheme. This introduced some subjectivity into the analysis. This subjectivity could become problematic over longer periods of time, because later implementations of the index could differ greatly from the interpretation used in the baseline study, particularly if the assessment is not completed by the same person each year. Also, if this assessment is conducted by individual counties, there could be large differences in individual interpretation unless a more specific standard is adopted at the state or national level on how particular assets should be scored.

As mentioned previously, the case study data contained many binary variables in which a fire department either had a particular capability or asset or it did not. Although these easily translate to a 0 to 1 scale, with 1 for present and 0 for absent, this resulted in undesirable weight, or emphasis, being given to these program elements.
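The score range method described above can be sketched in a few lines; the `score_range` helper and the example training levels are illustrative assumptions, not the thesis data.

```python
def score_range(values):
    """Score-range (min-max) scaling to a common 0-1 scale. Binary
    indicators (0/1) pass through unchanged; ordinal scores are mapped
    linearly between the observed minimum and maximum."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # no variation among counties
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical ordinal training levels (1 = basic ... 4 = highest):
# 1 -> 0.0, 2 -> 1/3, 4 -> 1.0, 3 -> 2/3
print(score_range([1, 2, 4, 3]))
# Binary indicators are unchanged by the scaling:
print(score_range([0, 1, 1, 0]))  # -> [0.0, 1.0, 1.0, 0.0]
```

Because the scaling depends on the observed minimum and maximum, adding or removing a county can shift every other county's score, which is the consistency-over-time concern noted above.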
Although weights were introduced at the subindex level in an attempt to balance the three subindices, this had little effect on the composite index results. Further research is needed to evaluate the appropriate weights to apply to each subindex, as some elements may provide more benefit in an emergency than others. Although extensive effort was made to bring the individual indicators to a 0 to 1 scale, the composite index is a summary value of many indicators, resulting in values ranging from 5 to 10. These scores have no inherent meaning that can be understood outside the context of this assessment. Perhaps a further step should be taken to scale these ranges to a similar 0 to 1 scale to improve the ease of interpretation.

Although the ability to evaluate multiple preparedness indicators collectively is considered one of the strengths of this index methodology, the strong correlation between several major variables (stations, vehicles, and personnel) suggests that there may not be a significant benefit from considering many variables at once, as each additional variable may not add new information. More rigorous analysis of the indicators, possibly through partial correlation analysis or principal components analysis, is needed to ensure that each indicator contributes new and meaningful information to the index. This, combined with the added complications of gathering and preparing the data, indicates that this is likely not the optimal index tool for local governments.

5.4 Scale-Adjusted Regression

This technique is a modification of the simple Stations Per Thousand index. It maintains the advantage of being moderately easy to compute and interpret, while providing a good solution to the problems caused by outliers. The use of the natural log of population and stations creates a scale-free index that allows comparison between rural and urban communities.
The process of interpreting the results is similar to the SPT. The key difference is that in the scale-adjusted regression approach, a county is assessed against the level of preparedness it should have for its population (its residual to the regression line) rather than against the mean stations per thousand value in the study area. Counties below the line are those that would most benefit from investment in additional resources for their population, and those above the line enjoy a greater amount of resources than the regression line would predict.

It is interesting to note that although the SPT and the Scale-Adjusted Regression are quite similar, in that both consider the number of stations relative to a county's population, the results are noticeably different. For example, Salt Lake County is ranked quite low in the SPT index, but is high in the rankings using the scale-adjusted regression index (Table 4.5). Salt Lake County is the most populous in the state, with over twice as many residents as the next closest county. The SPT is strongly influenced by extremes in population, which resulted in Salt Lake County appearing lower in the rankings than expected. Because the scale-adjusted regression index allows for a scale-free comparison between urban and rural counties, it likely gives a more accurate indication of preparedness efforts. In short, Salt Lake County is a relatively prepared county in terms of resources to serve its population.

Caution should be used in interpreting the "Preparedness Line" derived from this index. Although it is a handy reference by which officials can gauge their county's preparedness, one must bear in mind that there is no exact threshold for preparedness and that this is intended to be a relative comparison. It should also be anticipated that if investments are made in preparedness resources, the Preparedness Line will shift in future assessments.
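The scale-adjusted index described here can be sketched as an ordinary least-squares fit on log-transformed values; the populations and station counts below are hypothetical, not the thesis counties.

```python
import numpy as np

# Scale-adjusted regression sketch: regress ln(stations) on ln(population)
# and use the residuals as the index. Counties above the fitted
# "Preparedness Line" (positive residual) have more stations than expected
# for their population. Hypothetical data, not the thesis counties.
population = np.array([1_000, 5_000, 25_000, 120_000, 1_000_000])
stations   = np.array([2,     3,     8,      20,      110])

x, y = np.log(population), np.log(stations)
slope, intercept = np.polyfit(x, y, 1)   # fit the Preparedness Line
residuals = y - (slope * x + intercept)  # the scale-adjusted index

for pop, res in zip(population, residuals):
    status = "above" if res > 0 else "below"
    print(f"pop {pop:>9,}: residual {res:+.2f} ({status} the line)")
```

Because the residuals of a least-squares fit with an intercept sum to zero, the index is automatically centered: a score near zero means resources in line with the fitted Preparedness Line, regardless of county size.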
Furthermore, regression results are sometimes described as a dependent variable (preparedness) being predicted or explained by an independent variable (population). However, it is not reasonable to say that population can predict or explain the preparedness level. In this case, preparedness is related to population in that additional resources are needed to meet the needs of an increasing population, but it is not necessarily predicted by population. Similar to the SPT, this index also relies on a single variable, which some may consider insufficient to encompass a complete preparedness program. A future study could select a small number of additional variables representing preparedness, apply this index to each, and compare the results.

The scale-adjusted regression index is recommended as the most suitable of the three indices evaluated. It is based on data that is readily available, and it is fairly easy to compute and interpret. It adequately represents preparedness capabilities without becoming over-burdensome to complete. Perhaps the most important advantage of this method is the ability to compare the preparedness capabilities of counties with vastly different population sizes in a scale-free environment. Removing the influence of population yields a more reliable assessment of each county's resources and increases the usefulness of the index for comparison over large geographic areas.

5.5 Utah Preparedness Assessment

The final objective of this research was to identify variations in preparedness scores for each index, highlight achievements, and make recommendations as to the areas most in need of improvement. This section explores the results for each index and the practical implications for future decision making by local government officials in Utah.
Because the results vary widely based on the index used, officials should bear in mind the strengths and weaknesses of each index methodology when utilizing these results and recommendations for decision making. Although most counties varied widely in their rankings between the three indices, it is interesting to note that Garfield and Wayne counties consistently scored well in all three. Regardless of the weaknesses of each approach, there is good evidence that these counties are well equipped to respond to an emergency. Uintah County is the only one that scored consistently at the lower end of all indices, so it would likely be a good candidate for funding additional resources. All other counties would have to base future resource funding decisions on an individual index.

Based on the Stations Per Thousand index, the five smallest counties in terms of population rank highest in fire station resources. Furthermore, their scores are noticeably higher than those of all remaining counties. Rich County, for example, ranks fifth in SPT with a score of 1.756; Kane County is the next closest, with a score of only 0.900. Stations in the more rural counties tend to be located at the population centers or municipalities, typically one station for one community. However, the size of the population in these rural communities can vary greatly. For example, the communities in Rich County range from 188 to 483 residents, whereas Kane County's municipalities range from 138 to 3,528 residents. Wasatch County ranks the lowest of the rural counties, with only 4 stations for its population of 22,845. So although it is typical for rural counties to have one fire station per community, some simply have more residents to serve. This helps explain the large gap in scores between the five highest ranking counties and the remaining rural counties. In contrast, the four most populous counties rank the lowest in SPT.
In Weber, Utah, Salt Lake, and Davis counties, the cities are tightly packed together, with several stations in each city to serve the large population. The lower SPT scores suggest that these communities could benefit from additional fire station resources, because there are fewer stations in proportion to the population compared to the rural counties; however, one should consider that more stations are distributed across a smaller area. In general, this means that while they have a larger population to serve, with the potential for more emergency calls, they are well positioned to reach a large portion of the population quickly.

Washington County ranks the highest in SPT among the large-population counties. Although it is fifth in population size, it has the third highest number of stations. Its 27 stations well outnumber the 16 in Davis County, 19 in Weber County, and 13 in Cache County. This suggests that Washington County has made significant investments in its public safety resources relative to its population.

As noted previously, the SPT index results are strongly influenced by the extremes in population values included in this study, so this may not be the most suitable index on which to base future spending decisions. If used, however, it may be better to compare urban and rural counties separately. For example, if Washington, Cache, Weber, Utah, Salt Lake, and Davis counties are considered as one group, and the remaining counties as another, one could draw more equitable conclusions as to which counties could benefit most from investment in additional resources. Among the rural counties, Wasatch, Uintah, and Morgan counties are at the lower end of the scale; even though they do not rank lowest in the statewide comparison, comparing them only to other rural counties demonstrates that they would receive the most benefit from additional resources. Among the urban counties, Davis and Salt Lake are most in need of additional resources.
The Composite Index is much less defined by urban-to-rural trends. Although the six most populous counties all score in the top half of the rankings, several rural counties are intermingled among them. Salt Lake County scored the highest in this index, particularly owing to its high scores in the Personnel and Sustainability subindices. The high Personnel score can be attributed both to high scores in emergency medical and hazardous materials training, which indicate that more stations have personnel with the highest skill levels, and to high staffing-level scores, because more staff are full-time rather than volunteer. Salt Lake County also achieved high scores for each of the Sustainability indicators. In general, the other urban counties, such as Davis, Utah, and Weber, performed less well in the stations, vehicles, and personnel per thousand categories, but did well in training, staffing type, and sustainability resources.

Garfield County, which ranks second in this index, also scored well in Sustainability but had lower scores in Personnel due to lower training skill levels and fewer full-time fire personnel. However, Garfield County scored high in Resources. Wayne County, ranked third, did less well in Sustainability, but similarly scored well in Resources. It ranks higher than most rural counties in the Personnel subindex, largely due to a very high personnel per thousand population value as well as high skill levels in hazardous materials response. In general, the rural counties score well in Resources but not as well in Personnel and Sustainability. Many of them have lower staffing scores because more rural stations are staffed by volunteers rather than full-time personnel. They also have lower training scores, perhaps because volunteer staff do not have as much opportunity or funding for training as full-time staff.
There is considerable variation in sustainability capabilities, but the scores for first alarm aid appear to be consistently lower for most rural counties. First alarm, or automatic, aid is an agreement that a neighboring community will automatically be dispatched to assist in any response. This sort of agreement is impractical in widely dispersed rural communities.

The Scale-Adjusted Regression index, though based on indicators similar to the SPT, shows quite different results. Counties with extremes in population size appear to be impacted the most by the scale-adjusted approach. Table 4.5 shows that Salt Lake County was next to last in the SPT index, but ranks fourth in the Scale-Adjusted Regression. On the other hand, Daggett County scored second in the SPT, but is much lower in the regression index. As discussed in the Analysis section, the counties with positive residuals are those with the most resources available for their population, including Garfield, Millard, Wayne, Salt Lake, and Washington counties. Those close to 0, such as Utah, Piute, Grand, Sevier, Iron, and Beaver counties, have an average number of resources that are well proportioned to their population. Those with negative residuals are most in need of additional resources, particularly Morgan, Wasatch, Carbon, and Uintah counties. There are no noticeable trends between urban and rural counties in the Scale-Adjusted Regression approach. This supports the premise behind the approach, which was to allow comparisons between all counties without the dominating effects of population size.

The scale-adjusted approach is particularly critical when evaluating resources at the national level. Without it, resource comparisons between states with extreme differences in population could not be undertaken. The comparison of state-level preparedness resources in Table 4.6 further demonstrates the usefulness of this methodology.
By diminishing the influence of population extremes, a resource comparison can be made between states as diverse as California and Rhode Island. As in the county assessment, examples can be found of both densely and sparsely populated areas having a relatively high number of resources, and vice versa. Many of the lowest scoring states are those with the smallest populations; however, some of these are small-area but densely populated states, such as Connecticut and Delaware, while others are large, sparsely populated states, such as Nevada. Illinois has a fairly large population, yet it also scores within the low-resource category. Several of the states with the largest populations, such as California, Texas, and New York, fall into a neutral category where their resources are relatively well balanced with their population. Neutral states also include those with fairly small populations, such as South Dakota and West Virginia. Those with the highest number of resources include states with large populations, such as Georgia and North Carolina, as well as states with small, scattered populations, such as Idaho and Wyoming. Many other high-scoring states have moderate population sizes, such as Missouri and Louisiana.

The diversity of the state results further demonstrates the effectiveness of this methodology in removing the impacts of population extremes and allowing comparisons to be made across all states. The scale-adjusted regression scores reliably reflect the resource capabilities of the various states and enable decision makers to draw conclusions as to which areas are most in need of additional resources. Additional meaning is given to the Utah case study results by evaluating how Utah compares in resource capabilities relative to other parts of the country. Out of the 51 states and districts evaluated, Utah ranks 42nd. This draws attention to the low number of resources in Utah relative to other states.
Although some counties within the state may fare better than others, Utah in general has fewer resources than other states and would benefit from additional resource funding. This information could be particularly useful if local governments wished to compete for federal grant funding for additional resources, by making it possible to demonstrate a greater need compared to other regions.

As noted previously, Figure 4.12 shows some geographic patterns in resource levels, such as a concentration of low scores in some of the north-eastern states and a concentration of high scores in the south and south-central states. It is difficult to determine the cause of these trends without further analysis. One possible area for future investigation is the frequency of emergency events, or perhaps even the perceived threat of events, in these areas. For example, are the southern states more likely to have more resources because of their frequent exposure to severe weather hazards? Perhaps areas with more frequent impacts from hazards, or under the threat of higher magnitude hazards, are better motivated or better funded to strengthen their preparedness resources.

This chapter discussed the limitations of this research, described the strengths and weaknesses of the three indices, and provided some conclusions regarding the level of preparedness of the Utah counties based on each index. This evaluation demonstrates the effectiveness of indices as a tool for summarizing preparedness characteristics into usable information that officials can apply in making decisions regarding future preparedness efforts.

6. CONCLUSION

Preparing a community for the myriad potential hazards it faces is a challenging mission. This task is further complicated by the difficulty of knowing whether preparedness efforts are sufficient to address the potential needs during an emergency.
The indices presented here show possible ways of simplifying the complex elements of a preparedness program into an understandable measure. These tools allow local governments to monitor their progress over time and relative to other geographic areas. While each method presented meets this basic objective, the scale-adjusted regression index seems to offer the optimal solution. This index adequately reflects the complexity of the preparedness concepts, is scale-adjusted to facilitate comparison between rural and urban areas, is easy to interpret, and is the most efficient to complete. Based on the results from this index, it is a straightforward task to identify communities that can best benefit from additional preparedness resources. By allocating funding and other resources according to the needs demonstrated by the index, local governments will be able to strengthen preparedness programs in a more efficient manner.

REFERENCES

Adger, W.N., N. Brooks, M. Kelly, G. Bentham, M. Agnew, and S. Eriksen. 2004. "New Indicators of Vulnerability and Adaptive Capacity." Tyndall Centre for Climate Change Research: Technical Report 17.

Anselin, L. 2004. GeoDa 0.9.5-i5. Urbana, IL: Spatial Analysis Laboratory, University of Illinois.

Burton, I., Kates, R.W., and White, G.F. 1993. The Environment as Hazard, 2nd ed. New York: Guilford Press.

Cardona, O.D. 2005. Indicators of Disaster Risk and Risk Management. Inter-American Development Bank, Sustainable Development Department.

Carreno, M.L., Cardona, O.D., and Barbat, A.H. 2007. "A disaster risk management performance index." Natural Hazards. 41 (1): 1-20.

Cobb, C.W., and Rixford, C. 1998. "Lessons learned from the history of social indicators." Redefining Progress. San Francisco. http://www.rprogress.org/publications/1998/SocIndHist.pdf (accessed 12-18-07).

Cutter, S.L., B.J. Boruff, and W.L. Shirley. 2003. "Social Vulnerability to Environmental Hazards." Social Science Quarterly. 84 (2): 242-261.

Davidson, R.A., and K.B. Lambert.
2001. "Comparing the Hurricane Disaster Risk of U.S. Coastal Counties." Natural Hazards Review. 2 (3): 132-142. Davidson, R.A., and Shah, H.C. 1997. An urban earthquake disaster risk index: Report No. 121. John A. Blume Earthquake Engineering Center. Department of Civil and Environmental Engineering: Stanford University. Dwyer, A., Zoppou, C., Nielson, O., Day, S., and Roberts, S. 2004. "Quantifying Social Vulnerability: A methodology for identifying those at risk to natural hazards." Geoscience Australia Record 2004/14. Hopkins, L.D. 1977. "Methods for Generating Land Suitability Maps: A Comparative Evaluation." Journal for American Institute of Planners. 43 (4): 386-400. 58 Hepner, G. F. 1984. "Use of value functions as a possible suitability scaling procedure in automated composite mapping." The Professional Geographer. 36 (4): 468-472. Lindell, M.K., Prater, C.S., and Perry, R.W. 2007. Introduction to Emergency Management. Hoboken, NJ: John Wiley & Sons, Inc. Malczewski, J. 1999. GIS and Multicriteria Decision Analysis. New York: John Wiley & Sons, Inc. Malczewski, J. 2000. "On the Use of Weighted Linear Combination Method in GIS: Common and Best Practice Approaches." Transactions in GIS. 4 (1): 5-22. Munich Re Group. 2003. Topics Annual Revies: Natural Catastrophes 2002. (302-03631). Munich: Munich Re Group. Kaly, U., Briguglio, L, McLeod, H., Schmall, S., Pratt, C., and Pal, R. 1999. "Environmental Vulnerability Index (EVI) to summarise national environmental vulnerability profiles." SOPAC Technical Report 275. Simpson, D.M. and M. Katirai. 2006. Indicator Issues and Proposed Framework for a Disaster Preparedness Index (Dpi): Working Paper. Louisville, KY. Center for Hazards Research and Policy Development; School of Urban and Public Affairs, University of Louisville. Simpson, D.M. 2008. "Disaster preparedness measures: a test case development and application". Disaster Prevention and Management. 17 (5): 645-661. Sutton, P.C. 2003. 
"A scale-adjusted measure of "Urban sprawl" using nighttime satellite imagery." Remote Sensing of the Environment. 86 (3): 353-369. United Nations Development Programme. 2004. Reducing disaster risk: A Challenge for Development. New York: John S. Swift Co. United States Federal Emergency Management Agency (FEMA) and National Emergency Management Association (NEMA). 1997. State Capabilities Assessment for Readiness. |



