| Publication Type | honors thesis |
| School or College | College of Humanities |
| Department | Philosophy |
| Faculty Mentor | Margaret Battin |
| Creator | Goel, Divyam |
| Title | Evidence-based medicine in times of crises: what we can learn from COVID-19 to adapt early on in a pandemic |
| Date | 2022 |
| Description | Evidence-based medicine (EBM) is the current philosophical paradigm by which contemporary healthcare practices are guided. A related field, evidence-based public health (EBPH), similarly advises practices in the field of public health. However, the COVID-19 pandemic has demonstrated weaknesses in the current application of evidence-based medical principles, particularly the emphasis and reliance on statistical forms of evidence such as randomized clinical trials (RCTs). These issues include the ethics of control groups in public health settings, the pace and availability of statistically high-powered trials, and the disregard for explanatory evidence. This thesis evaluates the shortcomings associated with evidence-based principles, particularly in the context of the early periods of a global pandemic, and offers potential solutions including pre-designed and hibernated trials and policy-making on the basis of mechanistic evidence using a risk-assessment framework. |
| Type | Text |
| Publisher | University of Utah |
| Subject | evidence-based medicine; public health policy; COVID-19 pandemic |
| Language | eng |
| Rights Management | © Divyam Goel |
| Format Medium | application/pdf |
| Permissions Reference URL | https://collections.lib.utah.edu/ark:/87278/s64t3n8d |
| ARK | ark:/87278/s6f2j94s |
| Setname | ir_htoa |
| ID | 2106182 |
| OCR Text | EVIDENCE-BASED MEDICINE IN TIMES OF CRISES: WHAT WE CAN LEARN FROM COVID-19 TO ADAPT EARLY ON IN A PANDEMIC

by Divyam Goel

A Senior Honors Thesis Submitted to the Faculty of The University of Utah In Partial Fulfillment of the Requirements for the Honors Degree in Bachelor of Science In Philosophy

Approved: Margaret Battin, PhD, MFA & TW Jones, MD, Thesis Advisors; Eric Hutton, PhD, Department of Philosophy; Anne Peterson, PhD, Honors Faculty Advisor; Sylvia D. Torti, PhD, Dean, Honors College

December 2022. Copyright © 2022. All Rights Reserved.

ABSTRACT

Evidence-based medicine (EBM) is the current philosophical paradigm by which contemporary healthcare practices are guided. A related field, evidence-based public health (EBPH), similarly advises practices in the field of public health. However, the COVID-19 pandemic has demonstrated weaknesses in the current application of evidence-based medical principles, particularly the emphasis and reliance on statistical forms of evidence such as randomized clinical trials (RCTs). These issues include the ethics of control groups in public health settings, the pace and availability of statistically high-powered trials, and the disregard for explanatory evidence. This thesis evaluates the shortcomings associated with evidence-based principles, particularly in the context of the early periods of a global pandemic, and offers potential solutions including pre-designed and hibernated trials and policy-making on the basis of mechanistic evidence using a risk-assessment framework.
TABLE OF CONTENTS

Abstract; Introduction; Brief History of the Evidence-Based Paradigm; EBM Addresses Epistemological Questions; Where Evidence-Based Medicine Falls Short; Where Evidence-Based Public Health Falls Short; Pandemics; Solutions; Conclusions; References

INTRODUCTION

In October 2022, the IDWeek conference, which is the "joint annual meeting of the Infectious Diseases Society of America (IDSA), the Society for Healthcare Epidemiology of America (SHEA), the HIV Medical Association (HIVMA), the Pediatric Infectious Diseases Society (PIDS) and the Society of Infectious Diseases Pharmacists (SIDP)", prominently featured a four-hour workshop on using evidence-based medicine (EBM) to develop healthcare guidelines (IDWeek, 2022). These days a staple in medical education and clinical practice, EBM gained traction and built its reputation in the 1990s. It was also during this time that Sackett and Rosenberg (1996) canonically defined evidence-based medicine as the "conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients". In another publication, they further explain that EBM is the "ability to track down, critically appraise (for its validity and usefulness), and incorporate this rapidly growing body of evidence into one's clinical practice" (Sackett & Rosenberg, 1995). As these definitions demonstrate, the fundamental principle of evidence-based medicine is regularly consulting and integrating the best available scientific evidence into one's clinical practice and clinical decision-making.

The COVID-19 pandemic has quickly become a defining global event of the 21st century. At the time of writing, the estimated death toll of this virus is over 6.5 million, with more than 1 million deaths in the United States alone.
The coronavirus pandemic has demonstrated weaknesses in both American and global health infrastructures that have, at this point, been widely discussed across virtually all scholarly media. Critiques of public health infrastructure and of inadequacies in our response as a country and as a global community have often pointed toward politics, funding issues, and deep-rooted socioeconomic determinants of health that have inequitably placed the burden of SARS-CoV-2 on certain groups of people (Wenham, 2021; Hurley, 2020). This thesis examines the drawbacks associated with the global response to COVID-19 through a different lens, that of the evidence-based medical paradigm, and argues that the pandemic has challenged the utility of these principles as the primary philosophy behind how healthcare is administered. In particular, the thesis criticizes the one-dimensional emphasis on randomized clinical trials (RCTs) and highly statistical forms of evidence, focuses on the early phase of a pandemic, and offers solutions that acknowledge evidence-based medicine's merits while working around its shortcomings.

BRIEF HISTORY OF THE EVIDENCE-BASED PARADIGM

The exact origin of evidence-based medicine and the RCT is nebulous. However, one of the first recorded uses of randomization in a clinical study was in the late 1940s, when the British Medical Research Council used a form of randomization to measure an intervention (streptomycin and bed rest) against a control (bed rest only) for the treatment of pulmonary tuberculosis in 541 patients (Long & Ferebee, 1950). Through the latter half of the twentieth century, such clinical trials grew in usage and, by the early 1990s, the FDA began slowly asking for evidence-based data for new pharmaceutical agents.
The evolution of scientific reporting toward widely available publications with detailed methodologies was a catalyst for this change, as was the medical community's growing appreciation for data comparing outcomes of interventions with those of a control group (Kennedy, 1999). The development of more powerful statistical methods, as well as the Second World War's role in shifting the predominant practice model of science from single scientists working in simple settings to inter-institutional research groups obtaining federal, military, and public funding for large-scale research projects, further helped the paradigm of clinical research shift (Nadav & Dani, 2006). Pioneering figures in this movement were the United Kingdom's Archie Cochrane, one of the fathers of EBM, whose books advocating for RCTs began attracting attention in the 1970s and whose work led to the creation of the Cochrane organization, and Canada's David Sackett, who helped write the authoritative definitions of EBM cited above (Claridge & Fabian, 2005).

Guyatt's (1992) JAMA publication provides one of the more emphatic and detailed examples of EBM in practice in a clinical setting. "Evidence-based medicine deemphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research", Guyatt writes. EBM "requires new skills of the physician, including efficient literature searching and the application of formal rules of evidence in evaluating the clinical literature". Guyatt continues by setting up the example of a 43-year-old seizure patient who, post-treatment, worries about his risk for a recurrent seizure. In the "Way of the Past", as Guyatt terms it, the patient's treating resident would be told by her seniors that the risk of recurrence is high, but no exact number is available.
The resident would explain this to the patient, who "leaves in a state of vague trepidation about his risk of subsequent seizure". In the "Way of the Future", the resident instead conducts a computerized literature search using keywords that return 25 articles, the title of one of which is directly relevant. The results of the study in this paper indicate that the risk of seizure recurrence varies by time following the primary seizure, but drops below 20% after 18 months. Using this information, the resident advises the patient to take his medications and review his need to continue medication if he suffers no recurrence in 18 months. "The patient leaves with a clear idea of his likely prognosis".

Evidence-based medicine has also led to the development of the adjacent practice of evidence-based public health (EBPH). While historical accounts of EBPH are not as robust as what is found in the literature for EBM, a PubMed database search of "evidence-based public health" shows that the first result is from 1997, which indicates that EBM and EBPH blossomed along similar timelines. Like EBM, EBPH has several definitions, which are worded differently but touch on the same ideas. Lhachimi, Bala & Vanagas (2016) write that "in its most straightforward definition, evidence-based public health (EBPH) means applying the principles of evidence-based medicine to the field of public health". Such principles include hierarchies of evidence and the integration of the best evidence into decision-making, in this case at the public health level.

EBM ADDRESSES EPISTEMOLOGICAL QUESTIONS

The day-to-day applications of evidence-based medicine are often in regard to evidence-based clinical practice. For example, Jenicek (1997, p. 188) puts forth the following steps to practice evidence-based medicine in the healthcare setting:

1. formulation of a clear clinical question from a patient's problem which has to be answered
2. searching the literature for relevant articles and other sources of information
3. critical appraisal (evaluation) of the evidence (information provided by original research or by research synthesis, i.e. meta-analysis)
4. selection of the best evidence (or useful findings) for clinical decision
5. linking evidence with clinical experience, knowledge, and practice
6. implementation of useful findings in clinical practice
7. evaluation of the implementation and the overall performance of the EBM practitioner
8. teaching others how to practice EBM

At several steps in the above process, evidence-based principles require the physician to determine which evidence to use and how to go about seeking it. Hence, evidence-based medicine raises epistemological questions, because its practice defines the optimal ways to develop the scientific and medical knowledge, or evidence, that clinicians are to use (Tonelli, 1998). For instance, consider Jenicek's first step: formulation of a clear clinical question from a patient's problem which has to be answered. Because any healthcare practitioner practicing EBM follows their question with a detailed search of all available evidence, the structure and intent of the question must correspond to the structure and design of the scientific evidence and clinical trials available for review. For this, the use of the PICO framework is common in clinical EBM and medical education (Speckman & Friedly, 2019). In this framework, PICO stands for population, intervention, comparison, and outcome. A clear clinical question should explicitly mention each without overlap. A PICO question for the streptomycin clinical trial discussed earlier could be: in otherwise healthy adults with pulmonary tuberculosis (P), what is the effect of streptomycin and bed rest (I) compared to bed rest only (C) on overall survival (O)?
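The PICO framework's requirement that the four components be explicit and non-overlapping can be sketched schematically. The following illustration (not part of the thesis; the type and field names are my own, chosen for clarity) treats a PICO question as a small data type whose four fields assemble into the standard phrasing:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """One clinical question, split into PICO's four non-overlapping parts."""
    population: str    # P: who is being studied
    intervention: str  # I: what is being tested
    comparison: str    # C: what it is tested against
    outcome: str       # O: what is measured

    def render(self) -> str:
        # Assemble the question in the standard PICO phrasing.
        return (f"In {self.population}, what is the effect of "
                f"{self.intervention} compared to {self.comparison} "
                f"on {self.outcome}?")

# The streptomycin trial question from the text, expressed in this form.
streptomycin_q = PICOQuestion(
    population="otherwise healthy adults with pulmonary tuberculosis",
    intervention="streptomycin and bed rest",
    comparison="bed rest only",
    outcome="overall survival",
)
print(streptomycin_q.render())
```

The point of the sketch is only structural: because each component occupies exactly one slot, a well-formed PICO question maps directly onto the design of a trial with one defined population, one intervention arm, one comparison arm, and one predefined outcome.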
This PICO question can be used to set up a clinical trial, as was done in the 1940s, and clinicians considering potential treatments for a patient with pulmonary tuberculosis can use such a question to guide their review of the evidence (steps 2-3 above) to determine the best and most evidence-based course of action (steps 4-6 above). Physicians routinely employ this rigorous protocol for posing evidence-based clinical questions, which reflects the evidence-based paradigm's attempts to resolve epistemological questions in biomedical science.

The hierarchy of evidence is another key example of an epistemological approach in EBM (Friesen, 2019). Jenicek mentions critical appraisal of the available evidence, followed by selection of the best evidence, and guidelines on what constitutes the 'best' evidence have been defined by establishments such as the Oxford Centre for Evidence-Based Medicine. Such hierarchies place statistical studies, particularly randomized clinical trials (RCTs), at the very top. Systematic reviews, which pool data from many different RCTs to answer clinical questions with a high degree of statistical power, are given even more weight (Clarke et al., 2013). The structure of evidence-based medicine does permit using scientific and clinical evidence that is lower on the hierarchy, but this is not the preferred outcome; proponents of EBM stress using statistically powered forms of evidence, particularly those that employ randomization. As Greenhalgh (2021) explains, the World Health Organization's published guideline development process "emphasizes EBM's hierarchy of evidence, RCTs, and the overriding need to eliminate bias, but makes no reference to explanatory theory".

WHERE EVIDENCE-BASED MEDICINE FALLS SHORT

The principles of evidence-based practice in both medicine and public health are improvements over doctrines of the past.
However, the value placed on highly statistical forms of evidence and RCTs leads to serious limitations that are both practical and philosophical in nature. A common concern with evidence-based medicine is its feasibility in the clinical setting. Jenicek's in-practice model of EBM highlighted above, as well as the case of the seizure patient and resident, both involve the medical professional interrupting their clinical duties to devise one or more PICO questions, conduct a thorough literature search, and bring back data from the most recent clinical trials and systematic reviews for decision-making; if RCTs are unavailable, the clinician must diligently work through what is available and discern what meets the criteria for best available evidence before moving forward. As Doherty (2005) writes, "a half-day of clinical practice [can raise] 16 clinically important questions to be searched", and dutifully following the above protocol for each one can be impractical and much too time-consuming.

The nature of clinical research also raises questions about availability. Publication biases notably result in trials featuring negative data being less likely to be published, while trials sponsored by the pharmaceutical industry are more likely to be published (Doherty, 2005). These documented facts have raised concerns about research agendas and the motivations behind randomized clinical trials. Clinical research, especially a large-scale randomized trial, is resource-, time-, and money-heavy; investigators and research staff may pose questions that are more likely to produce a positive result, be sponsored by pharmaceutical companies, and/or secure publication in prestigious and high-readership journals, all of which compromise the perceived trustworthiness of the body of medical evidence available to physicians.

Many more epistemically pernicious issues with evidence-based principles and RCTs abound.
For example, the efficacy paradox describes a curious problem that, although uncommon, illustrates the imperfect nature of the conclusions obtained from randomized clinical trials (Walach, 2001). To understand this paradox, consider two interventions, A and B. Intervention A leads to a high benefit but is deemed statistically insignificant because the benefit in A's placebo arm, while lower than A's, is still quite high. In contrast, intervention B has a lower benefit than A but a statistically significant improvement over its placebo. Even though the overall benefit of A is higher, the RCT can only endorse B as a supported intervention (Zhang & Doherty, 2018).

There is also apprehension about the epistemological merit of the statistics generated in RCTs and corresponding systematic reviews. Nadav and Dani (2006) go as far as to say that RCTs provide "at best, a relatively unbiased probability statement of the relation between two events" (in this case the intervention and the outcome), and Clarke et al. (2013) suggest that statistical studies do not solve the problem of confounding ("an observed dependence between A and B may be attributable to variation in B's other causes, rather than variation in A").

The generalizability of clinical research data is another major limitation. The RCT's major strength, the statistical power it generates through large randomized placebo and intervention groups, is also its major epistemic weakness. Patient privacy protections in clinical research, as well as proper randomization, ensure that data analysts, publishers, editors, and any readers of the study are unable to distinguish identifying features of the research subjects. However, clinical medicine is different from clinical research; it is personalized and humanitarian work, and clinicians strive to develop closer relationships with and understandings of their patients, which help them give tailored treatments and diagnoses.
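The efficacy paradox described earlier in this section can be made concrete with toy numbers. The figures below are hypothetical, chosen only to exhibit the structure of the problem, not drawn from any trial:

```python
# Hypothetical response rates (% of patients improved); illustration only.
a_treatment, a_placebo = 55.0, 48.0   # intervention A: large effect, large placebo effect
b_treatment, b_placebo = 30.0, 10.0   # intervention B: smaller effect, small placebo effect

# What an RCT certifies is the *specific* effect: treatment minus placebo.
a_specific = a_treatment - a_placebo   # 7 points; plausibly not significant
b_specific = b_treatment - b_placebo   # 20 points; clearly separated from placebo

# The paradox: A helps more patients overall, yet only B "passes" the trial.
print(f"Overall benefit:  A={a_treatment}%, B={b_treatment}%")
print(f"Specific benefit: A={a_specific}%, B={b_specific}%")
```

Whether A's smaller specific effect would actually fail a significance test depends on sample sizes and variance; the point is only that the trial's verdict tracks the treatment-minus-placebo difference, not the overall benefit a patient receives.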
As such, extrapolating the results of clinical trials, whose subjects are unidentifiable beyond certain biological details, to individual patients is a deeply theory-laden endeavor (Clarke et al., 2013; Guyatt, 1992). The purpose of randomization and other features of well-designed clinical trials is to reduce bias, but this comes at the cost of universal applicability. This issue is often called the paradox of internal and external validity: RCTs have a high degree of internal validity but limited external validity, that is, limited applicability to patients outside the group of research subjects in the trial. Moreover, even if the clinician has sufficient reason to believe the results of a particular study can be translated to their patient, the interventions and courses of action the study explores may not even be available in many settings and to many clinicians. RCTs are enormous research efforts with hundreds of thousands of dollars in funding, conducted inter-institutionally or at facilities with ample resources to manage the recruitment, communication, administration of the study, and data processing that go into the trial. Interventions tested at this level may not be possible in resource-poor settings (Nadav & Dani, 2006).

WHERE EVIDENCE-BASED PUBLIC HEALTH FALLS SHORT

While EBM and EBPH are similar in the principles and types of evidence their practice encourages, certain remarkable distinctions exist between them in practice. While this is not a strict rule, the clinical application of EBM is generally focused on evaluating the efficacy of a particular intervention such as a medication or a procedure. In the scheme of public health, there is unlikely to be a single intervention for any issue in consideration. Rather, public health often runs through programs and initiatives at the community level which weave numerous interventions together.
Additionally, population-based studies at the public health level usually take much longer than laboratory studies or randomized clinical trials (Brownson et al., 2009). A program aimed at reducing childhood obesity, for example, can easily take years to produce appreciable results with statistical significance. Furthermore, medicine and public health are both about the health of individuals, but public health also deals with community health overall. While individual and community health overlap, there are occasions where distinct goals in each domain can cause friction (Tonelli, 1998). One such example is vaccines, which benefit the community immensely but whose individual use can have grey areas such as medical or religious exemptions. Because of this, EBPH falls into the unfortunate circumstance of having all the same limitations as EBM, plus additional limitations unique to itself.

Studies such as RCTs are designed to determine whether a particular intervention leads to a certain outcome. In the medical setting, the intervention and outcome can be reasonably well defined. However, in the public health setting, interventions and outcomes are both much more subjective and less rigorously defined. As noted above, public health studies and initiatives rarely have a single testable intervention; they are usually more complex, layered, and carried out over significantly longer timelines. For these reasons, problems with internal and external validity are compounded when drawing and applying RCT-based conclusions.

Factors that are not strictly medical or biological have an equal, or arguably greater, hand in shaping public health problems, interventions, and outcomes. These include political ideology and governance, which influence which interventions can even be tested, let alone implemented.
Brownson, Fielding & Maylahn (2009) mention water fluoridation and needle exchange programs as examples of interventions that may never see an effect in some communities due to political interference. Moreover, compliance is an issue unique to EBPH studies. Public health interventions operate at the community level and do not explicitly recruit research participants, meaning individuals in the area of concern may choose not to follow or cooperate with the interventional programs being tested. This may yield data suggesting a program is ineffective even though factors preventing compliance, rather than the program itself, are at play.

Non-medical and non-biological factors also closely shape causal relationships between interventions and outcomes. While this is true for clinical questions, it is amplified for public health ones. Nadav and Dani (2006) explain that "causation in the field of public health is not only biological but also behavioral, social and cultural", all factors that are difficult to grasp with statistical methods. An example of the theory-ladenness of data obtained through public health interventions, and of the fallacies associated with drawing conclusions from face-value statistics, is the childhood immunization case given by Victora, Habicht & Bryce (2004). "A public health intervention with a long and complex causal pathway is the immunization of children against vaccine-preventable disease", they write. "Successful immunization minimally requires that health workers are trained to deliver the correct dose to children within specific age ranges; that health workers have syringes, needles, and viable vaccines available at the delivery site; and that mothers know when and where to take their child for vaccination and have the means and motivation to get there. Only after the successful completion of these steps can the biological agent be delivered to the target population".
When measuring the outcome of lower child mortality from vaccine-preventable illnesses, the numbers that are statistically generated have no means of conveying such complexities and, at face value, can discredit useful interventions when unaccompanied by important contextual clues such as those given above.

PANDEMICS

The above examples highlight the constraints of the evidence-based paradigm in its current form in both the clinical and public health settings. These evidence-based principles are challenged further when applied during public health crises such as global pandemics, especially because of the close intertwining of clinical medicine and community-level public health seen in a pandemic. The COVID-19 pandemic has been a clear and recent example of EBM and EBPH's shortcomings in play. In the clinical setting, the urgency and scale seen with pandemic disease compromise nearly every step of Jenicek's evidence-based medicine framework. Sudden deaths due to an uncharacterized pathogen put stress, anxiety, and pressure on medical and public health interventions that are, as Brownson, Fielding, & Green (2018) point out, usually planned and deployed over months or years. Practitioners grapple with unknown disease pathophysiology, undefined treatment protocols and medications, and keeping themselves and healthcare staff protected while delivering emergent patient care. Fang & Schooley (2021) describe how, during the rise of COVID-19, "clinicians understandably [were] making the calculation that their patient is deteriorating and may not live long enough for the results of carefully performed and analyzed studies to be available—and so they decide[d] to take a chance" by "treating patients on the basis of non–peer-reviewed preprints". The feasibility problem is exacerbated for clinicians who are dealing with the unknown and do not have the bandwidth to conduct extensive literature searches for each patient.
Even if a physician is able to make time to conduct a thorough search for evidence, what they will find, and whether it will be of any use, is yet another problem. The global nature of a pandemic can lead to multiple similar studies being run simultaneously. For example, Chalmers (2020) estimates that "more than 150 unique trials of hydroxychloroquine were initiated during the early part of the [COVID-19] pandemic". These trials have in turn led to a wave of systematic reviews that further bloated publishing pipelines; estimates for just one database suggest that as many as 21 evidence syntheses were published per day after the WHO declared COVID-19 a pandemic (Nature, 2021). The editorial behind this calculation states, "doctors, policymakers, and others who are desperate for authoritative reviews of evidence can struggle to find what they need". Diseases present differently depending on geographical, social, economic, and other non-biological factors, and inclusion and exclusion criteria for trials worldwide can differ substantially depending on how the disease manifests in a particular population. Moreover, the medication or treatment protocol tested may not even be available for use. COVID-19 had immense variability in its clinical manifestations because patients were known to be "profoundly different from one another with regard to both severity and pathophysiology", underscoring the limited external validity seen with statistically powered research in a pandemic (Fang & Schooley, 2021).

The shortcomings of applying evidence-based principles in the setting of a pandemic are revealed most when attempting to answer research questions that are not amenable to intervention-versus-control studies optimized to deliver statistical output. Greenhalgh (2020) gives three examples of such questions in the context of COVID-19: 1) were care home deaths avoidable?
2) why did the global supply chain for personal protective equipment break down? and 3) what role does health system resilience play in controlling the pandemic? Research on prevention, transmission, and testing efforts is similarly nuanced and complicated, and answering these types of questions is most pressing in the early part of a pandemic, when public health and epidemiological efforts to characterize the pathogen, the populations most at risk, detection and tracing strategies, and diagnostic testing are most urgent and most needed.

The limitations of EBPH are already complex, particularly due to the social, behavioral, cultural, political, religious, and other factors that make it impossible for individuals in a community to be treated like compliant research subjects. Non-critical public health interventions can attempt to predict which non-biological factors may play a role and design a study that anticipates these variables. For example, decades of research have identified several social, cultural, financial, and related factors that may affect a person's smoking habits or risk for certain cancers. Yet the uncertainty that comes with a pandemic, particularly early on, does not allow for such factors to be accounted for. Another important limitation of trying to conduct statistically powered studies in the context of a pandemic is that randomization, the prized attribute of evidence-based principles, is usually lost. As Greenhalgh (2021) clarifies, "a [pandemic intervention] study design that requires the random allocation of people to the intervention and control groups and their follow-up to measure particular predefined outcomes may be impractical". Such a study design is likely also underpowered, unable to be conducted blind, and may not generate results in the time needed.
Consider examples such as school and community closures, lockdowns, positivity tracking, and vaccination rates, which do not lend themselves to randomization because there are no recruited study participants who can be assigned to an intervention or control arm during registration. Rather, the participants are community members who are often unaware that a study is even taking place. Furthermore, trying to establish control groups to determine the efficacy of prevention measures for a pandemic can be considered unethical. In a traditional clinical trial, the process of the trial is carefully explained to eligible patients, who then consent to their participation and acknowledge that they may be placed in the placebo arm. This opportunity is generally not given to participants in the public health setting.

Perhaps the most relevant example of statistical reliance working to the detriment of prevention efforts is the case of face masks. The merit of face masks was ambiguous for much of the early phase of the pandemic, and much of this ambiguity can be attributed to the lack of randomized controlled trials providing authoritative, statistical data on their benefits, which is why guideline developers such as the WHO and CDC were initially reluctant to issue statements vouching for their use. For example, the DELVE initiative by the United Kingdom's Royal Society conducted a review of the available evidence on the use of face masks for respiratory virus protection in May 2020. The reception to this review was altogether lackluster, with one of the medical experts who reacted to the report being quoted as saying: "That is not a piece or [sic] research. That is a non-systematic review of anecdotical [sic] and non-clinical studies. The evidence we need before we implement public interventions involving billions of people, must come ideally from randomized controlled trials at population level or at least from observational follow-up studies with comparison groups.
This will allow us to quantify the positive and negative effects of wearing masks. Based on what we now know about the dynamics of transmission and the pathophysiology of COVID-19, the negative effects of wearing masks outweigh the positive" (Science Media Centre, 2020). Greenhalgh (2020) summarizes the reactions of many quoted in the reaction article, saying "the report was criticized by epidemiologists for being "non-systematic" and for recommending policy action in the absence of a quantitative estimate of effect size from robust randomized controlled trials". Nowadays, masks are considered to be the most effective community-level prevention strategy, owing in part to the rapid emergence of COVID-19 variants which evade the full protection initially offered by coronavirus vaccines.

SOLUTIONS

Evidence generated from randomized clinical trials is useful to consult, especially for clearly clinical questions concerning medications, medical procedures, and treatment protocols. However, the time and resources these trials take are prohibitive, especially early in a pandemic, and the limited external validity seen with dozens of small trials can be a hurdle. Within the constraints of a pandemic, though, these problems are tolerable, currently unavoidable, and eventually self-correcting, because one natural solution to the problem of limited external validity is simply more studies with different subject groups, which are then fed into comparative reviews. This was seen with the COVID-19 pandemic, where hundreds of small clinical trials indeed overwhelmed the publishing pipeline and exhausted physicians trying to keep up with the waves of new data each day, yet led to the discovery of the medications used regularly in therapy today, the development of the highly effective COVID-19 vaccines, and the ruling out of exciting but ineffective drugs such as hydroxychloroquine, which was consistently shown to be ineffective across many trials.
As such, the most productive way to improve the current availability of clinical trial data is to speed up trial development and increase trial applicability and patient scope. A recent innovation that maintains the integrity of RCTs while targeting these improvements is the pre-designed and hibernated clinical trial. In pre-designed and hibernated trials, the design and logistical work that is often the largest hurdle before activation is taken care of in advance, after which the trial is set aside and preserved (“hibernated”), ready to be rapidly activated and put into action when the time comes (Brittain et al., 2015; Simpson et al., 2019). This pre-work includes Institutional Review Board (IRB) approval, scope and eligibility design, and more. Such hibernated trials are reviewed periodically for compliance and usability, and are generally multi-site, significantly increasing their coverage so that they can yield large amounts of data quickly once activated. Furthermore, multi-site trials cast a larger net and catch a wider variety of patients, which increases the external validity of the data they obtain. Consolidating medical centers' efforts away from hundreds of small-scope trials toward fewer trials with larger scope also reduces pressure on the publishing pipeline. Currently, a small number of such trials exist for influenza pandemics.
The major pitfall of the pre-designed and hibernated trial approach is that it makes significant assumptions about the nature of the pandemic for which it will be activated. An appropriate trial can only be designed once researchers know what intervention(s) to test, which is nearly impossible for an unexpected and uncharacterized future event. However, this problem applies more to research on medical interventions and treatments, and less to prevention, transmission, and other public health measures.
Current surveillance data, as well as scientific knowledge on modes of disease transmission, provide many pieces of data with which to reasonably predict characteristics of the next pandemic. For example, a 2018 report by Johns Hopkins University predicted that microorganisms of ‘global catastrophic biological risk’ would have a respiratory mode of spread. This report came a full year prior to the first detected case of SARS-CoV-2, which would eventually become the causative agent behind the COVID-19 global pandemic. Current literature also places heavy emphasis on zoonotic modes of transmission, or diseases acquired through interaction with non-domesticated animals (Barclay, 2008; Morse et al., 2012). As such, there are enough hints available to develop at least some prediction-based trials in advance with confidence.
Lastly, evidence-based principles must facilitate adaptation in times of crisis. Mechanistic evidence, generally research providing explanatory theory on biological plausibility, currently has little to no room under the evidence-based paradigm even though it is plentiful. This is especially true early in a pandemic, for research on prevention and transmission, and particularly for questions for which a statistical study may not be possible or suitable. For instance, a lower-grade cohort study on the use of face masks for COVID-19 during peak levels of community transmission should be prioritized over a randomized trial from many years prior on a different disease with unspecified levels of community transmission, even though the latter carries higher statistical power. As the pandemic evolves and research on the current disease begins to resemble the statistical studies that sit high on the hierarchy of evidence, those studies should naturally be prioritized.
But the unavailability of high-powered statistical research early on does not warrant removing mechanistic or less statistically powered studies from consideration simply because of the nature of their structure. Risk assessment is one method that can enable the use of mechanistic evidence, especially evidence that is explanatory or causality-based with regard to prevention and transmission. Risk assessment is a common strategy employed by clinicians when providing recommendations or possible mitigation measures for their patients: if a particular home remedy, physical manipulation, or lifestyle change may have upsides with little to no negative risk, a physician may recommend its use even if there is limited evidence backing its effectiveness. A similar philosophy can be used for pandemics, albeit at a much more controlled level. For example, early in the COVID-19 pandemic, handwashing and fomite (surface-spread) precautions were universally advised before any concrete data on their effectiveness or the SARS-CoV-2 virus's mode of spread were documented. While these two interventions were later shown to be rather ineffective compared to masking, social distancing, public closures, and vaccination, their implementation carried so little risk and so little potential for backfire that they were solid attempts at controlling the spread of the virus. As such, if mechanistic evidence on a particular prevention strategy becomes widely available during a future pandemic, policy-makers should be encouraged to adopt that strategy, despite the lack of statistical evidence behind it, if the intervention carries low risk.
CONCLUSIONS
The COVID-19 pandemic has demonstrated weaknesses in the evidence-based framework on which healthcare currently relies. The rising threat of new pandemics has prompted discussion of the continued usage of EBM and EBPH in their current forms.
It is clear that evidence-based principles have major shortcomings when applied in response to a global pandemic, especially early on. However, these shortcomings do not warrant a complete abandonment of the evidence-based paradigm. The principle of surveying and utilizing the best available evidence for clinical and public health practice is not the cause for concern; rather, the concern lies in the one-dimensional philosophy behind what is considered the ‘best’ evidence, the unyielding weighting toward controlled, single-intervention statistical studies, and what Greenhalgh (2020) calls an “over-application” of the hierarchy of evidence. Efforts must be made to both speed up randomized clinical trials and widen their applicability, and the evidence-based paradigm's insistence on statistical forms of research must concurrently evolve to allow different types of research to have a say when appropriate. This thesis encourages clinicians and decision-makers educated in the era of EBM to reassess the value they place on different forms of evidence and to allow grace and consideration for research that is lower in statistical power but may be more relevant to the disease in question. This discretion applies especially to research focused on prevention and transmission, and to research suggesting interventions that carry considerably lower risk if applied. This thesis also expresses support for novel clinical trial structures, such as pre-designed and hibernated trials, that may deliver high-powered results in a shorter time frame with wider applicability.
REFERENCES
Adalja, A. A., Watson, M., Toner, E. S., Cicero, A., & Inglesby, T. V. (2018). The Characteristics of Pandemic Pathogens.
Barclay, E. (2008). Predicting the next pandemic. The Lancet, 372(9643), 1025–1026. https://doi.org/10.1016/s0140-6736(08)61425-7
Brittain, C., Meakin, G., Childs, M., Duley, L., & Lim, W. S. (2015).
Conducting a randomised trial during an influenza pandemic: An example of a trial set up and ‘hibernated’ ready to activate and recruit within 4 weeks. Trials, 16(S2). https://doi.org/10.1186/1745-6215-16-s2-o14
Brownson, R. C., Fielding, J. E., & Green, L. W. (2018). Building capacity for evidence-based public health: Reconciling the pulls of practice and the push of research. Annual Review of Public Health, 39(1), 27–53. https://doi.org/10.1146/annurev-publhealth-040617-014746
Brownson, R. C., Fielding, J. E., & Maylahn, C. M. (2009). Evidence-based public health: A fundamental concept for public health practice. Annual Review of Public Health, 30(1), 175–201. https://doi.org/10.1146/annurev.publhealth.031308.100134
Carley, S., Horner, D., Body, R., & Mackway-Jones, K. (2020). Evidence-based medicine and covid-19: What to believe and when to change. Emergency Medicine Journal, 37(9), 572–575. https://doi.org/10.1136/emermed-2020-210098
Chalmers, J. D. (2020). Pandemic trials: Evidence-based medicine on steroids. European Respiratory Journal, 56(6), 2004116. https://doi.org/10.1183/13993003.04116-2020
Claridge, J. A., & Fabian, T. C. (2005). History and development of evidence-based medicine. World Journal of Surgery, 29(5), 547–553. https://doi.org/10.1007/s00268-005-7910-1
Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2013). The evidence that evidence-based medicine omits. Preventive Medicine, 57(6), 745–747. https://doi.org/10.1016/j.ypmed.2012.10.020
Del Mar, C. B., & Anderson, J. N. (2003). Epitaph for the EBM in action series. Medical Journal of Australia, 178(11), 535–536. https://doi.org/10.5694/j.1326-5377.2003.tb05352.x
Doherty, S. (2005). Evidence-based medicine: Arguments for and against. Emergency Medicine Australasia, 17(4), 307–313. https://doi.org/10.1111/j.1742-6723.2005.00753.x
Fang, F. C., & Schooley, R. T. (2021). Treatment of coronavirus disease 2019—evidence-based or personalized medicine?
Clinical Infectious Diseases, 74(1), 149–151. https://doi.org/10.1093/cid/ciaa996
Friesen, P. (2019). Mesmer, the placebo effect, and the efficacy paradox: Lessons for evidence-based medicine and complementary and alternative medicine. Critical Public Health, 29(4), 435–447. https://doi.org/10.1080/09581596.2019.1597967
Greenhalgh, T. (2020). Will COVID-19 be evidence-based medicine's nemesis? PLOS Medicine, 17(6). https://doi.org/10.1371/journal.pmed.1003266
Greenhalgh, T. (2021). Miasmas, mental models and preventive public health: Some philosophical reflections on science in the COVID-19 pandemic. Interface Focus, 11(6). https://doi.org/10.1098/rsfs.2021.0017
Guyatt, G. (1992). Evidence-based medicine. JAMA, 268(17), 2420. https://doi.org/10.1001/jama.1992.03490170092032
Hurley, Z. (2020). Review of Richard Horton (2020), The COVID-19 catastrophe: What's gone wrong and how to stop it happening again. Postdigital Science and Education, 2(3), 1015–1019. https://doi.org/10.1007/s42438-020-00171-y
IDWeek. (2022, October 1). About us. IDWeek. Retrieved October 11, 2022, from https://idweek.org/about-us/
Jenicek, M. (1997). Epidemiology, evidenced-based medicine, and evidence-based public health. Journal of Epidemiology, 7(4), 187–197. https://doi.org/10.2188/jea.7.187
Kennedy, H. L. (1999). The importance of randomized clinical trials and evidence-based medicine: A clinician's perspective. Clinical Cardiology, 22(1), 6–12. https://doi.org/10.1002/clc.4960220106
Lhachimi, S. K., Bala, M. M., & Vanagas, G. (2016). Evidence-based public health. BioMed Research International, 2016, 1–2. https://doi.org/10.1155/2016/5681409
Long, E. R., & Ferebee, S. H. (1950). A controlled investigation of streptomycin treatment in pulmonary tuberculosis. Public Health Reports (1896-1970), 65(44), 1421. https://doi.org/10.2307/4587521
Morse, S. S., Mazet, J. A. K., Woolhouse, M., Parrish, C. R., Carroll, D., Karesh, W. B., Zambrana-Torrelio, C., Lipkin, W. I., & Daszak, P. (2012).
Prediction and prevention of the next pandemic zoonosis. The Lancet, 380(9857), 1956–1965. https://doi.org/10.1016/s0140-6736(12)61684-5
Nadav, D., & Dani, F. (2006). Reconstructing data: Evidence-based medicine and evidence-based public health in context. Dynamis, 26, 287–306.
Nature Editorial. (2021). Evidence-based medicine: How covid can drive positive change. Nature, 593(7858), 168. https://doi.org/10.1038/d41586-021-01255-w
Sackett, D. L., & Rosenberg, W. M. (1995). The need for evidence-based medicine. Journal of the Royal Society of Medicine, 88(11), 620–624. https://doi.org/10.1177/014107689508801105
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71
Science Media Centre. (2020). Expert reaction to review of evidence on face masks and face coverings by the Royal Society DELVE initiative. Retrieved October 13, 2022, from https://www.sciencemediacentre.org/expert-reaction-to-review-of-evidence-on-face-masks-and-face-coverings-by-the-royal-society-delve-initiative/
Serpa Neto, A., & Hodgson, C. (2020). Will evidence-based medicine survive the COVID-19 pandemic? Annals of the American Thoracic Society, 17(9), 1060–1061. https://doi.org/10.1513/annalsats.202006-587ed
Simpson, C. R., Beever, D., Challen, K., De Angelis, D., Fragaszy, E., Goodacre, S., Hayward, A., Lim, W. S., Rubin, G. J., Semple, M. G., & Knight, M. (2019). The UK's pandemic influenza research portfolio: A model for future research on emerging infections. The Lancet Infectious Diseases, 19(8). https://doi.org/10.1016/s1473-3099(18)30786-2
Speckman, R. A., & Friedly, J. L. (2019). Asking structured, answerable clinical questions using the population, intervention/comparator, outcome (PICO) framework. PM&R, 11(5), 548–553. https://doi.org/10.1002/pmrj.12116
Tonelli, M. R. (1998).
The philosophical limits of evidence-based medicine. Academic Medicine, 73(12), 1234–1240. https://doi.org/10.1097/00001888-199812000-00011
Victora, C. G., Habicht, J.-P., & Bryce, J. (2004). Evidence-based public health: Moving beyond randomized trials. American Journal of Public Health, 94(3), 400–405. https://doi.org/10.2105/ajph.94.3.400
Walach, H. (2001). The efficacy paradox in randomized controlled trials of CAM and elsewhere: Beware of the placebo trap. The Journal of Alternative and Complementary Medicine, 7(3), 213–218. https://doi.org/10.1089/107555301300328070
Wenham, C. (2021). What went wrong in the global governance of covid-19? BMJ. https://doi.org/10.1136/bmj.n303
Zhang, W., & Doherty, M. (2018). Efficacy paradox and proportional contextual effect (PCE). Clinical Immunology, 186, 82–86. https://doi.org/10.1016/j.clim.2017.07.018

Name of Candidate: Divyam Goel
Date of Submission: December 20, 2022



