Title | Validity of International Classification of Diseases Codes for Identifying Neuro-Ophthalmic Disease in Large Data Sets: A Systematic Review |
Creator | Ali G. Hamedani, MD, MHS; Lindsey B. De Lott, MD, MS; Tatiana Deveney, MD; Heather E. Moss, MD, PhD |
Affiliation | Department of Neurology (AGH), Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Neurology and Ophthalmology & Visual Sciences (LBDL, TD), University of Michigan, Ann Arbor, Michigan; and Department of Ophthalmology and Neurology & Neurological Sciences (HEM), Stanford University, Palo Alto, California |
Abstract | Administrative health claims data have been used for research in neuro-ophthalmology, but the validity of International Classification of Diseases (ICD) codes for identifying neuro-ophthalmic conditions is unclear. |
Subject | Databases, Factual; Eye Diseases / classification; Humans; Neurology / standards; Neurology / statistics & numerical data; Ophthalmology / statistics & numerical data |
OCR Text | State-of-the-Art Review. Section Editors: Fiona Costello, MD, FRCP(C); Sashank Prasad, MD. Validity of International Classification of Diseases Codes for Identifying Neuro-Ophthalmic Disease in Large Data Sets: A Systematic Review. Ali G. Hamedani, MD, MHS, Lindsey B. De Lott, MD, MS, Tatiana Deveney, MD, Heather E. Moss, MD, PhD.

Background: Administrative health claims data have been used for research in neuro-ophthalmology, but the validity of International Classification of Diseases (ICD) codes for identifying neuro-ophthalmic conditions is unclear.

Evidence Acquisition: We performed a systematic literature review to assess the validity of administrative claims data for identifying patients with neuro-ophthalmic disorders. Two reviewers independently reviewed all eligible full-length articles and used a standardized abstraction form to identify ICD code–based definitions for 9 neuro-ophthalmic conditions and their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). A quality assessment of eligible studies was also performed.

Results: Eleven articles met inclusion criteria, as follows: 3 studies of idiopathic intracranial hypertension (PPV 54%–91% and NPV 74%–85%), 2 studies of giant cell arteritis (sensitivity 30%–96% and PPV 94%), 3 studies of optic neuritis (sensitivity 76%–99%, specificity 83%–100%, PPV 25%–100%, and NPV 98%–100%), 1 study of neuromyelitis optica (sensitivity 60%, specificity 100%, PPV 43%–100%, and NPV 98%–100%), 1 study of ocular motor cranial neuropathies (PPV 98%–99%), and 2 studies of myasthenia gravis (sensitivity 53%–97%, specificity 99%–100%, PPV 5%–90%, and NPV 100%). No studies met eligibility criteria for nonarteritic ischemic optic neuropathy, thyroid eye disease, and blepharospasm.
Approximately 45.5% provided only one measure of diagnostic accuracy. Complete information about the validation cohorts, inclusion/exclusion criteria, data collection methods, and expertise of those reviewing charts for diagnostic accuracy was missing in 90.9%, 72.7%, 81.8%, and 36.4% of studies, respectively.

Conclusions: Few studies have reported the validity of ICD codes for neuro-ophthalmic conditions. The range of diagnostic accuracy for some disorders and study quality varied widely. This should be taken into consideration when interpreting studies of neuro-ophthalmic conditions using administrative claims data.

Journal of Neuro-Ophthalmology 2020;40:514–519. doi: 10.1097/WNO.0000000000000971. © 2020 by North American Neuro-Ophthalmology Society.

Author note: Dr. Moss receives support from an unrestricted grant from Research to Prevent Blindness to the Stanford Department of Ophthalmology (NIH P30 026877). The authors report no conflicts of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.jneuro-ophthalmology.com). Address correspondence to Ali G. Hamedani, MD, MHS, Department of Neurology, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, 3 W. Gates Bldg, Philadelphia, PA 19104; E-mail: ali.hamedani@pennmedicine.upenn.edu.

The epidemiology, health care utilization, and treatment outcomes of many neuro-ophthalmic disorders are incompletely understood and form the basis of ongoing active research efforts.
Because these disorders are relatively rare, large administrative claims and other health care–related databases ("big data") have become an increasingly popular clinical research tool. As Moss et al (1) discuss in a recent State-of-the-Art Review in this journal, large sample sizes (often in the tens or hundreds of millions) permit the study of rare diseases, and real-world data provide more accurate population-based estimates of disease incidence, prevalence, health care utilization, and costs. However, data that have been collected primarily for insurance billing rather than research purposes are prone to measurement error. Administrative claims databases such as Medicare, commercial health insurance data (e.g., Optum Clinformatics Data Mart), and the National Inpatient Sample have been used to study idiopathic intracranial hypertension (IIH) (2), nonarteritic ischemic optic neuropathy (NAION) (3–7), optic neuritis (8), and thyroid eye disease (TED) (9). In these studies, patients are identified using the International Classification of Diseases (ICD) coding system (ICD-9 or ICD-10), and accurate coding is critical for results to be externally valid. As neuro-ophthalmic disorders are especially prone to diagnostic error (10,11), validation studies that confirm the accuracy of ICD-9 and ICD-10 codes are important for performing and interpreting the results of administrative claims studies in neuro-ophthalmology. In this State-of-the-Art Review, we provide a systematic review of validation studies for ICD-9 and ICD-10 codes in neuro-ophthalmology.
We believe that a better understanding of the validity of ICD codes for identifying neuro-ophthalmic disease in large data sets will help researchers and readers design and interpret administrative claims studies in neuro-ophthalmology.

METHODS

We performed a systematic review of validation studies for ICD-9 and ICD-10 codes used for neuro-ophthalmic disorders. The review protocol was developed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (12), prospectively registered in the PROSPERO database (https://www.crd.york.ac.uk/PROSPERO/), and exempted from institutional review board review. We based our search strategy on a 2012 systematic review of validation studies in neurology (13), which has been applied to similar studies of diabetes (14) and sepsis (15). Briefly, this strategy involves performing separate searches for each of 3 concepts: 1) health services research, administrative claims, and ICD codes; 2) diagnostic validity, including sensitivity, specificity, and predictive value; and 3) the neuro-ophthalmic conditions of interest. For each concept, search terms (including keywords as well as MeSH and EMTREE terms) are separated by the delimiter "or." When the results from all 3 concepts are combined, the delimiter "and" is used such that results must appear in each of the 3 parent searches. We selected the following 9 neuro-ophthalmic conditions of interest: IIH, NAION, giant cell arteritis (GCA), optic neuritis, neuromyelitis optica (NMO), ocular motor cranial neuropathies, myasthenia gravis (MG), TED, and blepharospasm. We chose these conditions because they represent a group of clinically relevant disorders encompassing both afferent and efferent neuro-ophthalmic diseases, many of which have been studied using administrative claims data. We did not include multiple sclerosis (MS) because its validation studies have previously been systematically reviewed (13).
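The "or"/"and" logic of the 3-concept strategy can be sketched with simple set operations. The record identifiers below are purely hypothetical; the actual MeSH/EMTREE search terms are listed in the Supplemental Material.

```python
# Toy illustration of the review's 3-concept search logic (record IDs are
# hypothetical). Within each concept, hits for individual search terms are
# combined with "or" (set union); the final result combines the three
# parent searches with "and" (set intersection).

# Concept 1: health services research / administrative claims / ICD codes
concept1 = {"rec01", "rec02", "rec03", "rec07"} | {"rec04"}
# Concept 2: diagnostic validity (sensitivity, specificity, predictive value)
concept2 = {"rec02", "rec03", "rec05"} | {"rec07"}
# Concept 3: neuro-ophthalmic conditions of interest
concept3 = {"rec03", "rec06"} | {"rec07"}

# A record is retrieved only if it appears in all 3 parent searches.
eligible = concept1 & concept2 & concept3
print(sorted(eligible))  # -> ['rec03', 'rec07']
```

The union-within-concept, intersection-across-concepts structure is what makes the strategy sensitive (many synonyms per concept) yet specific (all concepts required).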
A complete list of our search syntax can be found in the Supplemental Material. We searched MEDLINE (Ovid platform, 1948 to August 8, 2019) and EMBASE (Elsevier platform, 1980 to August 8, 2019). Two reviewers (A.G.H. and L.B.D.) independently reviewed all MEDLINE abstracts, and a third reviewer (T.D.) reviewed all EMBASE abstracts for eligibility. Articles were eligible for review if they were original studies validating ICD-9 or ICD-10 codes against a reference standard; reported sensitivity, specificity, positive predictive value (PPV), or negative predictive value (NPV); and were published in English as full-length articles. We did not include studies with ICD-8 or earlier code definitions. Full-length articles identified as potentially eligible by at least one reviewer were then independently reviewed in full by 2 reviewers (A.G.H. and L.B.D.) to confirm eligibility. Disagreements were resolved by consensus. We identified additional articles for review by examining the reference lists of the full-length articles we identified and by consulting the Journal of Neuro-Ophthalmology Editorial Board to ensure that no additional studies were missed. We used a standardized form to record the specific ICD codes, case definitions, sensitivity, specificity, PPV, and NPV from each study. We also gathered information on study location, data source, sample size, years of study, and reference standard. Study quality was evaluated using adapted reporting guidelines for validation studies of health administrative data (16). Because of the limited number of studies for each condition, we summarized results in descriptive tables but did not perform a quantitative meta-analysis.

RESULTS

A total of 2,811 unique records were identified through database searching, and 9 additional studies were identified by reviewing reference lists and other external sources.
Of these 2,820 records, 31 met eligibility criteria for full-length review. Twenty records were excluded during the review process (see Fig. 1 for exclusion reasons), resulting in a final yield of 11 articles. The range of published sensitivity, specificity, PPV, and NPV for the diseases of interest included in these articles (IIH, GCA, optic neuritis, NMO, ocular motor cranial neuropathies, and MG) is summarized in Table 1. Detailed summaries of each article and case definition can be found in Supplemental Digital Content 1 (see Table E1, http://links.lww.com/WNO/A399) and summarized quality assessments in Supplemental Digital Content 2 (see Table E2, http://links.lww.com/WNO/A400). For NAION, TED, and blepharospasm, no full-length articles met inclusion criteria.

[FIG. 1. Systematic review flow diagram.]

Idiopathic Intracranial Hypertension

Validation data for the ICD-9 and ICD-10 codes for IIH come from 2 sources: a single-center hospital-based inpatient and emergency department database in the United States (17) and a national patient registry containing both inpatient and subspecialty outpatient data from Sweden (18,19). In both cohorts, a single code for IIH in an adult had a PPV of approximately 55% compared with the Friedman and Jacobson or modified Dandy criteria, respectively. In the Swedish cohort, algorithm-based definitions that included 2 separate IIH code instances, age, and acetazolamide prescriptions increased PPV to 89%–90% and provided NPV of 74%–85%.

Giant Cell Arteritis

Two studies used inpatient claims data from 2 different sources (a national public insurance database in France (20) and hospital discharge records in Olmsted County, MN (21)) to examine the validity of ICD-9 and ICD-10 codes for GCA.
Both used American College of Rheumatology diagnostic criteria as the reference standard, although these criteria changed slightly between the publication of the 2 studies. Hospital discharge data usually include a primary diagnosis followed by a number of secondary diagnoses, which can vary in number between states and health systems. The Olmsted County study examined ICD codes in either the primary or secondary diagnosis position and found a much lower sensitivity (30.1%) than the French study (96.4%), which limited its evaluation to primary or primary-related admission or discharge diagnoses.

Optic Neuritis

Three studies reported the validity of ICD-9 and ICD-10 codes for optic neuritis. One study used pediatric inpatient data from Denmark (22), whereas 2 studies used adult inpatient and outpatient data from Canada and the United States, respectively (23,24). Sensitivity and specificity were generally high, although PPV varied widely, from 25.4% in Canada to 100% in the United States. Interestingly, increasing the number of required diagnoses or specifying diagnosis position did not have as much of an impact as in IIH or GCA, respectively. Importantly, the reference standard for all 3 of these studies was the treating clinician's diagnosis, with no requirement for specific examination or MRI findings. One study also required that a serum angiotensin-converting enzyme level be ordered to identify patients with suspected optic neuritis, although this may have been because it was specifically examining optic neuritis in the setting of anti–tumor necrosis factor therapy for other rheumatologic conditions (24).

TABLE 1. Sensitivity, specificity, and positive and negative predictive values for ICD code–based definitions of neuro-ophthalmic conditions

Condition | No. of Studies | No. of Case Definitions/Validations | Sensitivity (Range) | Specificity (Range) | PPV (Range) | NPV (Range)
IIH | 3 | 4 | — | — | 54%–91% | 74%–85%
GCA | 2 | 2 | 30%–96% | — | 94% | —
Optic neuritis | 3 | 6 | 76%–99% | 83%–100% | 25%–100% | 98%–100%
NMO | 1 | 3 | 60% | 100% | 43%–100% | 98%–100%
Cranial nerve palsy | 1 | 3 | — | — | 98%–99% | —
MG | 2 | 6 | 53%–97% | 99%–100% | 5%–90% | 100%
NAION | 0 | 0 | — | — | — | —
TED | 0 | 0 | — | — | — | —
Blepharospasm | 0 | 0 | — | — | — | —

GCA, giant cell arteritis; ICD, International Classification of Diseases; IIH, idiopathic intracranial hypertension; MG, myasthenia gravis; NAION, nonarteritic ischemic optic neuropathy; NMO, neuromyelitis optica; NPV, negative predictive value; PPV, positive predictive value; TED, thyroid eye disease.

Neuromyelitis Optica

One of the optic neuritis studies above also included pediatric NMO, comparing ICD-10 code G36.0 with the Wingerchuk 2015 diagnostic criteria (25). Sensitivity was 60%, and specificity was 100%. PPV improved when the diagnosis was required to be in the primary position and when 2 or more claims were required.

Ocular Motor Cranial Neuropathies

One study examined the association between migraine and ocular motor cranial nerve (CN) palsies using public health insurance data from Taiwan and, in doing so, also evaluated the validity of ICD-9 codes for third, fourth, and sixth nerve palsies (26). All had very high PPV (>97%) compared with chart review by 2 neurologists. In this study, a diagnosis was confirmed "if the chart clearly described both the symptoms and the signs of these 3 CN palsies with a detailed diagnostic procedure to identify the underlying causes." However, the signs, symptoms, and diagnostic testing for each cranial neuropathy were not specified. For example, it is unclear whether the Parks–Bielschowsky three-step test was required to diagnose a fourth nerve palsy, or whether testing for ocular MG was required to exclude it as a mimicking cause of third, fourth, or sixth nerve palsy.
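Several of the validated definitions above are algorithms rather than single codes (e.g., the Swedish IIH definition requiring 2 separate code instances, adult age, and an acetazolamide prescription). A minimal sketch of how such an algorithm might be applied to claims records follows; the data layout and field names are hypothetical and do not reflect any registry's actual schema.

```python
# Hypothetical sketch of an algorithm-based IIH case definition of the kind
# validated in the Swedish registry studies: >= 2 separate IIH code instances,
# adult age, and an acetazolamide prescription. Patient layout is illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    age: int
    icd_codes: List[str]                      # all ICD-10 codes ever recorded
    prescriptions: List[str] = field(default_factory=list)

def meets_iih_definition(p: Patient) -> bool:
    # G93.2 is the ICD-10 code for benign intracranial hypertension (IIH).
    n_iih_codes = sum(code == "G93.2" for code in p.icd_codes)
    return (
        n_iih_codes >= 2                      # two separate code instances
        and p.age >= 18                       # adult
        and "acetazolamide" in p.prescriptions
    )

cases = [
    Patient(age=29, icd_codes=["G93.2", "G93.2"], prescriptions=["acetazolamide"]),
    Patient(age=31, icd_codes=["G93.2"]),     # single code only -> excluded
]
print([meets_iih_definition(p) for p in cases])  # -> [True, False]
```

Tightening the rule (more code instances, co-prescriptions) trades sensitivity for PPV, which is exactly the pattern the validation studies report.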
Myasthenia Gravis

Two inpatient and outpatient databases (one using public health insurance data from Canada (27) and the other using a single US hospital's electronic medical record (28)) were used. Sensitivity ranged from 53.1% to 97.1% and was better for inpatient or outpatient codes compared with inpatient-only codes, and for 2 code instances compared with just one. PPV varied even more dramatically (4.8%–89.7%) depending on the number and type of claims required. Specificity and NPV were uniformly excellent (>99%). Of note, ICD codes do not distinguish between ocular and generalized MG, so the two cannot be differentiated in administrative claims databases.

Quality Assessment

Using adapted reporting guidelines (16), study quality varied widely among the 11 studies. Most (n = 9, 81.8%) articles acknowledged in the title, abstract, or introduction that validation was a goal of the study. However, complete information about study methodology was variable. Only one (9.1%) study described the validation cohort to which the reference standard was applied. Although inclusion criteria were described in all studies, only 3 (27.3%) defined exclusion criteria. The expertise of the chart reviewers was described in 7 (63.6%) studies. In reporting measures of diagnostic accuracy, 5 (45.5%) studies reported only a single measure, with PPV (90.9%) and sensitivity (54.5%) most often reported. Of the 10 studies reporting PPV, 4 (40%) compared their results with the population prevalence to allow readers to understand how prevalent a disease is in the validation cohort relative to the population. Split-sample revalidation using a separate cohort was rare (n = 2, 18.2%). All but one (90.9%) study discussed the applicability of their findings in the discussion.
DISCUSSION

In this systematic review, we found few studies validating ICD codes for specific neuro-ophthalmic diagnoses and wide variability in the reported measures of diagnostic accuracy for IIH, GCA, optic neuritis, NMO, ocular motor cranial neuropathies, and MG. In the 11 studies we identified, there were also notable limitations in study methodology that may affect the accuracy and generalizability of the specific diagnostic algorithms investigated. Many of the conditions that neuro-ophthalmologists encounter in clinical practice are rare, limiting both clinical experience and research recruitment efforts at any one center. Administrative claims database studies provide the advantage of being able to study a much larger population across many centers (both academic and nonacademic) throughout an entire health system, which may yield new insights into disease risk factors, treatment utilization, and outcomes, with greater generalizability to target future clinical research efforts. However, the validity of these studies relies heavily on the accuracy with which neuro-ophthalmic conditions can be identified in these databases. Understanding the diagnostic accuracy of ICD codes in administrative claims databases is important for neuro-ophthalmologists to be able to interpret the results of previous large database studies, and efforts to improve their accuracy may also aid future research endeavors. We observed several gaps in the validation literature for neuro-ophthalmic ICD codes. For both NMO and ocular motor cranial neuropathies, only one study each has evaluated coding accuracy, and in the case of the latter, it is unclear how diagnoses of third, fourth, and sixth nerve palsies were confirmed. We did not find any validation studies of the ICD codes for NAION, TED, or blepharospasm.
This is especially problematic for NAION, which has been the subject of a number of population-based and administrative claims studies without prior validation (3–7). This is an important gap in health services research in neuro-ophthalmology, and validation studies for NAION and other disorders are needed to improve the quality and impact of future research in this area. In the validation studies we identified, we observed variability in measures of diagnostic accuracy. The accuracy of these measures depends on a number of factors, including the source of validation data (single-center clinical data vs health insurance database), clinical setting (inpatient vs outpatient), number of ICD codes, and other criteria. For IIH, optic neuritis, NMO, and MG, sensitivity or PPV improved when more than one claim or additional criteria (e.g., age and medication prescription) were required. In IIH, inpatient and combined inpatient–outpatient data yielded similar overall PPV, but in MG, a single inpatient code had lower sensitivity and specificity but better PPV than combined inpatient–outpatient data. In optic neuritis and MG, clinic-based data provided better sensitivity and PPV than claims databases, but the results were similar for IIH, and in GCA, a public health insurance database yielded higher sensitivity than a single hospital's discharge data. In an ideal scenario, a case definition would be validated in the specific database of interest before its use, but as this is not always feasible, validation within a different database may be considered as long as the underlying population and sources of data (inpatient vs outpatient and administrative claims vs electronic health records) are similar. Estimates of the accuracy of ICD codes also depend on the reference standard.
For optic neuritis and ocular motor cranial neuropathies, all validation studies compared ICD codes with the treating neurologist's or ophthalmologist's diagnosis, but data from tertiary neuro-ophthalmology clinics suggest that overdiagnosis of these and other neuro-ophthalmic conditions is high (10,11,29). The extent to which this affects the results of "big data" analyses depends on the specific research question. For studies of health care utilization, diagnostic accuracy (i.e., whether the clinical diagnosis was correct) is a separate question from health care access and delivery. For example, a recent study using commercial health insurance claims data found that only about 60% of patients diagnosed with optic neuritis received a brain MRI (30). Some of those patients may not actually have had optic neuritis, but to a certain extent, this does not matter: if a clinician believes that someone has optic neuritis, even if the diagnosis is ultimately incorrect, a brain MRI should probably be obtained to screen for demyelinating lesions, so the fact that 40% of patients do not receive one represents a relevant health care disparity. However, it is difficult to determine whether this discrepancy is due to coding error (assigning a diagnosis code for optic neuritis when the clinician actually suspected something else) or clinical practice. Coding accuracy has greater implications for studies of disease epidemiology, prognosis, and outcomes: for example, the risk of MS after optic neuritis is likely to be underestimated if optic neuritis is clinically overdiagnosed and overcoded. The validation study literature would also be enhanced by standardized reporting. For several of the conditions we studied, sensitivity and PPV were the only parameters reported, and information on specificity or NPV was lacking. This reflects the manner in which validation studies are generally conducted.
For clinic-based studies, a patient population is identified from a clinical registry or similar source, and ICD codes are examined to determine how many patients carry the code of interest (sensitivity). For administrative claims databases, the population is identified using ICD codes, and charts are reviewed to determine how many actually have the disease of interest (PPV). This highlights several limitations. First, specificity and NPV cannot be determined unless a comparable group of disease-free or ICD code–free patients is examined. For sensitivity studies, determining specificity requires reviewing the charts of disease-free patients and determining how many do not carry the ICD code of interest, and for PPV studies, determining NPV requires a group of patients without the ICD code of interest to determine how many truly do not have the disease of interest. However, defining this population is challenging, especially in neuro-ophthalmology where disease ascertainment often requires specialized examination or testing. For example, to determine NPV in IIH, one would ideally require individuals without the ICD code for IIH to have a fundus examination to confirm the lack of papilledema, but in an observational data set, only a small subset of otherwise healthy young women will have had a recent fundus examination, and the specific reasons for undergoing a recent eye examination are certain to confound the results. This raises a second important caveat: Although sensitivity and specificity are intrinsic properties of the test or code in question, PPV and NPV depend additionally on the disease prevalence within the population of interest. 
When interpreting the results of validation studies, the underlying prevalence of disease should therefore be considered, and the results of a validation study in a highly enriched population (e.g., a demyelinating disease registry) should not be extrapolated to a population with lower disease prevalence (e.g., a nationally representative sample). Furthermore, efforts to increase PPV almost always come at the expense of decreasing NPV, because the positive results that are excluded are likely to contain at least some true positives. In both IIH and optic neuritis, measures to increase PPV by increasing the number of required claims resulted in a corresponding decrease in NPV.

In summary, few studies have examined the validity of ICD codes for neuro-ophthalmic conditions. Measures of diagnostic accuracy have been reported for administrative claims studies of IIH, GCA, optic neuritis, NMO, ocular motor cranial neuropathies, and MG, but not for other conditions such as NAION, TED, or blepharospasm. ICD codes are naturally prone to misclassification error, and limited and variable diagnostic accuracy within specific diagnoses is expected. However, clinicians and researchers should take this into consideration when interpreting and conducting "big data" research studies in neuro-ophthalmology, and additional validation studies are needed to improve the quality of research in this area.

STATEMENT OF AUTHORSHIP

Category 1: a. Conception and design: A. G. Hamedani and L. B. De Lott; b. Acquisition of data: A. G. Hamedani, L. B. De Lott, and T. Deveney; c. Analysis and interpretation of data: A. G. Hamedani and L. B. De Lott. Category 2: a. Drafting the manuscript: A. G. Hamedani; b. Revising it for intellectual content: L. B. De Lott, T. Deveney, and H. E. Moss. Category 3: a.
Final approval of the completed manuscript: A. G. Hamedani, L. B. De Lott, T. Deveney, and H. E. Moss.

REFERENCES
1. Moss HE, Joslin CE, Rubin DS, Roth S. Big data research in neuro-ophthalmology: promises and pitfalls. J Neuroophthalmol. 2019;39:480–486.
2. Sodhi M, Sheldon CA, Carleton B, Etminan M. Oral fluoroquinolones and risk of secondary pseudotumor cerebri syndrome: nested case-control study. Neurology. 2017;89:792–795.
3. Rubin DS, Matsumoto MM, Moss HE, Joslin CE, Tung A, Roth S. Ischemic optic neuropathy in cardiac surgery: incidence and risk factors in the United States from the National Inpatient Sample 1998 to 2013. Anesthesiology. 2017;126:810–821.
4. Rubin DS, Parakati I, Lee LA, Moss HE, Joslin CE, Roth S. Perioperative visual loss in spine fusion surgery: ischemic optic neuropathy in the United States from 1998 to 2012 in the Nationwide Inpatient Sample. Anesthesiology. 2016;125:457–464.
5. Cestari DM, Gaier ED, Bouzika P, Blachley TS, De Lott LB, Rizzo JF, Wiggs JL, Kang JH, Pasquale LR, Stein JD. Demographic, systemic, and ocular factors associated with nonarteritic anterior ischemic optic neuropathy. Ophthalmology. 2016;123:2446–2455.
6. Lee MS, Grossman D, Arnold AC, Sloan FA. Incidence of nonarteritic anterior ischemic optic neuropathy: increased risk among diabetic patients. Ophthalmology. 2011;118:959–963.
7. Lee YC, Wang JH, Huang TL, Tsai RK. Increased risk of stroke in patients with nonarteritic anterior ischemic optic neuropathy: a nationwide retrospective cohort study. Am J Ophthalmol. 2016;170:183–189.
8. Guo D, Liu J, Gao R, Tari S, Islam S. Prevalence and incidence of optic neuritis in patients with different types of uveitis. Ophthalmic Epidemiol. 2018;25:39–44.
9. Stein JD, Childers D, Gupta S, et al. Risk factors for developing thyroid-associated ophthalmopathy among individuals with Graves disease. JAMA Ophthalmol. 2015;133:290–296.
10. Fisayo A, Bruce BB, Newman NJ, Biousse V.
Overdiagnosis of idiopathic intracranial hypertension. Neurology. 2016;86:341–350.
11. Stunkel L, Kung NH, Wilson B, McClelland CM, Van Stavern GP. Incidence and causes of overdiagnosis of optic neuritis. JAMA Ophthalmol. 2018;136:76–81.
12. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–269, W64.
13. St Germaine-Smith C, Metcalfe A, Pringsheim T, Roberts JI, Beck CA, Hemmelgarn BR, McChesney J, Quan H, Jette N. Recommendations for optimal ICD codes to study neurologic conditions: a systematic review. Neurology. 2012;79:1049–1055.
14. Khokhar B, Jette N, Metcalfe A, Cunningham CT, Quan H, Kaplan GG, Butalia S, Rabi D. Systematic review of validated case definitions for diabetes in ICD-9-coded and ICD-10-coded data in adult populations. BMJ Open. 2016;6:e009952.
15. Jolley RJ, Sawka KJ, Yergens DW, Quan H, Jetté N, Doig CJ. Validity of administrative data in recording sepsis: a systematic review. Crit Care. 2015;19:139.
16. Benchimol EI, Manuel DG, To T, Griffiths AM, Rabeneck L, Guttmann A. Development and use of reporting guidelines for assessing the quality of validation studies of health administrative data. J Clin Epidemiol. 2011;64:821–829.
17. Koerner JC, Friedman DI. Inpatient and emergency service utilization in patients with idiopathic intracranial hypertension. J Neuroophthalmol. 2014;34:229–232.
18. Sundholm A, Burkill S, Sveinsson O, Piehl F, Bahmanyar S, Nilsson Remahl AIM. Population-based incidence and clinical characteristics of idiopathic intracranial hypertension. Acta Neurol Scand. 2017;136:427–433.
19. Sundholm A, Burkill S, Bahmanyar S, Nilsson Remahl AIM. Improving identification of idiopathic intracranial hypertension patients in Swedish patient register. Acta Neurol Scand. 2018;137:341–346.
20. Caudrelier L, Moulis G, Lapeyre-Mestre M, Sailler L, Pugnet G. Validation of giant cell arteritis diagnosis code in the French hospital electronic database. Eur J Intern Med. 2019;60:e16–e17.
21. Michet CJ, Crowson CS, Achenbach SJ, Matteson EL. The detection of rheumatic disease through hospital diagnoses with examples of rheumatoid arthritis and giant cell arteritis: what are we missing? J Rheumatol. 2015;42:2071–2074.
22. Boesen MS, Magyari M, Born AP, Thygesen LC. Pediatric acquired demyelinating syndromes: a nationwide validation study of the Danish National Patient Register. Clin Epidemiol. 2018;10:391–399.
23. Marrie RA, Ekuma O, Wijnands JMA, Kingwell E, Zhu F, Zhao Y, Fisk JD, Evans C, Tremlett H. Identifying optic neuritis and transverse myelitis using administrative data. Mult Scler Relat Disord. 2018;25:258–264.
24. Winthrop KL, Chen L, Fraunfelder FW, Ku JH, Varley CD, Suhler E, Hills WL, Gattey D, Baddley JW, Liu L, Grijalva CG, Delzell E, Beukelman T, Patkar NM, Xie F, Herrinton LJ, Fraunfelder FT, Saag KG, Lewis JD, Solomon DH, Curtis JR. Initiation of anti-TNF therapy and the risk of optic neuritis: from the Safety Assessment of Biologic ThERapy (SABER) study. Am J Ophthalmol. 2013;155:183–189.e1.
25. Wingerchuk DM, Banwell B, Bennett JL, Cabre P, Carroll W, Chitnis T, de Seze J, Fujihara K, Greenberg B, Jacob A, Jarius S, Lana-Peixoto M, Levy M, Simon JH, Tenembaum S, Traboulsee AL, Waters P, Wellik KE, Weinshenker BG; International Panel for NMO Diagnosis. International consensus diagnostic criteria for neuromyelitis optica spectrum disorders. Neurology. 2015;85:177–189.
26. Yang CP, Chen YT, Fuh JL, Wang SJ. Migraine and risk of ocular motor cranial nerve palsies: a nationwide cohort study. Ophthalmology. 2016;123:191–197.
27. Breiner A, Young J, Green D, Katzberg HD, Barnett C, Bril V, Tu K. Canadian administrative health data can identify patients with myasthenia gravis. Neuroepidemiology. 2015;44:108–113.
28. Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, Ramelson HZ, Schneider LI, Bates DW. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record. J Am Med Inform Assoc. 2011;18:859–867.
29. Schroeder R, Stunkel L, Kendall E, Gowder M, Nagia L, Eggenberger E, Van Stavern G. Factors leading to the overdiagnosis of 3rd nerve palsy. Presented at: North American Neuro-Ophthalmology Society Annual Meeting; 2019; Las Vegas, NV.
30. Meer E, Shindler KS, Yu Y, VanderBeek BL. Adherence to clinical trial supported evaluation of optic neuritis. Ophthalmic Epidemiol. 2019;26:321–328. |
Date | 2020-12 |
Language | eng |
Format | application/pdf |
Type | Text |
Publication Type | Journal Article |
Source | Journal of Neuro-Ophthalmology, December 2020, Volume 40, Issue 4 |
Collection | Neuro-Ophthalmology Virtual Education Library: Journal of Neuro-Ophthalmology Archives: https://novel.utah.edu/jno/ |
Publisher | Lippincott, Williams & Wilkins |
Holding Institution | Spencer S. Eccles Health Sciences Library, University of Utah |
Rights Management | © North American Neuro-Ophthalmology Society |
ARK | ark:/87278/s61c3nkb |
Setname | ehsl_novel_jno |
ID | 1741128 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s61c3nkb |