Title | A Validated Method to Identify Neuro-Ophthalmologists in a Large Administrative Claims Database |
Creator | Y. Feng; C. C. Lin; A. G. Hamedani; L. B. De Lott |
Affiliation | Department of Ophthalmology (YF), Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts; Department of Neurology (CCL, LBDL), University of Michigan Medical School, Ann Arbor, Michigan; Departments of Neurology and Ophthalmology (AGH), Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; and Department of Ophthalmology and Visual Sciences (LBDL), Kellogg Eye Center, University of Michigan, Ann Arbor, Michigan |
Abstract | Validated methods to identify neuro-ophthalmologists in administrative data do not exist. The development of such a method will facilitate research on the quality of neuro-ophthalmic care and health care utilization for patients with neuro-ophthalmic conditions in the United States. |
Subject | Quality of Care; Neuro-Ophthalmic Patients |
OCR Text | Original Contribution Section Editors: Clare Fraser, MD; Susan Mollan, MD A Validated Method to Identify Neuro-Ophthalmologists in a Large Administrative Claims Database Yilin Feng, MD, Chun Chieh Lin, PhD, MBA, Ali G. Hamedani, MD, MHS, Lindsey B. De Lott, MD, MS Background: Validated methods to identify neuro-ophthalmologists in administrative data do not exist. The development of such a method will facilitate research on the quality of neuro-ophthalmic care and health care utilization for patients with neuro-ophthalmic conditions in the United States. Methods: Using a nationally representative 20% sample of Medicare carrier files from 2018, we identified all neurologists and ophthalmologists billing at least 1 office-based evaluation and management (E/M) outpatient visit claim in 2018. To isolate neuro-ophthalmologists, the National Provider Identifier numbers of neuro-ophthalmologists in the North American Neuro-Ophthalmology Society (NANOS) directory were collected and linked to Medicare files. The proportion of E/M visits with International Classification of Diseases-10 diagnosis codes that best distinguished neuro-ophthalmic care ("neuro-ophthalmology–specific codes" or NSC) was calculated for each physician. Multiple logistic regression models assessed predictors of neuro-ophthalmology specialty designation after accounting for the proportion of ophthalmology, neurology, and NSC claims and primary specialty designation. Sensitivity, specificity, and positive predictive value (PPV) for varying proportions of E/M visits with NSC were calculated. 
Results: We identified 32,293 neurologists and ophthalmologists who billed at least 1 outpatient E/M visit claim in 2018 in Medicare. Of the 472 NANOS members with a valid individual National Provider Identifier, 399 (84.5%) had a Medicare outpatient E/M visit in 2018. The model containing only the proportion of E/M visits with NSC best predicted neuro-ophthalmology specialty designation (odds ratio 1.05 [95% confidence interval 1.04, 1.05]; P < 0.001; area under the receiver operating characteristic curve [AUROC] = 0.91). Model predictiveness for neuro-ophthalmology designation was maximized when 6% of all billed claims were for NSC (AUROC = 0.89; sensitivity: 84.0%; specificity: 93.9%), but PPV was low (14.9%). The threshold was unchanged when limited only to neurologists billing ≥1% ophthalmology claims or ophthalmologists billing ≥1% neurology claims, but PPV increased (33.3%). L. B. De Lott (K23EY027849) is supported by the National Eye Institute, Bethesda, MD. The authors report no conflicts of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.jneuro-ophthalmology.com). Address correspondence to Lindsey B. De Lott, MD, MS, Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI 48105; E-mail: ldelott@med.umich.edu 
Conclusions: Our study provides a validated method to identify neuro-ophthalmologists that can be further adapted for use in other administrative databases to facilitate future research on neuro-ophthalmic care delivery in the United States. Journal of Neuro-Ophthalmology 2023;43:153–158 doi: 10.1097/WNO.0000000000001794 © 2023 by North American Neuro-Ophthalmology Society Administrative data are increasingly used to examine practice patterns of clinicians delivering care for patients in the real world. Although neurologists and ophthalmologists are often reliably classified, the identification of subspecialists, such as neuro-ophthalmologists, is challenging because most neurologic and ophthalmic subspecialties do not have dedicated subspecialty provider identifiers in administrative claims databases. Ophthalmology subspecialties have primarily used specific procedures or proportions of procedures to identify subspecialists, such as the proportion of procedures that are intravitreal injections or vitrectomies to identify retina specialists (1,2). However, this approach is difficult to use in neuro-ophthalmology, where procedures may vary widely across clinicians because neuro-ophthalmologists may variably perform procedures based on their primary specialty training. More medically based specialties often classify clinicians using the volume of specific diagnosis codes. Yet, the accuracy of clinician classification is often lacking with each of these methods. Linking society membership directories to a database that contains publicly available clinician identifiers, such as the National Provider Identifier (NPI) (3,4), has been deployed in other studies to identify subspecialists. Unfortunately, few databases use NPI to identify specific clinicians, limiting the use of NPI alone. 
In this study, we aim to identify neuro-ophthalmologists in administrative data using Medicare carrier files linked to publicly available NPI data for self-identified neuro-ophthalmologists and to determine the accuracy of an algorithm for identifying neuro-ophthalmologists in databases that do not contain subspecialty identifiers. The development of such a method will facilitate research on the quality of neuro-ophthalmic care and health care utilization for patients with complex neuro-ophthalmic conditions. METHODS This study does not contain protected health information and uses publicly available data. It was deemed exempt by the University of Michigan Institutional Review Board. Study Population Using 20% Medicare carrier files from 2018, clinicians with a Medicare provider specialty code for neurology and/or ophthalmology were included if they also billed at least 1 office-based evaluation and management (E/M) outpatient visit claim in 2018. Only the primary diagnosis was considered. To identify neuro-ophthalmologists, publicly available NPI numbers of neuro-ophthalmologists in the North American Neuro-Ophthalmology Society (NANOS) directory were collected and linked to Medicare carrier files. Two fellowship-trained neuro-ophthalmologists reviewed the billed diagnoses (see Supplemental Digital Content, Table 1, http://links.lww.com/SCJ/A352) and selected the ICD-10 diagnosis code families that best distinguished neuro-ophthalmic care ("neuro-ophthalmology–specific codes" or NSC) from general neurology and ophthalmology care. These code families include optic neuritis (H46), other disorders of optic nerve and visual pathways (H47), paralytic strabismus (H49), other strabismus (H50), and visual disturbances (H53) (see Supplemental Digital Content, Table 2, http://links.lww.com/SCJ/A352). The proportion of each clinician's E/M visits with NSC was calculated. 
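The per-clinician NSC proportion described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the claim representation (pairs of NPI and primary ICD-10 code) and all identifiers in the example are assumptions.

```python
from collections import defaultdict

# ICD-10 families designated in the paper as neuro-ophthalmology-specific
# codes (NSC): H46, H47, H49, H50, H53. A visit counts as NSC when its
# primary diagnosis code begins with one of these family prefixes.
NSC_PREFIXES = ("H46", "H47", "H49", "H50", "H53")

def nsc_proportion(claims):
    """Given E/M visit claims as (npi, primary_icd10) pairs, return the
    fraction of each clinician's visits carrying an NSC diagnosis."""
    total = defaultdict(int)
    nsc = defaultdict(int)
    for npi, code in claims:
        total[npi] += 1
        if code.upper().startswith(NSC_PREFIXES):  # startswith accepts a tuple
            nsc[npi] += 1
    return {npi: nsc[npi] / total[npi] for npi in total}

# Hypothetical example: one clinician bills an optic-nerve disorder (H47.x)
# visit and a glaucoma (H40.x) visit, so half of their visits are NSC.
props = nsc_proportion([("1234567890", "H47.011"), ("1234567890", "H40.11")])
```

Only the primary diagnosis is counted, matching the study population description above.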
Statistical Analysis Multiple logistic regression models with stepwise backward elimination assessed associations between primary specialty designation, the proportion of each physician's Medicare claims that fell into the neurology, ophthalmology, or NSC claims categories, and neuro-ophthalmology specialty designation (dichotomous outcome). The area under the receiver operating characteristic curve (AUROC) was used to evaluate model prediction. Once the best predicting model was determined, the AUROC, sensitivity, specificity, and positive predictive value (PPV) of neuro-ophthalmology specialty designation compared with the NANOS member directory (as the gold standard) were calculated across various proportions of NSC. Because a low number of neuro-ophthalmologists exist in the United States, which may affect the PPV, a sensitivity analysis was conducted using neurologists with ≥1% ophthalmology claims and ophthalmologists with ≥1% neurology claims. We performed additional sensitivity analyses assessing the associations between the proportion of NSC claims and specialty designation as a neuro-ophthalmologist vs ophthalmologist or neuro-ophthalmologist vs neurologist. Outcome The primary outcome was neuro-ophthalmology specialty designation. RESULTS Of 497 NANOS members, 472 (95.0%) had a valid individual NPI in the National Plan and Provider Enumeration System. Of the 472 individuals, 302 (64.0%) designated their specialty as ophthalmology (207W00000X), 135 (28.6%) designated their specialty as neurology (2084N0400X), 13 (2.8%) designated their specialty as neuro-ophthalmology (207WX0109X), and 22 (4.7%) designated their specialty as others. In 2018 Medicare claims, we identified 32,293 neurologists (n = 14,088) and ophthalmologists (n = 18,207) with at least 1 outpatient E/M visit. Two individuals identified themselves as both ophthalmologists and neurologists. 
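The model-fitting step in the Statistical Analysis section can be sketched as a logistic regression of specialty designation on NSC percentage, scored by AUROC. This is a minimal sketch on synthetic data, assuming scikit-learn (the paper does not state its software); the cohort sizes and NSC ranges below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic cohort loosely mimicking Table 2: non-neuro-ophthalmologists
# bill few NSC visits; neuro-ophthalmologists bill substantially more.
nsc_pct = np.concatenate([rng.uniform(0, 5, 500), rng.uniform(10, 60, 50)])
label = np.concatenate([np.zeros(500), np.ones(50)])  # 1 = neuro-ophthalmologist

X = nsc_pct.reshape(-1, 1)
model = LogisticRegression(max_iter=1000).fit(X, label)
auroc = roc_auc_score(label, model.predict_proba(X)[:, 1])
# Exponentiating the coefficient gives the odds ratio per 1-point
# increase in NSC %, analogous to the paper's reported OR of 1.05.
odds_ratio = float(np.exp(model.coef_[0, 0]))
```

On well-separated synthetic groups like these the AUROC approaches 1; the paper's real-data AUROC of 0.91 reflects the overlap between neuro-ophthalmologists and generalists.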
Of the 472 NANOS members with a valid individual NPI, 399 (84.5%) had a Medicare outpatient E/M visit in 2018. One hundred twenty-four (31.1% of 399) of these neuro-ophthalmologists had neurology as their specialty designation in Medicare and 275 (68.9% of 399) had ophthalmology as their specialty designation. Covariates Each clinician's primary specialty designation (i.e., neurology or ophthalmology) in Medicare claims was identified using Medicare provider specialty codes (13 and 18, respectively). To better understand physician practice patterns, we categorized E/M visit claims into the Healthcare Cost and Utilization Project Clinical Classifications Software (CCS) categories to identify the most common conditions seen by neurologists, ophthalmologists, and neuro-ophthalmologists. In brief, the CCS is a project sponsored by the Agency for Healthcare Research and Quality that collapses the International Classification of Diseases (ICD)-9/10 diagnostic and procedure codes into clinically meaningful categories or specific conditions (5). Two fellowship-trained neuro-ophthalmologists (L.B.D.L. and A.G.H.) reviewed these claims (see Supplemental Digital Content, Table 1, http://links.lww.com/SCJ/A352). The most commonly billed CCS diagnosis categories in outpatient E/M visits for each specialty are TABLE 1. 
Most common Clinical Classifications Software categories billed in office E/M visits by specialty (CCS category, N* [%]).
Neurology (excluding claims billed by neuro-ophthalmologists):
653 Dementia 122,069 (10.1)
79 Parkinson 110,936 (9.2)
83 Epilepsy 109,019 (9.0)
6.9.1 Disorders of the peripheral nervous system 107,975 (8.9)
81 Tremor/restless leg syndrome/amyotrophic lateral sclerosis 96,751 (8.0)
205 Back pain 92,009 (7.6)
84 Headache/migraine 89,379 (7.4)
6.9.3 Chronic pain/abnormality of gait 81,322 (6.7)
109a Stroke 79,508 (6.6)
260 Sleep disorder 48,174 (4.0)
80 Multiple sclerosis 39,583 (3.3)
93 Dizziness/vertigo 26,938 (2.2)
6.9.2 Other central nervous system disorders 20,124 (1.7)
245 Syncope 10,294 (0.9)
Ophthalmology (excluding claims billed by neuro-ophthalmologists):
88 Glaucoma 975,140 (22.7)
87 Retinal detachments, defects, vascular occlusion, and retinopathy 921,321 (21.4)
86 Cataract 870,073 (20.2)
91 Other eye disorders 665,869 (15.5)
90 Inflammation: infection of eye (except that caused by tuberculosis or sexually transmitted disease) 259,877 (6.0)
50 Diabetes with ketoacidosis or uncontrolled diabetes 201,682 (4.7)
49 Diabetes mellitus without complications 184,555 (4.3)
89 Blindness and vision defects 58,259 (1.4)
47 Other and unspecified benign neoplasm 27,244 (0.6)
Neuro-ophthalmology†:
91 Other eye disorders 11,773 (26.5)
89 Blindness and vision defects 6,179 (13.9)
88 Glaucoma 5,222 (11.8)
86 Cataract 4,324 (9.7)
87 Retinal detachments, defects, vascular occlusion, and retinopathy 2,876 (6.5)
90 Inflammation: infection of eye (except that caused by tuberculosis or sexually transmitted diseases) 2,264 (5.1)
6.9.1 Disorders of the peripheral nervous system 1,192 (2.7)
49 Diabetes mellitus without complications 979 (2.2)
84 Headache/migraine 849 (1.9)
47 Other and unspecified benign neoplasm 793 (1.8)
109a Stroke 786 (1.8)
80 Multiple sclerosis 610 (1.4)
50 Diabetes with ketoacidosis or uncontrolled diabetes 588 (1.3)
93 Dizziness/vertigo 555 (1.3)
81 Tremor/restless leg syndrome/amyotrophic lateral sclerosis 483 (1.1)
6.9.3 Chronic pain/abnormality of gait 484 (1.1)
6.9.2 Other central nervous system disorders 481 (1.0)
653 Dementia 380 (0.9)
*N = number of claims. †Neuro-ophthalmology clinical classification software categories are distinct from ophthalmology-specific codes.
TABLE 2. Proportion of neurology, ophthalmology, and neuro-ophthalmology–specific claims by clinician specialty; values are mean % (SD).
Neurology: neurology claims* 82.5 (20.5); ophthalmology claims* 2 (5.5); neuro-ophthalmology–specific diagnoses* 0.56 (3)
Ophthalmology: neurology claims 0.7 (3.2); ophthalmology claims 95.2 (10); neuro-ophthalmology–specific diagnoses 4.6 (14.7)
Neuro-ophthalmology† (all): neurology claims 20.9 (26.2); ophthalmology claims 70.5 (28.4); neuro-ophthalmology–specific diagnoses 35.1 (25.7)
Neuro-ophthalmology with neurology specialty designation (n = 124): neurology claims 48.3 (29.5); ophthalmology claims 41.1 (30.1); neuro-ophthalmology–specific diagnoses 30.0 (24)
Neuro-ophthalmology with ophthalmology specialty designation (n = 275): neurology claims 8.5 (10.8); ophthalmology claims 83.8 (14.1); neuro-ophthalmology–specific diagnoses 37.4 (26.1)
*Neurology and ophthalmology claims refer to Clinical Classifications Software categories in Table 1. Neuro-ophthalmology–specific diagnoses refer to the neuro-ophthalmology–specific codes (NSC) in Supplemental Digital Content (see Table 1, http://links.lww.com/SCJ/A352). †Neuro-ophthalmologists were not counted in the neurology or ophthalmology category.
summarized in Table 1. Compared with neuro-ophthalmologists with specialty designation in Medicare as ophthalmology, neuro-ophthalmologists with specialty designation as neurology had a higher proportion of claims for neurologic conditions (mean proportion of claims: 48.3% vs 8.5%) and lower proportions for ophthalmologic conditions (mean proportion of claims: 41.1% vs 83.8%) and NSC (mean proportion of claims: 30.0% vs 37.4%) (Table 2). 
A stepwise regression with backward elimination of variables was then undertaken to build 4 multiple logistic regression models to assess the association between the proportion of specialty-specific claims and neuro-ophthalmology specialty designation. Model 4 (odds ratio 1.05 [95% confidence interval 1.04, 1.05]; P < 0.001), which contained only the proportion of each clinician's E/M visits with NSC, best predicted neuro-ophthalmology specialty designation (AUROC = 0.91, Table 3). Table 4 summarizes the AUROC and accuracy of model 4 in classifying neuro-ophthalmologists across varying proportions of NSC. Because the area under the curve uses the relationship between true-positive and false-positive rates to calculate overall model performance, we selected the model with the highest area under the curve value as the preferred model (6). The accuracy of predictiveness for neuro-ophthalmology designation was maximized when 6% of all claims were for neuro-ophthalmology–specific codes (AUROC = 0.89; sensitivity: 84.0%; specificity: 93.9%, Table 4), but PPV was low (14.9%) because the prevalence of neuro-ophthalmologists in the sample is low. A sensitivity analysis was conducted in which the study was limited to neurologists with ≥1% ophthalmology diagnoses and ophthalmologists with ≥1% neurology diagnoses (n = 8,769 total; 325 [68.9%] of NANOS directory members with a valid individual NPI). Sensitivity and specificity were unchanged, but PPV increased to 33.3% (see Supplemental Digital Content, Table 3, http://links.lww.com/SCJ/A352). Furthermore, in separate bivariate logistic regression models, the proportion of NSC claims was able to similarly distinguish neuro-ophthalmologists from neurologists and neuro-ophthalmologists from ophthalmologists. DISCUSSION In this study, we use publicly available physician identifier data and neuro-ophthalmology membership directory data to validate a method to accurately identify neuro-ophthalmologists in administrative data. 
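The threshold sweep summarized in Table 4 can be sketched as below. This is an illustrative reimplementation on a toy cohort (the values here are invented and will not match the paper's table); it shows how sensitivity, specificity, and PPV trade off as the NSC cutoff rises.

```python
def threshold_stats(nsc_pct, is_neuro_ophth, threshold):
    """Accuracy of the rule 'NSC proportion >= threshold' against a
    directory-based gold standard: (sensitivity, specificity, PPV)."""
    tp = fp = tn = fn = 0
    for pct, truth in zip(nsc_pct, is_neuro_ophth):
        predicted = pct >= threshold
        if predicted and truth:
            tp += 1
        elif predicted and not truth:
            fp += 1
        elif truth:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity, ppv

# Toy cohort: four generalists with low NSC percentages and two
# neuro-ophthalmologists with high NSC percentages.
nsc_pct = [0.0, 2.0, 4.0, 8.0, 30.0, 50.0]
truth = [False, False, False, False, True, True]
sens, spec, ppv = threshold_stats(nsc_pct, truth, 6.0)
```

Raising the threshold trades sensitivity for specificity, and PPV stays low whenever true positives are rare relative to the pool, which is the pattern the paper reports at its 6% cutoff.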
We show the effect that changing either the diagnosis proportion or the pool of eligible providers has on estimates of predictive value. Although our model was developed using neuro-ophthalmologists and Medicare data, this method can be further adapted to study neuro-ophthalmology care in other administrative databases. For example, neurologists and ophthalmologists who have 1 or more E/M visits with patients aged 65 years or older and for whom at least 6% of billed claims are NSC could be classified as providing neuro-ophthalmic care, and care patterns across all patients seen by those clinicians could be assessed. Among neurologists and ophthalmologists within Medicare, the proportion of NSC (optic neuritis, other disorders of optic nerve and visual pathways, paralytic strabismus, other strabismus, and visual disturbances) is the best predictor of neuro-ophthalmology specialty designation. This finding reinforces that there are distinct pathologies that are evaluated and treated by fellowship-trained neuro-ophthalmologists compared with non–neuro-ophthalmology–trained neurologists or ophthalmologists. Approximately 6% of all claims billed by any individual neurologist or ophthalmologist should be NSC to maximize the sensitivity and specificity of identifying neuro-ophthalmologists. For many studies, such as those aimed at understanding practice patterns of neuro-ophthalmologists or clinicians who practice similarly to neuro-ophthalmologists, this may be a sufficient means of identifying clinicians providing neuro-ophthalmic care. However, if the goal is maximizing the probability that a specific neurologist or ophthalmologist is truly a fellowship-trained neuro-ophthalmologist, then further limits should be placed on the sample, such as limiting the sample 
TABLE 3. Logistic regression models of the association between specialty-specific claims and neuro-ophthalmology specialty designation at the individual provider level; entries are OR (95% CI), P value.
Model 1 (AUROC 0.89): neurology claims (%) 0.99 (0.98–0.99), P < 0.001; ophthalmology claims (%) 1.00 (1.00–1.01), P = 0.342; neuro-ophthalmology–specific code (%) 1.05 (1.04–1.05), P < 0.001; neurology specialty 4.86 (2.55–9.26), P < 0.001
Model 2 (AUROC 0.90): neurology claims (%) 0.98 (0.98–0.99), P < 0.001; neuro-ophthalmology–specific code (%) 1.05 (1.04–1.05), P < 0.001; neurology specialty 1.24 (0.97–1.58), P = 0.083
Model 3 (AUROC 0.90): neuro-ophthalmology–specific code (%) 1.05 (1.04–1.05), P < 0.001; neurology specialty 3.83 (2.49–5.88), P < 0.001
Model 4 (AUROC 0.91): neuro-ophthalmology–specific code (%) 1.05 (1.04–1.05), P < 0.001
AUROC, area under the receiver operating characteristic; CI, confidence interval; OR, odds ratio.
to neurologists with ≥1% ophthalmology diagnoses and ophthalmologists with ≥1% neurology diagnoses, to improve the PPV. The identification of true fellowship-trained neuro-ophthalmologists allows their practice patterns to be compared with those of general ophthalmologists and/or neurologists, which may be useful in identifying the conditions that should be referred to a fellowship-trained subspecialist. To the best of our knowledge, previous studies of neuro-ophthalmic specialty care have relied primarily on surveys sent to clinicians self-reporting as neuro-ophthalmologists through society directories (e.g., NANOS) (7–10). This method, although somewhat convenient, is subject to recall and response bias. Using administrative data sets allows researchers to understand how neuro-ophthalmologists practice in the real world, not just what is being reported. Using NPI numbers to identify neuro-ophthalmologists in administrative claims data would be most accurate; however, most administrative data sets do not include NPI information. 
Furthermore, data sets that do include this information, such as Medicare, contain only patients who qualify for Medicare coverage based on age or a disabling diagnosis. Neuro-ophthalmologists are unique in that they often care for patients across ages. Therefore, methods such as ours that do not rely solely on NPI are important for investigating the practice and quality of care that neuro-ophthalmologists provide. Our study has limitations. First, we used data from Medicare carrier files, and thus patients who are not enrolled in Medicare and providers who do not participate in Medicare were excluded from the analysis. For example, neuro-ophthalmologists who provide care only to children are likely to be excluded. Second, we identified neuro-ophthalmologists using publicly available NPI numbers of neuro-ophthalmologists in the NANOS directory. Thus, there may be neuro-ophthalmologists who were not classified as neuro-ophthalmologists within our analyses because they were not listed in the NANOS directory. Third, although our original model that included all neurologists and ophthalmologists who billed a claim in Medicare in 2018 had a high AUROC, the PPV was low because neuro-ophthalmology is a small field, with only 497 members identified in the NANOS directory compared with the large number of practicing neurologists and ophthalmologists (32,293). In our sensitivity analysis, limiting the denominator of neurologists and ophthalmologists through a two-stage approach improved model accuracy. Adding this limitation reduced our sample to 8,769 neurologists and ophthalmologists. Investigators using this strategy should consider the goals of their study when selecting their sample and NSC threshold. Finally, we recognize that because ICD codes were developed primarily for billing rather than clinical research use, they are prone to misclassification error. 
In addition, it is possible that providers may code for visual symptoms rather than the underlying cause of the visual changes when filing claims. However, our approach used ICD code families that encompassed both specific and nonspecific diagnoses and visual symptoms. TABLE 4. AUROC, sensitivity, specificity, PPV, and NPV of model 4 based on percent of neuro-ophthalmology–specific claims.
NSC Claims (%)  AUROC  Sensitivity (%)  Specificity (%)  PPV (%)  NPV (%)
1  0.787  91.2  66.2  3.3  99.8
2  0.845  89.5  79.6  5.2  99.8
3  0.874  88.0  86.8  7.7  99.8
4  0.883  86.2  90.4  10.1  99.8
5  0.884  84.2  92.5  12.3  99.8
6  0.889  84.0  93.9  14.7  99.8
7  0.885  82.2  94.7  16.3  99.8
8  0.883  81.2  95.3  17.8  99.8
9  0.880  80.2  95.7  19.0  99.7
10  0.868  77.7  96.0  19.5  99.7
11  0.863  76.4  96.2  20.3  99.7
12  0.852  73.9  96.5  20.7  99.7
13  0.839  71.2  96.6  20.9  99.6
14  0.831  69.4  96.7  21.1  99.6
15  0.827  68.4  96.9  21.8  99.6
16  0.823  67.7  97.0  22.1  99.6
17  0.823  67.4  97.1  22.7  99.6
18  0.817  66.2  97.2  22.8  99.6
19  0.810  64.7  97.3  22.9  99.6
20  0.807  64.2  97.3  23.1  99.5
25  0.788  59.9  97.6  23.9  99.5
30  0.761  54.4  97.8  24.0  99.4
35  0.731  48.1  98.0  23.0  99.3
AUROC, area under the receiver operating characteristic; NPV, negative predictive value; NSC, neuro-ophthalmology–specific codes; PPV, positive predictive value. CONCLUSIONS In this study, we validated a method that identifies neuro-ophthalmologists in a large claims database using data-guided expert consensus. Although our model was built based on Medicare data, it could be further adapted for use in other administrative data sets. Future studies may consider using a strictly data-driven approach to develop similar models and deploy these methods to understand quality, variation, and utilization of neuro-ophthalmic care across the United States. STATEMENT OF AUTHORSHIP Conception and design: Y. Feng, A. G. Hamedani, C. C. Lin, L. B. 
De Lott; Acquisition of data: C. C. Lin, L. B. De Lott; Analysis and interpretation of data: Y. Feng, A. G. Hamedani, C. C. Lin, L. B. De Lott. Drafting the manuscript: Y. Feng, A. G. Hamedani, C. C. Lin, L. B. De Lott; Revising the manuscript for intellectual content: Y. Feng, A. G. Hamedani, C. C. Lin, L. B. De Lott. Final approval of the completed manuscript: L. B. De Lott. REFERENCES 1. Wibbelsman TD, Pandit RR, Xu D, Jenkins TL, Mellen PL, Soares RR, Obeid A, Levin H, Hsu J, Ho AC. Trends in retina specialist imaging utilization from 2012 to 2016 in the United States medicare fee-for-service population. Am J Ophthalmol. 2019;208:12–18. 2. Pandit RR, Wibbelsman TD, Considine SP, Jenkins TL, Xu D, Levin HJ, Obeid A, Ho AC. Distribution and practice patterns of retina providers in the United States. Ophthalmology. 2020;127:1580–1581. 3. Rothman AL, Stoler JB, Vu DM, Chang TC. A geodemographic service coverage analysis of travel time to glaucoma specialists in Florida. J Glaucoma. 2020;29:1147–1151. 4. Vu DM, Stoler J, Rothman AL, Chang TC. A service coverage analysis of primary congenital glaucoma care across the United States. Am J Ophthalmol. 2021;224:112–119. 5. Clinical Classifications Software (CCS) for ICD-9-CM. Agency for healthcare research and quality. Available at: https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 16, 2021. 6. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21:128–138. 7. Biousse V, Calvetti O, Drews-Botsch CD, Atkins EJ, Sathornsumetee B, Newman NJ. Management of optic neuritis and impact of clinical trials: an international survey. J Neurol Sci. 2009;276:69–74. 8. Foo R, Yau C, Singhal S, Tow S, Loo JL, Tan K, Milea D. Optic neuritis in the era of NMOSD and MOGAD: a survey of practice patterns in Singapore. Asia Pac J Ophthalmol. 2022;11:184–195. 9. 
Moss HE, Lai KE, Ko MW. Survey of telehealth adoption by neuro-ophthalmologists during the COVID-19 pandemic: benefits, barriers, and utility. J Neuroophthalmol. 2020;40:346–355. 10. Schallhorn J, Haug SJ, Yoon MK, Porco T, Seiff SR, McCulley TJ. A national survey of practice patterns: temporal artery biopsy. Ophthalmology. 2013;120:1930–1934. |
Date | 2023-06 |
Date Digital | 2023-06 |
References | 1. Wibbelsman TD, Pandit RR, Xu D, Jenkins TL, Mellen PL, Soares RR, Obeid A, Levin H, Hsu J, Ho AC. Trends in retina specialist imaging utilization from 2012 to 2016 in the United States medicare fee-for-service population. Am J Ophthalmol. 2019;208:12-18. 2. Pandit RR, Wibbelsman TD, Considine SP, Jenkins TL, Xu D, Levin HJ, Obeid A, Ho AC. Distribution and practice patterns of retina providers in the United States. Ophthalmology. 2020;127:1580-1581. 3. Rothman AL, Stoler JB, Vu DM, Chang TC. A geodemographic service coverage analysis of travel time to glaucoma specialists in Florida. J Glaucoma. 2020;29:1147-1151. 4. Vu DM, Stoler J, Rothman AL, Chang TC. A service coverage analysis of primary congenital glaucoma care across the United States. Am J Ophthalmol. 2021;224:112-119. 5. Clinical Classifications Software (CCS) for ICD-9-CM. Agency for healthcare research and quality. Available at: https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 16, 2021. 6. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21:128-138. 7. Biousse V, Calvetti O, Drews-Botsch CD, Atkins EJ, Sathornsumetee B, Newman NJ. Management of optic neuritis and impact of clinical trials: an international survey. J Neurol Sci. 2009;276:69-74. 8. Foo R, Yau C, Singhal S, Tow S, Loo JL, Tan K, Milea D. Optic neuritis in the era of NMOSD and MOGAD: a survey of practice patterns in Singapore. Asia Pac J Ophthalmol. 2022;11:184-195. 9. Moss HE, Lai KE, Ko MW. Survey of telehealth adoption by neuro-ophthalmologists during the COVID-19 pandemic: benefits, barriers, and utility. J Neuroophthalmol. 2020;40:346-355. 10. Schallhorn J, Haug SJ, Yoon MK, Porco T, Seiff SR, McCulley TJ. A national survey of practice patterns: temporal artery biopsy. Ophthalmology. 2013;120:1930-1934. |
Language | eng |
Format | application/pdf |
Type | Text |
Publication Type | Journal Article |
Source | Journal of Neuro-Ophthalmology, June 2023, Volume 43, Issue 2 |
Collection | Neuro-Ophthalmology Virtual Education Library: Journal of Neuro-Ophthalmology Archives: https://novel.utah.edu/jno/ |
Publisher | Lippincott, Williams & Wilkins |
Holding Institution | Spencer S. Eccles Health Sciences Library, University of Utah |
Rights Management | © North American Neuro-Ophthalmology Society |
ARK | ark:/87278/s6qh3qy6 |
Setname | ehsl_novel_jno |
ID | 2498905 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s6qh3qy6 |