Letters to the Editor: J Neuro-Ophthalmol 2024;44:e492–e497

Shining a Light on Disparities in Medicine: Response

We are grateful to Henderson and colleagues for highlighting the importance of reviewing the demographics of neuro-ophthalmology fellowship program directors (PDs). Notably, 75% of PDs provided demographic information, which was compared with the demographic information of neurology and ophthalmology residency programs, as well as US census data and data for students and residents from the Association of American Medical Colleges. As the authors note, there are disparities between the demographics of the PDs (70% male, 58.6% White, and 31% Asian) and the demographics of medical students, residents in ophthalmology or neurology, and their residency program directors. Only 2 neuro-ophthalmology PDs (less than 7%) identified as belonging to a group historically underrepresented in medicine (URiM). One often-provided rationale for these disparities is that the neuro-ophthalmology PDs closely reflect the demographics of the ophthalmology and neurology faculty. However, a study published in 2020 evaluated the academic promotion rates of faculty who graduated medical school between 1979 and 2013.1 They found that women and individuals who identify as URiM continue to be underrepresented in academic promotions and in attaining the higher ranks of academia. Women are half as likely as men to become department chairs, underscoring the widening disparities in rank and academic advancement for women. It is clear that more work is necessary to eliminate all disparities in academic medicine, and shining a light on those disparities is an important step in understanding the problem.

Lynn K. Gordon, MD, PhD, Peter A. Quiros, MD
Stein Eye Institute (LKG), Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California; and Stein Eye Institute (PAQ), Department of Ophthalmology, Doheny Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, California.

The authors report no conflicts of interest.

REFERENCE
1. Richter KP, Clark L, Wick JA, et al. Women physicians and promotion in academic medicine. N Engl J Med. 2020;383:2148–2157.

doi: 10.1097/WNO.0000000000002046
Copyright © 2023 by North American Neuro-Ophthalmology Society

Comment on: Utility of ChatGPT for Automated Creation of Patient Education Handouts: An Application in Neuro-Ophthalmology

We would like to respond to a comment on the published article entitled "Utility of ChatGPT for Automated Creation of Patient Education Handouts: An Application in Neuro-Ophthalmology."1 The study's objectives were to create patient education materials with ChatGPT-3.5, assess their readability with the Simple Measure of Gobbledygook (SMOG) index, and gauge their quality with the Quality of Generated Language Outputs for Patients (QGLOP) tool. The authors created 51 handouts covering 17 medical disorders. The QGLOP tool examined accuracy, bias, currency, and tone, while the SMOG index evaluated readability; every handout was scored by a neuro-ophthalmologist. The accuracy, bias, currency, and tone scores were 2.43, 3.43, and 3.02, whereas the mean QGLOP score was 11.9 of 16 (74.4%). The average SMOG index corresponded to 10.9 years of education.

The small sample of 51 handouts may limit the generalizability of the results. The study did not address potential biases or limitations in ChatGPT-3.5's text generation. The absence of any discussion of the QGLOP tool's validity and reliability raises concerns about its efficacy as an assessment tool. In addition, the SMOG readability assessment considers only the difficulty of the text, ignoring formatting and visual aids.

To improve statistical power and cover a wider spectrum of medical disorders, the sample size should be increased.
Potential biases in ChatGPT-3.5 should be examined, and countermeasures devised to reduce them, so that the material produced is accurate and credible. The QGLOP tool should be validated and refined to make sure it accurately measures the caliber of patient handouts. For a more thorough assessment, additional techniques, including user testing and feedback, should be considered to gauge handout readability. Finally, it should be noted that the user of the AI system ultimately decides whether to adhere to just and ethical norms.2

Hinpetch Daungsupawong, PhD, Viroj Wiwanitkit, MD
Private Academic Consultant (HD), Phonhong, Lao People's Democratic Republic; and Chandigarh University (VW), Punjab, India.

The authors report no conflicts of interest.

Address correspondence to Hinpetch Daungsupawong, Private Academic Consultant, Phonhong, Lao People's Democratic Republic, 10000; E-mail: hinpetchdaung@gmail.com

REFERENCES
1. Tao BK, Handzic A, Hua NJ, Vosoughi AR, Margolin EA, Micieli JA. Utility of ChatGPT for automated creation of patient education handouts: an application in neuro-ophthalmology. J Neuroophthalmol. 2024;44:119–124.
2. Kleebayoon A, Wiwanitkit V. ChatGPT, critical thing and ethical practice. Clin Chem Lab Med. 2023;61:e221.

doi: 10.1097/WNO.0000000000002175
© 2024 by North American Neuro-Ophthalmology Society
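As a rough illustration of the SMOG index discussed above, the following is a minimal sketch of McLaughlin's published SMOG formula (grade = 1.0430 × √(polysyllables × 30 / sentences) + 3.1291, where polysyllables are words of three or more syllables). The function names and the vowel-group syllable heuristic are illustrative assumptions, not part of the original study, which relied on established readability tooling.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic (assumption): count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """Approximate SMOG grade of a text.

    SMOG = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291,
    where polysyllables = number of words with 3+ syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```

A handout scoring about 10.9 on this scale, as reported in the study, would require roughly 11 years of schooling to comprehend; note that this metric, as the letter observes, captures only lexical difficulty, not formatting or visual aids.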