Title | Real-World Translation of Artificial Intelligence in Neuro-Ophthalmology: The Challenges of Making an Artificial Intelligence System Applicable to Clinical Practice |
Creator | Anat Bachar Zipori; Cailey I. Kerley; Ainat Klein; Rachel C. Kenney |
Affiliation | Ophthalmology Department (ABZ, AK), Tel Aviv Medical Center, Tel Aviv, Israel; Sackler Faculty of Medicine (ABZ, AK), Tel Aviv University, Tel Aviv, Israel; Department of Electrical and Computer Engineering (CK, RK), Vanderbilt University, Nashville, Tennessee; and Department of Radiology and Radiological Sciences and Medicine (RK), Vanderbilt University Medical Center, Nashville, Tennessee |
Abstract | 1. Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li J, Wei L, Zhu P, Liu Y, Chen W, Ting D, Wong T, Chen Y, Lin H. Application of Comprehensive Artificial Intelligence Retinal Expert (CARE) system: a national real-world evidence study. Lancet Digit Health. 2021;3:e486–e495. 2. Xie Y, Nguyen Q, Bellemo V, Yip M, Lee M, Hamzah H, Lim G, Hsu W, Lee ML, Wang JJ, Cheng CY, Finkelstein EA, Lamoureux EL, Tan GSW, Wong T. Cost-effectiveness analysis of an artificial intelligence-assisted deep learning system implemented in the national tele-medicine diabetic retinopathy screening in Singapore. Invest Ophthalmol Vis Sci. 2019;60:5471. 3. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402–2410. 4. van der Heijden AA, Abramoff MD, Verbraak F, van Hecke M, Liem A, Nijpels G. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol. 2018;96:63–68. 5. Milea D, Najjar RP, Jiang Z, Ting D, Vasseneix C, Xu X, Aghsaei Fard M, Fonseca P, Vanikieti K, Lagrèze WA, La Morgia C, Cheung CY, Hamann S, Chiquet C, Sanda N, Yang H, Mejico LJ, Rougier MB, Kho R, Tran THC, Singhal S, Gohier P, Vignal-Clermont C, Cheng CY, Jonas JB, Yu-Wai-Man P, Fraser CL, Chen JJ, Ambika S, Miller NR, Liu Y, Newman NJ, Wong TY, Biousse V. Artificial intelligence to detect papilledema from ocular fundus photographs. N Engl J Med. 2020;382:1687–1695. |
Subject | Artificial Intelligence; Ophthalmology |
OCR Text | Editorial Real-World Translation of Artificial Intelligence in Neuro-Ophthalmology: The Challenges of Making an Artificial Intelligence System Applicable to Clinical Practice Anat Bachar Zipori, MD, Cailey I. Kerley, BS, Ainat Klein, MD, Rachel C. Kenney, PhD Artificial intelligence (AI) is a field in computer science that develops tools to substitute for human intelligence and behavior in the performance of tasks. In some cases, algorithms can surpass the human ability to detect patterns and make predictions. AI applications in clinical medicine may include disease diagnosis, identifying disease progression, and risk stratification. AI systems can potentially augment human workflow to improve the use of clinical and financial resources (1–8). AI systems are based on models developed using different strategies that vary in complexity. Traditional approaches are rule-based and may be sufficient for automated interpretation of a chemical blood test. Machine learning (ML) approaches move beyond rules to learn from patterns in data. Deep learning (DL) is a type of ML that deploys artificial neural networks to analyze the input data at multiple layers and has been broadly applied to image analysis. Ophthalmology has been one of the pioneers in the developing field of DL clinical image analysis, including in the detection of diabetic retinopathy, glaucoma, and age-related macular degeneration (6,7,9,10). There are fewer publications related to neuro-ophthalmic disorders, although recent publications on swollen disc image classification demonstrate the potential for AI in neuro-ophthalmology (5,11,12). Yet some barriers still exist before these techniques can be used reliably (13,14). In the previous issue of the Journal of Neuro-Ophthalmology, Dumitrascu et al (15) reviewed the nuts and bolts of performing and reporting ML (including DL) studies.
In this article, presented as a companion to Vasseneix et al’s (16) “Accuracy of a Deep Learning System for Classification of Papilledema Severity on Ocular Fundus Photographs,” we examine some challenges in translating a scientific proof-of-concept into a medical device or software for use in clinical care. We discuss barriers including procuring and labelling a generalizable data set, quality control of data acquisition, and limitations of study design (16,17). Other factors that delay AI adoption into clinical practice are also considered. The authors report no conflicts of interest. Address correspondence to Anat B. Zipori, MD, Ophthalmology Department, Tel Aviv Medical Center, 6 Weizmann Street, Tel Aviv 6423906, Israel; E-mail: a.bachar5759@gmail.com. Zipori et al: J Neuro-Ophthalmol. 2022;42:287–291. Copyright © North American Neuro-Ophthalmology Society. SELECTING THE DATA Data Availability AI/DL models use copious amounts of data to train, tune, and validate an algorithm. Typically, tens of thousands of data points are needed to train the model, which is a particular hurdle in neuro-ophthalmology, where the rarity of disease limits available data (17). Two approaches to mitigate data scarcity are described below. Transfer learning is the process of adapting pre-existing algorithms to a new dataset or task, which allows use of smaller data sets for training and validation (16). For example, ImageNet (18) is a library of more than 1 million labelled, nonmedical color images that Vasseneix et al (16) used to pretrain their classification network; their fundus photographs of interest later fine-tuned the model for their specific classification task. Data augmentation increases the size of the dataset by duplicating images and tweaking them slightly (e.g., introducing small rotations, translations, or noise) to create synthetic data from existing data (19,20). Generalizability Ideally, an AI/DL product should be accurate across a variety of patient populations, diverse clinical presentations, and a wide range of devices, protocols, and health care systems. Prospective studies using actual clinical data (3,16,21) acquired in a variety of settings to build the model enable such generalizability. Investigators should be sensitive to data imbalance across age, gender, race, and socioeconomic status to accommodate a broader population (22,23). After the initial building and validation of the model, it is essential to evaluate the model using separate, unrelated datasets, acquired by different devices, with diverse populations, and in various clinical settings to achieve the best estimate of a model’s generalizability. An international multicenter collaboration such as the Brain and Optic Nerve Study with Artificial Intelligence Consortium (5), which conducted Vasseneix et al’s study (16), is a fine example of how to overcome limited single-site data resources while sampling diverse populations with different ethnic backgrounds imaged under different clinical conditions. The data sets used by Vasseneix et al (16) were acquired with 15 different types of mydriatic and nonmydriatic fundus cameras by experienced photographers. Future external validation and tuning may use images from mobile phone devices taken by untrained photographers (24). Setting the “Ground Truth” ML (and DL) can be supervised or unsupervised.
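The two data-scarcity mitigations described above can be sketched in a few lines. The sketch below is a toy illustration on synthetic vectors (not fundus images, and not the authors' actual pipeline): a randomly initialized stand-in for a "pretrained" feature extractor is kept frozen while only a small classification head is trained, and the scarce training set is enlarged with jittered copies of each sample.

```python
# Toy sketch of transfer learning plus data augmentation on synthetic data.
# Nothing here is the study's pipeline; it only illustrates the two ideas.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on a large unrelated corpus
# (the role ImageNet pretraining plays); it stays frozen during fine-tuning.
W_pretrained = rng.normal(size=(64, 8))

def features(x):
    return np.tanh(x @ W_pretrained)  # frozen feature extractor

# Small task-specific dataset: 40 synthetic "images" (flattened vectors)
# with binary labels defined, for this toy, to be learnable from features.
X = 0.1 * rng.normal(size=(40, 64))
v_true = rng.normal(size=8)
y = (features(X) @ v_true > 0).astype(float)

# Data augmentation: jittered copies of each sample (a stand-in for small
# rotations, translations, and noise applied to real images).
X_aug = np.vstack([X, X + 0.005 * rng.normal(size=X.shape)])
y_aug = np.concatenate([y, y])

# Fine-tuning: train only a small logistic-regression "head" on top of the
# frozen features, by plain gradient descent on the logistic loss.
F = features(X_aug)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    grad_w = F.T @ (p - y_aug) / len(y_aug)  # logistic-loss gradients
    grad_b = float(np.mean(p - y_aug))
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

pred = (1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5
train_acc = float(np.mean(pred == (y_aug > 0.5)))
```

In a real pipeline the frozen extractor would be a network pretrained on ImageNet and the jitter would be small image rotations and translations; the principle (train only a few parameters, on augmented copies of scarce data) is the same.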
In supervised learning, the training dataset is explicitly labelled (e.g., papilledema, not papilledema), and the algorithm is trained to differentiate between these labels. “Ground truth” is a term used in ML/AI to describe what is considered a true label. In unsupervised learning, the algorithm is trained to look for patterns in the dataset and create its own categories. Although this approach holds the promise of advancing our understanding of disease processes by not constraining the algorithm to our preconceived categories, it presents challenges with interpretation and clinical relevance that are beyond the scope of this editorial. A prerequisite for high-performance supervised models is not just data quantity, but also the quality of labelling and annotations, as inaccuracies in the ground truth can lead to inaccurate predictions (3,25). However, the ground truth may be subjective and difficult to attain, for example, if intragrader or intergrader reliability for labelling is poor. Grading parameters should adhere to consensus protocols and diagnostic criteria described in the literature. To ensure consensus, multiple graders for the same image are often used, although this may be time-consuming and expensive (3). Vasseneix et al (16) used 2 levels of papilledema severity: severe and mild/moderate. Although this reduces the 5 levels of severity used in the Frisen criteria (26), using a binary outcome, rather than a categorical or continuous outcome, for the ground truth can often improve the performance of a DL model. In Vasseneix et al’s study (16), 2 graders labelled the severity of papilledema; in cases of disagreement, 2 additional graders determined the final classification, and only images for which consensus was achieved were used in model development. Data Quality Data quality can be a double-edged sword.
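The adjudicated-consensus labelling described above can be made concrete with a small sketch. The grader labels below are hypothetical, and Cohen's kappa is one standard way (not necessarily the one used in the study) to quantify intergrader reliability before deciding which images need further adjudication.

```python
# Hedged sketch: two-stage consensus labelling with hypothetical grader labels.
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n           # raw agreement
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical severity labels from two independent graders.
grader1 = ["mild", "mild", "severe", "severe", "mild", "severe", "mild", "mild"]
grader2 = ["mild", "severe", "severe", "severe", "mild", "mild", "mild", "mild"]

kappa = cohen_kappa(grader1, grader2)

# Stage 1: keep only images where the first two graders already agree;
# disagreements would be sent to additional graders for adjudication.
consensus = [(i, g1) for i, (g1, g2) in enumerate(zip(grader1, grader2)) if g1 == g2]
needs_adjudication = [i for i, (g1, g2) in enumerate(zip(grader1, grader2)) if g1 != g2]
```

Images in `needs_adjudication` would go to the additional graders, and only images reaching consensus would enter model development, mirroring the workflow described for the study.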
On one hand, poor-quality images (e.g., because of operator variation, signal strength, centration of scan, illumination, beam placement, or patient factors) (27,28) may not be suitable for clinical decision making by humans or AI. Hence, some investigators have highlighted the importance of quality control (QC) in data acquisition (29), with specific standards for the quality of input data stated in the protocol, along with the implications for patient care when this standard is not met (13,14). Vasseneix et al (16) excluded images of insufficient quality from the training set but did not include specifications regarding their quality standard. Conversely, high-quality images are sometimes unachievable because of patient disabilities, and striving for optimal quality may introduce bias and reduce generalizability because patients prone to poor imaging are not included in the training and testing datasets. QC can also become a barrier to utilization. In nonresearch clinical situations, human intelligence excludes images from consideration because of poor quality (e.g., out-of-focus images or images not capturing the area of interest) (12). It is important that AI models identify images that do not meet the quality necessary for accurate classification so as not to mislead the physician. However, if the training dataset contains only high-quality images, such as those in Vasseneix et al (16), the AI model will not be trained to distinguish between usable and unusable images. Transparency One of the major obstacles to clinical acceptance of AI models is what is often referred to as the “black box” of neural networks (30,31): the relationship between a model’s input and output is not fully understood, even by the creators of the model. Many clinicians emphasize the need for transparency to ensure that an AI tool is safe for its intended use (13,14).
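An automated version of such a quality-control gate might look like the sketch below. The variance-of-Laplacian focus measure and both thresholds are illustrative choices, not a published standard; the point is that the front end returns an explicit "ungradable" reason instead of silently classifying a poor image.

```python
# Hedged sketch of an automated QC gate for images, on synthetic arrays.
# Thresholds and the focus measure are illustrative, not a validated standard.
import numpy as np

def laplacian_variance(img):
    """Focus measure: variance of a 4-neighbour Laplacian (higher = sharper)."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def qc_gate(img, min_focus=0.01, min_mean=0.05):
    """Return 'gradable' or an explicit reason the image should be excluded."""
    if img.mean() < min_mean:
        return "ungradable: too dark"
    if laplacian_variance(img) < min_focus:
        return "ungradable: out of focus"
    return "gradable"

# Synthetic stand-ins for fundus photographs (values in [0, 1]).
rng = np.random.default_rng(1)
sharp = rng.random((32, 32))      # high-frequency content -> passes the gate
blurry = np.full((32, 32), 0.5)   # featureless image -> fails the focus check
dark = np.full((32, 32), 0.01)    # underexposed image -> fails the exposure check
```

Only images returning "gradable" would be forwarded to the classifier; the "ungradable" reasons would be surfaced to the photographer or clinician.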
For regulatory purposes (31), it has become necessary to unravel the algorithm’s “black box” and decode the algorithm before it can be used for patient care. Vasseneix et al (16) took a step toward transparency by describing common features in images that were misclassified by their model. A more comprehensive approach is to use a black-box explainability method such as Grad-CAM (32), which highlights the image features the network was considering during classification. By reviewing the output of these techniques for both correctly and incorrectly classified images, experts can evaluate whether a model is learning features relevant to the disease of interest. Explainable AI systems, which show the user the portion of the image that was used to reach the algorithm’s conclusion together with the level of certainty of the diagnosis, can enhance the human–machine interaction (33) and allow clinicians to critically appraise the AI output. DL algorithms can identify exceedingly early markers of sight-threatening diseases that are otherwise undetectable by human intelligence. Transparency of these algorithms alone is unlikely to be sufficient to provide clinical confidence, and utilization trials may be necessary to demonstrate whether acting on the categorization improves patient outcomes. A Human-Centered Approach Is Required for a Useful Product For an AI model to be implemented as a clinical product, it needs to be adopted by the user (34). Beyond being comfortable with the model’s output, clinicians will need to purchase the product and integrate it into their workflow. To justify the cost and modification to practice, they are likely to demand evidence of either improved outcomes or the ability to save resources (35).
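The certainty-reporting idea above can be sketched as follows. The class labels, raw scores, and the 80% certainty threshold are all hypothetical (this is not the study's network); the sketch only shows a model output being converted into a report a clinician can appraise, with low-certainty cases deferred to human review.

```python
# Illustrative sketch: turn hypothetical model scores into a clinician-facing
# report with an explicit certainty level and a deferral rule.
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def clinician_report(logits, labels, min_certainty=0.80):
    """Report the top class and its certainty; defer when certainty is low."""
    probs = softmax(logits)
    p, label = max(zip(probs, labels))
    action = "report" if p >= min_certainty else "defer to human review"
    return {"prediction": label, "certainty": p, "action": action}

labels = ["no papilledema", "mild/moderate", "severe"]
confident = clinician_report([0.1, 0.4, 3.5], labels)   # one dominant score
uncertain = clinician_report([1.0, 1.2, 0.9], labels)   # near-tie between classes
```

A production system would pair this report with a saliency map (e.g., from Grad-CAM) so the clinician can see both where the model looked and how certain it was.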
Wide acceptance of AI as a tool to improve patient care is restricted by practitioners’ preconceived notion that it will not improve their performance (36). Clinicians will likely need targeted education to encourage adoption. In the case of Vasseneix et al (16), the DL model outperformed the neuro-ophthalmologists, providing evidence that their tool could contribute useful medical information if properly integrated into the clinical workflow. Cost-Effectiveness Once reliable AI systems are in place, there is an opportunity to reduce the cost of patient care by using automated diagnostics to save personnel time (2). A long-term cost–benefit evaluation of the model is crucial to demonstrate the feasibility of adopting an AI technology into the clinic and to justify clinical adoption. Unfortunately, almost 90% of AI studies in medicine lack a proper economic impact assessment (37). Studies should discuss medical and economic outcomes, such as cost savings per patient per year (38), and consider the initial investment and operational costs of the AI product compared with the expected economic benefit. Developing and training an AI model is computationally expensive, and its infrastructure and service can carry a significant financial burden that is passed on to the consumer (39). One strategy to reduce this burden is the transfer of pre-existing AI architectures (38,40). Vasseneix et al (16), as previously mentioned, applied this principle by pretraining their classification network on ImageNet (18) data. Regulatory Limitations The FDA has strict regulatory requirements for medical device licensing, and even more so for AI/ML devices, which, although ensuring safety, can be a barrier to implementation (31,41). The FDA must consider that AI models continue to update. Ideally, each version should be separately evaluated, and safety and efficacy continuously assessed. However, this creates a substantial regulatory burden.
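The investment-versus-benefit comparison above reduces to simple break-even arithmetic. Every figure in the sketch below is a made-up placeholder, not data from any cited study; the point is the structure of the calculation, upfront plus operating cost against savings per patient read.

```python
# Back-of-the-envelope sketch of the economic comparison, with invented numbers.
def breakeven_patients(upfront_cost, cost_per_read, saving_per_read):
    """Number of AI reads needed before cumulative savings cover the upfront cost."""
    net_saving = saving_per_read - cost_per_read
    if net_saving <= 0:
        return None  # the tool never pays for itself
    # Ceiling division without floating point.
    return -(-upfront_cost // net_saving)

# Hypothetical figures: $50,000 deployment cost, $2 of compute per read,
# $10 of clinician time saved per read.
n = breakeven_patients(50_000, 2, 10)
```

A real assessment would also discount future savings, include maintenance and regulatory costs, and weigh medical outcomes alongside the dollar figures, but this is the skeleton such analyses share.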
Two solutions have been adopted by the FDA: the “total product lifecycle-based” (TPLC) regulatory approach (41,42) and “locked” algorithms (41,42). TPLC incorporates premarket and postmarket information and allows modifications to be made, while indirectly ensuring safety and efficacy by judiciously observing the company’s level of quality control and good ML practices. In “locked” algorithms (41,42), the system is “locked,” fixing it at a certain level of function. IDx-DR, a commercially available fundus camera system that incorporates AI analysis for diabetic retinopathy, was approved in this manner (2). Other legal considerations are data security and protection and data privacy requirements. Comprehensive data collection, which is necessary for an inclusive AI system, is vulnerable to security breaches and information leakage. Vasseneix et al (16) used deidentified data for training and validation. CONCLUSION Despite recent advances in AI and DL, there is a wide gap between the promise that this technology holds and the implementation of these tools as products used in clinical practice. The challenges create opportunities for novel approaches to dataset creation, bias reduction, model transparency, and clinician training. Ultimately, using a human-centered design approach should increase the quality and generalizability of AI tools, allowing tools such as the papilledema severity classifier described in Vasseneix et al (16) to leave the lab bench and start improving patient care. REFERENCES 1. Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li J, Wei L, Zhu P, Liu Y, Chen W, Ting D, Wong T, Chen Y, Lin H. Application of Comprehensive Artificial Intelligence Retinal Expert (CARE) system: a national real-world evidence study. Lancet Digit Health. 2021;3:e486–e495.
2. Xie Y, Nguyen Q, Bellemo V, Yip M, Lee M, Hamzah H, Lim G, Hsu W, Lee ML, Wang JJ, Cheng CY, Finkelstein EA, Lamoureux EL, Tan GSW, Wong T. Cost-effectiveness analysis of an artificial intelligence-assisted deep learning system implemented in the national tele-medicine diabetic retinopathy screening in Singapore. Invest Ophthalmol Vis Sci. 2019;60:5471. 3. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402–2410. 4. van der Heijden AA, Abramoff MD, Verbraak F, van Hecke M, Liem A, Nijpels G. Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmol. 2018;96:63–68. 5. Milea D, Najjar RP, Jiang Z, Ting D, Vasseneix C, Xu X, Aghsaei Fard M, Fonseca P, Vanikieti K, Lagrèze WA, La Morgia C, Cheung CY, Hamann S, Chiquet C, Sanda N, Yang H, Mejico LJ, Rougier MB, Kho R, Tran THC, Singhal S, Gohier P, Vignal-Clermont C, Cheng CY, Jonas JB, Yu-Wai-Man P, Fraser CL, Chen JJ, Ambika S, Miller NR, Liu Y, Newman NJ, Wong TY, Biousse V. Artificial intelligence to detect papilledema from ocular fundus photographs. N Engl J Med. 2020;382:1687–1695. 6. Ran A, Cheung C, Wang X, Chen H, Luo L, Chan P, Wong M, Chang R, Mannil S, Young A, Yung HW, Pung CP, Heng PA, Tham CC. Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis. Lancet Digit Health. 2019;1:e172–e182. 7.
Ting DSW, Cheung CYL, Lim G, Tan G, Quang N, Gan A, Hamzah H, Garcia-Franco R, Yeo I, Lee S, Wong E, Sabanayagam C, Baskaran M, Ibrahim F, Tan N, Finkelstein EA, Lamoureux EL, Wong I, Bressler NM, Sivaprasad S, Varma R, Jonas J, He M, Cheng C, Cheung G, Aung T, Hsu W, Lee M, Wong T. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017;318:2211. 8. Yim J, Chopra R, Spitz T, Winkens J, Obika A, Kelly C, Askham H, Lukic M, Huemer J, Fasler K, Moraes G, Meyer C, Wilson M, Dixon J, Hughes C, Rees G, Khaw PT, Karthikesalingam A, King D, Hassabis D, Suleyman M, Back T, Ledsam JR, Keane PA, De Fauw J. Predicting conversion to wet age-related macular degeneration using deep learning. Nat Med. 2020;26:892–899. 9. Ting DSW, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, Tan GSW, Schmetterer L, Keane PA, Wong TY. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103:167–175. 10. Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration. Ophthalmol Retina. 2017;1:322–327. 11. Biousse V, Newman NJ, Najjar RP, Vasseneix C, Xu X, Ting DS, Milea LB, Hwang JM, Kim DH, Yang HK, Hamann S, Chen JJ, Liu Y, Wong TW, Milea D, Rondé-Courbis B, Gohier P, Miller N, Padungkiatsagul T, Poonyathalang A, Suwan Y, Vanikieti K, Amore G, Barboni P, Carbonelli M, Carelli V, La Morgia C, Romagnoli M, Rougier MB, Ambika S, Komma S, Fonseca P, Raimundo M, Karlesand I, Alexander Lagrèze W, Sanda N, Thumann G, Aptel F, Chiquet C, Liu K, Yang H, Chan CKM, Chan NCY, Cheung CY, Chau Tran TH, Acheson J, Habib MS, Jurkute N, Yu-Wai-Man P, Kho R, Jonas JB, Sabbagh N, Vignal-Clermont C, Hage R, Khanna RK, Aung T, Cheng CY, Lamoureux E, Loo JL, Singhal S, Ting D, Tow S, Jiang Z, Fraser CL, Mejico LJ, Fard MA.
Optic disc classification by deep learning versus expert neuro-ophthalmologists. Ann Neurol. 2020;88:785–795. 12. Liu T, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of optic disc abnormalities in color fundus photographs using deep learning. J Neuroophthalmol. 2021;41:368–374. 13. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK, SPIRIT-AI; CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. BMJ. 2020;370:m3164. 14. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ, Darzi A, Holmes C, Yau C, Moher D, Ashrafian H, Deeks JJ, Ferrante di Ruffano L, Faes L, Keane PA, Vollmer SJ, Lee AY, Jonas A, Esteva A, Beam AL, Panico MB, Lee CS, Haug C, Kelly CJ, Yau C, Mulrow C, Espinoza C, Fletcher J, Paltoo D, Manna E, Price G, Collins GS, Harvey H, Matcham J, Monteiro J, ElZarrad MK, Ferrante di Ruffano L, Oakden-Rayner L, McCradden M, Keane PA, Savage R, Golub R, Sarkar R, Rowley S. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26:1351–1363. 15. Dumitrascu OM, Wang Y, Chen JJ. Clinical machine learning modeling studies: methodology and data reporting. J Neuroophthalmol. 2022;42:145–148. 16. Vasseneix C, Najjar RP, Xu X, Tang Z, Loo JL, Singhal S, Tow S, Milea L, Ting DSW, Liu Y, Wong TY, Newman NJ, Biousse V. Accuracy of a deep learning system for classification of papilledema severity on ocular fundus photographs. Neurology. 2021;97:e369–e377. 17. Moss HE, Joslin CE, Rubin DS, Roth S. Big data research in neuro-ophthalmology: promises and pitfalls. J Neuroophthalmol. 2019;39:480. 18. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009:248–255.
19. Asaoka R, Murata H, Matsuura M, Fujino Y, Miki A, Tanito M, Mizoue S, Mori K, Suzuki K, Yamashita T, Kashiwagi K, Shoji N. Usefulness of data augmentation for visual field trend analyses in patients with glaucoma. Br J Ophthalmol. 2020;104:1697–1703. 20. Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TW, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol. 2021;32:459–467. 21. Liu TYA, Ting DSW, Yi PH, Wei J, Zhu H, Subramanian PS, Li T, Hui FK, Hager GD, Miller NR. Deep learning and transfer learning for optic disc laterality detection: implications for machine learning in neuro-ophthalmology. J Neuroophthalmol. 2020;40:178–184. 22. Burlina P, Joshi N, Paul W, Pacheco KD, Bressler NM. Addressing AI bias in retinal disease diagnostics. Transl Vis Sci Technol. 2021;10:13. 23. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17:195. 24. Ko MW, Busis NA. Tele–neuro-ophthalmology: vision for 20/20 and beyond. J Neuroophthalmol. 2020;40:378–384. 25. Krause J, Gulshan V, Rahimy E, Karth P, Widner K, Corrado GS, Peng L, Webster DR. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology. 2018;125:1264–1272. 26. Frisen L. Swelling of the optic nerve head: a staging scheme. J Neurol Neurosurg Psychiatry. 1982;45:13–18. 27. Balk LJ, de Vries-Knoppert WAEJ, Petzold A. A simple sign for recognizing off-axis OCT measurement beam placement in the context of multicentre studies. PLoS One. 2012;7(11):e48222. 28. Oberwahrenbrock T, Weinhold M, Mikolajczak J, Zimmermann H, Paul F, Beckers I, Brandt AU. Reliability of intra-retinal layer thickness estimates. PLoS One. 2015;10(9):e0137316.
29. Petzold A, Albrecht P, Balcer L, Bekkers E, Brandt AU, Deborah OG, Graves JS, Green A, Keane PA, Nij Bijvank JA, Sander JW, Paul F, Saidha S, Villoslada P, Wagner SK, Yeh EA, Aktas O, Antel J, Asgari N, Audo I, Avasarala J, Avril D, Bagnato FR, Banwell B, Bar-Or A, Behbehani R, Manterola AB, Bennett J, Benson L, Bernard J, Bremond-Gignac D, Britze J, Burton J, Calkwood J, Carroll W, Chandratheva A, Cohen J, Comi G, Cordano C, Costa S, Costello F, Courtney A, Cruz-Herranz A, Cutter G, Crabb D, Delott L, De Seze J, Diem R, Dollfuss H, El Ayoubi NK, Fasser C, Finke C, Fischer D, Fitzgerald K, Fonseca P, Frederiksen JL, Frohman E, Frohman T, Fujihara K, Cuellar IG, Galleta S, Garcia-Martin E, Giovannoni G, Glebauskiene B, Suárez IG, Jensen GP, Hamann S, Hartung HP, Havia B, Hemmer B, Huang SC, Imitola J, Jasinskas V, Jiang H, Kafieh R, Kappos L, Kardon R, Keegan D, Kildebeck E, Kim US, Klistorner S, Knier B, Kolbe S, Korn T, Krupp L, Lagrèze W, Leocani L, Levin N, Liskova P, Preiningerova JL, Lorenz B, May E, Miller D, Mikolajczak J, Saïd SM, Montalban X, Morrow M, Mowry E, Murta J, Navas C, Nolan R, Nowomiejska K, Oertel FC, Oh J, Oreja-Guevara C, Orssaud C, Osborne B, Outteryck O, Paiva C, Palace J, Papadopoulou A, Patsopoulos N, Preiningerova JL, Pontikos N, Preising M, Prince J, Reich D, Rejdak R, Ringelstein M, Rodriguez de Antonio L, Sahel JA, Sanchez-Dalmau B, Sastre-Garriga J, Schippling S, Schuman J, Shindler K, Shin R, Shuey N, Soelberg K, Specovius S, Suppiej A, Thompson A, Toosy A, Torres R, Touitou V, Trauzettel-Klosinski S, van der Walt A, Vermersch P, Vidal-Jordana A, Waldman AT, Waters C, Wheeler R, White O, Wilhelm H, Winges KM, Wiegerinck N, Wiehe L, Wisnewski T, Wong S, Würfel J, Yaghi S, You Y, Yu Z, Yu-Wai-Man P, Zemaitien R, Zimmermann H.
Artificial intelligence extension of the OSCAR-IB criteria. Ann Clin Transl Neurol. 2021;8:1528–1542. 30. Castelvecchi D. Can we open the black box of AI? Nature. 2016;538:20–23. 31. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44–56. 32. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. 2017 IEEE International Conference on Computer Vision (ICCV). 2017:618–626. 33. de Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Cornebise J, Keane PA, Ronneberger O. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342–1350. 34. Shah NH, Milstein A, Bagley SC. Making machine learning models clinically useful. JAMA. 2019;322:1351–1352. 35. Keane PA, Topol EJ. With an eye to AI and autonomous diagnosis. NPJ Digit Med. 2018;1:1–3. 36. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, Desroches CM. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J Med Internet Res. 2019;21:e12802. 37. Wolff J, Pauling J, Keck A, Baumbach J. The economic impact of artificial intelligence in health care: systematic review. J Med Internet Res. 2020;22:e16866. 38. Dismuke C. Progress in examining cost-effectiveness of AI in diabetic retinopathy screening. Lancet Digit Health. 2020;2:e212–e213. 39. Li D, Becchi M, Chen X, Zong Z. Evaluating the energy efficiency of deep convolutional neural networks on CPUs and GPUs. 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud). 2016. 40. Beam AL, Kohane IS. Translating artificial intelligence into clinical care. JAMA. 2016;316:2368–2369. 41. Benjamens S, Dhunnoo P, Meskó B.
The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. 42. FDA. Artificial Intelligence and Machine Learning in Software as a Medical Device. 2021. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device. Accessed April 23, 2022. |
Date | 2022-09 |
Date Digital | 2022-09 |
Language | eng |
Format | application/pdf |
Type | Text |
Publication Type | Journal Article |
Source | Journal of Neuro-Ophthalmology, September 2022, Volume 42, Issue 3 |
Collection | Neuro-Ophthalmology Virtual Education Library: Journal of Neuro-Ophthalmology Archives: https://novel.utah.edu/jno/ |
Publisher | Lippincott, Williams & Wilkins |
Holding Institution | Spencer S. Eccles Health Sciences Library, University of Utah |
Rights Management | © North American Neuro-Ophthalmology Society |
ARK | ark:/87278/s66b034d |
Setname | ehsl_novel_jno |
ID | 2344198 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s66b034d |