Title | A Reliability and Comparative Analysis of the New Randomized King-Devick Test |
Creator | Minh Q. Nguyen; Doug King; Alan J. Pearce |
Affiliation | School of Allied Health (MQN, AJP), Human Services and Sport, La Trobe University, Melbourne, Australia; and Sports Performance Research Institute New Zealand (SPRINZ) (DK), Faculty of Health and Environmental Science, Auckland University of Technology, Auckland, New Zealand |
Abstract | Objective: The King-Devick (K-D) test is a rapid visual screening tool that can assess underlying brain trauma such as concussion via impairments in saccadic rhythm. A new tablet version of the K-D test using randomized numbers is now available, but reliability for this new version and comparison to the traditional K-D test has not yet been reported. Given known learning effects of the test, the aim of this study was to determine test-retest reliability and to compare performance of the new 'randomized' version to the 'traditional' K-D test version. We hypothesized that the 'traditional' K-D test would show a greater rate of improvement with repeat application, compared with the 'randomized' K-D test. Methods: Using a cross-sectional, repeated measures design in a healthy university student cohort (n = 96; age 21.6 ± 2.8 years; 49 women, 47 men), participants were required to complete the K-D test twice with a one-week break between testing sessions. Participants were randomly assigned into a 'traditional' group, where they completed a test-retest of the established K-D protocol, using the same numbers; or the 'randomized' group, where they completed the test-retest protocol using 2 different sets of numbers. Results: Reliability testing showed a strong intraclass correlation coefficient for both the 'traditional' test group (control group; 0.95 [CI: 0.91-0.97]) and the 'randomized' test group (0.97 [CI: 0.95-0.98]). However, contrary to our hypothesis, no differences were found between 'traditional' and 'randomized' groups for baseline (control: 42.5 seconds [CI: 40.2-44.9 s] vs randomized: 41.5 [38.7-44.4], P = 0.23) and repeated testing between groups (control: 40.0 seconds [37.9-42.1 s] vs randomized: 39.5 [36.9-42.0], P = 0.55), with both groups showing improved times with repeated testing (control: 2.1 seconds [CI: 1.1-3.2 seconds] and randomized: 1.9 seconds [CI: 0.9-2.9 seconds], P < 0.001). 
Conclusions: The 'randomized' version of the K-D test, using different sets of numbers, demonstrates good reliability that is comparable to the traditional K-D testing protocol that uses the same number sets. However, similar to the 'traditional' K-D test, learning effects were also observed in the 'randomized' test, suggesting that learning effects are not due to content memorization but rather familiarity with the test. As a result, although either test format is suitable for sideline concussion screening or return-to-play decisions, comparison of data should be made to the individual's baseline rather than to normative data sets. |
Subject | Athletic Injuries / complications; Athletic Injuries / diagnosis; Brain Concussion / diagnosis; Brain Concussion / etiology; Brain Concussion / physiopathology; Cross-Sectional Studies; Female; Humans; Male; Neuropsychological Tests; Reproducibility of Results; Saccades / physiology; Young Adult |
OCR Text | Original Contribution

A Reliability and Comparative Analysis of the New Randomized King-Devick Test

Minh Q. Nguyen, BSc, Doug King, PhD, Alan J. Pearce, PhD

Objective: The King-Devick (K-D) test is a rapid visual screening tool that can assess underlying brain trauma such as concussion via impairments in saccadic rhythm. A new tablet version of the K-D test using randomized numbers is now available, but reliability for this new version and comparison to the traditional K-D test has not yet been reported. Given known learning effects of the test, the aim of this study was to determine test-retest reliability and to compare performance of the new "randomized" version to the "traditional" K-D test version. We hypothesized that the "traditional" K-D test would show a greater rate of improvement with repeat application, compared with the "randomized" K-D test. Methods: Using a cross-sectional, repeated measures design in a healthy university student cohort (n = 96; age 21.6 ± 2.8 years; 49 women, 47 men), participants were required to complete the K-D test twice with a one-week break between testing sessions. Participants were randomly assigned into a "traditional" group, where they completed a test-retest of the established K-D protocol, using the same numbers; or the "randomized" group, where they completed the test-retest protocol using 2 different sets of numbers. Results: Reliability testing showed a strong intraclass correlation coefficient for both the "traditional" test group (control group; 0.95 [CI: 0.91-0.97]) and the "randomized" test group (0.97 [CI: 0.95-0.98]). However, contrary to our hypothesis, no differences were found between "traditional" and "randomized" groups for baseline (control: 42.5 seconds [CI: 40.2-44.9 s] vs randomized: 41.5 [38.7-44.4], P = 0.23) and repeated testing between groups (control: 40.0 seconds [37.9-42.1 s] vs randomized: 39.5 [36.9-42.0], P = 0.55), with both groups showing improved times with repeated testing (control: 2.1 seconds [CI: 1.1-3.2 seconds] and randomized: 1.9 seconds [CI: 0.9-2.9 seconds], P < 0.001). Conclusions: The "randomized" version of the K-D test, using different sets of numbers, demonstrates good reliability that is comparable to the traditional K-D testing protocol that uses the same number sets. However, similar to the "traditional" K-D test, learning effects were also observed in the "randomized" test, suggesting that learning effects are not due to content memorization but rather familiarity with the test. As a result, although either test format is suitable for sideline concussion screening or return-to-play decisions, comparison of data should be made to the individual's baseline rather than to normative data sets.

Journal of Neuro-Ophthalmology 2020;40:207-212. doi: 10.1097/WNO.0000000000000829. © 2019 by North American Neuro-Ophthalmology Society

School of Allied Health (MQN, AJP), Human Services and Sport, La Trobe University, Melbourne, Australia; and Sports Performance Research Institute New Zealand (SPRINZ) (DK), Faculty of Health and Environmental Science, Auckland University of Technology, Auckland, New Zealand. A. J. Pearce is supported, in part, by funding from Sport Health Check charity (Australia), and has previously received in-part funding from the Australian Football League (Melbourne, Australia), Samsung Corporation Australia (Sydney, Australia), and Impact Technologies Inc (Perth, Australia). The remaining authors report no conflicts of interest. D. King is not affiliated with the King-Devick test or King-Devick Technologies. Address correspondence to Alan J. Pearce, PhD, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, Australia 3083; E-mail: alan.pearce@latrobe.edu.au

Concussion continues to be an ongoing public health concern (1). Greater awareness of concussion injury, and the effects of concussions on risks of cognitive impairment and neurodegenerative disease, has increased endeavors to develop tools to identify potentially concussed athletes quickly, efficiently, and by the field of play (2). In addressing this concern, many sports have implemented general recognition and clinical concussion diagnostic assessments. Among the available range of sideline concussion tests, the King-Devick (K-D) test, which involves reading aloud a series of numbers as quickly as possible, has been demonstrated to be a valid and reliable tool for the rapid screening of athletes suspected of concussion (2-5). This is because the visual system is important in the diagnosis of concussion (2,6,7). Visual processing uses functional connectivity of multiple areas of the cortex involving visual-spatial integration, attention, motor planning, and language function (2,5). Cortical areas involved in saccadic function include the frontal eye fields, dorsolateral prefrontal cortex (DLPFC), supplementary motor, posterior parietal, and middle temporal areas, and the striate cortex (8-12). Specifically, these areas control planning and completing coordinated saccades involved in tasks such as reading. Moreover, for smooth saccades to occur during tasks such as reading, the DLPFC is also involved in antisaccades (10,12).
It is also known that subcortical structures, including the thalamus, superior colliculus, cerebellum, and brainstem, are involved in maintaining smooth eye movements (13). The functional connectivity of these areas toward maintaining optimal eye movements is important, and injury to the brain will result in poorer performance in tasks such as reading (14). As suggested by Galetta et al (2), saccadic eye assessment is pertinent in observing the underlying neurophysiology of the brain following injury such as a concussion. Originally described as a reading test to assess the relationship between oculomotor function and reading ability (15,16), the K-D test is now used to assess brain functionality in both experimental and clinical settings. For example, slower performance in the K-D test, reflecting suboptimal brain function, has been observed following experimentally induced hypoxia, with mean K-D times 9.2 seconds slower than in nonhypoxic controls (17). Similar findings of poorer K-D performance have been reported in sleep deprivation (18). Studies in clinical populations, including Parkinson disease, multiple sclerosis (MS), and amyotrophic lateral sclerosis, have shown similar outcomes of poorer K-D times than in age-matched healthy controls (19-21). Furthermore, the study in MS showed that poorer performance in K-D times was associated with MS patients' worse vision-related quality of life (assessed by the 25-Item National Eye Institute Visual Functioning Questionnaire) (21). Conversely, improved K-D performance has been observed following moderate- and high-intensity exercise bouts, suggesting increased mental arousal (22,23). The K-D test has previously been reported to have good construct validity against eye movement tracking metrics in experimental conditions and against clinical screening tools for concussion, including the Sport Concussion Assessment Tool and the Military Acute Concussion Evaluation (2,23-25).
However, despite reports of high reliability of the K-D test (intraclass correlation coefficient of >0.92) (2), a continuing concern revolves around improvement in test times with subsequent attempts, which has been attributed to test familiarization and learning effects. In the meta-analysis by Galetta et al (2), the weighted mean improvement in nonconcussed individuals across 15 studies was 1.8 seconds (95% CI: −3.4 to −0.1; I² = 0.0%, P = 0.98). However, it has been argued that improvement in the test outcome further highlights the deleterious effects of concussion, particularly when concussed individuals show a mean worsening of times by an average of 4.8 seconds (2). In response, the developers of the K-D test have updated the tablet application (Version 4) to include 3 sets of numbers, allowing assessors to "randomize" the test application. Currently, a test-retest reliability study comparing learning effects between the new randomized and traditional versions has not yet been undertaken. Therefore, this is the first independent study to compare intertest reliability between the "traditional" (same numbers presented) and the new "randomized" (different numbers presented) tablet application. We calculated intraclass correlation coefficients (ICC) and compared differences in repeated data between the traditional and randomized tablet versions. We hypothesized that both tests would show good reliability with repeated application. Given the known learning effects associated with the K-D test (2), a second hypothesis was that a significantly greater performance improvement would be observed in the "traditional" K-D test compared with the new "randomized" K-D test.

METHODS

Study Design and Participants

A convenience sample of 96 participants (21.6 ± 2.8 years; 49 women and 47 men) was recruited from the university student population, based on an a priori power analysis (f = 0.2, 1 − β error probability 0.95, P < 0.05) requiring a minimum sample of 84 participants (26). Because data are less stable in older population groups (2), inclusion criteria required participants to be under the age of 50 years and neurologically healthy. Using previously published criteria (27), those who had a recent concussion, within the last 6 months, were ineligible to participate in the study. As exercise has been shown to influence K-D test scores (22,23), participants refrained from any physical activity for 2 hours before testing. All study procedures received approval from the University human research ethics committee (LTUHEC 18207). Individuals recruited for the study completed pretesting screening and signed an informed consent before study enrolment.

Participants were randomly assigned, via a random number generator, into one of 2 groups (see Fig. 1 for study design): 1) the "traditional" K-D test group, where individuals completed the standard protocol using previously published numbers (Test 1 twice); and 2) the "randomized" K-D test group, where individuals completed the standard test the first time but were then provided with one of 2 new tests (Test 2 or Test 3). All K-D testing was completed using a tablet (iPad; Apple Inc., Cupertino, CA) according to the developer's recommendations (v4.2.2; King-Devick Technologies Inc). All testing protocols were completed in an indoor laboratory setting, shielding participants from any background noise or visual distractions. Using standardized protocols (3,4,22,28,29), participants were instructed to read the numbers presented for 3 trials of increasing difficulty. Errors made during reading and the total time for the 3 trials were included in calculating the K-D test "score." All participants completed 2 trials of Test 1 within a short period of time (several minutes) to establish their baseline "score." The faster time from the 2 trials became the established baseline K-D test time for week 1 (3,23). A 1-week break was provided between test-retest sessions. For the retest sessions, participants assigned to the "traditional" K-D control group repeated the protocol using the same numbers from Test 1. Participants assigned to the "randomized" group repeated the protocol using either Test 2 or Test 3. The same person conducted all baseline and repeated measurements.

FIG. 1. Research design for the study. All participants completed "Test 1" of the King-Devick (K-D) assessment. After a 1-week washout period, half were then randomly assigned to repeat "Test 1" again, using the same number set. The other half were randomly assigned to complete "Test 2" or "Test 3" with different number sets.

Statistical Analysis

Data were screened for normal distribution using the Kolmogorov-Smirnov (KS) test and found to be normally distributed (KS: 0.083, P = 0.149). Test-retest reliability was calculated using ICC, with 95% CI, to examine agreement between first and second baseline test scores and the repeat-testing scores. Comparisons between groups (traditional vs randomized) were completed using a mixed-model repeated measures ANOVA. Furthermore, comparison was also made between those in the randomized group who completed Test 2 vs those who completed Test 3. Post hoc Bonferroni tests were undertaken where ANOVA detected differences. All data are presented as mean [± 95% CI], unless specified, and alpha was set at P < 0.05.

RESULTS

Reliability Analysis

The ICC between trials for the traditional (control) group was 0.949 (0.913-0.972). The ICC for the randomized group was 0.972 (0.951-0.984). Further reliability testing was completed within the 2 subgroups (Test 2/Test 3) within the randomized cohort, with ICCs of 0.967 (0.931-0.986) and 0.974 (0.944-0.990), respectively.

Comparison Between Traditional and Randomized Number Test Protocols

Comparison between groups (Fig. 2) showed no interaction effect (F1,95 = 1.475; P = 0.228) nor differences for the main effect for group (F1,95 = 0.358; P = 0.551). A main effect of time (F1,95 = 27.787; P < 0.001) was observed, showing that both groups had improved in their repeated test. K-D test time for the traditional test (Test 1) improved by a mean of 2.029 seconds (1.158-2.901 seconds), whereas the combined randomized test sample (Test 2 and Test 3) showed a mean improvement of 1.269 seconds (0.358-2.180 seconds). Analyses of improvements of those who completed Test 2 and Test 3 separately were 1.014 seconds (−0.408 to 2.438 seconds) and 1.631 seconds (0.598-2.664 seconds), respectively. No interaction effect was observed (F1,47 = 0.445; P = 0.508) nor a difference for the main effect for group (F1,47 = 1.736; P = 0.195). A main effect for time (F1,47 = 8.195; P = 0.006) was observed, also showing that both subgroups had improved in their repeated test.

FIG. 2. Mean (±95% CI) King-Devick (K-D) test times for the "traditional" group (Test 1: Test 1; light-colored bars) and the "randomized" group (Test 1: Test 2 or Test 3; dark-colored bars). Both groups, irrespective of the test completed in the retest session (Fig. 1), significantly improved over time.

DISCUSSION

The aim of this study was to assess the reliability of the newly modified K-D assessment that features different number sets to randomize the assessment protocol. A secondary question was to compare the new randomized number sets with the traditional K-D test protocol, where the same set of numbers is used each time. Although the K-D test has been an important tool for the recognition of concussion and screening for other neurological conditions (2), the rationale of developing 3 test options, thereby randomizing the numbers and potentially reducing learning effects, was in response to previous studies reporting improvement in times with repeat applications of the test (2,5,24,29). We hypothesized that using new number sets (Test 2 or Test 3) would be reliable. We also hypothesized that there would be a greater learning effect using the same number set (Test 1) compared with using a different number set (Test 2 or 3). Across all versions of the test, we found high reliability in baseline retest K-D scores. However, contrary to our second hypothesis, our data showed improvement across all test options, suggesting learning effects irrespective of whether the retest used the same (Test 1: Test 1) or randomized (Test 1: Test 2 or Test 3) number sets. Furthermore, no differences were observed in improvements between the randomized subtests (Test 1: Test 2 or Test 1: Test 3).

The test-retest reliability results in this study are analogous to previous research reporting constancy of the K-D test, with ICCs >0.92 (2). Similar to recent studies that used a 2-week test-retest interval (27), our 1-week test-retest interval was chosen to reflect general clinical practice in Australia, where medical doctors may typically observe their patients between appointments; such patients will be rested from activities involving physical contact for 7-10 days, particularly if they are not elite athletes (5,30). However, it is acknowledged that patients who are diagnosed with concussion may be tested more routinely after injury. Previous investigations have demonstrated reliability of the K-D test, with learning effects, using test-retest intervals varying from 15 minutes (25) and 1 week (5) to several months (3,4,31).
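The test-retest agreement figures discussed above (ICCs of roughly 0.95-0.97) can be illustrated with a small sketch. The data below are simulated, not the study's, and the two-way mixed-effects consistency form, ICC(3,1), is one common choice for a subjects-by-sessions test-retest design; the paper does not specify which ICC form was used.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """Two-way mixed-effects, consistency ICC(3,1) for an
    (n subjects x k sessions) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means
    ss_subjects = k * ((row_means - grand) ** 2).sum()
    ss_sessions = n * ((col_means - grand) ** 2).sum()
    ss_error = ((scores - grand) ** 2).sum() - ss_subjects - ss_sessions
    ms_subjects = ss_subjects / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

rng = np.random.default_rng(0)
# Simulated K-D times (seconds): stable individual differences plus a
# ~2-second practice improvement at retest and small session noise.
baseline = rng.normal(42.0, 6.0, size=48)
retest = baseline - 2.0 + rng.normal(0.0, 1.5, size=48)
icc = icc_3_1(np.column_stack([baseline, retest]))
print(round(icc, 3))  # high agreement despite the uniform improvement
```

Note that a consistency-type ICC stays high even when every participant improves by a similar amount, which is exactly the pattern reported here: strong reliability coexisting with a systematic learning effect.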
The improvement across all 3 number sets suggests that the learning effects observed are unlikely to reflect learning or familiarization of the numbers themselves, but rather a practice effect of the testing protocol. Across the cognitive testing performance literature, improvements on retesting have been well reported. Beyond familiarization with or learning of the content, there are a number of reasons why improvements can occur. Suggestions include reduction in anxiety with familiarity of the test (32), improved strategies in performing a test that do not reflect learning per se (33), and regression to the mean (34). However, Hausknecht et al (35) have also posited that mere repetition can improve test scores. Improvements without interventions, such as coaching or instruction, are ascribed to familiarity with the test environment, irrespective of the actual content of the assessment, or to an improved understanding of the items contained in the test. We suggest that this familiarity would underlie the observed improvements. Although the K-D test protocols require a baseline to be established from 2 error-free trials, with subsequent improvements in time establishing a new baseline for the individual (5,23), further studies are required to determine how much repetition would be required before improvement plateaus are observed. Although improvements may complicate interpretation of other cognitive tests, improvements in K-D times outside of interventions such as exercise (22) serve to improve recognition of suspected concussion injuries and other conditions affecting brain function. For example, in a study investigating the effects of sleep deprivation on brain function, neurology residents who averaged 2 hours of sleep showed smaller performance improvements in the K-D compared with controls, who had normal amounts of sleep (18).
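In analysis terms, the practice effect described above is a within-subject change over sessions. A paired comparison with a confidence interval for the mean improvement, mirroring the paper's reporting style, can be sketched as follows; the data are simulated, not the study's, and the study itself used a mixed-model repeated measures ANOVA rather than a simple paired test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated test-retest K-D times (seconds) for 48 participants,
# with a ~2-second mean practice improvement at retest.
baseline = rng.normal(42.0, 6.0, size=48)
retest = baseline - 2.0 + rng.normal(0.0, 1.5, size=48)

diff = baseline - retest                  # positive = faster at retest
t, p = stats.ttest_rel(baseline, retest)  # paired comparison
# 95% CI for the mean improvement
ci = stats.t.interval(0.95, df=len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"mean improvement {diff.mean():.2f}s, "
      f"95% CI ({ci[0]:.2f}, {ci[1]:.2f}), P = {p:.4f}")
```

Because each participant serves as their own control, the paired design detects a 1-2 second improvement easily even with sizable between-subject variability, which is why baseline-referenced comparison is more sensitive than comparison against normative data.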
In concussion, recent studies have demonstrated increased (worsening) K-D times, relative to improved baselines, in those who were clinically diagnosed with concussion (3-5). Given the improvements observed in this study for Tests 2 and 3, future studies should not be discouraged from using the randomized protocol for field assessments to establish baseline values. Irrespective of which test is used for retesting, it is important to note that posttest data should be compared with the individual's baseline, rather than with age-normative data sets (2). Limitations of this study include the sample tested, specifically a young, university student population. We focused on a younger population group given that most studies investigating test-retest reliability have been in those younger than 40 years of age (2), allowing for comparability with previous data. Future studies should also consider testing an older population group given their higher risk of falls, which are the largest contributor to concussion in the elderly (36). Another limitation is that, although it can be argued that the test is culturally neutral, we did not screen participants for English as their first language. Several recent studies have reported that K-D times are affected in those whose first language is not English (37,38). However, in this study we compared individuals with their own baseline, suggesting the findings would not be affected. Nevertheless, future studies that involve quantifying the effects of an intervention (such as exercise) or suspected concussion should screen participants for first language and, if feasible, individuals should complete the test using their preferred language (37). Further studies should investigate the randomized protocol in an older population, particularly those over the age of 40 years.
In their systematic review, Galetta et al (2) suggest that between the ages of 18 and 40 years, K-D test times appear stable, with slight increases in times for those older than 40 years. Future studies should also compare different intervals of application of the randomized K-D test to reflect the possibility of practice effects. Patients diagnosed with concussion may be tested routinely, such as daily or every third day, and thus practice effects may be an issue. Studies could compare reliability of application of the randomized test between shorter and longer time frames (2 days vs 1 week), because the randomized version may still be prone to practice effects or issues such as test anxiety, as opposed to the passage of time.

In conclusion, this study has demonstrated reliability of the new randomized version of the K-D test, comparable with the traditional K-D testing protocol using the same number sets. Similar to the traditional protocol, we observed learning effects using the randomized number sets. This suggests that the randomized K-D test is suitable for concussion recognition using an individual comparison, rather than comparison to any normative data sets.

STATEMENT OF AUTHORSHIP

Category 1: a. Conception and design: M. Q. Nguyen and A. J. Pearce; b. Acquisition of data: M. Q. Nguyen and A. J. Pearce; c. Analysis and interpretation of data: M. Q. Nguyen, D. King, and A. J. Pearce. Category 2: a. Drafting the manuscript: M. Q. Nguyen, D. King, and A. J. Pearce; b. Revising it for intellectual content: D. King and A. J. Pearce. Category 3: a. Final approval of the completed manuscript: M. Q. Nguyen, D. King, and A. J. Pearce.

REFERENCES

1. Wiebe DJ, Comstock RD, Nance ML. Concussion research: a public health priority. Inj Prev. 2011;17:69-70.
2.
Galetta KM, Liu M, Leong DF, Ventura RE, Galetta SL, Balcer LJ. The King-Devick test of rapid number naming for concussion detection: meta-analysis and systematic review of the literature. Concussion. 2016;1:CNC8.
3. King D, Gissane C, Hume P, Flaws M. The King-Devick test was useful in management of concussion in amateur rugby union and rugby league in New Zealand. J Neurol Sci. 2015;351:58-64.
4. King D, Hume P, Gissane C, Clark T. Use of the King-Devick test for sideline concussion screening in junior rugby league. J Neurol Sci. 2015;357:75-79.
5. Hecimovich M, King D, Dempsey AR, Murphy M. The King-Devick test is a valid and reliable tool for assessing sport-related concussion in Australian football: a prospective cohort study. J Sci Med Sport. 2018;21:1004-1007.
6. Ventura RE, Jancuska JM, Balcer LJ, Galetta SL. Diagnostic tests for concussion: is vision part of the puzzle? J Neuroophthalmol. 2015;35:73-81.
7. Ventura RE, Balcer LJ, Galetta SL. The neuro-ophthalmology of head trauma. Lancet Neurol. 2014;13:1006-1016.
8. Sparks DL, Mays LE. Signal transformations required for the generation of saccadic eye movements. Annu Rev Neurosci. 1990;13:309-336.
9. Pierrot-Deseilligny C, Rivaud S, Gaymard B, Agid Y. Cortical control of reflexive visually-guided saccades. Brain. 1991;114:1473-1485.
10. Pierrot-Deseilligny C, Rivaud S, Gaymard B, Müri R, Vermersch AI. Cortical control of saccades. Ann Neurol. 1995;37:557-567.
11. Rivaud S, Müri RM, Gaymard B, Vermersch AI, Pierrot-Deseilligny C. Eye movement disorders after frontal eye field lesions in humans. Exp Brain Res. 1994;102:110-120.
12. Ploner CJ, Rivaud-Péchoux S, Gaymard BM, Agid Y, Pierrot-Deseilligny C. Errors of memory-guided saccades in humans with lesions of the frontal eye field and the dorsolateral prefrontal cortex. J Neurophysiol. 1999;82:1086-1090.
13. Heitger MH, Anderson TJ, Jones RD.
Saccade sequences as markers for cerebral dysfunction following mild closed head injury. Prog Brain Res. 2002;140:433-448.
14. White OB, Fielding J. Cognition and eye movements: assessment of cerebral dysfunction. J Neuroophthalmol. 2012;32:266-273.
15. King A. The Proposed King-Devick Test and its Relation to the Pierce Saccade Test and Reading Levels. Chicago, IL: Illinois College of Optometry, 1976.
16. Oride MK, Marutani JK, Rouse MW, DeLand PN. Reliability study of the Pierce and King-Devick saccade tests. Am J Optom Physiol Opt. 1986;63:419-424.
17. Stepanek J, Pradhan GN, Cocco D, Smith BE, Bartlett J, Studer M, Kuhn F, Cevette MJ. Acute hypoxic hypoxia and isocapnic hypoxia effects on oculometric features. Aviat Space Environ Med. 2014;85:700-707.
18. Davies EC, Henderson S, Balcer LJ, Galetta SL. Residency training: the King-Devick test and sleep deprivation: study in pre- and post-call neurology residents. Neurology. 2012;78:e103-e106.
19. Lin TP, Adler CH, Hentz JG, Balcer LJ, Galetta SL, Devick S. Slowing of number naming speed by King-Devick test in Parkinson's disease. Parkinsonism Relat Disord. 2014;20:226-229.
20. Ayaz H, Shewokis PA, Scull L, Libon DJ, Feldman S, Eppig J, Onaral B, Heiman-Patterson T. Assessment of prefrontal cortex activity in amyotrophic lateral sclerosis patients with functional near infrared spectroscopy. J Neurosci Neuroengineering. 2014;3:41-51.
21. Moster S, Wilson JA, Galetta SL, Balcer LJ. The King-Devick (K-D) test of rapid eye movements: a bedside correlate of disability and quality of life in MS. J Neurol Sci. 2014;343:105-109.
22. Rist B, Cohen A, Pearce AJ. King-Devick performance following moderate and high exercise intensity bouts. Int J Exerc Sci. 2017;10:619-628.
23. Galetta KM, Brandes LE, Maki K, Dziemianowicz MS, Laudano E, Allen M, Lawler K, Sennett B, Wiebe D, Devick S.
The King-Devick test and sports-related concussion: study of a rapid visual screening tool in a collegiate cohort. J Neurol Sci. 2011;309:34-39.
24. Galetta MS, Galetta KM, McCrossin J, Wilson JA, Moster S, Galetta SL, Balcer LJ, Dorshimer GW, Master CL. Saccades and memory: baseline associations of the King-Devick and SCAT2 SAC tests in professional ice hockey players. J Neurol Sci. 2013;328:28-31.
25. Galetta KM, Barrett J, Allen M, Madda F, Delicata D, Tennant AT, Branas CC, Maguire MG, Messner LV, Devick S, Galetta SL, Balcer LJ. The King-Devick test as a determinant of head trauma and concussion in boxers and MMA fighters. Neurology. 2011;76:1456-1462.
26. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39:175-191.
27. Eddy R, Goetschius J, Hertel J, Resch J. Test-retest reliability and the effects of exercise on the King-Devick test. Clin J Sport Med. [published ahead of print March 26, 2018] doi: 10.1097/JSM.0000000000000586.
28. Leong DF, Balcer LJ, Galetta SL, Evans G, Gimre M, Watt D. The King-Devick test for sideline concussion screening in collegiate football. J Optom. 2015;8:131-139.
29. King D, Clark T, Gissane C. Use of a rapid visual screening tool for the assessment of concussion in amateur rugby league: a pilot study. J Neurol Sci. 2012;320:16-21.
30. Elkington L, Manzanero S, Hughes D. Concussion in Sport Position Statement, 2019. Available at: ama.com.au/position-statement/concussion-sport-2016. Accessed March 1, 2019.
31. Leong DF, Balcer LJ, Galetta SL, Liu Z, Master CL. The King-Devick test as a concussion screening tool administered by sports parents. J Sports Med Phys Fitness. 2014;54:70-77.
32. Messick S, Jungeblut A. Time and method in coaching for the SAT. Psychol Bull. 1981;89:191.
33. Sackett PR, Burris LR, Ryan AM. Coaching and practice effects in personnel selection. In: Robertson CCI, eds.
International Review of Industrial and Organizational Psychology. New York, NY: Wiley, 1989:145-183.
34. Campbell DT, Kenny DA. A Primer on Regression Artifacts. New York, NY: Guilford Publications, 1999.
35. Hausknecht JP, Halpert JA, Di Paolo NT, Moriarty Gerrard MO. Retesting in selection: a meta-analysis of coaching and practice effects for tests of cognitive ability. J Appl Psychol. 2007;92:373-385.
36. Dumire RD. Geriatric concussions. In: Rodriguez A, Barraco RD, Ivatury RR, eds. Geriatric Trauma and Acute Care Surgery. Cham, Switzerland: Springer International Publishing, 2018:55-67.
37. Dickson TJ, Waddington G, Terwiel FA, Elkington L. The King-Devick test is not sensitive to self-reported history of concussion but is affected by English language skill. J Sci Med Sport. [published ahead of print August 23, 2018] doi: 10.1016/j.jsams.2018.08.013.
38. D'Amico N, Elbin R, Schatz P. A comparison of King-Devick test baseline scores between English-speaking and Spanish-speaking high school athletes. J Athl Train. 2017;52:S237. |
Date | 2020-06 |
Language | eng |
Format | application/pdf |
Type | Text |
Publication Type | Journal Article |
Source | Journal of Neuro-Ophthalmology, June 2020, Volume 40, Issue 2 |
Collection | Neuro-Ophthalmology Virtual Education Library: Journal of Neuro-Ophthalmology Archives: https://novel.utah.edu/jno/ |
Publisher | Lippincott, Williams & Wilkins |
Holding Institution | Spencer S. Eccles Health Sciences Library, University of Utah |
Rights Management | © North American Neuro-Ophthalmology Society |
ARK | ark:/87278/s6kx15gt |
Setname | ehsl_novel_jno |
ID | 1592868 |
Reference URL | https://collections.lib.utah.edu/ark:/87278/s6kx15gt |