A Reliability and Comparative Analysis of the New Randomized King-Devick Test

Title A Reliability and Comparative Analysis of the New Randomized King-Devick Test
Creator Minh Q. Nguyen, Doug King, Alan J. Pearce
Affiliation School of Allied Health (MQN, AJP), Human Services and Sport, La Trobe University, Melbourne, Australia; and Sports Performance Research Institute New Zealand (SPRINZ) (DK), Faculty of Health and Environmental Science, Auckland University of Technology, Auckland, New Zealand
Abstract
Objective: The King-Devick (K-D) test is a rapid visual screening tool that can detect underlying brain trauma, such as concussion, via impairments in saccadic rhythm. A new tablet version of the K-D test using randomized numbers is now available, but reliability of this new version, and its comparison with the traditional K-D test, has not yet been reported. Because the K-D test is known for learning effects, the aim of this study was to determine test-retest reliability and to compare performance on the new "randomized" version with the "traditional" K-D test version. We hypothesized that the "traditional" K-D test would show a greater rate of improvement with repeat application than the "randomized" K-D test.
Methods: Using a cross-sectional, repeated-measures design in a healthy university student cohort (n = 96; age 21.6 ± 2.8 years; 49 women, 47 men), participants completed the K-D test twice with a one-week break between testing sessions. Participants were randomly assigned to a "traditional" group, which completed a test-retest of the established K-D protocol using the same numbers, or a "randomized" group, which completed the test-retest protocol using 2 different sets of numbers.
Results: Reliability testing showed a strong intraclass correlation coefficient for both the "traditional" test group (control group; 0.95 [CI: 0.91-0.97]) and the "randomized" test group (0.97 [CI: 0.95-0.98]). However, contrary to our hypothesis, no differences were found between the "traditional" and "randomized" groups at baseline (control: 42.5 seconds [CI: 40.2-44.9] vs randomized: 41.5 seconds [CI: 38.7-44.4], P = 0.23) or at repeated testing (control: 40.0 seconds [CI: 37.9-42.1] vs randomized: 39.5 seconds [CI: 36.9-42.0], P = 0.55), with both groups showing improved times on repeated testing (control: 2.1 seconds [CI: 1.1-3.2] and randomized: 1.9 seconds [CI: 0.9-2.9], P < 0.001).
Conclusions: The "randomized" version of the K-D test, using different sets of numbers, demonstrates good reliability comparable to the traditional K-D testing protocol, which uses the same number sets. However, as with the "traditional" K-D test, learning effects were also observed in the "randomized" test, suggesting that learning effects are not due to content memorization but rather to familiarity with the test. As a result, although either test format is suitable for sideline concussion screening or return-to-play decisions, comparison should be made to the individual's baseline rather than to normative data sets.
Date 2020-06
Language eng
Format application/pdf
Type Text
Publication Type Journal Article
Source Journal of Neuro-Ophthalmology, June 2020, Volume 40, Issue 2
Publisher Lippincott Williams & Wilkins
Holding Institution Spencer S. Eccles Health Sciences Library, University of Utah, 10 N 1900 E SLC, UT 84112-5890
Rights Management © North American Neuro-Ophthalmology Society
ARK ark:/87278/s6kx15gt
Setname ehsl_novel_jno
ID 1592868
Reference URL https://collections.lib.utah.edu/ark:/87278/s6kx15gt
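As a brief illustration of the test-retest reliability analysis described in the abstract above, the following Python sketch computes an intraclass correlation coefficient and a paired comparison of session times for a two-session design. The completion times are hypothetical placeholders, not study data, and the ICC form used here (ICC(2,1): two-way random effects, absolute agreement, single measure) is an assumption, as the abstract does not specify which variant was applied.

# Minimal sketch of a test-retest reliability analysis; all data values are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical K-D completion times in seconds: rows = participants, columns = sessions 1 and 2.
times = np.array([
    [42.1, 40.3],
    [44.8, 42.0],
    [39.5, 38.1],
    [41.2, 39.9],
    [43.7, 41.5],
])

n, k = times.shape
grand_mean = times.mean()

# Two-way ANOVA decomposition of the sums of squares.
ss_rows = k * np.sum((times.mean(axis=1) - grand_mean) ** 2)   # between-participant variation
ss_cols = n * np.sum((times.mean(axis=0) - grand_mean) ** 2)   # between-session variation
ss_total = np.sum((times - grand_mean) ** 2)
ss_error = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measure (assumed variant).
icc = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Paired comparison of session 1 vs session 2 times (the learning effect reported in the abstract).
t_stat, p_value = stats.ttest_rel(times[:, 0], times[:, 1])

print(f"ICC(2,1) = {icc:.3f}, paired t = {t_stat:.2f}, p = {p_value:.4f}")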