Using essays to evaluate learning and comparing human scoring of essays to computer scoring systems

Publication Type thesis
School or College College of Education
Department Educational Psychology
Author Hudson, Michelle Alissa
Title Using essays to evaluate learning and comparing human scoring of essays to computer scoring systems
Date 2016
Description Prior research conducted by Butcher, Davies, and Cook (2015, in preparation) demonstrated that using concept maps to search within the online scientific database of the National Science Digital Library (NSDL) decreases cognitive effort compared with more common keyword-based searches; our purpose was to determine whether this decreased cognitive effort translated into different learning gains, as measured by evaluating and scoring pre- and post-essays. Teachers are one group who would benefit from more effective, less cognitively demanding ways of finding online material for their classrooms, so the participants in this study were preservice (student) teachers as well as practicing inservice teachers. Using a rubric developed to evaluate the specific essays written for the Butcher et al. study, we found that participants were able to learn from online search tasks, as measured by more correct information and a higher overall score in the post-essay compared with the pre-essay; however, this learning was not a function of which online search method was used. The decreased cognitive effort did not lead to greater learning gains as measured in this study.

Our second study compared the hand-scored results from the post-essays with two computerized scoring systems: Latent Semantic Analysis (LSA) and Coh-Metrix. The purpose of such systems is to help alleviate some of the difficulties of scoring large numbers of essays by hand. LSA determines semantic similarity between two texts, and Coh-Metrix gives measures of cohesion within each text. LSA correlated moderately with the hand scores (0.44 for the preservice teachers and 0.38 for the inservice teachers). Other research has shown higher correlations between LSA and human graders, and because LSA cosine scores reflect only semantic similarity, not essay quality or level of correctness, they could not be substituted for the hand scores. None of the Coh-Metrix cohesion measures correlated significantly with the hand scores, indicating that, for these essays, cohesion measures obtained from Coh-Metrix are not indicative of essay quality as determined by human scorers. (An illustrative sketch of the LSA comparison appears after this record.)
Type Text
Publisher University of Utah
Subject Coh-Metrix; Essays; Latent Semantic Analysis (LSA); National Science Digital Library (NSDL); Online Learning
Dissertation Name Master of Science
Language eng
Rights Management ©Michelle Alissa Hudson
Format Medium application/pdf
Format Extent 569,278 bytes
Identifier etd3/id/4228
ARK ark:/87278/s63f7xz9
Setname ir_etd
ID 197773
Reference URL https://collections.lib.utah.edu/ark:/87278/s63f7xz9
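The abstract above compares LSA cosine scores against rubric-based hand scores. The following is a minimal sketch of how such an LSA-style comparison can be computed, assuming scikit-learn and SciPy; the essays, hand scores, and variable names are hypothetical illustrations, not the thesis's actual data or pipeline.

```python
# A minimal sketch, not the thesis's actual pipeline: compute LSA cosine
# similarities between student essays and a reference text, then correlate
# them with rubric-based hand scores. All texts and scores are hypothetical.
from scipy.stats import pearsonr
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Water erosion gradually reshapes rock formations over long periods."
essays = [
    "Rocks change shape because water slowly wears them down over time.",
    "Erosion by wind and water carves canyons into solid rock.",
    "Plants grow faster when they receive more sunlight.",
    "Moving water breaks rock apart and carries the pieces away.",
]
hand_scores = [3.0, 2.5, 0.5, 3.5]  # hypothetical rubric scores, one per essay

# Build a term-document matrix over the reference plus the essays, then
# project it into a low-rank "semantic" space with truncated SVD, the
# dimensionality reduction at the core of LSA.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform([reference] + essays)
X_lsa = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)

# Cosine similarity of each essay to the reference in the LSA space plays
# the role of the "cosine scores" the abstract compares with hand scores.
cosines = cosine_similarity(X_lsa[0:1], X_lsa[1:]).ravel()

# Correlate the LSA cosines with the hand scores, as in the second study.
r, _ = pearsonr(hand_scores, cosines)
print(f"LSA cosines: {cosines.round(2)}; correlation with hand scores: r = {r:.2f}")
```

Note that a cosine score captures only how semantically close an essay is to the reference text, which is why, as the abstract concludes, such scores cannot by themselves substitute for rubric-based judgments of quality or correctness.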