Description |
Prior research by Butcher, Davies, and Cook (2015, in preparation) demonstrated that using concept maps to search the online scientific database of the National Science Digital Library (NSDL) requires less cognitive effort than more common keyword-based searches; our purpose was to determine whether this decreased cognitive effort translated into different learning gains, as measured by evaluating and scoring pre- and post-essays. Teachers are one group who would benefit from more effective, less cognitively demanding ways of finding online material for their classrooms, so the participants in this study were preservice (student) teachers as well as practicing inservice teachers. Using a rubric developed to evaluate the specific essays written for the Butcher et al. study, we found that participants were able to learn from online search tasks, as shown by more correct information and a higher overall score in the post-essay than in the pre-essay; however, this learning did not depend on which online search method was used. The decreased cognitive effort did not lead to greater learning gains as measured in this study. Our second study compared the hand-scored results from the post-essays to two computerized scoring systems, Latent Semantic Analysis (LSA) and Coh-Metrix, whose purpose is to help alleviate some of the issues with scoring large numbers of essays by hand. LSA measures the semantic similarity between two texts, and Coh-Metrix gives measures of cohesion within a single text. LSA correlated moderately with the hand scores (0.44 for the preservice teachers and 0.38 for the inservice teachers). Other research has shown higher correlations between LSA and human graders, and because LSA cosine scores reflect only semantic similarity, not essay quality or correctness, they could not be substituted for the hand scores. None of the Coh-Metrix cohesion measures correlated significantly with the hand scores.
This indicates that the cohesion measures obtained from Coh-Metrix are not indicative of essay quality as determined by human scorers for these essays. |
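To illustrate the kind of LSA cosine score discussed above: a minimal sketch, not the study's actual pipeline, of comparing essays to a reference text by projecting TF-IDF vectors into a low-rank "semantic" space via truncated SVD and taking cosine similarity. The toy texts and the tiny dimensionality are illustrative assumptions only; real LSA is trained on large corpora with hundreds of dimensions.

```python
# Hypothetical LSA-style comparison (scikit-learn), not the study's pipeline:
# TF-IDF -> truncated SVD (latent semantic space) -> cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: a reference ("expert") text plus two essays.
reference = "Plate tectonics explains earthquakes along fault lines."
essays = [
    "Earthquakes occur along faults where tectonic plates move.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]

docs = [reference] + essays
tfidf = TfidfVectorizer().fit_transform(docs)

# Reduce to a small latent space (2 dimensions only for this toy example).
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Cosine between each essay and the reference: higher = more semantically similar.
scores = cosine_similarity(lsa[1:], lsa[:1]).ravel()
for essay, s in zip(essays, scores):
    print(f"{s:+.2f}  {essay}")
```

As the abstract notes, such a cosine captures only semantic closeness to the reference: the on-topic essay scores higher than the off-topic one, but the number says nothing about correctness or quality on its own.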