| Title | Predicting the ACT reading test score and why it matters |
| Publication Type | dissertation |
| School or College | College of Education |
| Department | Educational Psychology |
| Author | Wilson, Tonia J. |
| Date | 2017 |
| Description | This study examined the relation between the ACT Reading score and a variety of high school student academic behaviors, including cumulative grade point average (GPA), number of college courses taken, number of reading intensive courses taken, and two state Common Core standards reading assessments. It also examined how income and gender moderated the effects of these variables on the ACT Reading score. Standard multiple regression yielded a model accounting for 52% of the variance in the ACT Reading score. The findings indicate that students who take more rigorous coursework in high school and maintain a high GPA are more likely to do well on the ACT Reading test. Additionally, taking more advanced courses was correlated with better ACT Reading scores for boys. A strong GPA showed a weaker relationship to success on the ACT Reading test for low-income students than for their higher-income peers. |
| Type | Text |
| Publisher | University of Utah |
| Dissertation Name | Doctor of Philosophy |
| Language | eng |
| Rights Management | © Tonia J. Wilson |
| Format | application/pdf |
| Format Medium | application/pdf |
| ARK | ark:/87278/s6d26sfj |
| Setname | ir_etd |
| ID | 1484655 |
| OCR Text | PREDICTING THE ACT READING TEST SCORE AND WHY IT MATTERS

by Tonia J. Wilson

A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Educational Psychology, The University of Utah, August 2017.

Copyright © Tonia J. Wilson 2017. All Rights Reserved.

The University of Utah Graduate School
STATEMENT OF DISSERTATION APPROVAL

The dissertation of Tonia J. Wilson has been approved by the following supervisory committee members: Douglas Hacker, Chair (date approved 4/21/2017); Lauren Liang, Member (date approved 4/21/2017); Seung-Hee Son, Member (date approved 4/21/2017); Andrea Rorrer, Member (date approved 4/21/2017); Yongmei Ni, Member (date approved 4/21/2017); and by Anne Cook, Chair/Dean of the Department/College/School of Educational Psychology, and by David B. Kieda, Dean of The Graduate School.

ABSTRACT

This study examined the relation between the ACT Reading score and a variety of high school student academic behaviors, including cumulative grade point average (GPA), number of college courses taken, number of reading intensive courses taken, and two state Common Core standards reading assessments. It also examined how income and gender moderated the effects of these variables on the ACT Reading score. Standard multiple regression yielded a model accounting for 52% of the variance in the ACT Reading score. The findings indicate that students who take more rigorous coursework in high school and maintain a high GPA are more likely to do well on the ACT Reading test. Additionally, taking more advanced courses was correlated with better ACT Reading scores for boys. A strong GPA showed a weaker relationship to success on the ACT Reading test for low-income students than for their higher-income peers.

For Jason, who never gives up. For Kylie, Tade, and Grace, who never let me down.

TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
Chapters
1. INTRODUCTION TO THE STUDY AND LITERATURE REVIEW
   The ACT and ACT Reading Subtest History and Development
   The ACT Reading Score and College and Career Readiness
   The State of College Reading Readiness as Examined by English Course Placement
   Research on the ACT Reading Score and High School Academic Factors
   Purpose of the Study
2. METHOD
   Variables
   Data Analysis
   Data Set
   Summary
3. RESULTS
4. DISCUSSION
   Answering the Research Questions
   Unexpected Results
   Connections to Previous Research
   Limitations
   Future Research
   Conclusion
REFERENCES

LIST OF FIGURES
1. ACT Test Question Development
2. The ACT Reading Test Structure
3. Placement Exams as Gateway to College Classes
4. Scatterplot of Standardized Residuals Against Standardized Predicted Values of Y
5. Regression of ACT Reading Score on Number of College Courses for Males and Females
6. Regression of ACT Reading Score on Grade Point Average (GPA) for Low-Income and Not Low-Income Students

LIST OF TABLES
1. Description of Independent Variables and Research Related to the ACT Reading Score
2. Description of Interaction Variables Used in Moderation Analysis
3. Bivariate Correlations (Pearson r) of All Independent Variables and the Dependent Variable
4. Dependent Variable: ACT Reading
5. Standard Multiple Regression of All Independent Variables on ACT Reading Score
6. Standard Multiple Regression of All Significant Independent Variables on ACT Reading Scores

CHAPTER 1
INTRODUCTION TO THE STUDY AND LITERATURE REVIEW

In 2015, nearly 2 million high school students took the ACT exam to prepare for college admission (ACT, 2015).
The assessment is currently the most widely accepted entrance exam in the United States, having edged out the Scholastic Aptitude Test (SAT) in 2012 (Strauss, 2012). The ACT has even become mandatory for high school students in 20 states, including Utah (ACT, 2015). Capitalizing on the ever-increasing popularity of rigorous assessment among secondary and postsecondary institutions, ACT, Inc. touts its exam as a multifaceted assessment tool that can be used in a variety of ways, such as a high school exit exam, college readiness indicator, course placement test, and even an employment assessment (ACT, 2015). Indeed, for many students, achieving a high ACT score is now more important than ever.

Most of these students (and their parents) clearly understand the significance of the ACT composite or overall score for college entrance. Importantly, admission to the university of their choice and thousands of dollars in merit-based scholarships may depend upon that score. However, the ACT composite score is the average of four individual scores that are often overlooked. These scores depict proficiency in English, reading, mathematics, and science. Beyond using the ACT composite score for admission, postsecondary institutions may use these separate scores to determine first-year course placement. Placement refers to the first-year English or math class assigned to a student based on competency. Most colleges and universities require students to begin their English and math classes at the level that best fits their academic preparation prior to admission. Those who are deemed academically prepared for college-level math and English are permitted to register for regular, credit-bearing courses. Those who are not prepared may be required or strongly encouraged to take developmental or remedial courses, which are usually noncredit. Colleges and universities across the nation use the ACT Reading score as an indicator of academic preparedness for college-level English.

This study explored the ACT Reading score. To frame the study, a review of relevant research and literature follows. The review contains three components: a description of how the ACT exam and the reading subsection are developed, elaboration on the significance of the ACT Reading score for students and education institutions, and a complete overview of existing research involving this score.

The ACT and ACT Reading Subtest History and Development

Exam History

The American College Test (ACT) was developed by E. F. Lindquist, a University of Iowa psychometrician, in 1959. At the time, the SAT was the only significant entrance exam in use. Lindquist claimed the SAT focused too narrowly on assessing intelligence and aptitude for college work rather than academic achievement prior to college. Lindquist and his associates called their newly formed company the American College Testing program and soon found a market for their exam throughout the United States (History, n. d.). At that time, America was experiencing significant growth in college enrollments, and campuses across the country were seeking help dealing with admissions decisions. The American College Testing program grew steadily as an organization in the ensuing decades; in 1996, the official name of the company was changed to ACT, Incorporated, and the exam itself officially adopted its shorter moniker, ACT. Today, ACT, Inc. is a nonprofit corporation whose self-stated purpose is to "help high school students develop postsecondary educational plans and to help postsecondary educational institutions meet the needs of their students" (ACT, 2014, p. 1). In keeping with Lindquist's original ideals, they continue to focus on measurement of "what students can do with what they have learned in school, not abstract qualities such as intelligence or aptitude" (ACT, 2014, p. 1). Beyond assessment, ACT, Inc. also claims to be the first organization to develop college readiness benchmarks and claims that the ACT is the only college entrance exam that tests students in science (History, n. d.).

A student's overall or composite score on the test ranges from 1-36. This composite score is the average of four subtest scores, also ranging from 1-36. The four tested areas are English, reading, mathematics, and science. The ACT also contains an optional writing section. Its score remains separate from the composite score (ACT, 2014).
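To make the composite arithmetic concrete, here is a minimal Python sketch of the rounded-average rule just described; the four subtest scores are invented, and the round-half-up behavior follows ACT's published scoring description.

```python
# Minimal sketch of how an ACT composite score is formed from the four
# subtest scores: the composite is the average of the English, reading,
# mathematics, and science scores, rounded to the nearest whole number
# (fractions of one-half round up). The scores below are invented.

def act_composite(english: int, reading: int, math: int, science: int) -> int:
    total = english + reading + math + science
    # int(x + 0.5) rounds halves upward, matching ACT's stated rounding rule.
    return int(total / 4 + 0.5)

print(act_composite(24, 26, 27, 25))  # average is 25.5 -> reported composite 26
```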
Structure and Development

The ACT is a curriculum-based assessment (ACT, 2014). It endorses the Code of Fair Testing Practices in Education (Joint Committee on Testing Practices, 2004), as well as the Code of Professional Responsibilities in Educational Measurement (NCME Ad Hoc Committee on the Development of a Code of Ethics, 1995). As a criterion-referenced test, ACT proudly claims it is designed to measure specific academic skills related to curriculum areas (for example, reading comprehension or trigonometry) regardless of peer performance (ACT, 2014). The ACT Technical Manual claims that the best preparation for the ACT is high school coursework because the exam directly assesses many of the skills that are taught in high school (2014, p. 3). The overall message there is that working hard in high school, not talent or coaching, leads to good ACT scores (ACT, 2014).

However, the ACT is also a norm-referenced test. The score scale for the current ACT was established using 1995 data from 12th-grade students who intended to enroll in a 2-year or 4-year college or university after graduation. This norming process showed that a composite score of 36 (the highest possible score) on the ACT meant 99% of other 12th-grade, college-bound students would score at or below 36. Results showed the same pattern for the subtest scores. However, it is important to note that the ACT Technical Manual says these norms may vary based on student demographic, regional location, or educational background and should not be applied universally (p. 50). The ACT may also be considered a norm-referenced test because ACT evaluates how examinees perform in college courses after taking the test and then uses those grades to create cut scores that predict college readiness. For example, they have determined that a student with a score of 22 on the ACT Reading subtest has a 50% chance of getting a B in an entry-level English class and is therefore considered college-ready, while test takers who score below that level are not. ACT also offers national rankings of its test takers each year with the publication of an annual report (further discussion of cut scores follows in the next section).

ACT Test Question Development

The process of test question development is depicted in Figure 1. The subject content of the ACT is derived from an information gathering process that focuses on three sources (ACT, 2014). First, state curricula for grades 7 through 12 are examined by ACT staff.
Second, textbook material corresponding to these grade levels is reviewed. Third, a survey of educators at both the secondary and postsecondary level provides expert input on the content gathered from the previous two sources with respect to relevance and importance for early college success. ACT conducts ongoing research to ensure all current changes in state curricula and state and national educational standards are represented within the content of its exams (ACT, 2014).

Once subject matter related to the four subtests is established, contracted item writers from around the United States write exam questions. To qualify as an item writer, one must be considered an expert in a subject area related to the ACT and be employed as an educator at the secondary or postsecondary level. ACT makes an effort to see that writers vary geographically, ethnically, and in gender and that they come from both private and public institutions (ACT, 2014). Prospective item writers are provided with training on test specifications, including how to avoid exclusion of certain populations due to word choice and how to create genderless questions. Writers must first submit a set of sample questions for evaluation by the ACT development staff, and successful applicants will be extended a contract. Each writer is required to produce only a small number of items to contribute to an exam so that security and variety are ensured (ACT, 2014).

[Figure 1. ACT Test Question Development]

All multiple-choice items submitted by writers are evaluated in two areas: content specifications and statistical specifications. Content specifications dictate that questions are grade appropriate, use fair language, and are consistent in structure. Statistical specifications measure the level of difficulty and the level of discrimination (biserial correlation) among test items (ACT, 2014). The mean difficulty target is .58, with an acceptable range of .20 to .89. The biserial correlation is set at .20 or higher for each item in relation to its content area test score. After each test item has met the content and statistical requirements, it is reviewed and edited by ACT staff. All items are then, once again, examined by consultant panels for content accuracy and fairness. Finally, successful items are added to future forms of the exam as experimental questions. More data are gathered about how students perform on the experimental questions prior to officially including them on a test. Development of a single form of the ACT entrance exam can take 2 or more years (ACT, 2014).

The reading subtest structure and development.

[Figure 2. The ACT Reading Test Structure]

As shown in Figure 2, the reading section of the ACT contains four passages with 10 multiple-choice questions per passage. Students have 35 minutes to complete the subtest, which measures reading comprehension of passages representative in scope and difficulty of four topics commonly encountered in beginning college course work: social sciences, natural sciences, literary narrative, and the humanities. Performance on questions for the first two topics is combined and given a scaled score ranging from 1-18, called the Social Sciences/Science Reading score, and the final two categories result in an Arts/Literature Reading score from 1-18. These two scores are combined to create a third scaled score, the overall ACT Reading score (1-36). It is important to note that the scores do not necessarily sum to create the final Reading score (ACT, 2014).
Test takers may refer to the passages as they answer the questions. The questions assess both explicit and implicit meaning (ACT, 2014). All questions are text based. Vocabulary or factual knowledge outside the passages is not tested. Instead, students are expected to derive the meaning of vocabulary words based on the text provided and to use reasoning and inferential skills to answer questions related to content (ACT, 2014).

Reading subtest multiple-choice questions are developed according to the process discussed above by contracted item writers based on carefully selected passages provided to them by the ACT development staff. The passages are assigned a difficulty rating by ACT relating to "three degrees of reading complexity: uncomplicated, more challenging, and complex" (ACT, Inc., 2006, p. 14). For the literary section, passages may come from short stories, novels, or memoirs. Humanities passages are taken from memoirs or personal essays about topics including but not limited to art, music, and philosophy. The social science reading passages relate specifically to a variety of general education college courses, including psychology, sociology, history, and business. Similarly, the natural science passages cover content from biology, zoology, and/or medicine, among others (ACT, 2014).

The ACT Reading Score and College and Career Readiness

To understand the significance of the ACT Reading score, it is important to examine its place in a much broader educational context. Therefore, this section will establish a connection between the ACT Reading score and the college and career readiness (CCR) framework. Following a description of this framework, a research overview will present the current state of college readiness in reading. Finally, the practices of postsecondary institutions to determine college reading ability, and therefore placement in beginning English course work, will be presented.

College and Career Readiness Framework

In 2003, the Association of American Universities (AAU) and the Pew Charitable Trusts published the first comprehensive set of college readiness standards in a booklet titled "Understanding University Success" (Conley, 2003). These standards were based on a 2-year study involving input from faculty and staff at 20 American postsecondary institutions. Other organizations such as The American Diploma Project, ACT, Inc., and the College Board developed standards as well (Allen & Sconing, 2005; American Diploma Project, 2004; Kobrin, 2007). This push was largely brought about by the growing belief that American college students were falling behind their international peers (Organization for Economic Cooperation and Development, 2010; Report in Brief: NAEP, 1996; Trends in Academic Progress, 2000). In addition to the NAEP report, several sources pointed to a large number of students in need of remediation or developmental coursework their 1st year of college (Ali & Jenkins, 2002; Horn, Peter, & Rooney, 2002; McNabb, 1990; NCES, 2004; Wilkins, Hartman, Howland, & Sharma, 2010). Studies showed that while many of these students were admitted to and entered college as hoped, they were far less likely to complete their degree (Calcagno & Long, 2008; Wirt, Choy, Rooney, Provasnik, Sen, & Tobin, 2004).
Prominent researcher David Conley was at the forefront of these discussions and offered the following definition of college and career readiness (CCR): "the level of preparation a student needs to enroll and succeed-without remediation-in a credit-bearing course at a postsecondary institution that offers a baccalaureate degree or transfer to a baccalaureate program" (Conley, 2007, p. 5). He also posited a CCR framework in 2010 (later updated in 2012) based on four principles: (a) key cognitive strategies, (b) key content knowledge, (c) key learning skills, and (d) key transition knowledge and skills (Conley, 2012). The last principle is commonly referred to as college knowledge. While Conley's framework is not the only conception of CCR, it is probably the most widely used.

In response to the number of unprepared high school graduates, policy makers around the nation called for reform and began working to update and unify secondary education standards (Barnett, Fay, Bork, & Weiss, 2013; U.S. Department of Education, 2010). They also sought to construct assessments and define assessment outcomes that would provide insight into whether a student was truly prepared for beginning, credit-bearing courses. For example, in 2007, Illinois passed a law authorizing funding for partnerships between the state's community colleges and high schools to develop pilot programs for improving college and career readiness. Their primary goal was to align K-12 learning outcomes with early college requirements, including ACT scores (Bragg & Taylor, 2014). Other states like California and Texas implemented CCR programs as well (Barnett et al., 2012; Howell, Kurlaender, & Grodsky, 2010). In California, the Early Assessment Program, or EAP, was rolled out in 2006. This three-pronged approach involved testing 11th-graders in math and English to evaluate college readiness, funding professional development for high school teachers, and offering supplemental materials to high school seniors who were not college ready. Howell et al. (2011) found evidence that the program's early intervention reduced the need for remedial course work in English for participating students by just 6.1%. Texas developed summer bridge programs also aimed at reducing the need for remediation for first-year college students (Wathington et al., 2011). Twenty-two colleges invited recently graduated high school students to campus during the summer prior to their freshman year for accelerated math and English instruction, academic support services, college knowledge instruction, and the opportunity for a $400 stipend. Unfortunately, research found that 2 years after entering college, participants showed no gains in persistence or number of credits earned over those who did not attend the bridge program (Barnett et al., 2012).

Common Core State Standards and college readiness. At the same time that individual states were working to create CCR solutions, a national movement had begun. In 2009, as a response to college readiness discussions, state academic leaders in the Council of Chief State School Officers (CCSSO) and the National Governors Association Center for Best Practices (NGA Center) began working on a set of standards that could be used at the K-12 level to raise the bar on what students learned before entering their 1st year of college (NGA Center & CCSSO, 2010). They also sought to unify the wide variety of state standards found across the country (CCSS, 2016a).
The result was the set of benchmarks known as the Common Core State Standards (CCSS), which set forth key skills and competencies in English Language Arts and Math. To date, 42 states have adopted these standards as a guide for teaching and learning in grades K-12 (CCSS, 2016). The CCSS do not provide guidance on how to reach these academic goals, but rather offer content area benchmarks that students should achieve as they progress through their secondary education. States and school districts face the significant challenge of designing effective curricula that ensure students will meet these standards. They must also create assessments that measure whether their students are succeeding. Many states have created their own end-of-level tests to assess student outcomes or have collaborated with partnering states. The Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (Smarter Balanced) are two such collaborative organizations currently working to create tests that evaluate student progress on the Common Core State Standards. While it is too soon to tell whether these standards will result in increased academic preparedness, the hope is that students who meet these benchmark standards will be college ready.

CCSS anchor standards in reading. The CCSS offer broad college and career readiness anchor standards that are supported by leveled, grade-specific standards (CCSS, 2016). In reading, there are 10 anchor standards based in the following four areas: key ideas and details; craft and structure; integration of knowledge and ideas; and range of reading and level of text complexity. The final category, regarding text complexity, has garnered much attention. The CCSS uses an ACT research report to explain the need for increased instruction in handling complex texts (CCSS Appendix A, 2015). This report, called "Reading Between the Lines," examined which specific reading skills differentiated students who achieved the ACT benchmark score in reading from those who did not (ACT, 2006). They found that it was not inferential reasoning, determination of main ideas, or understanding word meaning that led to success, but rather the ability to read and understand complex text. Some passages on the ACT Reading subtest are more difficult than others, and students who correctly answered multiple-choice questions regarding these passages scored higher.

As the Common Core State Standards and their accompanying assessments build momentum in secondary education, preparation for standard college entrance exams such as the ACT and SAT may be impacted. The hope is that adherence to the CCSS will better prepare students for college-level work and that they will, therefore, perform better on standardized entrance exams. To date, there is no evidence demonstrating that preparing students to do well on Common Core assessments will also result in stronger ACT or SAT scores. Because most colleges and universities will use the ACT and SAT exams, and not CCSS assessments, to determine students' college readiness, it is critical that we understand and not lose sight of how students succeed on tests like the ACT. Indeed, as depicted in Figure 3, even if students perform well according to the Common Core in grades K-12, college placement exams, such as the ACT Reading test, are still a gateway through which they must pass to begin college in credit-bearing courses.

[Figure 3. Placement Exams as Gateway to College Classes]
ACT college readiness benchmarks. ACT, Inc. claimed an early stake in the college and career readiness discussions (Allen & Sconing, 2005). In 2005, they established college readiness benchmark scores for the composite score and each of the four subtest scores: English-18, Reading-21, Math-22, Science-23. The scores were updated again in 2013, when the Reading benchmark was raised from 21 to 22 (ACT, 2013). Their research shows that ACT scores are predictive of college academic factors such as grade point average (GPA) (Radunzel & Noble, 2012; Sawyer, 2013), first-year success (Noble, 1991; Allen & Sconing, 2005), and math and writing proficiency (Sawyer, 2008). The ACT organization also publishes an annual report titled "The Condition of College and Career Readiness" (ACT, 2015). The report offers a look at how American high school students are doing on the ACT benchmarks. In 2015, 59% of American high school students took the ACT, but 31% of test takers failed to meet even one of the college readiness benchmark scores.

These college readiness benchmarks are associated with a 50% chance of receiving a B or better and a 75% chance of getting a C or better in an entry-level, credit-bearing college course in the related subject area (Allen & Sconing, 2005). These criterion-referenced benchmark scores are determined using hierarchical logistic regression. Student grades from common first-year college courses such as history and psychology are compared to those same students' ACT composite and subtest scores to develop cut scores representing the probability of earning the specific grade. According to ACT, potential uses of the benchmark scores include progress monitoring for K-12 students, college readiness evaluation by subject or content area, and assessment of educational progress at the state, local, and national level (ACT, 2013). Importantly, ACT notes that when using the benchmarks as a tool for placement and intervention decisions for first-year college students, it is best to do so as part of a multimeasure system (ACT, 2013). It also points out that because benchmark scores dichotomize the idea of college readiness, evaluation should be done using all scores from the four subtests together, rather than in isolation (Allen & Sconing, 2005). Unfortunately, this is often not the case. Students may be placed in remedial English education based solely on their ACT Reading and ACT English scores.
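ACT publishes the logic of this procedure but not its code, so the following Python sketch is only an approximation of the idea: with invented data, and a one-level logistic regression standing in for ACT's hierarchical model, it regresses a B-or-better course outcome on the ACT Reading score and solves for the score at which the predicted probability of a B reaches 50%.

```python
# Approximate sketch of the cut-score logic described above. Data are
# invented; ACT's actual procedure is a hierarchical logistic regression
# across institutions, which this one-level model only approximates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
act_reading = rng.integers(10, 37, size=1000).astype(float)  # scores on the 1-36 scale
true_logit = 0.35 * (act_reading - 22)   # assume a 50% chance of a B at a score of 22
earned_b = (rng.random(1000) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit P(B or better | ACT Reading score) with logistic regression.
fit = sm.Logit(earned_b, sm.add_constant(act_reading)).fit(disp=False)
b0, b1 = fit.params

# The fitted probability crosses 0.5 where the logit b0 + b1 * score is 0,
# so the estimated cut score is -b0 / b1.
print("estimated cut score:", -b0 / b1)
```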
The State of College Reading Readiness as Examined by English Course Placement

According to Conley's definition of college readiness, students who are academically prepared for college reading should be able to succeed in beginning, credit-bearing college English classes their first semester (Conley, 2007). As previously mentioned, however, many college freshmen are not prepared (Wirt et al., 2004). Instead, they are placed in developmental or remedial English courses. These courses are designed to bring underprepared students up to par with academically ready students. Students who are deemed unprepared in reading will most likely be placed in developmental English courses. A handful of universities offer standalone developmental reading courses, but there are no statistics available on just how many. It is also important to note that institutions differ in their remediation practices (Fields & Parsad, 2012). Some require students to complete mandatory developmental work prior to registering for credit-bearing courses, while others only recommend that students do so. At some colleges, academically underprepared students are permitted to take credit-bearing courses but must enroll in tandem, supplemental instruction programs. Although developmental classes are not new to postsecondary education (Holschuh & Paulson, 2013; Stahl & King, 2009; Wyatt, 1992), the number of students required to take them is somewhat staggering.

Some studies suggest that as many as half of high school graduates are unprepared for college reading (Wilkins, Hartman, Howland, & Sharma, 2010). Indeed, information gathered by the National Center for Education Statistics (NCES) found that 28% of entry-level college students needed at least one remedial or developmental course (NCES, 2004). This number has changed very little over the past few decades: 30% in 1989 and 29% in 1995. Additionally, a study focused on the California higher education system found that 46% of college freshmen needed developmental instruction in English (Ali & Jenkins, 2002). As of 2000, 35% of students who received financial aid enrolled in developmental reading courses their 1st year of college (Horn, Peter, & Rooney, 2002). Of all college students enrolled, 11% are placed in these courses, and approximately 56% of public degree-granting institutions offer developmental reading (Parsad, Lewis, & Greene, 2003). These statistics indicate far too many college freshmen lack necessary reading skills. The oft-cited statistic that approximately 85% of college learning comes through reading text dramatically illustrates how truly important reading skills are in college (Baker, 1974). Students who cannot comprehend the complex nature of a variety of texts will struggle to succeed academically at the college level (Mealey & Nist, 1989; Merisotis & Phipps, 2000).

This information, though troubling, may not accurately portray the state of college reading readiness. Of the colleges that offer developmental reading (and not all of them do), parameters for placement are individually defined by each institution; there is no national standard from which to examine the overall need for developmental reading. In fact, some institutions rely solely on their admissions criteria, including college entrance exam scores such as the ACT, to assess first-year student preparation in reading, despite the caution from ACT, Inc. that their subtest scores should only be used for placement as part of a multiple measures assessment (Allen & Sconing, 2005). For example, one Western state university assigns students to developmental English based only on whether they have a score of 17 or higher on the ACT English or Reading subtest. Although this practice may not be deemed educationally sound, it highlights the importance of understanding how academic behaviors influence not only the ACT composite score, but also the Reading subtest score in its own right.

Students who are placed in developmental reading may experience significant setbacks in college, both academically and financially (ACT, 2005; Adelman, 1999; Merisotis & Phipps, 2000). One report estimates that only 25% of students who take remedial courses in college go on to finish a degree, and those who do will likely take longer than their peers (Crisp & Delgado, 2014). Developmental courses do not offer credit toward college graduation.
However, students must still pay college tuition for these courses, adding an additional burden to the already cumbersome price of postsecondary education. At public institutions, developmental programs are supported by taxpayer funds, increasing the burden on taxpayers. One study estimated this burden to be as high as 2.3 billion dollars annually for community colleges (Strong American Schools, 2008). Although debate surrounding the effectiveness of these programs continues, most agree they do not work (Hughes & Scott-Clayton, 2011).

These dismal conclusions lie at the heart of the college and career readiness movement, which aims to decrease the need for remedial courses, including developmental reading. However, one key assumption in this framework is that better reading preparation in high school will automatically translate into fewer students needing developmental reading instruction. This assumption does not consider the very critical element of how students are assessed and placed in college-level reading and English courses. If assessment practices are not carefully aligned to both high school and college standards, and if secondary curricula are not adequately preparing students to succeed on these specific assessments, the college and career readiness efforts may be for naught. Conley (2010) argues that the high number of unprepared college students exists in part because there is a clear disconnect between how secondary and postsecondary institutions define and assess college preparation. It is entirely possible that, despite good secondary preparation, students may not achieve high enough scores on placement measures to avoid remedial work because those measures are not assessing what high schools are teaching. Therefore, it is essential to clearly understand which assessments colleges and universities use to evaluate reading ability and how students can succeed on those assessments.

Postsecondary Reading/English Coursework Placement Practices

Because reading assessment practices are not standardized, a variety of formal and informal assessments, including the ACT Reading test, are used to determine college reading readiness (Fields & Parsad, 2012). The most recent evaluation of course placement practices in the United States comes from a study done by the National Assessment Governing Board (NAGB) in the fall of 2011 as part of an overall movement to assess college and career readiness. This organization, which oversees and sets policy for the National Assessment of Educational Progress (NAEP), conducted a survey of 1,560 postsecondary institutions regarding their first-year course placement practices in reading and math. The survey aimed to determine which national standardized tests were being used and what cut scores were designated by the institutions to determine academic preparedness (Fields & Parsad, 2012). Results identified five tests commonly used for reading placement: the ACT Reading test, SAT, ACCUPLACER Reading Comprehension, ASSET Reading Skills, and COMPASS Reading tests. Of all public and private 2-year and 4-year institutions surveyed, 53% used at least one of these reading tests for placement. The COMPASS Reading test was used most frequently (22%) and the ASSET Reading Skills test least frequently (9%). Both the COMPASS and the ASSET are produced by ACT, Inc. Importantly, Fields and Parsad (2012) note that there is wide variability in the cut scores chosen by these institutions as indicators of college readiness.
They suggest, "postsecondary education institutions across the nation do not hold a single, common conception of 'just academically prepared'" (Fields & Parsad, 2012, p. viii). The following will specifically examine the use of the ACT Reading score as a placement tool.

ACT Reading score as placement measure. According to the NAGB survey, the ACT Reading score is used by 16% of institutions for first-year course placement. Another study found that 43% of postsecondary institutions nationwide used the ACT scores in both English and Reading for English placement (McNabb, 1990). Such a wide variation in results can likely be attributed to the differing data-gathering practices employed for each study and leaves room for further inquiry to determine prevalence. However, even by adopting the more conservative numbers, we see that hundreds of institutions across the United States employ the Reading score, impacting the first-year English placement of thousands of students. The college readiness benchmark score in reading established by ACT, Inc. is 22. Only 46% of ACT-tested students achieved that score in 2015 (ACT, 2015). However, according to the NAGB survey, a much lower ACT Reading score may keep students out of developmental classes. The mean cut score adopted by surveyed institutions was 18; the lowest score was set at 14 and the highest at 25, with an interquartile range of .32 (Fields & Parsad, 2012). Even if an institution's placement cut score is lower than the recommended 22, many students will still require remediation. A total of 42% of ACT examinees missed the reading benchmark by three or more points (ACT, 2015).

Most students are unaware that their ACT Reading score may be used as a placement measure and are even less aware of what specific behaviors may help avoid a low score. Furthermore, the ACT is likely to see increased usage as a college placement measure in the future. The number of students taking the ACT has risen 18.9% since 2011. Because the exam results are already reported for admissions, institutions can save time and money by not administering a separate reading assessment for placement purposes, and it seems likely that many will do so. Consequently, further exploration of the ACT Reading score and its relationship to high school academic factors is both warranted and timely. While research on this topic is somewhat sparse, the following review illustrates what studies have revealed so far.

Research on the ACT Reading Score and High School Academic Factors

The following paragraphs summarize studies done using the ACT Reading subtest score as a dependent variable. It should be noted that the bulk of existing research regarding the ACT exam does not specifically involve the Reading score and the factors that predict it, but is rather an examination of the ACT scores (composite and subtest) and their validity as predictors of college performance, completion, and placement. It might be said that researchers have extensively examined what happens once the test is taken, but not what happens before it is taken. The following studies specifically related to the ACT Reading score are presented in chronological order to establish a line of research dating from the 1980s to the present.

The first available study validating the ACT Reading test as a measure of reading skill was conducted by Noble in 1985. This study involved ACT Reading scores of a Midwestern university student sample that was divided into subsamples.
The first subsample (N=2,431) consisted of students who had both an ACT score and a Nelson-Denny Form C score. The second subsample (N=3,016) had a Form E Nelson-Denny score and an ACT score. The purpose of the study was to determine whether all three tests measured the same constructs, thereby eliminating the need for separate reading tests in addition to the ACT admissions exam for university course placement. Means, standard deviations, and correlation coefficients were examined for both Forms C and E and the separate ACT scores. Prediction models were then constructed using the Nelson-Denny Vocabulary, Comprehension, and Total scores. Results showed that the ACT English Reading score and the ACT Social Studies Reading score moderately predicted reading skill according to the Nelson-Denny. Importantly, the ACT Natural Sciences Reading score was dropped because it contributed very little to the regression model. The ACT Reading section has changed substantially since this study was done, but no updated research comparing it to other standardized reading tests could be located.

Another study, done by Noble and McNabb in 1989, examined the course-taking practices of high school subgroups divided by gender and race in comparison to all ACT subtest scores. This study derived its student course information from the Course Grade Information Section (CGIS) of the exam. This section gathers self-report data from each examinee on the number of courses taken and the grades achieved in each course. Thirty total courses are listed on this section of the assessment. Examinees report whether they have taken, are taking, or will take each course and, if completed, report the earned grade for that course. Noble and McNabb took a random sample of ACT test takers from 1986-87 and divided it into high school juniors and seniors (N=5,624 and N=5,655, respectively). Each subgroup was analyzed separately. The list of courses was clustered by subject, and each group was analyzed for variance, collinearity, and degree of positive statistical significance. Results showed that the number of courses taken and course grades were positively correlated with both the social studies reading and science reading ACT scores, as well as the ACT composite score, regardless of race and gender. The current utility of the study is limited because the ACT exam itself has changed structure since 1989, the student data were self-reported, and even the most predictive cluster accounted for less than 50% of the variance in ACT scores.

Lanier (1994) found that taking the ACT more than once increased a student's composite score by an average of 0.8 points. Additionally, course work taken in combination with multiple test attempts was statistically significant by subject. Taking more courses in ACT-tested areas such as social studies and sciences, along with two ACT attempts, produced higher scores on the ACT Reading subtest. The study also demonstrated that students who took the exam first in their junior year and then again in their senior year were most likely to benefit from taking the test twice.

Noble, Davenport, Schiel, and Pommerich (1999) looked at high school core course GPA, course-taking practices, and nonacademic factors in relation to the ACT scores. Using stepwise multiple regression, eight blocks of independent variables were examined. Two blocks, high school core course GPA and core courses taken, are most related to the present study.
The first block looked at students' GPA in English, math, natural science, and social studies (as self-reported on the CGIS) in relation to the ACT subtest scores. The second block examined individual courses taken in relation to the scores. Results showed that for every one-point increase in high school core course GPA, there was an increase of 3.24 points in the ACT Reading score. This positive correlation was generally true for all subtest scores. The second block, which considered the effect of taking individual courses on the ACT scores, revealed that only higher math, chemistry, and physics were significantly correlated with stronger Reading scores. The researchers pointed out that this was likely because other courses such as English and social studies were highly correlated with math and science courses and were therefore eliminated from the blocks early on. It is important to note that of the four subtest scores examined, variance in the Reading score proved most difficult to establish. Only 47% of its variance could be accounted for, while 65% of the variance in the mathematics score was detected.

Conrad-Curry (2011) investigated the effects of gender on ACT Reading scores and found that girls were more successful than boys on the literature/arts subtest, while boys were generally more successful on the social studies/science subtest. When examining the Reading score as a whole, girls outperformed boys. The sample for the study (N=540,650) was drawn from the 11th-grade population for the years 2007-2010 in a Midwestern state where the ACT exam is the compliance test for the No Child Left Behind initiative. This study also used a multivariate analysis of variance (MANOVA) to examine how the relationship between educational placement (IEP or non-IEP), student SES, race, and gender affected performance on the ACT Reading score. It revealed that, with few exceptions, girls were more successful than their male counterparts in each category.

McNeish, Radunzel, and Sanchez (2015) conducted a survey of ACT-tested students to assess academic factors, noncognitive characteristics, school characteristics, and student demographics in relation to ACT Reading scores. A random sample was taken from the October and December 2012 ACT-tested population (56,000). Those students were sent an email questionnaire, and a final N of 6,440 respondents (a nonresponse rate of more than 80%) provided the data for the study. While the results revealed several predictors of ACT success, the most relevant findings for the present study pertain to the Reading score alone. High school grade point average (HSGPA) was a significant predictor of all ACT scores (including reading) and the composite score. However, it was the least predictive for the Reading score. HSGPA accounted for 31% of the variance in the overall ACT score, but just 20% of the variance in the Reading score. An additional 8% of variance in the Reading score could be accounted for by high school curriculum and advanced courses taken, but again, both factors were less predictive of the ACT Reading score than of the other three scores or the composite score. Importantly, the study did reveal that taking higher-level math classes increased ACT scores for all subject areas. The final variance accounted for by all factors in the model was lowest for the ACT Reading score (44%) and highest for the composite score (61%). Because the Reading score variance proved the most difficult to predict, more investigation is necessary.
Also, results of this study may have been affected by the survey's low response rate and its self-report structure.

Purpose of the Study

Given that the ACT Reading score is important for college course placement, and because previous research has failed to fully explain the relation between high school academic behaviors and the ACT Reading score, this study examined the contribution of specific high school academic variables to that score. Additionally, only one study on the ACT Reading score (Conrad-Curry, 2011) has examined the roles socioeconomic status (SES) and gender play as moderators of these academic variables in determining the ACT Reading score. As a result, this study looked at how the connection between high school academic variables and the ACT Reading score was influenced by students' SES and gender. The results help explain what students must do in high school to achieve a college-ready score on the ACT Reading test and provide insight into how the efficacy of such actions may be moderated by a student's socioeconomic background and gender. To guide the analysis, the following research questions were posited:

1. What are the relative contributions of specific high school academic variables to the variability in the ACT Reading score?
2. What are the moderating effects of socioeconomic status and gender on the relations between the specific high school academic variables and the ACT Reading score?

CHAPTER 2
METHOD

The research questions examined in this study were the following: (a) What are the relative contributions of specific high school academic variables to the variability in the ACT Reading score? and (b) What are the moderating effects of socioeconomic status and gender on the relations between the specific high school academic variables and the ACT Reading score? These questions were investigated by conducting a standard multiple regression in which the ACT Reading score served as the dependent variable and data on the following academic variables were requested from the Utah State Board of Education:

1. Cumulative high school grade point average (GPA)
2. Number of college preparation courses (college courses)
3. Number of reading intensive courses (reading intensive courses)
4. Highest ACT Reading score
5. Gender of students
6. Socioeconomic status (reduced lunch/not reduced lunch)

These independent variables were chosen based on previous research that was discussed in Chapter 1. In addition, because gender (female/male) and socioeconomic status (SES), defined as reduced school lunch or not reduced school lunch, have been shown to play major roles in reading achievement (Conrad-Curry, 2011), their moderating effects on the significant independent variables were examined. Following is a description of each of the academic and demographic independent variables and an account of the statistical procedures that were used to analyze the data.

Variables

The dependent variable in this study was the ACT Reading score. It is derived from a 40-item assessment in two reading areas, Social Science/Science and Arts/Literature (20 items in each area). Each area receives a scaled score between 1 and 18. Those two scores result in a third scaled score ranging from 1-36. The ACT Technical Manual notes that the first two scaled scores do not necessarily combine to create the third scaled score (ACT, 2014). A description of the independent variables and research connected to those variables and the ACT Reading score can be found in Table 1.
Although some of these variables have appeared in previous work, it should be noted that no studies have examined the state assessment for 11th-grade reading in relation to the ACT Reading score, which is a variable unique to the participating state. To test for moderation effects, 10 additional independent variables were created by centering the continuous variables at their means and calculating the cross-product of both reduced school lunch/not reduced school lunch and female/male with each of the academic variables. These additional independent variables are shown in Table 2.

Table 1
Description of Independent Variables and Research Related to the ACT Reading Score

Cumulative GPA: cumulative GPA for all courses taken in a student's high school career (range = 0-4.0). Related studies: Lanier (1994); McNeish, Radunzel, and Sanchez (2015).
College Courses: total number of Advanced Placement (AP) courses, college preparation courses, and dual credit courses taken by an individual student (range = 0-29.5). Related studies: McNeish, Radunzel, and Sanchez (2015).
Reading Intensive Courses: total number of courses taken by an individual student in English and social science (range = 0-28). Related studies: Allen and Sconing (2005); Allen (2013); King, Rasool, and Judge (1994). Note: no studies tied this variable to the ACT Reading score.
Reading Literature Score: SAGE (Student Assessment of Growth and Excellence) reading literature scores of 11th-grade students, which evaluate performance on the Common Core State Standards (range = 100-868). Not previously studied.
Reading Informational Text Score: SAGE reading informational text scores of 11th-grade students, which evaluate performance on the Common Core State Standards (range = 100-868). Not previously studied.
Reduced School Lunch/Not Reduced School Lunch: reduced school lunch status or not reduced school lunch status (0 = not reduced lunch status, 1 = reduced lunch status). Related study: Conrad-Curry (2011).
Female/Male: male or female (1 = female, 0 = male). Related study: Conrad-Curry (2011).

Table 2
Description of Interaction Variables Used in Moderation Analysis

Cumulative GPA x Reduced School Lunch/Not Reduced School Lunch
Cumulative GPA x Female/Male
College Courses x Reduced School Lunch/Not Reduced School Lunch
College Courses x Female/Male
Reading Intensive Courses x Reduced School Lunch/Not Reduced School Lunch
Reading Intensive Courses x Female/Male
Reading Literature Score x Reduced School Lunch/Not Reduced School Lunch
Reading Literature Score x Female/Male
Reading Informational Text Score x Reduced School Lunch/Not Reduced School Lunch
Reading Informational Text Score x Female/Male

Data Analysis

All data analyses were performed using IBM SPSS, version 24. The statistical method chosen to examine the research questions in this study was standard multiple regression. Standard multiple regression takes a postpositivist approach in that it aims to offer explanations that can ultimately enable the prediction and control of human behavior (Lincoln & Guba, 2000). In accordance with this approach, the method is also reductionist in nature because variables will be condensed into a parsimonious model that reflects the most effective correlational relationship with the criterion. Standard multiple regression best answers the research questions by furthering our understanding of which academic behaviors exhibited by high school students, alone and as an interactive set, play a role in predicting the variability in the ACT Reading score, and it addresses whether the prediction of that variability was influenced by income and gender (Tabachnick & Fidell, 2014).

Analysis of Research Question One

To answer my first research question, I performed standard multiple regression using the ACT Reading score as the dependent variable and the variables listed in Table 1 as the independent variables. Standard multiple regression analyzes the contribution of each independent variable to the variance in the dependent variable as if it were entered last in the regression. Therefore, the change in the dependent variable that is predicted by each independent variable is unique to that variable. Additionally, the final statistical equation calculated in standard multiple regression provided an R² that revealed the overall contribution of the independent variables to the variance in the ACT Reading score.
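The analyses themselves were run in IBM SPSS. For readers who want a concrete picture of the model, a minimal Python/statsmodels sketch of the same simultaneous-entry (standard) regression follows; the file and column names are invented stand-ins for the USBE data set, which is not reproduced here.

```python
# Minimal sketch of the standard multiple regression described above,
# re-expressed with Python/statsmodels instead of SPSS. All independent
# variables are entered simultaneously. File and column names are
# invented stand-ins for the Utah data set.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("usbe_2015_graduates.csv")  # hypothetical file name

model = smf.ols(
    "act_reading ~ gpa + college_courses + reading_intensive"
    " + sage_literature + sage_informational + reduced_lunch + female",
    data=df,
).fit()

# Reports unstandardized coefficients, R-squared, and adjusted R-squared,
# the quantities described for the full model above.
print(model.summary())
```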
Standard multiple regression best answers the research questions by furthering our understanding of which academic behaviors exhibited by high school students, alone and as an interactive set, play a role in predicting the variability in the ACT Reading score, and it addresses whether the prediction of that variability was influenced by income and gender (Tabachnick & Fidell, 2014).

Analysis of Research Question One

To answer my first research question, I performed standard multiple regression using the ACT Reading score as the dependent variable and the variables listed in Table 1 as the independent variables. Standard multiple regression analyzes the contribution of each independent variable to the variance in the dependent variable as if it were entered last in the regression. Therefore, the change in the dependent variable that is predicted by each independent variable is unique to that variable. Additionally, the final statistical equation calculated in standard multiple regression provided an R2 that revealed the overall contribution of the independent variables to the ACT Reading score.

Analysis of Research Question Two

To answer my second research question, regarding the moderating effects of income and gender, interaction terms were created and assessed. This process began by centering all five academic independent variables at their respective means. Tabachnick and Fidell (2014) recommend centering to avoid problems of statistically created multicollinearity. Then, those five variables were crossed with Reduced School Lunch/Not Reduced School Lunch and Female/Male, thereby creating 10 new interaction terms (see Table 2). Once the interaction variables were created, standard multiple regression was performed using IBM SPSS REGRESSION. The standard multiple regression calculated the correlations between the variables, the unstandardized regression coefficients and intercept, the standardized regression coefficients, the semipartial correlations, R2, and adjusted R2. All the independent variables, including the interaction terms, were entered simultaneously. When interaction variables were found to be significant, methods described by Aiken and West (1991) were used to probe the nature of the moderation. The regression of the ACT Reading score on a specific academic variable was examined at differing levels of the moderating variable, where the moderating variables were Female/Male and Reduced School Lunch/Not Reduced School Lunch. For example, if the interaction term of Cumulative GPA x Female/Male was found to be significant, the relation between the ACT Reading score and high school GPA was examined separately for males and females to illustrate how this relation differed by gender.
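A minimal sketch of this probing step, again under the hypothetical column names introduced above: a significant interaction is illustrated by re-estimating the simple regression within each level of the dichotomous moderator and comparing the slopes.

    import statsmodels.api as sm

    # Probe a significant Cumulative GPA x Female/Male interaction by
    # fitting the regression of ACT Reading score on centered GPA
    # separately for males (0) and females (1)
    for level, label in [(0, "male"), (1, "female")]:
        sub = df[df["female"] == level]
        fit = sm.OLS(sub["act_reading"],
                     sm.add_constant(sub["cum_gpa_c"])).fit()
        print(label, "slope:", round(fit.params["cum_gpa_c"], 3))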
Preparing the Data

Missing data. First, missing data for each student were identified and analyzed for randomness using IBM SPSS MVA (IBM, 2013). This process used Little's MCAR (missing completely at random) test to determine whether missing data appeared according to a pattern or were correlated with missing data for the other variables. Little's MCAR test showed the data were not missing randomly.

Univariate outliers. Second, univariate outliers were detected using IBM SPSS FREQUENCIES and, where present, were handled using case elimination (Tabachnick & Fidell, 2014).

Normality. Once univariate outliers had been dealt with, a final test of normality (skewness and kurtosis) of the continuous variables was conducted using IBM SPSS FREQUENCIES, which provided expected normal probability plots and detrended expected normal probability plots.

Linearity and homoscedasticity. Next, pairwise linearity and homoscedasticity were evaluated using IBM SPSS PLOT, and no problems were detected.

Multivariate outliers. The next step in data preparation was examination of multivariate outliers through the Mahalanobis distance test in IBM SPSS REGRESSION. Extreme multivariate outlier cases with Mahalanobis distances significant at p < .001 were not detected in the data set (Tabachnick & Fidell, 2014).

Multicollinearity. This issue was addressed with tolerance analysis, post hoc. All variables showed only moderate correlation, and multicollinearity statistics (tolerances) were reasonable.

Descriptive Statistics

Once the data set was prepared, descriptive statistics, including means, standard deviations, and correlations, were established for each variable using IBM SPSS DESCRIPTIVES.

Data Set

The data for this study were provided by the Utah State Board of Education (USBE). To obtain the data, a request was submitted that included Institutional Review Board approval through the University of Utah and a complete description of the research questions, desired variables, and statistical measures. The request was then reviewed and approved by the USBE. The final data set was provided as a downloadable Excel file through a secured website. It represented all students in Utah who graduated from high school in spring of 2015. Students with disabilities were not included. The ACT is mandatory for 11th-grade students in this state, thereby eliminating the self-selection bias present in other studies concerning the ACT Reading score. USBE also obtained official ACT data for students who took the exam more than once, which allowed this study to examine the highest ACT Reading score earned by each student. Additionally, the information on academic variables comes from official school records, offering a unique opportunity to examine the connections between academic behaviors and the ACT Reading score without the error commonly associated with self-reported data.

Summary

Standard multiple regression was used to analyze the relative contributions of six high school academic behaviors to the variability of the ACT Reading score. Reduced School Lunch/Not Reduced School Lunch and Female/Male were assessed for moderation effects by first centering the remaining academic variables and then calculating the cross-products. Standard multiple regression containing all independent variables and the interaction variables was performed to detect the significant unique contribution of each variable to the variability in the ACT Reading score. Where interaction terms were found to be significant, the regression of the ACT Reading score on that academic variable was examined at differing levels of the dichotomous moderating variable to reveal differences.
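Before turning to the results, here is a compact sketch of the screening steps described above for readers working outside of SPSS (Python with NumPy and SciPy, hypothetical column names as before; the p < .001 criterion for Mahalanobis distance follows Tabachnick and Fidell, 2014):

    import numpy as np
    from scipy import stats

    cont = df[["cum_gpa", "college_courses", "reading_intensive",
               "reading_lit", "reading_info"]]

    # Normality screen: skewness and kurtosis of each continuous variable
    print(cont.apply(stats.skew))
    print(cont.apply(stats.kurtosis))

    # Multivariate outliers: squared Mahalanobis distance of each case
    # from the centroid, compared with a chi-square critical value
    diff = cont - cont.mean()
    inv_cov = np.linalg.inv(np.cov(cont, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    crit = stats.chi2.ppf(1 - .001, df=cont.shape[1])
    print("extreme multivariate outliers:", int((d2 > crit).sum()))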
CHAPTER 3

RESULTS

The original data set provided by the Utah State Board of Education contained 34,261 cases. A preliminary review of the data indicated that there were 1,707 cases without an ACT Reading score. There were also an additional 3,512 cases without a Reading Literature score and a Reading Informational Text score. Although an IBM SPSS Missing Value Analysis (MVA) revealed that the missing data were not missing completely at random according to Little's MCAR test (χ2 = 5604.534, df = 19, p < .001), there was no explanation for the missing data on these two variables from the Utah State Board of Education. Therefore, there was no basis to establish any systematic loss of data. As recommended by Tabachnick and Fidell (2014), multiple imputation was performed using expectation maximization (EM). However, the imputed data resulted in a reduction in the standard errors of these variables, which caused a change in the results of the multiple regression analysis. Based on the change in the analysis and the controversial use of imputed data (Tabachnick & Fidell, 2014), a decision was made to eliminate these cases, resulting in 29,042 cases remaining.

Outliers were detected using IBM SPSS FREQUENCIES. For two of the continuous independent variables, Reading Literature Score and Reading Informational Text Score, there was a "bunching" of 2,899 scores at the maximum score of 868 and 456 scores at the minimum score of 100, indicating that there were ceiling and floor effects for these two variables. The design of the two measurement instruments that produced these scores, therefore, does not allow for discrimination among students who could have scored beyond these two values and does not allow for the estimation of variance above and below these two extreme scores, making statistical analyses impossible. Therefore, cases at or above a score of 868 and at or below a score of 100 were also removed from the data set, resulting in 25,687 cases that were used in the final analyses.

The only other variable for which there was a similar "bunching" of scores was Cumulative GPA, with 1,047 cases having a score of 4.0. However, GPA represents a rank-ordered variable of A, B, C, D, and F that is converted to a numerical score. Letter grades are assigned according to a percentage of points earned or scores achieved. A student who receives an A has earned more points or scored higher than a student who has received a B; however, the difference in the number of points or scores achieved is not uniformly defined and can vary by teacher, grade level, school, school district, or state. The same is true for the differences among the other letter grades. Therefore, whether there is a ceiling effect for Cumulative GPA is impossible to determine. Cumulative GPA should arguably be analyzed using statistical techniques that deal with rank-ordered data; however, the conversion of letter grades to numeric scores representing interval data is commonly accepted (National Center for Educational Statistics, 2015). Moreover, with the large number of cases being analyzed in the current study, the nonparametric statistical methods used for rank-ordered data yield results that are the same as or similar to the results of parametric statistical methods.

Following recommendations from Aiken and West (1991), I examined the moderation effects of Reduced School Lunch/Not Reduced School Lunch and Female/Male on the other independent variables by centering the continuous independent variables at their means. Centering the variables reduces the possibility of statistically created multicollinearity. The interaction terms were then derived by calculating the product of each centered continuous variable and each dichotomous variable. Bivariate correlations were calculated for all pairs of variables and appear in Table 3.
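The case-elimination steps just described reduce to a few operations on the data set; a minimal sketch, with the hypothetical column names used earlier:

    # Drop cases missing the ACT Reading score or either state reading score
    df = df.dropna(subset=["act_reading", "reading_lit", "reading_info"])

    # Trim ceiling and floor cases on the two SAGE scales (868 = maximum,
    # 100 = minimum), where the instruments cannot discriminate further
    for col in ["reading_lit", "reading_info"]:
        df = df[(df[col] > 100) & (df[col] < 868)]

    print(len(df))   # 25,687 cases remained in the reported analysis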
Descriptive statistics can be found in Table 4. Further screening runs were conducted to evaluate normality, linearity, and homoscedasticity. Evaluation of standardized residuals was conducted after the initial screening runs by examining the plotted residuals. A scatterplot is shown in Figure 4; it indicates that the residuals were normally distributed. The mean of the standardized residuals was zero, with a standard deviation of 1.0. For linearity, the scatterplot of the residuals has a predominantly rectangular rather than curved distribution. There are a few cases to the upper left and lower right that detract from the rectangular configuration; however, given the large number of cases being analyzed, the vast majority of cases fall within a rectangular distribution. Finally, for the assumption of homoscedasticity, the scatterplot indicates that the standard deviations of the errors are approximately equal in width across predicted values of the dependent variable and that there is no appreciable widening of the distribution at either end. In sum, the assumptions of normality, linearity, and homoscedasticity appear to be reasonably met. Given the large and diverse sample of students being analyzed, it is safe to assume that the residual errors are independent. Moreover, given the properties of the central limit theorem, it is a reasonable conclusion that the estimates of the coefficients are unbiased and also consistent (Tabachnick & Fidell, 2007; Williams, Grajales, & Kurkiewicz, 2013).

Table 3
Bivariate Correlations (Pearson r) of All Independent Variables and the Dependent Variable

Variables in the matrix: 1. Reduced School Lunch/Not Reduced School Lunch; 2. Female/Male; 3. Cumulative GPA+; 4. College Courses+; 5. Reading Intensive Courses+; 6. Reading Literature Score+; 7. Reading Informational Text Score+; 8. Cumulative GPA+ x Reduced School Lunch/Not Reduced School Lunch; 9. College Courses+ x Reduced School Lunch/Not Reduced School Lunch; 10. Reading Intensive Courses+ x Reduced School Lunch/Not Reduced School Lunch; 11. Reading Literature Score+ x Reduced School Lunch/Not Reduced School Lunch; 12. Reading Informational Text Score+ x Reduced School Lunch/Not Reduced School Lunch; 13. Cumulative GPA+ x Female/Male; 14. College Courses+ x Female/Male; 15. Reading Intensive Courses+ x Female/Male; 16. Reading Literature Score+ x Female/Male; 17. Reading Informational Text Score+ x Female/Male; 18. ACT Reading Score.
+ denotes that the variable is centered. All correlations greater than .01 or less than -.01 are significant at p < .01; correlations from .01 to -.01 are not significant.
Table 4
Descriptive Statistics for All Independent Variables and the ACT Reading Score

N = 25,687 for all variables (listwise); the standard errors of skewness and kurtosis are 0.015 and 0.031, respectively.

Reduced School Lunch/Not Reduced School Lunch: M = 0.238, SD = 0.4256
Female/Male: M = 0.521, SD = 0.500
Cumulative GPA Centered: Min = -3.31, Max = 0.69, M = 0.000, SD = 0.602, skewness = -0.980, kurtosis = 0.497
College Courses Centered: Min = -3.19, Max = 26.31, M = 0.000, SD = 3.292, skewness = 1.765, kurtosis = 5.262
Reading Intensive Courses Centered: Min = -7.91, Max = 20.09, M = 0.000, SD = 1.734, skewness = 1.570, kurtosis = 9.261
Reading Literature Score Centered: Min = -409.31, Max = 356.69, M = 0.000, SD = 119.523, skewness = -0.161, kurtosis = 0.504
Reading Informational Text Score Centered: Min = -404.82, Max = 360.18, M = 0.000, SD = 109.824, skewness = -0.154, kurtosis = 0.510
Cumulative GPA Centered x Reduced School Lunch/Not Reduced School Lunch: Min = -2.44, Max = 0.69, M = -0.069, SD = 0.344, skewness = -2.559, kurtosis = 9.325
College Courses Centered x Reduced School Lunch/Not Reduced School Lunch: Min = -3.19, Max = 22.47, M = -0.220, SD = 1.436, skewness = 2.847, kurtosis = 32.422
Reading Intensive Courses Centered x Reduced School Lunch/Not Reduced School Lunch: Min = -7.41, Max = 14.34, M = -0.033, SD = 0.918, skewness = 2.439, kurtosis = 36.368
Reading Literature Score Centered x Reduced School Lunch/Not Reduced School Lunch: Min = -408.31, Max = 345.69, M = -6.836, SD = 60.502, skewness = -1.267, kurtosis = 10.883
Reading Informational Text Score Centered x Reduced School Lunch/Not Reduced School Lunch: Min = -404.82, Max = 349.18, M = -7.751, SD = 55.824, skewness = -1.494, kurtosis = 11.014
Cumulative GPA Centered x Female/Male: Min = -3.31, Max = 0.69, M = 0.051, SD = 0.410, skewness = -1.341, kurtosis = 4.557
College Courses Centered x Female/Male: Min = -3.19, Max = 23.06, M = 0.039, SD = 2.335, skewness = 2.407, kurtosis = 12.331
Reading Intensive Courses Centered x Female/Male: Min = -7.91, Max = 20.09, M = 0.043, SD = 1.284, skewness = 2.268, kurtosis = 20.060
Reading Literature Score Centered x Female/Male: Min = -408.31, Max = 356.69, M = 2.579, SD = 85.211, skewness = -0.116, kurtosis = 3.856
Reading Informational Text Score Centered x Female/Male: Min = -404.82, Max = 360.18, M = 0.230, SD = 76.967, skewness = -0.142, kurtosis = 3.712
ACT Reading Score: Min = 5.00, Max = 36.00, M = 22.326, SD = 5.896, skewness = 0.205, kurtosis = -0.584

Figure 4. Scatterplot of Standardized Residuals Against Standardized Predicted Values of Y.

IBM SPSS REGRESSION was used to conduct a standard multiple regression with the ACT Reading score as the dependent variable and the other variables as independent variables. Table 5 offers the standard multiple regression of all independent variables on the ACT Reading score, and Table 6 shows the unstandardized regression coefficients, the intercept, the standardized regression coefficients, and collinearity statistics. Tolerance and variance inflation factors were within acceptable limits, indicating that the independent variables were only moderately correlated. The R for regression was significantly different from zero, F(17, 25669) = 1634.42, p < .001, with R = .721, R2 = .520, and adjusted R2 = .519. As seen in Table 6, the following variables added significantly to the model: GPA Centered, College Courses Centered, Reading Literature Score Centered, Reading Informational Text Score Centered, GPA Centered x Reduced School Lunch/Not Reduced School Lunch, College Courses Centered x Female/Male, Reduced School Lunch/Not Reduced School Lunch, and Female/Male.
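As an arithmetic check, the reported F statistic follows directly from R2 and the degrees of freedom; the small discrepancy from 1634.42 in the sketch below simply reflects rounding R2 to three decimals:

    # F = (R2/k) / ((1 - R2)/(n - k - 1)) for a model with k predictors
    n, k, r2 = 25687, 17, 0.520
    f = (r2 / k) / ((1 - r2) / (n - k - 1))
    print(round(f, 1))   # ~1635.8, consistent with F(17, 25669) = 1634.42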
Table 5
Standard Multiple Regression of All Independent Variables on ACT Reading Score

(Constant): B = 22.799, SE = .040, t = 569.575, p = .000
Reduced School Lunch/Not Reduced School Lunch: B = -.966, SE = .065, Beta = -.070, t = -14.808, p = .000**, Tolerance = .843, VIF = 1.186
Female/Male: B = -.540, SE = .052, Beta = -.046, t = -10.353, p = .000**, Tolerance = .957, VIF = 1.045
Cumulative GPA Centered: B = 1.853, SE = .078, Beta = .189, t = 23.746, p = .000**, Tolerance = .294, VIF = 3.396
College Courses Centered: B = .352, SE = .014, Beta = .197, t = 24.651, p = .000**, Tolerance = .294, VIF = 3.398
Reading Intensive Courses Centered: B = -.001, SE = .026, Beta = .000, t = -.056, p = .956, Tolerance = .328, VIF = 3.044
Reading Literature Score Centered: B = .011, SE = .000, Beta = .213, t = 27.049, p = .000**, Tolerance = .300, VIF = 3.329
Reading Informational Text Score Centered: B = .020, SE = .000, Beta = .364, t = 45.168, p = .000**, Tolerance = .287, VIF = 3.481
Cumulative GPA x Reduced School Lunch/Not Reduced School Lunch: B = -.476, SE = .112, Beta = -.028, t = -4.251, p = .000**, Tolerance = .438, VIF = 2.284
College Courses x Reduced School Lunch/Not Reduced School Lunch: B = -.017, SE = .025, Beta = -.004, t = -.695, p = .487, Tolerance = .524, VIF = 1.909
Reading Intensive Courses x Reduced School Lunch/Not Reduced School Lunch: B = .003, SE = .035, Beta = .000, t = .083, p = .934, Tolerance = .628, VIF = 1.593
Reading Literature Score x Reduced School Lunch/Not Reduced School Lunch: B = .000, SE = .001, Beta = -.001, t = -.096, p = .923, Tolerance = .515, VIF = 1.942
Reading Informational Text Score x Reduced School Lunch/Not Reduced School Lunch: B = .000, SE = .001, Beta = -.003, t = -.525, p = .600, Tolerance = .472, VIF = 2.118
Cumulative GPA x Female/Male: B = .048, SE = .102, Beta = .003, t = .468, p = .640, Tolerance = .375, VIF = 2.667
College Courses x Female/Male: B = -.049, SE = .019, Beta = -.019, t = -2.567, p = .010*, Tolerance = .333, VIF = 3.006
Reading Intensive Courses x Female/Male: B = -.029, SE = .032, Beta = -.006, t = -.905, p = .365, Tolerance = .387, VIF = 2.584
Reading Literature Score x Female/Male: B = -.001, SE = .001, Beta = -.007, t = -1.010, p = .313, Tolerance = .349, VIF = 2.866
Reading Informational Text Score x Female/Male: B = .001, SE = .001, Beta = .012, t = 1.584, p = .113, Tolerance = .333, VIF = 3.007
Dependent variable: ACT Reading Score. * p < .05. ** p < .01. R = .721, R2 = .520, Adjusted R2 = .519.

Table 6
Standard Multiple Regression of All Significant Independent Variables on ACT Reading Score

(Constant): B = 22.802, SE = .040, t = 575.772, p = .000
Cumulative GPA Centered: B = 1.895, SE = .060, Beta = .194, t = 31.598, p = .000**, sr2 = .019, Tolerance = .498, VIF = 2.007
College Courses Centered: B = .344, SE = .012, Beta = .192, t = 28.983, p = .000**, sr2 = .016, Tolerance = .426, VIF = 2.350
Reading Literature Score Centered: B = .010, SE = .000, Beta = .208, t = 40.512, p = .000**, sr2 = .031, Tolerance = .710, VIF = 1.409
Reading Informational Text Centered: B = .020, SE = .000, Beta = .371, t = 69.414, p = .000**, sr2 = .090, Tolerance = .654, VIF = 1.530
Cumulative GPA Centered x Reduced School Lunch/Not Reduced School Lunch: B = -.537, SE = .096, Beta = -.031, t = -5.618, p = .000**, sr2 = .001, Tolerance = .601, VIF = 1.663
College Courses Centered x Female/Male: B = -.046, SE = .016, Beta = -.018, t = -2.957, p = .003**, sr2 = .001, Tolerance = .496, VIF = 2.015
Reduced School Lunch/Not Reduced School Lunch: B = -.957, SE = .065, Beta = -.069, t = -14.794, p = .000**, sr2 = .005, Tolerance = .858, VIF = 1.166
Female/Male: B = -.544, SE = .052, Beta = -.046, t = -10.447, p = .000**, sr2 = .002, Tolerance = .961, VIF = 1.041
Dependent variable: ACT Reading Score. ** p < .01. R = .721, R2 = .520, Adjusted R2 = .520.

To derive a final solution with a final regression equation, nonsignificant variables were dropped, and standard multiple regression was performed for the remaining variables. The results of this analysis are found in Table 6, showing the intercept, unstandardized and standardized coefficients, standard errors, squared semipartial correlations, R, R2, and adjusted R2. No suppressor variables were found. The final model indicates that 52% of the variability in the ACT Reading score was predicted by the independent variables. Of this amount, 35% was shared variability and 17% was uniquely contributed by the individual independent variables. Reading Informational Text Score accounted for 9%, Reading Literature Score accounted for 3%, Cumulative GPA and College Courses each accounted for approximately 2%, and the remaining variables each accounted for less than 1% of the variability.
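The squared semipartial correlations (sr2) reported in Table 6 can be recovered from the model's t statistics, which is a useful check when reading such tables: sr2 = t2(1 - R2)/df(residual). For example, for Reading Informational Text:

    # Final model: 8 predictors, 25,687 cases, R2 = .520
    t, r2, df_resid = 69.414, 0.520, 25687 - 8 - 1
    sr2 = t**2 * (1 - r2) / df_resid
    print(round(sr2, 3))   # ~0.090, matching the sr2 reported in Table 6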
The final linear equation is:

Y = 22.802 + 1.895(cumulative GPA) + 0.344(college courses) + 0.010(state reading literature score) + 0.020(state reading informational text score) - 0.537(cumulative GPA x reduced school lunch/not reduced school lunch) - 0.046(college courses x Female/Male) - 0.957(reduced school lunch/not reduced school lunch) - 0.544(Female/Male) + residual

Concerning the second research question, gender and socioeconomic status showed significant moderating effects only on College Courses Centered and Cumulative GPA Centered, respectively. To illustrate the nature of these moderating effects, two separate multiple regression analyses were conducted. First, I regressed the ACT Reading score on College Courses Centered, Female/Male, and College Courses Centered x Female/Male and calculated regression lines at both levels of Female/Male. Then I regressed the ACT Reading score on Cumulative GPA Centered, Reduced School Lunch/Not Reduced School Lunch, and Cumulative GPA Centered x Reduced School Lunch/Not Reduced School Lunch and calculated regression lines at both levels of Reduced School Lunch/Not Reduced School Lunch. Figure 5 shows that as the number of college courses increased, the ACT Reading score for males went up at a steeper rate than for females. Figure 6 shows that as Cumulative GPA increased, students who were not reduced school lunch showed a relatively higher increase in the ACT Reading score than students who were reduced school lunch.

Figure 5. Regression of ACT Reading Score on Number of College Courses for Males and Females (x-axis: College Courses Centered, raw score).

Figure 6. Regression of ACT Reading Score on Grade Point Average (GPA) for Low-Income and Not Low-Income Students (x-axis: GPA Centered, raw score).
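To make the final equation concrete, here is a small sketch that applies it to a hypothetical student. Because the continuous predictors were centered, raw inputs must have the sample means subtracted before the coefficients are applied. The GPA mean of 3.31 is reported in this study; the other means below are rough values inferred from the centered ranges in Table 4 and are illustrative only.

    def predict_act_reading(gpa, college, lit, info, lunch, female, means):
        # Center the raw continuous inputs at the sample means, then apply
        # the coefficients of the final regression equation
        g = gpa - means["gpa"]
        c = college - means["college"]
        rl = lit - means["lit"]
        ri = info - means["info"]
        return (22.802 + 1.895 * g + 0.344 * c + 0.010 * rl + 0.020 * ri
                - 0.537 * g * lunch - 0.046 * c * female
                - 0.957 * lunch - 0.544 * female)

    # GPA mean (3.31) is reported in the study; the other means are
    # approximations inferred from Table 4 (illustrative only)
    means = {"gpa": 3.31, "college": 3.19, "lit": 510.3, "info": 506.8}
    print(round(predict_act_reading(3.8, 6, 560, 560,
                                    lunch=0, female=1, means=means), 1))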
CHAPTER 4

DISCUSSION

The purpose of this study was to find specific connections between high school academic factors and the ACT Reading score so that educators might better understand how students can strengthen their performance on the ACT Reading test. Because the exam is widely used as a first-year English course placement measure in community colleges and universities across the country, it can be a key factor in determining whether students will need to take developmental English classes in their first year of college (McNabb, 1990). This issue is at the forefront of national secondary education discussions because required developmental courses can delay graduation and cost students thousands of dollars.

This study has three key advantages over previous research regarding the ACT Reading score. First, the data examined here were not self-reported, as is the case with most other studies of this nature (McNeish, Radunzel, & Sanchez, 2015; Noble & McNabb, 1989; Noble, Davenport, Schiel, & Pommerich, 1999). Second, for the first time, the ACT Reading test was compared to a state Common Core State Standards reading assessment. Finally, the study was conducted independent of the ACT organization.

Answering the Research Questions

This study examined two research questions:

1. What are the relative contributions of specific high school academic variables to the variability in the ACT Reading score?

2. What are the moderating effects of income and gender on the relations between the specific high school academic variables and the ACT Reading score?

The first question was answered by revealing that four specific variables and two interaction terms contribute significantly to the variability in the ACT Reading score. These contributions indicate that taking Advanced Placement (AP), International Baccalaureate (IB), and dual credit courses (also known as concurrent enrollment courses) while in high school may increase the likelihood of success on the ACT Reading test. The regression also revealed that performance in school, measured as cumulative GPA, is an important academic behavior for success on the exam but does not explain as much variance as might be expected (sr2 = .019). Finally, the state reading assessment examining the Common Core English Language Arts (ELA) Standards is also connected to the ACT Reading score; success on the state's reading assessments contributes to success on the ACT Reading test. The two state assessments, the Reading Literature score and the Reading Informational Text score, offered the strongest bivariate correlations with the dependent variable of all 17 variables (r = 0.52 and r = 0.62, respectively) and contributed the most to the regression model.

The second research question pertains to the moderating effects gender and income have on the other independent variables with the ACT Reading score as the dependent variable. I derived 10 interaction terms by calculating the product of each of the five continuous independent variables (Cumulative GPA, College Courses, Reading Intensive Courses, Reading Literature score, and Reading Informational Text score) and the two dichotomous variables (Female/Male, Reduced School Lunch/Not Reduced School Lunch). Of these 10, only two remained in the final model: College Courses x Female/Male and Cumulative GPA x Reduced School Lunch/Not Reduced School Lunch. These results show that boys may be more likely than girls to raise their ACT Reading score by taking advanced courses. This has important implications because, as noted by Conrad-Curry (2011), girls generally score better than boys on the ACT Reading test. Perhaps this gap can be narrowed by encouraging male students to take additional college prep and dual credit courses. The Cumulative GPA x Reduced School Lunch/Not Reduced School Lunch result, while not surprising, indicates that students from lower income households with high GPAs may not achieve as strong an ACT Reading score as their similarly educated higher income peers. This is important information for higher education policy makers, who have noted that students of low socioeconomic status are more likely to be placed in developmental classes (Horn, Peter, & Rooney, 2002).

Both research questions were answered using a single multiple regression analysis with 17 independent variables and the ACT Reading score as the dependent variable. Of those 17, eight were significant at the p < .01 level. Those eight variables provide a model that accounts for 52% of the variability in the ACT Reading score. Of that 52%, 35% is shared variability and 17% is uniquely contributed by the individual variables. The substantial portion of shared variability indicates that the individual variables may be accounting for some common element. However, what that element is cannot be easily identified by the present study.
Except for Female/Male and Reduced School Lunch/Not Reduced School Lunch, the variables in the final model (GPA, College Courses, and the two reading assessments) all relate to reading comprehension, so it makes sense that their shared variability would contribute to the variance in the ACT Reading score. However, it would be interesting to further examine the commonality among all eight variables, including Female/Male and Reduced School Lunch/Not Reduced School Lunch, to understand why so much of the variance explained by this model is shared variability.

The Reading Informational Text score (9%) and the Reading Literature score (3%) contributed the most uniquely to the variability in the ACT Reading score, with a total of 12%. This connection between the two state assessment reading scores and the ACT Reading score could mean several things. First, the tests themselves are likely designed to measure similar constructs. This has interesting implications because the state reading test is specifically constructed to measure how well 11th-grade students perform on the Common Core ELA Standards, while the ACT Reading test is most often used to predict performance in college. Perhaps this analysis will provide state educators with some evidence that teaching students to meet the Common Core ELA Standards (i.e., to succeed on the state assessment) also helps them succeed on the ACT Reading test.

The next two variables to contribute to the model are Cumulative GPA and College Courses, each contributing about 2% to the variability of the ACT Reading score. The connection between a strong GPA and a high ACT Reading score is not surprising; students who earn good grades often do well on standardized tests. However, this study reveals that the connection is relatively small, suggesting there are other factors that explain achievement on the ACT Reading test. Indeed, the bivariate correlation between Cumulative GPA and the ACT Reading score alone was a Pearson r of 0.48, yet in the standard multiple regression the sr2 for GPA was just .019. This illustrates that earning good grades in high school coursework is likely insufficient preparation for college reading as measured by the ACT Reading score. Students who take more college preparation courses (AP, IB, or dual enrollment) in high school also showed slightly better scores on the ACT Reading test in this study (sr2 = .016). This raises the question of whether the normal high school curriculum in this state adequately prepares students to succeed on the ACT Reading test. There appears to be some advantage in taking classes with college-level content, which may support an argument for more rigor in regular high school courses.

Unexpected Results

The first unexpected result of this study is particularly noteworthy. The number of reading-intensive (English literature and social studies) courses did not contribute significantly to the variability in the ACT Reading score. It was expected that students taking more courses with a heavier reading load would score higher on the dependent variable. This was not the case. There are two possible explanations for this result. It may be that the English and social studies courses examined in this study are not, in fact, reading intensive. While other studies define them as such (Allen, 2013; Allen & Sconing, 2005; King, Rasool, & Judge, 1994), the present study did not examine course content and, therefore, cannot verify the level of required reading.
It could also be that performance in these courses is a moderating factor not examined here. Perhaps taking additional English and social studies classes and earning high grades in those classes would produce a stronger connection to the ACT Reading score, but it appears that taking more of these courses alone has very little impact.

Next, it was unexpected that the correlation between gender and the ACT Reading score, although slight, favored males (r = -.001). This is contrary to a large body of research indicating that females do better on this assessment. Gender was a significant factor in the final model but contributed less than 1% of the variability in the ACT Reading score.

Finally, while only tangentially related to the research questions in this study, it was surprising to note that the average GPA of students included in the final analysis was 3.31. Due to the large number of cases, this negatively skewed distribution of grades did not affect the overall results of the study, but it offers a strong argument for examining the possibility of grade inflation in this population.

Connections to Previous Research

As noted in Chapter 1, Lanier (1994) also examined coursework taken in relation to the ACT Reading test. Results of that study showed that taking more courses in ACT-tested areas such as social studies and science (in addition to multiple test attempts) resulted in higher ACT Reading scores. My results do not support that finding: only the number of advanced or college courses taken, not the number of regular courses, was positively connected to the dependent variable.

When examining the Noble, Davenport, Schiel, and Pommerich (1999) study, we see that taking English and social studies courses did not contribute significantly to their model either. They explain that this is likely due to early elimination in their analysis caused by collinearity with other variables. However, in the present study, collinearity with other variables was not an issue, and the number of English and social studies courses taken was still nonsignificant. Also, the model calculated in their study was only able to account for 47% of the variance in ACT Reading scores; the present study accounted for 52%. A more recent study by McNeish, Radunzel, and Sanchez (2015) accounted for just 44% of the variance in the ACT Reading score but also found that GPA contributed significantly to their model, as did advanced course taking. The present study supports these findings.

Finally, the Conrad-Curry (2011) study examining the gap between genders on ACT Reading test performance can be connected to my results in two ways. First, unlike that study, my results showed that gender explained only a very small amount of variance on the ACT Reading test. The Pearson r was -.001, indicating a slight advantage for males, and in the final model the sr2 for Female/Male was just .002. However, when looking at the moderating effect of gender on college courses taken in relation to the ACT Reading score, I found that boys may gain slightly more from these advanced classes than girls. This finding indicates that the widely accepted gender gap in reading comprehension may be narrowed by encouraging boys to take more rigorous courses.

Limitations

This study is limited in two ways. First, the missing data required a 15% reduction in the overall cases examined. Further explanation from the Utah State Board of Education about the missing data would have been helpful.
Next, no GPA data were available for the different course groups I examined: reading-intensive courses and college courses. Examining not only the number of courses taken by a student but also how well students performed in those courses would have been useful to the study.

Future Research

These findings further our understanding of how high school academic behaviors influence the ACT Reading score and add to the present body of knowledge by offering an independent study involving officially reported data that accounts for 52% of the variance in the ACT Reading score. Continuing this line of inquiry could help answer several more questions. First, do the variables identified in the model predict first-year college success as well as or better than the ACT Reading score? This could be studied by examining the first-year college English grades of the students in this sample in relation to the regression model. Next, because the study failed to find a connection between reading-intensive classes and the ACT Reading score, it would be helpful to find out why. As a starting point, research could be done on the connection between ACT Reading scores, reading-intensive classes, and grades in those classes. A review of the actual course reading requirements in English and social studies would also be beneficial and would help answer the question of whether these courses are, in fact, reading intensive in comparison with other high school classes. Also, because this study indicates that males may benefit from advanced course taking to raise ACT Reading scores, future research could provide additional insight. College course taking practices of male students, as well as their grades in those classes, could be examined over a longer period and with a larger sample in relation to the ACT Reading score. This might further validate my findings and lead to policy changes that encourage male students to take more rigorous coursework during high school.

Conclusion

This research set out to examine secondary academic behaviors that affect the ACT Reading score and then to ascertain whether gender and socioeconomic status moderated the influence of those behaviors. The results have provided insight on both accounts. These results contribute to the ongoing national discussions concerning the various uses of the ACT Reading test, high school academic expectations, and college preparation. Such conversations are shaping, and will continue to shape, the educational landscape for secondary and postsecondary students.

REFERENCES

ACT, Inc. (2006). Reading between the lines: What the ACT reveals about college readiness in reading. Iowa City, IA: ACT, Inc.

ACT, Inc. (2013). Updating the ACT college readiness benchmarks (ACT Research Report Series, 2013-6). Iowa City, IA: ACT, Inc.

ACT, Inc. (2014). The technical manual: The ACT. Iowa City, IA: ACT, Inc. Retrieved from https://www.act.org/content/dam/act/unsecured/documents/PlanTechnicalManual.pdf

ACT, Inc. (2015). The condition of college and career readiness 2015. Iowa City, IA: ACT, Inc. Retrieved from https://www.act.org/research/policymakers/cccr15/

Adelman, C. (1999).
Answers in the tool box: Academic intensity, attendance patterns, and bachelor's degree attainment. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.

Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Ali, R., & Jenkins, G. (2002). The high school diploma: Making it more than an empty promise. Oakland, CA: The Education Trust.

Allen, J., & Sconing, J. (2005). Using ACT assessment scores to set benchmarks for college readiness (ACT Research Report Series, 2005-3). Iowa City, IA: ACT, Inc.

American Diploma Project. (2004). Ready or not: Creating a high school diploma that counts. Achieve, Inc. Retrieved from http://www.achieve.org/publications/ready-or-not-creating-high-school-diploma-counts

Baker, W. E. (1974). Reading skills (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.

Barnett, E. A., Fay, M. P., Bork, R. H., & Weiss, M. J. (2013, May). Reshaping the college transition: States that offer early college readiness assessments and transition curricula. New York: Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/media/k2/attachments/reshaping-the-college-transition-state-scan.pdf

Bragg, D., & Taylor, J. (2014). Toward college and career readiness: How different models produce similar short-term outcomes. American Behavioral Scientist, 58(8), 994-1017. doi:10.1177/0002764213515231

Calcagno, J. C., & Long, B. T. (2008). The impact of postsecondary remediation using a regression discontinuity approach: Addressing endogenous sorting and noncompliance (NCPR Working Paper). New York: National Center for Postsecondary Research.

CCSS Appendix A. (2015). Retrieved from http://www.corestandards.org/ELA-Literacy/

CCSS Development Process-Common Core State Standards Initiative. (2016). Retrieved from http://www.corestandards.org/ELA-Literacy/

CCSS English Language Arts Standards-Common Core State Standards Initiative. (2016a). Retrieved from http://www.corestandards.org/ELA-Literacy/

Conley, D. T. (2003). Understanding university success: A report from Standards for Success. Center for Educational Policy Research. Retrieved from http://eric.ed.gov/?id=ED476300

Conley, D. T. (2007). Redefining college readiness. Eugene, OR: Educational Policy Improvement Center. Retrieved from http://eric.ed.gov/?id=ED539251

Conley, D. T. (2012). A complete definition of college and career readiness. Eugene, OR: Educational Policy Improvement Center. Retrieved from http://www.epiconline.org/

Conrad-Curry, D. (2011). A four-year study of ACT reading results: Achievement trends among eleventh-grade boys and girls in a midwestern state. Journal of Education, 191(3), 27-37.

Crisp, G., & Delgado, C. (2014). The impact of developmental education on community college persistence and vertical transfer. Community College Review, 42(2), 99. http://doi.org/10.1177/0091552113516488

Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. Washington, DC: National Assessment Governing Board.

History. (n.d.). On the ACT, Inc. website. Retrieved from http://www.act.org/content/act/en/about-act/history.html

Holschuh, J. P., & Paulson, E. J. (2013). The terrain of college developmental reading: Executive summary and paper commissioned by the College Reading and Literacy Association. Retrieved from http://www.crla.net/images/whitepaper/TheTerrainofCollege91913.pdf

Horn, L. J., Peter, K., & Rooney, K. (2002).
Profile of undergraduates in U.S. postsecondary educational institutions: 1999-2000 (Statistical Analysis Report NCES 2002-168). Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2002/2002168.PDF

Howell, J. S., Kurlaender, M., & Grodsky, E. (2010). Postsecondary preparation and remediation: Examining the effect of the Early Assessment Program at California State University. Journal of Policy Analysis and Management, 29(4), 726-748.

Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges (CCRC Working Paper No. 19, Assessment of Evidence Series). New York: Columbia University, Teachers College, Community College Research Center.

Joint Committee on Testing Practices. (2004). Code of fair testing practices in education. Washington, DC: Author.

Kobrin, J. (2007). Determining SAT benchmarks for college readiness (College Board Research Note RN-30). New York: The College Board.

Lanier, C. W. (1994). ACT composite scores of retested students (ACT Research Report Series, 94-3). Iowa City, IA: ACT, Inc.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), The handbook of qualitative research (2nd ed., pp. 163-188). London: Sage.

McNabb, T. (1990). Course placement practices of American postsecondary institutions (ACT Research Report Series, 89-5). Iowa City, IA: ACT, Inc.

McNabb, T. (1991). Course placement practices of American postsecondary institutions (ACT Research Report No. 90-10). Iowa City, IA: ACT, Inc.

McNeish, D., Radunzel, J., & Sanchez, E. (2015). A multidimensional perspective of college readiness: Relating student and school characteristics to student performance on the ACT (ACT Research Report Series No. 6). Iowa City, IA: ACT, Inc.

Mealey, D., & Nist, S. (1989). Postsecondary, teacher directed comprehension strategies. Journal of Reading, 32(6), 484-493.

Merisotis, J. P., & Phipps, R. A. (2000). Remedial education in colleges and universities: What's really going on? The Review of Higher Education, 24(1), 67-85.

National Center for Educational Statistics. (2015). How is grade point average calculated? Washington, DC: U.S. Department of Education.

National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards. Washington, DC: Authors.

NCES. (2004). The NCES fast facts. National Center for Education Statistics. Retrieved March 26, 2016, from https://nces.ed.gov/fastfacts/display.asp?id=1

NCME Ad Hoc Committee on the Development of a Code of Ethics. (1995). Code of professional responsibilities in educational measurement. Washington, DC: National Council on Measurement in Education.

Noble, J. (1985). Estimating reading skill from ACT assessment scores (ACT Research Report No. 88). Iowa City, IA: ACT, Inc.

Noble, J. (1991). Predicting college grades from ACT assessment scores and high school coursework and grade information (ACT Research Report No. 91-3). Iowa City, IA: ACT, Inc.

Noble, J., Davenport, M., Schiel, J., & Pommerich, M. (1999). Relationships between noncognitive characteristics, high school course work and grades, and test scores of ACT tested students (ACT Research Report No. 99-4). Iowa City, IA: ACT, Inc.

Noble, J., & McNabb, T. (1989). Differential coursework and grades in high school: Implications for performance on the ACT assessment (ACT Research Report Series, 89-5). Iowa City, IA: ACT, Inc.
Organization for Economic Cooperation and Development (OECD). (2007). Education at a glance: 2007. Author. Retrieved from http://www.oecd.org/edu/skills-beyond-school/educationataglance2007-home.htm

Parsad, B., Lewis, L., & Greene, B. (2003). Remedial education at degree-granting postsecondary institutions in fall 2000: Statistical analysis report (NCES 2004-101). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Radunzel, J., & Noble, J. (2012). Predicting long-term college success through degree completion using ACT composite score, ACT benchmarks, and high school grade point average (Research Report 2012-5). Retrieved from http://media.act.org/documents/ACT_RR2012-5.pdf

Report in brief: NAEP 1996 trends in academic progress. (2000, August 22). Retrieved March 26, 2016, from https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=97986rev

Sawyer, R. (1989). Validating the use of ACT assessment scores and high school grades for remedial course placement in college (ACT Research Report Series No. 89-4). Iowa City, IA: ACT, Inc. Retrieved from https://eric.ed.gov/?id=ED322163

Sawyer, R. (2008). Benefits of additional high school course work and improved course performance in preparing students for college (ACT Research Report Series, 2008-1). Iowa City, IA: ACT, Inc. Retrieved from http://eric.ed.gov/?id=ED510472

Stahl, N. A., & King, J. R. (2009). A history of college reading. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 3-25). New York: Routledge.

Strauss, V. (2012, September 24). Why ACT overtook SAT as top college entrance exam. The Washington Post. Retrieved from https://www.washingtonpost.com/blogs/answer-sheet/post/how-act-overtook-sat-as-the-top-college-entrance-exam/2012/09/24/d56df11c-0674-11e2-afff-d6c7f20a83bf_blog.html

Strong American Schools. (2008). Diploma to nowhere. Washington, DC: Author. Retrieved from http://www.deltacostproject.org/resources/pdf/DiplomaToNowhere.pdf

Tabachnick, B. G., & Fidell, L. S. (2014). Using multivariate statistics. London: Pearson.

U.S. Department of Education. (2010). A blueprint for reform: The reauthorization of the Elementary and Secondary Education Act. Retrieved from http://www2.ed.gov/policy/elsec/leg/blueprint/blueprint.pdf

Wathington, H. D., Barnett, E. A., Weissman, E., Teres, J., Pretlow, J., & Nakanishi, A. (2011). Getting ready for college: An implementation and early impact study of eight Texas developmental summer bridge programs. New York: National Center for Postsecondary Research.

Wilkins, C., Hartman, J., Howland, N., & Sharma, N. (2010). How prepared are students for college-level reading? Applying a Lexile-based approach (REL Technical Brief, REL 2010-No. 094). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest.

Wirt, J., Choy, S., Rooney, P., Provasnik, S., Sen, A., & Tobin, R. (2004). The condition of education 2004 (NCES 2004-077). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Wyatt, M. (1992). The past, present, and future need for college reading courses in the U.S. Journal of Reading, 36(1), 10-20. |
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6d26sfj |



