The Graduate School - University of Utah

GRADUATE COUNCIL REPORT TO THE SENIOR VICE PRESIDENT FOR HEALTH SCIENCES AND THE ACADEMIC SENATE

April 25, 2005

The Graduate Council has completed its review of the Department of Pharmaceutics and Pharmaceutical Chemistry. The external reviewers were:

Jessie L.-S. Au, Pharm.D., Ph.D. (Chair)
Distinguished University Professor
Colleges of Pharmacy, Medicine and Engineering
The Ohio State University

Kinam Park, Ph.D.
Professor
College of Pharmacy
Purdue University

David A. Tirrell, Ph.D.
Professor and Chair
Division of Chemistry and Chemical Engineering
California Institute of Technology

The Internal Review Committee of the University of Utah included:

Charles Grissom, Ph.D.
Professor
Department of Chemistry

Martin Rechsteiner, Ph.D.
Professor
Department of Biochemistry

Randall Stewart, Ph.D.
Associate Professor
Department of Languages and Literature

This report by the Graduate Council's ad hoc review committee is based on the Department of Pharmaceutics and Pharmaceutical Chemistry self-study (December 1, 2004), the report of the three external reviewers and the exit interview with them (December 2004), the report of the internal reviewers (December 2004), and the co-response of the Chair of the Department and the Dean of the College of Pharmacy (February 2005).

DEPARTMENT PROFILE

Overview

The Department of Pharmaceutics and Pharmaceutical Chemistry is one of four departments in the College of Pharmacy at the University of Utah. The Department offers degree programs leading to the Master of Science (M.S.), Master of Philosophy (M.Phil.), and Doctor of Philosophy (Ph.D.), all in Pharmaceutics. The Department does not offer an undergraduate degree, but it contributes to the teaching of three courses in the professional Doctor of Pharmacy (Pharm.D.) curriculum; pharmaceutics is a major component of the professional pharmacy curriculum.
The departmental missions are to advance research in the areas of pharmaceutical chemistry and drug delivery, and to provide excellent educational opportunities for all students it teaches. The Department has attained international recognition and an outstanding reputation in pharmaceutical research, standing at the forefront of research on drug delivery and biomaterials. The Department also maintains a graduate program that is ranked within the top tier of pharmaceutics graduate programs in the United States. The Department has evolved over time, generally reflecting changes within the discipline at the national and international levels. Currently the Department's focus is on integrating molecular medicine and computational biology to advance the knowledge base in the delivery of small and large molecules. The evolution of the graduate program and research foci provides the foundation for maintaining momentum and scientific leadership in the drug delivery area. According to Dr. Au, who was an external reviewer for both this review and the prior review of 1998, the Department appears to be in better shape than it was 7 years ago. She describes it as an outstanding Department poised to continue building on its strengths, and one that has been responsive to concerns raised in the last review. In general, the external reviewers state that the scale of the program is sufficient and provides the critical mass necessary for achieving academic excellence.

Professor Jindrich Kopecek led the Department from July 1999 to July 2004. Since July 1, 2004, the Department has been headed by an interim chair, Steven Kern, who joined the Department four years ago and is currently an Assistant Professor. While Professor Kern has done a commendable job as interim chair, the need for a permanent chair is universally seen as the number one priority for the Department. It is highly unusual for a non-tenured assistant professor, still developing as a researcher, to serve as chair.
Without more appropriate leadership, the Department risks eroding faculty morale, stagnating rather than growing, losing faculty or facing difficulty in hiring new faculty, and losing funding opportunities. The Department has recently attained an endowed chair from the George S. and Dolores Dore Eccles Foundation that will assist in the recruitment of a world-class scholar to head the Department, someone who can bring a new perspective and new critical mass to the faculty core.

Faculty

There are a total of 13 regular faculty members, including 4 distinguished professors, 2 professors (one of whom is the Dean of the College), 2 associate professors, and 5 assistant professors. There are also 5 research faculty members and 24 adjunct faculty members. Of the 13 regular faculty members, two are women and six are of Asian ethnicity, which is reflective of the general population at peer schools in terms of gender and minority representation. The senior faculty is internationally renowned. The junior faculty members have excellent training and expertise, and some have already shown great promise as outstanding researchers and scholars. Most of the faculty members are currently engaged in two core areas of research: macromolecular therapeutics, and biomolecular and cellular pharmaceutics. These areas of expertise reflect recent growth within the Department and expansion into drug delivery efforts that interweave pharmaceutical chemistry with biology and physiology. The Department anticipates that future growth and maturation in these areas will result in a single research emphasis that primarily reflects biologically based drug delivery. The level of scholarly activity (especially that of the senior faculty), including the number of publications and external research funding, is impressive and high compared to peer schools.
The level of research support has been increasing over the last several years as more junior faculty begin to obtain significant continuous funding for their research. Several senior faculty members have launched start-up biotech companies, which have led to the creation of about 500 new high-tech jobs in the State of Utah. Furthermore, the Department has been organizing the International Symposium on Recent Advances in Drug Delivery Systems (commonly known as the Utah meeting) for the last 24 years.

Faculty morale is generally high, and collegiality is apparent. The expectations for teaching, research and service seem well defined and balanced. A formal mentoring program for junior faculty members is in place, and several junior faculty members have indicated that it has been very helpful. The Department fully realizes that it is incumbent on senior faculty to continue to support and mentor junior faculty so that their transition to full research productivity and attainment of tenure is achieved before the phased retirement of the senior faculty members. The collective presence of several distinguished pharmaceutical scientists in the Department has been the key to its success. The upcoming retirement of these senior faculty members and the relatively large number of untenured junior faculty members have created an urgent need for a new chair to provide the scientific and administrative leadership to maintain and promote the standing of the Department in the University and in the pharmaceutical science community.

The policy on Retention, Promotion and Tenure (RPT), although in place, is not well communicated to many of the junior faculty members, according to the external reviewers. There is a concern that the Department does not have the opportunity to present or represent its candidates to the college RPT committee.
A second concern is that Department and College expectations are not always aligned, resulting in ambiguity and confusion for some of the untenured faculty members. The average salaries for professors at all ranks are low relative to peers with comparable levels of accomplishment and productivity. This represents a potential problem for attracting and retaining outstanding senior and junior faculty.

Curriculum

The Department grants only graduate degrees, accepting students only into the doctoral degree program. Students on average take 5 to 6 years to graduate. The Department does offer a terminal Master of Science degree for students who are unable to complete their doctoral program but have completed the Department core classes, passed at least the written comprehensive exam, and completed enough research to represent one published manuscript. Tenure-track faculty members are responsible for teaching in both the professional Pharm.D. degree program (3 classes) and the Department graduate core curriculum (5 classes). The professional student classes are taught primarily by junior and mid-level faculty members. The Department also participates in the interdepartmental graduate programs in Biological Chemistry and Molecular Biology.

The excellent leadership of the past chair brought consensus on the core curriculum, resolving one of the major concerns pointed out in the previous review. The Department has done an impressive job of improving the quality of the program. The graduate core curriculum has been redesigned to reflect the new research direction and strengths of the Department faculty and of the field at large. The curriculum seems to be keeping pace with an ever-changing field. The Department offers diverse courses covering all aspects of drug delivery and biomaterials, and relevant related topics. Students have excellent opportunities to learn state-of-the-art information taught by leaders in the field.
Currently there are 7 required core courses taken by all graduate students. It has been suggested that the total number of these core courses be adjusted so that course selection can be better tailored to individual students, whose backgrounds and research interests may differ widely.

Students

There are currently 34 graduate students. The pool of students seems to be quite strong. Intellectually talented and creative, students enter the program from diverse undergraduate backgrounds. GRE scores are generally in the 75th to 99th percentile. The Department relies on several parallel approaches for attracting highly talented students. Of these, the summer internship program seems to work very well for recruiting graduate students, and it is highly recommended that the program be continued and possibly expanded. The current pool of graduate students represents many nationalities and both genders; about half are international, mainly from China and India. The number of graduate students has increased, and the students appear quite satisfied with the general quality of education they are receiving. They are also pleased with their research opportunities, course work, and job prospects. Strong employment placement and prospects for graduating students reflect the high quality of their education. Graduates have established themselves as highly regarded scientists, assuming positions in academic institutions and scientific and managerial positions in local, national, and international pharmaceutical companies.

Current student support comes entirely from research grants, either as Graduate Fellows or Graduate Research Assistants; there are no formal teaching assistant (TA) positions. The lack of TA positions raised a concern for the external reviewers. TA positions are needed not only for teaching undergraduate students, but also as a mechanism for supporting new graduate students during their first and/or second year of the graduate program.
All graduate students, though, are expected to teach in at least one department course (either graduate or professional pharmacy) during their studies, which amounts to asking research assistants to take on TA duties. Students who serve in teaching assignments more than once are provided with a supplement to their research assistantship stipend. There seems to be little in-house training and/or mentoring for students in their teaching responsibilities.

Facilities and Resources

Office and laboratory space for the faculty is spread among three buildings: the Skaggs Building, the Biopolymer Building, and Research Park. The laboratory space is adequate, though maintenance and infrastructure issues with the Research Park facility were cited. However, the division of departmental space among three buildings represents an obstacle to faculty and student interaction and cohesiveness, and may become a limiting factor for the Department. The three separate facilities make sharing of equipment difficult. Communication between the distant facilities is limited, especially when the Internet is down and e-mail is unavailable. Students, faculty and staff encounter problems daily because they cannot easily interact with each other face to face. The three separate building sites may also prevent the Department from achieving greater heights and, from a more practical standpoint, from taking advantage of the unprecedented opportunities and programmatic initiatives offered by the National Institutes of Health in translational research and therapy development, two areas where the Department has significant strengths.

Budgetary constraints have affected the operation of all areas of the Department. The operating budget does not adequately support program needs. Faculty members have absorbed numerous costs that the Department had traditionally paid, including office supplies. The existing secretarial support for the Department consists of five full-time and two 3/4-time staff.
The internal reviewers considered this inadequate for a staff that deals with and supports the Chair, faculty, graduate students, and numerous research personnel. Each physical location has a very small library of journals and discipline-specific scientific books, or none at all. The Eccles Health Sciences Library holds only a limited number of the required materials. There is great reliance upon on-line journals, which provide up-to-date articles when subscriptions are available, but these resources are very inadequate according to the internal reviewers.

COMMENDATIONS

1. The Department is recognized internationally for its excellence in research, and ranks among the best programs in pharmaceutics, with outstanding accomplishments in drug delivery. While this recognition is largely built upon the reputation and funded research of the senior faculty, the junior faculty members are beginning to establish themselves and their research portfolios. Recent hires, since the last review, bring state-of-the-art research expertise and are poised to make excellent contributions to academic excellence in the Department.

2. The Department has been responsive to the last review and, to the extent afforded by the available resources, has implemented the recommendations put forth by the external and internal reviewers.

3. The senior faculty members should be commended for their efforts and successes in mentoring the junior members of the Department.

4. The immediate past chair provided critical and able leadership and guided the Department through a renewal process that enabled the integration of biology and molecular medicine into the research and teaching programs, which in turn has provided an excellent foundation for the program to continue to excel as one of the top graduate programs in the country.
The current interim chair, who assumed the position July 1, 2004, is also to be commended for his commitment and skillful management in keeping the Department on course and maintaining academic excellence.

5. The Department has recently secured an endowed chair to facilitate the hiring of a new chairperson of significant stature. This is the first endowed chair in the College of Pharmacy, and speaks to the Dean's strong support for the Department.

6. The graduate program has been growing steadily since the last review. The ability to provide quality training to graduate students, leading to pharmaceutics-related jobs for nearly all of its graduates, continues to be a major strength of the Department.

7. Morale in the Department is quite high, especially among the junior faculty and graduate students, who enjoy a great deal of camaraderie within their ranks.

8. Some faculty members have been very successful in technology transfer. Their efforts in this arena have led to the creation of several biotech start-ups and companies, which in turn has had a positive impact on the State economy.

9. The summer internship program has been very effective in recruiting highly talented students to the graduate program.

RECOMMENDATIONS

1. The timely completion of the search for a new Department chairperson is strongly recommended. The external review team saw this as a critical necessity.

2. Continued efforts should be made to consolidate the physical facilities into one building or centralized location. All options for co-location of departmental activities should be explored, and the longer-range objective of raising funds for a new building aggressively pursued. Until this becomes a reality, though, immediate attention should be given to providing better basic infrastructure support in Research Park.

3.
Although the Department has made significant strides in revising and modernizing its core curriculum, the hiring of new faculty members in the past few years has created opportunities for further curriculum development. The Department should continue to examine its graduate course offerings to ensure high quality, appropriate depth and breadth, an appropriate balance of core requirements within a student's experience, and inclusion of the most important and timely subjects for graduate education.

4. The Department should examine the following issues regarding the status of teaching assistants: a) graduate student stipends and health benefits are not uniform among research groups; b) the fact that there are no departmental TA positions was seen as a detriment; and c) while all graduate students are expected to teach, there seems to be little training for their teaching responsibilities.

Submitted by the Ad Hoc Review Committee of the Graduate Council

Stephen Koester (Chair), Modern Dance
Lynne Schrum, Teaching and Learning
Harris Sondak, Management

Memorandum of Understanding
Department of Pharmaceutics and Pharmaceutical Chemistry
Graduate Council Review 2004-05

This memorandum of understanding is a summary of decisions reached at a wrap-up meeting on August 25, 2005, concluding the Graduate Council Review of the Department of Pharmaceutics and Pharmaceutical Chemistry. A. Lorris Betz, Senior Vice President for Health Sciences; John W. Mauger, Dean of the College of Pharmacy; Steven E. Kern, Interim Chair of the Department of Pharmaceutics and Pharmaceutical Chemistry; David S. Chapman, Dean of the Graduate School; and Frederick Rhodewalt, Associate Dean of the Graduate School were present.
The discussion centered on, but was not limited to, the recommendations contained in the Graduate Council review completed on April 25, 2005, which addressed the following issues: (1) completion of the search for a new chairperson, (2) consolidation of physical facilities, (3) curriculum development, and (4) teaching assistant issues. At the wrap-up meeting, the working group agreed to endorse the following actions:

Recommendation 1: The timely completion of the search for a new Department chairperson is strongly recommended. The external review team saw this as a critical necessity.

An outside search for a new department chairperson is currently underway. Five candidates have been invited to interview on campus this fall, and an offer will be made early in the spring 2006 semester.

Recommendation 2: Continued efforts should be made to consolidate the physical facilities into one building or centralized location. All options for co-location of departmental activities should be explored, and the longer-range objective of raising funds for a new building aggressively pursued. Until this becomes a reality, though, immediate attention should be given to providing better basic infrastructure support in Research Park.

Although it is not possible to consolidate all laboratories and offices into one building at this time, laboratories are being relocated so that working groups across the Health Sciences are located in contiguous areas. The College, in collaboration with the Senior Vice President for Health Sciences, is taking steps to locate funding for a new building to house the departments in the College of Pharmacy.

Recommendation 3: Although the Department has made significant strides in revising and modernizing its core curriculum, the hiring of new faculty members in the past few years has created opportunities for further curriculum development.
The Department should continue to examine its graduate course offerings to ensure high quality, appropriate depth and breadth, an appropriate balance of core requirements within a student's experience, and inclusion of the most important and timely subjects for graduate education.

The Department is pursuing several activities to address this set of recommendations. First, it is presently evaluating and revising the 2001 curriculum, the curriculum currently in place. As part of this evaluation, the Department is seeking feedback from alumni and industry sponsors. Second, attention is being given to new faculty hires who complement programmatic and curricular objectives. Third, the Department is forming an industrial advisory board to provide input into future department growth and planning.

Recommendation 4: The Department should examine the following issues regarding the status of teaching assistants: a) graduate student stipends and health benefits are not uniform among research groups; b) the fact that there are no departmental TA positions was seen as a detriment; and c) while all graduate students are expected to teach, there seems to be little training for their teaching responsibilities.

The Department is striving to reduce the discrepancy among teaching assistant stipends and support. It is requested that the Department develop a five-year plan to improve the training of graduate students as teachers. This plan will consider a) offering a wider range of teaching opportunities than is currently available, b) exploring partnerships with the Center for Teaching and Learning Excellence for teacher training, and c) recognizing enrollment in teaching preparation courses as fulfilling elective requirements. The Department will report its progress in these areas in its annual report to the Graduate School.

This memorandum of understanding is to be followed by annual letters of progress from the Department Chair to the Dean of the Graduate School.
Letters will be submitted each year until all of the actions described in the preceding paragraphs have been completed.

A. Lorris Betz
Steven E. Kern
John W. Mauger
Frederick Rhodewalt

David S. Chapman
Assoc. V.P. for Graduate Studies
Dean, The Graduate School

September 14, 2005

The Graduate School - University of Utah

GRADUATE COUNCIL REPORT TO THE SENIOR VICE PRESIDENT FOR ACADEMIC AFFAIRS AND THE ACADEMIC SENATE

April 24, 2006

The Graduate Council has completed its review of the College of Nursing. The external reviewers were:

Karen L. Carlson, Ph.D., R.N.
Associate Dean and Professor
College of Nursing
University of New Mexico

Helen R. Connors, Ph.D., R.N., F.A.A.N.
Associate Dean and Professor
School of Nursing
University of Kansas

Kristen M. Swanson, Ph.D., R.N., F.A.A.N.
Chair and Professor
Family and Child Nursing
University of Washington

The internal review committee of the University of Utah was composed of:

Diana G. Pounder, Ph.D.
Professor and Chair
Department of Educational Leadership and Policy

Steven T. Roens, D.M.A.
Professor
School of Music

Debra L. Scammon, Ph.D.
Professor
Department of Marketing

This report by the Graduate Council's ad hoc review committee is based on the College of Nursing self-study, the report of the three external reviewers and the exit interview with them, the report of the three internal reviewers, and the response from the Dean of the College of Nursing dated March 15, 2006.

COLLEGE OF NURSING PROFILE

Overview

The College of Nursing (CON) at the University of Utah has held college status since 1948, after operating as a Department of Nursing Education in the School of Education beginning in 1941. It is supported by the University of Utah's central administration and the Health Sciences Center, and promotes the three-fold mission of teaching, research, and practice of the Health Sciences.
Through its "two informal divisions" (self-study, page 11), Acute and Chronic Care, and Health Systems and Community-Based Care, it offers two upper-division bachelor's degrees (traditional and accelerated) and an R.N.-to-B.S. degree; two M.S. degrees, Nursing and Gerontology; and both an on-campus and a distance Ph.D. Extramural, intramural and private foundation funding support the College's research mission, which is also promoted through the Emma Eccles Jones Nursing Research Center located in CON. Research funding has increased from $80,000 to more than $9,930,000 over the past ten years, and senior scientists, who hold five endowed chairs, direct research interest groups and mentor junior faculty. The College follows a faculty practice plan that supports six faculty practice and student placement sites, such as the Stansbury Community Center and University of Utah Health Services.

The College administration includes the dean; three associate deans (Research, Academic Programs, Information and Technology); two assistant deans (Finance and Administration, Clinical Affairs); and two division chairs. While the division chairs oversee performance reviews and faculty assignments, they do not control their own budgets, making CON in essence a one-department college. However, the College and its faculty are currently reviewing a proposal for reorganization of the two divisions (CON response, page 2). Similarly, the College is evaluating the role of its Gerontology program, which the reviews describe as disconnected from the CON organizational structure and curriculum. Faculty and students in the program consider gerontology an ill fit for the College of Nursing, as the majority of students are associated with Health or Social and Behavioral Science. However, neither college is currently interested in taking in Gerontology, nor does the program generate sufficient resources to stand on its own.
The College enjoys an "outstanding reputation at the community, regional and national levels" (external review report, page 1), and many students apply to the CON programs for that reason. Furthermore, the College is committed to strategic planning that responds to national and state issues, and to quality improvement. Its top priority is to become one of the top research colleges in the nation.

Faculty

The College of Nursing has a total of 91 faculty, of which 29 are tenure-track, 36 are clinical and research, and 26 are part-time adjunct. Since 2000, the tenure-track faculty headcount has increased slightly from 26 to 27, and currently includes 12 at the full, 10 at the associate, and 7 at the assistant professor level. The reviews describe the faculty as well prepared and committed to the College's strong research culture. For example, the College supports junior faculty development by reducing their teaching load during their initial three years, offers financial packages for research to new faculty, and provides some summer funding. The College has a good funding record and, moreover, has identified strategies to "break through to the next level of center grants" (internal review report, page 4). Although the College expects all faculty to publish, the internal review notes that faculty in administrative positions face some difficulty in meeting the publication standards, and that part-time faculty are not assigned any FTE for scholarship. The external review reports that some faculty members expressed concern about workload inequity and a lack of recognition, for example for curricular contributions. While faculty overall are committed to the stated mission of the College and the University, those associated with Gerontology are disconnected from their peers.
They are unsatisfied with their role in relation to the Center on Aging, the CON, and the university as a whole, and perceive a lack of recognition for the historical and current contributions of the Gerontology program. Although CON has been able to recruit a critical mass of junior faculty, the severe shortage of nursing faculty presents a major challenge for the College of Nursing at the University of Utah (as it does for institutions across the country). Salaries below the 75th percentile according to the external review, and below the 50th according to the internal review, make CON vulnerable to recruitment of its faculty by other institutions. Both the external and the internal reviews express concern that these factors may negatively affect the College's ability to maintain and enhance its teaching and research strengths. They also contribute to the lack of diversity in the College, as currently only six faculty members come from racial/ethnic minorities.

Students

The College of Nursing is able to select students from a pool of strong applicants across all programs, and has received HRSA Diversity and Bennion Center grants to support the recruitment and education of racial/ethnic minorities. CON has experienced its greatest growth in pre-majors, from 289 in 2000-2001 to 459 as of November 2005. The number of doctoral students has increased from 28 to 46, with a handful of them receiving their doctoral degrees each year. With the exception of Gerontology, students in all programs receive adequate in-person and on-line advising and are able to give input into the curriculum and other issues that affect them. Undergraduate and graduate students are very satisfied with the quality of instruction, the faculty, and the supervision they receive through the Clinical Faculty Associate program, a cooperative program with clinical agencies. CON implemented a new plan for TA training in Fall 2005, with required participation in CTLE seminars.
The internal reviewers recommend that the clinical faculty associates who support the TA training be included in formal teacher-training and evaluation processes.

Students voiced some dissatisfaction with the service-learning component of their program, which they perceive as merely an add-on rather than a well-integrated experience. It should be noted that a task force, appointed in March 2005, has made recommendations for improvements to the service-learning program, which will be implemented in Fall 2006 (CON response, page 2). The internal reviewers state some concern about the "C or better" requirement for prerequisites, which may leave some students inadequately prepared for key courses such as pharmacology, pathophysiology, and clinical rotations. In a similar vein, the internal review points out that Gerontology has accepted students with fairly low GRE scores, apparently relying on the 3.2 GPA requirement to assess applicants' potential rather than their exam scores.

Curriculum

The College of Nursing offers a wide variety of programs that range from preparing entry-level nurses to educating nurse scientists; many of them are supported through partnerships with health resources in the community, for example Intermountain Health Care and the Veterans Administration Health System. The traditional B.S. in Nursing, an accelerated version to meet the critical need for nurses, and an on-line R.N.-to-B.S. undergraduate degree are all designed as four-semester programs. While CON competes with other undergraduate programs in the state, it is the only one that offers graduate degrees, which include 15 different M.S. tracks (with unique nursing informatics and midwifery and women's health programs) and an on-line as well as an on-campus Ph.D. In response to the nursing faculty shortage, CON is currently focusing growth on the Teaching in Nursing M.S. track. It also houses an interdisciplinary M.S.
in Gerontology, which, according to both review teams, needs to be rethought and reinvigorated. Nurses need to be prepared to care for a fast-growing older population, and, furthermore, the program aligns with the University of Utah's current focus on interdisciplinary programs, attracting students from across disciplines to its undergraduate and graduate certificate programs. The reviewers emphasize the need for Gerontology to forge strong connections to the Center on Aging, CON, the Health Sciences, and the University of Utah as a whole. The Ph.D. programs prepare nurse scientists in a research-methods-intensive curriculum. The innovative distance Ph.D. program, with a focus on oncology, is considered a unique model for doctoral study that limits financial commitment from the host institution. The college is also exploring the development of a Nurse Practitioner Doctorate (D.N.P.) that would be available for students with an M.S. in one of the nurse practitioner specialties.

Program Effectiveness - Outcomes Assessment

The College of Nursing has a comprehensive assessment plan in place that includes capstones, exit surveys, licensure and accreditation pass rates, meetings with area employers about the competency of graduates, and program advising. The reviewers consider the College's process evaluation and quality improvement procedures effective, and find the learning outcomes to be clearly articulated. Less clearly stated, according to the external reviewers, is the link between assessed students' abilities and end-of-program objectives. In its response, CON agrees that it needs to ascertain how the learning outcomes map to program outcomes, for example by conducting a survey of employers of graduates as part of a more comprehensive plan that links outcomes to objectives.

Facilities

The College of Nursing has been in its current building since 1969 and is in dire need of substantial upgrades.
The Nursing building is substandard in many areas, and potentially a safety hazard in relation to fire codes, emergency exits, and seismic stability. Space presents a serious problem, as several part-time faculty members currently share offices, and laboratories are insufficient for instruction and simulation of clinical procedures. A comprehensive master plan has led to some upgrades, including the remodeling of the 5th floor to house the Research Center and the Center on Aging. However, as the self-study and the reviewers suggest, much more is necessary to create a safe and functional space. Library resources and access to electronic resources are sufficient, and the College takes advantage of its proximity to the Health Sciences and the new HSEB, with state-of-the-art classrooms, computer centers, and areas that promote interaction.

COMMENDATIONS

1. The College has strong leadership, a thoughtful and well-articulated mission, and a strategic plan that responds well to changes in nursing education, practice, and research.

2. The College has successfully focused on its research mission by substantially increasing extramural funding, acquiring five endowed chairs (and soon a sixth) to enhance scholarly productivity, and providing strong support for faculty development.

3. CON has successfully positioned itself as a leader in nursing education in the state of Utah. It has recognized that the key to addressing the well-known nursing shortage is to train students at the graduate level to become nursing faculty and leaders.

4. CON offers high-quality programs across all levels, with successful use of instructional technology and innovative teaching ideas. The successful distance Ph.D. program in oncology uses real-time video-conferencing to create a unique community of learning. It has become a model for doctoral study around the country, and will heighten the prestige of the College regionally and nationally.

5.
The College has successfully built clinical partnerships to enhance the education and placement of its students across all levels. In particular, the Clinical Faculty Associate program has significantly enhanced the training of the undergraduates, who are mentored and supervised by employed nurses.

RECOMMENDATIONS

1. The College should be supported in its given authority and responsibility to provide direction and oversight of the Gerontology Program. The College as a whole operates cohesively and with a strong commitment to its mission, and the Gerontology Program needs to be clearly articulated and integrated within the College of Nursing. The focus for this interdisciplinary program should be on coordination and collaboration to increase student enrollment and program visibility throughout the University.

2. The College should make diversity of faculty and students a top priority by seeking grants that specifically target the recruitment and retention of minority faculty, similar to the current HRSA and Bennion Center grants for student recruitment and financial and educational support. The College should work closely with the Associate Vice President for Diversity and articulate its commitment to diversity strongly and visibly.

3. The College should continue to review its range of program offerings with an eye to changing market needs and internal efficiencies such as consolidation.

4. In order to compete in the nursing faculty market and to retain its current faculty, the College must find ways to increase its salaries. One suggested strategy is that the College address salary issues in the context of discussions on the consolidation of its graduate programs.

5. The College should continue its efforts to secure external funding, and pursue internal strategies that will provide support incentives such as pilot and bridge grants. If possible, the College should raise the current intramural maximum of $3,000 to at least $7,500.

6.
The College should define how it measures its stated learning outcomes and devise strategies for using results to improve curricula and programs. Faculty should participate in this process of "closing the feedback loop."

Submitted by the Ad Hoc Review Committee of the Graduate Council:
Johanna Watzinger-Tharp (Chair), Department of Languages and Literature
Lynne Schrum, Department of Teaching and Learning
Jingyi Zhu, Department of Mathematics
Sharon Aiken-Wisniewski (Undergraduate Council), University College

Memorandum of Understanding
College of Nursing Graduate Council Review 2005-06

This memorandum of understanding is a summary of decisions reached at a wrap-up meeting on May 30, 2006, concluding the Graduate Council Review of the College of Nursing. A. Lorris Betz, Senior Vice President for Health Sciences; Maureen R. Keefe, Dean of the College of Nursing; David S. Chapman, Dean of the Graduate School; and Frederick Rhodewalt, Associate Dean of the Graduate School were present. The discussion centered on but was not limited to the recommendations contained in the Graduate Council review completed on April 24, 2006. At the wrap-up meeting, the working group agreed to endorse the following actions:

Recommendation 1: The College should be supported in its given authority and responsibility to provide direction and oversight of the Gerontology Program. The College as a whole operates cohesively and with a strong commitment to its mission, and the Gerontology Program needs to be clearly articulated and integrated within the College of Nursing. The focus for this interdisciplinary program should be on coordination and collaboration to increase student enrollment and program visibility throughout the University.

The Senior Vice President for Health Sciences expects and supports the College of Nursing exercising its responsibility in addressing issues related to the Gerontology Program.
A new director has been appointed, and the College has instructed the program to develop a strategic plan that addresses enrollment growth and integration with the College of Nursing.

Recommendation 2: The College should make diversity of faculty and students a top priority by seeking grants that specifically target the recruitment and retention of minority faculty, similar to the current HRSA and Bennion Center grants for student recruitment and financial and educational support. The College should work closely with the Associate Vice President for Diversity and articulate its commitment to diversity strongly and visibly.

The College of Nursing gives this recommendation its highest priority and is actively developing multiple strategies to increase and sustain diversity among faculty, staff, and students. The College is currently assessing the effectiveness of its current outreach programs (HRSA and Bennion Center funded initiatives) as well as proposing a pre-nursing LEAP seminar. The College is attempting to develop an in-house doctoral-level pool for faculty recruitment.

Recommendation 3: The College should continue to review its range of program offerings with an eye to changing market needs and internal efficiencies such as consolidation.

The College has completed an evaluation of its specialization areas. It is attempting to cluster and consolidate areas as part of the ongoing transition to offering the Doctor of Nursing Practice (DNP) degree. The College has discontinued student admissions in two areas, Community Health Nursing and Patient Care Service Administration, as part of the consolidation and repositioning for the DNP degree program.

Recommendation 4: In order to compete in the nursing faculty market and to retain its current faculty, the College must find ways to increase its salaries.
One suggested strategy is that the College address salary issues in the context of discussions on the consolidation of its graduate programs.

The College is conducting a salary analysis for the purpose of developing a plan, but sees no obvious immediate solution. Funding from the State Nursing Initiative has been used to increase some base salaries, but this source is limited by the fact that dollars must be matched 1-to-2 by the hospitals. Some of this funding has been used to support the hiring of new FTEs. The College notes that it is ranked at the top in number of endowed chairs among state-supported colleges of nursing.

Recommendation 5: The College should continue its efforts to secure external funding, and pursue internal strategies that will provide support incentives such as pilot and bridge grants. If possible, the College should raise the current intramural maximum of $3,000 to at least $7,500.

With the support of the Senior Vice President for Health Sciences, the College Research Center has raised proposal initiative seed grants to $7,500.

Recommendation 6: The College of Nursing should define how it measures its stated learning outcomes and devise strategies for using results to improve curricula and programs. Faculty should participate in this process of "closing the feedback loop."

The College of Nursing reports that it has clearly specified learning objectives and is working to coordinate outcomes assessments with these objectives. The College will provide documentation in its follow-up reports about how it incorporates feedback into its operations. The College plans to undertake an evaluation of its outcomes assessment model.

This memorandum of understanding is to be followed by annual letters of progress from the Department Chair to the Dean of the Graduate School. Letters will be submitted each year until all of the actions described in the preceding paragraphs have been completed.

A. Lorris Betz
Maureen R. Keefe
David S. Chapman, Dean, The Graduate School
Frederick Rhodewalt, Assoc. V.P. for Graduate Studies

August 1, 2006

Departmental Student Language Proficiency Outcome Assessment Proposal
Department of Languages and Literature
The University of Utah
Spring 2006

I. Purpose

To contribute to the efforts for student outcomes assessment at the University of Utah, the Department of Languages and Literature proposes to implement, as part of its overall department-wide student learning outcomes assessment, its language proficiency outcome assessment program beginning the spring semester of 2006. In resonance with the articulated goals of the University's commitment to assessment, the department's language proficiency assessment program is aimed at identifying the extent to which the undergraduate and graduate students taking language classes are progressing in a timely manner, whether they are learning what the language programs intend, and to what extent the students feel they have met the goals of the program.

The short-term goal of this proposal is first to develop an assessment tool and procedure to assess the speaking and writing proficiencies of a smaller group of language learners. A pilot run of this instrument will then be implemented at the end of the spring semester and during the summer of 2006. The limited scope of this initial endeavor will enable the department to pilot its assessment effort, monitor its effectiveness, and improve it on a continual basis. The long-term goal of this assessment proposal is to develop and implement language proficiency assessment across all four skills (listening, speaking, reading, and writing) in the three modes of interpersonal, interpretive, and presentational communication for learners who have completed courses 1010-2020 of all languages taught by the department.

II. Pilot Assessment

During the spring 2006 semester, the department will initiate and implement the pilot assessment program.
The pilot program will have the following characteristics:

1. Targeted Test Takers and Language Skills

• Given the large number of students taking language courses from the department (there were approximately 3,710 students taking 1010-2020 language classes in 2005-06) and the pressing need to train assessment evaluators, the target group of students to be assessed during the pilot phase will be those students who will be completing the second-year sequence (2010-2020) in a language study abroad program in the summer of 2006.
• Target languages for assessment will be Arabic, Chinese, French, German, Italian, Japanese, and Spanish, the languages currently offered in the study abroad programs.
• The target language proficiency skills to be assessed initially for the study abroad students will be speaking and writing in the interpersonal and presentational modes of communication.

2. Assessment Procedures

• An online speaking test will be taken by each student of the target group as an entrance and exit language assessment for the study abroad program. This will be a timed test delivered on the PC platform. Each test-taker's speech sample will be evaluated and rated in accordance with the ACTFL speaking assessment criteria. The assessment of the first speaking test will reveal the level of speaking proficiency of students at the beginning of their second year of language study. It will also serve as a baseline index of speaking proficiency for the student entering a study abroad program, against which the result of the study abroad exit speaking test will show how much progress the students have made as a result of the additional work completed in the summer.
• An on-site writing task will be given to each student before starting and after completing the study abroad program. The writing samples will be evaluated and rated in terms of a set of writing assessment rubrics aligned with the ACTFL writing proficiency assessment criteria.
• A language learning and achievement portfolio containing students' speech and writing samples, test scores, etc. will be established and evaluated.
• Exit interviews, including instructors' evaluations as well as students' self-evaluations and comments on all aspects of the language program and their learning experience, will be conducted.
• Each of the students in the target group will fill out a questionnaire before going abroad and another questionnaire after completing the study abroad program, in order to gather information about their perceptions of their language proficiency and progress.

3. Content of Assessment

The level of both the speaking and writing tests will be pitched at the Intermediate High level as defined by the ACTFL Proficiency Guidelines (see section 4 for a description of the assessment tool).
• Each test taker will be required to perform narrating, describing, explaining, comparing, and elaborating tasks.
• These test items will require the test taker to perform the linguistic functions mentioned above to convey biographical information, understanding of day-to-day aspects of the target language culture (such as transportation, travel, holidays, schools, health, etc.), and appreciation of the cultural elements of the target language culture vis-a-vis his or her native culture.
• There are 15 test items in the speaking tests and five prompts for the writing task.

4. Assessment Tools

• For the speaking test (and later for listening and reading in post-pilot assessment tasks), we plan to use the Enhanced Oral Testing Software (EOTS), developed by Brigham Young University, to create the test items. The EOTS is a template that can be used to create tests as well as learning activities for any language. The department has already purchased this software and a site license for its use. For the pilot assessment, audio-visual stimulus prompts to elicit a test taker's responses will be written in English and will be used for all selected languages.
• For the writing test, each test taker will be given five prompts to complete the writing task. Depending on the language, the writing test can be computerized as well.
• A set of generic, across-language rubrics will be developed for the assessment of the writing samples.

5. Training for Test Item Development and Evaluators

• Recruit, at the minimum, two test item developers/evaluators for Spanish and Italian and one for each of the other five languages. An effort will be made to recruit two evaluators for each language so as to provide inter-rater reliability for assessment.
• To ensure stability and continuity of the program, these assessment developers/evaluators are preferably practicing full-time language instructors at our university or at the community colleges, or individuals who have at least an M.A. in a foreign language, have had experience teaching college-level language courses, and have native or near-native fluency in the target language.
• Training includes four three-hour mini-workshops on proficiency test development and evaluation and one final two-hour wrap-up session. Evaluators will become familiarized with the functions and contents of the proficiency tests, as well as the proficiency-level benchmark rubrics for rating speech and writing samples.

6. Timeline

• By February 17: get approval from study abroad program directors.
• February 20-24: recruit assessment developers/evaluators.
• February 27 - March 8: Workshop #1 (EOTS demo; speaking test development); Workshop #2 (review and rate sample speaking test items).
• March 9-15: Workshop #3 (develop and review writing rubrics); Workshop #4 (present and review the tests by language).
• March 16-24: upload tests onto the college server; enable access to tests on workstations in PC Labs in DCET.
• March 27-31: field-test the tests, using three to five volunteers from the 2020 classes; assess and rate.
• April 3-7: final two-hour wrap-up meeting for developers/raters; make final adjustments to the tests.
• April 10-25: identify the students going on study abroad and give them the tests; assess and rate; report ratings.
• At the end of each study abroad program, the program director will ensure that the identified students take the speaking and writing tests again. These tests will be made available online. Speaking and writing samples will be sent to appropriate raters for assessment and ratings. Raters will report assessment results.

III. Post-Summer 2006

• Establish a data bank for the language proficiency assessment results from the summer study abroad programs in Fall 2006.
• Analyze assessment data to achieve better articulation between language courses offered in regular and summer study abroad programs in Fall 2006.
• Prepare target language prompts for the listening, speaking, and writing tests in Fall 2006 and Spring 2007.
• Add a reading assessment component with authentic target language stimulus material in Spring 2007.
• Prepare language proficiency outcome assessment for the aforementioned seven language study abroad programs across the four skills in Spring 2007.
• Assess students of the same aforementioned languages not attending summer study abroad programs in Spring 2007.
• Assess students of languages not offering study abroad programs in Spring 2007:
  - Train additional assessment evaluators in Fall 2006/Spring 2007.
  - Prepare assessment tools for these languages in Fall 2006/Spring 2007.

Proposed Budget:

1.
Pay each of the two workshop consultants a lump sum of $1,500 to coordinate the recruitment of test developers/raters, set up and present workshops, oversee the development of the speaking and writing tests, finalize the tests and make them ready for delivery, interface with study abroad program directors regarding proctoring of on-site tests, and oversee the collecting and inputting of assessment results and other follow-up work. Total: $3,000

2. Pay each of the fourteen test developers/raters $500 to attend the four workshops and the final meeting, work with other participants to develop the generic speaking and writing tests, recruit volunteers to do the field test, and rate the speech and writing samples from the field tests. Total: $7,000

3. Soft drinks and cookies for the workshops: $100

4. Pay each rater $15 for each set of one speaking and one writing test.
• Estimated number of students per language: Spanish: 10; Italian: 6; French: 5; German: 7; Arabic: 3; Chinese: 8; Japanese: 8. Total: 47
• Estimated cost of assessing both the entrance and exit speaking and writing proficiency tests of 39 students: $15 x 37 x 2 x 2 = $2,220

5. Total Proposed Budget: $12,320.00

IV. Department Assessment Task Force: T. Richard Chi, Stacey Katz, Fernando Rubio, Reem Bassiouney

Senate Executive Committee, April 17, 2006
Academic Senate, May 1, 2006

Notice of Change of Advanced Placement Writing Score
Contact person: Maureen Mathison, Director, University Writing Program
Undergraduate Council Meeting, March 21, 2006
Informational Item

The University Writing Program is raising the Advanced Placement score required to qualify for exemption from the lower-division writing requirement, WRTG 2010, from 3 to 4.

Rationale

Students who score a 3 or higher on the AP English examination are currently exempt from enrolling in the required first-year writing course at the University of Utah.
Recent trends in higher education have increasingly raised the standard for exemption, with more and more institutions requiring a minimum AP English score of 4 or 5. Many of the more selective institutions are beginning to require students to enroll in a minimum of one lower-division writing course, regardless of AP score. The higher the caliber of the institution, the more likely it is that scores of 3 are being reconsidered as indices of advanced credit. The University of Utah lags behind its peer institutions in that an AP score of 3 exempts students from our required composition course, WRTG 2010 (see attached). The same AP score at other institutions exempts students only from the introductory-level course, WRTG 1010. This trend in using AP scores for exempting or placing students into courses is not limited to writing, but includes all subject areas. In addition, research has shown that students with English AP scores of 3 who enrolled in a writing course made higher gains in their writing than students with AP scores of 3 who did not enroll (Hansen, 2005).

Wording of Item

Students whose AP English score is 3 will be required to enroll in Writing 2010, effective Spring 2007. An AP score of 3 will still earn credit, treated as equivalent to WRTG 1010.
AP Writing Score - Peer Institutions
(Exemption from first lower-division requirement (1010) / exemption from entire lower-division requirement (2010))

• University of Utah: AP 3 (2010)
• University of California/Irvine: None / None
• University of California/San Diego: None / None
• University of Cincinnati: AP 3 / AP 4
• University of Illinois/Chicago: AP 4
• University of Iowa: AP 3 (but students still required to take an alternate course)
• University of North Carolina, Chapel Hill: AP 4
• University of New Mexico: AP 4 / AP 5
• University of Pittsburgh: AP 5 with SAT Verbal 600
• University of Virginia: AP 5 / AP 4 with SAT II Writing 680
• University of Washington: None / None

AP Writing Score - Utah State Institutions
(Exemption from first lower-division requirement (1010) / exemption from entire lower-division requirement (2010))

• University of Utah: Index 101 (equivalent to ACT comp 19 with 3.40 GPA) / AP 3
• Utah State University: AP 3 / None
• Weber State University: AP 3 / None
• Utah Valley State College: AP 3 / None
• Salt Lake Community College: None / None

SIMPLE CORRELATIONS AMONG PRIMARY VARIABLES
(includes all semesters from Fall 2002 through Spring 2004)

Pearson r, with sample size n in parentheses:

                     GRADE IN        ADMISSION       HS GPA           ACT COMP        ACT ENGL        ACADEMIC
                     WRTG 2010       INDEX                                                            LEVEL
GRADE IN WRTG 2010   1.00000 (5584)  0.23088 (4396)  0.21527 (4501)   0.14106 (4446)  0.14703 (4445)  0.21515 (5583)
ADMISSION INDEX      0.23088 (4396)  1.00000 (4572)  0.79657 (4525)   0.72580 (4226)  0.62404 (4226)  0.09306 (4571)
HS GPA               0.21527 (4501)  0.79657 (4525)  1.00000 (4679)   0.24409 (4303)  0.22204 (4303)  -0.03302 (4678)
ACT COMP             0.14106 (4446)  0.72580 (4226)  0.24409 (4303)   1.00000 (4626)  0.84358 (4625)  0.11563 (4626)
ACT ENGL             0.14703 (4445)  0.62404 (4226)  0.22204 (4303)   0.84358 (4625)  1.00000 (4625)  0.08300 (4625)
ACADEMIC LEVEL       0.21515 (5583)  0.09306 (4571)  -0.03302 (4678)  0.11563 (4626)  0.08300 (4625)  1.00000 (5826)

All correlations are significant at p < .0001 except r(HS GPA, ACADEMIC LEVEL), p = .0239.
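A correlation table of this kind can be reproduced with a short script. The sketch below computes Pearson's r by hand on small invented samples; the variable names mirror the table, but the data are purely illustrative, not drawn from University records:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented values for five hypothetical students.
hs_gpa     = [3.9, 3.5, 3.2, 3.8, 2.9]
act_comp   = [28, 24, 22, 27, 20]
wrtg_grade = [3.7, 3.0, 2.7, 3.3, 2.3]

for name, values in [("HS GPA", hs_gpa), ("ACT COMP", act_comp)]:
    print(f"r(WRTG 2010 grade, {name}) = {pearson_r(wrtg_grade, values):.5f}")
```

In the table above each cell also carries its own n, because students missing one measure (for example, no ACT score on file) drop out of that variable pair only.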
Senate Executive Committee, April 17, 2006
Academic Senate, May 1, 2006

Report on Student Course Evaluations
Contact person: Chuck Wight, Assoc. V.P. Academic Affairs; Jennifer Mabey

* * *

Uniform Student Course Evaluation: A Report to the Academic Senate

Fall semester 2005 concludes the 6th year that the university-wide student course evaluation instrument commissioned by the Academic Senate has been in use. During those years, the evaluation system has moved to an online format, expanded to facilitate the evaluation of teaching assistants and team-teaching situations, reduced costs to departments, increased security, and provided more readily accessible results to both instructors and students, all while attaining a voluntary response rate that is one of the highest in the nation.

Student Satisfaction

Student course evaluations are designed to measure student satisfaction; they are not designed to measure learning outcomes. They provide valuable information about the classroom experience from the student perspective. They also assign a number to students' perception of faculty competence. Great care needs to be taken to account for sample size and confounding factors such as methodology, content, time of day, efficacy of other instructors, and the cohort effect. Policy and Procedures states: "The University will evaluate its courses and instruction in multiple ways, including by soliciting students' evaluation." (PPM 9-7.14)

Students rate their instructors and courses highly, averaging between "Agree" (5) and "Strongly Agree" (6) on all 14 standard items. Results are shown for Summer 2005 semester, but the averages have not changed appreciably since Spring 2003.

University Averages - Summer 2005

• The course objectives were clearly stated.
• The course objectives were met.
• The course content was well organized.
• The course materials were helpful in meeting course objectives.
• Assignments and exams reflected what was covered in the course.
• I learned a great deal in this course.
• Overall, this was an effective course.
• Course composite
• The instructor was organized.
• The instructor demonstrated thorough knowledge of the subject.
• The instructor presented course content effectively.
• The instructor created/supported a classroom environment that was respectful.
• As appropriate, the instructor encouraged questions and opinions.
• The instructor was available for consultation with students.
• Overall, this was an effective instructor.
• Instructor composite

(Items are rated on a scale from Strongly Disagree to Strongly Agree.)

Historical Perspective

In February of 1999, the Academic Senate commissioned the development of an instrument that could be used to evaluate courses campus-wide. After being piloted in Summer and Fall semesters of 1999, the instrument was implemented Spring semester 2000. The creation of a uniform course evaluation instrument provides departments with a convenient means to evaluate all of their courses on a regular basis. The data from those evaluations are provided in report form to instructors by way of departments. The standardized numerical data are posted on the Campus Information System site for students to access when selecting courses.

Evaluations Move Online

Spring 2003 semester marked the point at which the maintenance of a permanent database of student course evaluation records in the Campus Information System was implemented. At the same time, a system of collecting student course evaluations through a web browser interface was introduced. At the conclusion of Spring semester 2003, most departments still evaluated their courses using the traditional paper forms. The only large groups of courses evaluated online were the College of Fine Arts, the Department of Chemistry, and all fully online courses.
Over the summer months, new online functions were introduced to handle courses with multiple instructors and/or teaching assistants. Many departments took advantage of those features, and at the conclusion of Fall 2003 semester about half of the evaluations were conducted online (56% of classes and 47% of the evaluations collected).

[Chart: total evaluations collected (online + paper) by semester, Spring 2003 through Fall 2005, with the percentage of evaluations collected online.]

The percentage of evaluations collected online has gone from zero to nearly 100% in less than three years. In addition, the total number of evaluations collected has increased each year. This is probably because the paper forms required more handling and sometimes were misplaced or forgotten. The move to online evaluations has also greatly reduced the number of staff hours it takes to administer the evaluations, because proctoring, sorting, and transcription of comments are no longer necessary. Departments no longer have to pay the costs associated with printing and scanning of forms. Instructors no longer need to use class time to administer evaluations. The delay in access to reports is also greatly reduced, with most reports available for departments to download within a day or two of the end of the evaluation period. The widespread use of online evaluations has also decreased the time required to post the results online, so that students are able to access more recent data when selecting courses.

Student Participation Rates

The primary reason for the high student response rate is that early release of grades is contingent upon the student acknowledging the online evaluation.
While students are not required to complete the evaluation, they must log in to the Campus Information System and at least decline to complete the evaluation if they wish to view a posted grade within 10 days of the last day of class (for full-term classes). Students are, for the most part, choosing to complete the evaluation. The response rate for the past year has averaged 73%.

[Charts: cumulative response rate for Fall 2005, relative to the end of classes (day 16) and finals (days 20-24), and response rates for online evaluations by semester, Spring 2003 through Fall 2005.]

The University of Utah departments with the lowest response rates are those which are unable to tie early release of grades to acknowledgement of course evaluations due to a later grading period (i.e., Law). The highest response rate among colleges was in the School of Business, which had an 83% average response rate for Fall 2005. The only major institutions with higher response rates for online evaluations are Northwestern (73-75%), which does not evaluate classes with fewer than 5 students, and Yale and Polytechnic University of New York (84-90%), both of which withhold access to grades until an evaluation is completed. Schools that do not tie completion of evaluations to viewing of either grades or results report response rates of 40-60%. (Data about other institutions were collected in a recent informal survey conducted by BYU.)

The instrument will continue to be adapted to meet the needs of administration, faculty, and students. Administrative Computing Services (ACS) continues to fine-tune the software and add functionality. ACS will soon incorporate changes to the user interface to make it more intuitive to use.
The uniform student course evaluation instrument is an important component of the University of Utah's efforts to foster a culture of assessment and improvement in teaching and learning.

Comparison of Results from Paper and Web-Based Student Course Evaluations: A Statistical Analysis
Chuck Wight, Associate Dean of Undergraduate Studies

Purpose
This study was initiated to address the question, "Do courses and instructors get significantly different responses on student course evaluations depending on whether the evaluations are conducted using traditional paper or web-based evaluation forms?"

Methodology
The comparison sample consisted of student responses to the 14 standard course and instructor questions in all courses that used paper evaluation forms in the Spring 2002 semester and web-based forms during the Spring 2003 semester. The sample included 110,014 student responses (approximately 7,860 per question) from the Spring 2002 paper evaluations and 109,908 responses (approximately 7,850 per question) from the Spring 2003 web-based evaluations.

Results
Responses to the questions ranged from 0 (strongly disagree) to 6 (strongly agree). The average responses to the 14 standard course evaluation questions for the comparison sample groups are given in the first chart. For each question, the upper (yellow) and lower (blue) bars give the average responses for web-based (2003) and paper (2002) evaluations, respectively. The error bars indicate ±1 standard deviation of the distribution of scores for each question. For 8 of the 14 questions, the scores for web-based evaluations were slightly higher than for paper-based evaluations. However, the difference between web-based and paper-based responses is much smaller than the standard deviations of the distributions.

[Figure: Average responses (Web 2003 vs. Paper 2002) to the 14 standard questions, from "The course objectives were clearly stated" through "Overall, this was an effective instructor," with ±1 standard deviation error bars.]

In order to assess whether or not the small differences are statistically significant for this large sample, the results were subjected to a standard two-tailed Student's t test. Starting from the null hypothesis (no significant difference), the averages and standard deviations of the mean for each question were used to compute the value of Student's t for each of the 14 standard questions. The difference was considered significant if the absolute value of t was greater than 1.96 (95% confidence limits).
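The test just described can be reproduced from summary statistics alone. The sketch below is illustrative only: the function name and the example means and standard errors are made up, not the study's actual per-question values. It assumes the reported "standard deviation of the mean" is the standard error of each group's average.

```python
import math

def t_from_summary(mean_a, sem_a, mean_b, sem_b):
    """Two-sample t statistic computed from group means and standard
    errors (standard deviations of the mean). With roughly 7,850
    responses per question, the t distribution is effectively normal,
    so |t| > 1.96 marks significance at the 95% confidence level."""
    return (mean_a - mean_b) / math.sqrt(sem_a ** 2 + sem_b ** 2)

# Hypothetical values for one question (not from the study):
# web-based mean 4.92 with SEM 0.015, paper mean 4.87 with SEM 0.016.
t = t_from_summary(4.92, 0.015, 4.87, 0.016)
significant = abs(t) > 1.96   # t ≈ 2.28, above the 1.96 threshold
```

With sample sizes this large, even very small differences in means can cross the 1.96 threshold, which is why the report distinguishes statistical significance from any practically meaningful bias.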
This detailed analysis shows that 8 of the 14 questions have averages that are greater for the 2003 web-based evaluations, although only 3 of these differences are judged to be statistically significant. Likewise, of the 6 questions that had higher average scores for the Spring 2002 paper evaluations, only 3 showed differences that were statistically significant.

[Figure: Student's t values for each of the 14 standard questions; positive values indicate higher 2003 web results, negative values higher 2002 paper results.]

Conclusions
Although a good case can be made for some statistically significant differences between paper and web-based student course evaluations, there is no evidence for an overall bias, either positive or negative, introduced in the scores as a result of changing the method by which the evaluations are collected from students.

Evaluating Academic Advising Across the Campus
Submitted on April 28, 2006 by Sharon Aiken-Wisniewski on behalf of the UAAC Assessment Committee

The University Academic Advising Committee (UAAC) pursued a campus-wide evaluation of academic advising in 2005-06. A survey to evaluate needs, satisfaction, and learning outcomes was developed and implemented with assistance from Institutional Analysis. The web survey was administered in November 2005. Over 10,000 students were invited through campus e-mail to complete the survey, which focused on advising received in departments and in University College Advising. The student response rate was 19%. The following analysis was shaped from these data:

• Advising that offered information on degree requirements, developing a schedule, and registration had a high need but also a high satisfaction response (70% or greater).
• Items relating to post-graduation career options and post-baccalaureate education resulted in a high need (80%) but a low satisfaction rate (37%). In addition to low satisfaction, a high percentage of students (34%) indicated that they had not received information in these areas.
• Items relating to services and resources such as study abroad, tutoring, undergraduate research, etc. received moderate need (56-64%) but a low satisfaction rate (40%). In addition to low satisfaction, a high percentage of students (35-42%) indicated that they had not received information in these areas.
• Students know how to use electronic tools for generating a degree audit report, adding/dropping courses, and withdrawing.
• Through the comment section, students were able to clarify advising behaviors that assisted them in accomplishing their academic goals.

The committee developed a list of short- and long-term strategies for change that could impact students and advisors. The short-term strategies are:
• Share results of the survey with campus (in progress).
• Develop a new section for the 2006-07 Undergraduate Bulletin that clarifies the role of the advisor and the student within the process of academic advising (completed April 2006).
• Share student comments about advisors with the appropriate colleges (completed April 2006).
• Develop a college-level sort to allow colleges to review data specific to the college (completed April 2006).

Long-term strategies will require more time and resources for completion. These are:
• Develop ways to be more purposeful in explaining various parts of the degree for educational connections (less checklist orientation).
• Develop collaboration between UAAC and ASUU to organize outreach to students to increase understanding about advising (debunk myths, explain DARS, etc.).
• Organize a campus-wide Advising Conference (an annual event) for increased knowledge of campus resources that impact student success and for exchange of "Best Practices" within the campus community.
• Develop a public relations campaign to inform students what advisors do and share positive stories of students who have utilized academic advising with great success.
• Regularly update department web sites and implement a general web site that contains graduate school tips and information.

A budget request for 2006-07 was submitted to the Senior Associate Vice President for Undergraduate Studies to assist with resources for some of the long-term strategies.

General Education Assessment: American Institutions, Math, and Writing
Mark St. Andre

This document represents a summary of the assessment work that has been done in the General Education areas of American Institutions, Math, and Writing.

1. American Institutions
The American Institutions (AI) requirement is met by four courses at the University of Utah:
• Economics 1740: US Economic History
• History 1700: American Civilization
• Honors 2212: American Institutions
• Political Science 1100: US National Government

Some assessment work has been done in Economics, History, and Political Science, which carry the vast majority of the hundreds of students meeting the requirement each year. The Honors class contains only a couple dozen students; it has not yet been asked to participate, nor has an assessment process been implemented for that course. In the spring of 2001, the three main AI departments (Econ, History, Poli Sci) participated in a statewide pilot assessment of the AI requirement on all of the Utah System of Higher Education campuses. The results of that assessment are summarized in Appendix 1. However, the data were aggregated so as not to reflect individual differences between campuses. Overall, that pilot assessment found the following: "In the American Institutions disciplines there were large consistent gains across disciplines and across institutions.
All tests were designed by faculty teaching the courses and were focused upon the educational goals of these general education requirements."

Since the spring of 2001, the following work has been done in the three main AI departments:

Economics: Economics assessed AI again in the spring of 2005. The assessment was done in one section of the Economics 1740 class (there are typically four to five sections taught each semester). They conducted the assessment in a way that was similar to the methodology used in 2001. An email from Tom Maloney, chair of Economics, summarizes the process: "...a pre-test consisting of 12 multiple choice questions was administered during the first week of class. These questions covered material from throughout the term. Then, these questions were incorporated into the three exams during the semester (2 on the first mid-term, 6 on the second mid-term, and 4 on the final exam). Note that the student's performance on the pre-test did not affect their grade for the term." They found that students roughly doubled the number of correct answers on the selected questions over the course of the term, a statistically significant increase. They intend to continue testing, and at last report had pre-tested and were planning to post-test three sections in the fall of 2005.

History: History tested eight sections of their 1700 American Civilization class in the fall of 2002. The instrument consisted of 25 questions that were chosen by the department. No formal analysis exists for these data, although summary sheets are available. A scan of those summary sheets indicates that students improved their scores from pre-test to post-test by approximately 3-8 points. If five points were the average increase, it would represent an increase of 20 percentage points from pre-test to post-test. In the fall of 2005, History renewed its assessment of AI, delivering a pre-test and planning a post-test for all sections of 1700.
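The percentage-point estimate in the History figures above is straightforward arithmetic: on a 25-question instrument, each additional correct answer is worth 4 percentage points. A minimal sketch (the function name is illustrative; only the 3-8 point range and the 25-item length come from the report):

```python
def points_to_percentage_gain(point_gain, num_questions=25):
    """Convert a raw pre-test-to-post-test point gain on an
    N-question test into a gain in percentage points."""
    return 100.0 * point_gain / num_questions

# A 5-point average gain on the 25-item instrument is a
# 20-percentage-point improvement; the reported 3-8 point
# range corresponds to 12-32 percentage points.
gain = points_to_percentage_gain(5)   # 20.0
```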
Political Science: In the spring of 2005, the Political Science Department delivered an assessment of AI to two sections of their 1100 US National Government course. Pre-test results are available for one section, and post-test results are available for both sections. The section with both a pre-test and a post-test did not show an increase in scores. There are no further plans to assess American Institutions in Political Science.

2. Math
The Math department also participated in the pilot assessment of 1050 College Algebra that was conducted statewide among USHE campuses in spring 2001 (see Appendix 1). The report states: "In Mathematics, there were enormous gains from pretest to posttest in the performance of students on the set of problems used for the assessment." More recently, the Math department conducted an assessment at the beginning of the fall 2005 semester in their calculus classes to determine to what degree their pre-calculus sequence of courses (1010 Intermediate Algebra, 1050 College Algebra, and 1060 Trigonometry) was preparing students. The test results showed that there was no difference between those students who had taken the U's pre-calculus sequence and those who had taken it in high school. About these results, Aaron Bertram stated in an email: "...it convinced us that what we need to do in the future is to design pre- and post-tests for our service courses to assess their effectiveness." Their plan for these assessments can be found in Appendix 3.

3. Writing
The Writing Department is embarking on a new assessment of its 2010 course, which is the course that most students take to meet the General Education Lower Division Writing requirement. In fall 2005 they collected portfolios from every student and are currently working on developing a model (rubric, etc.) for scoring those portfolios.
Appendix 1
General Education American Institutions and Math Assessment

REPORT TO GENERAL EDUCATION TASK FORCE ON PILOT ASSESSMENT OF MATHEMATICS AND AMERICAN INSTITUTIONS ACROSS UTAH STATE COLLEGES AND UNIVERSITIES
OCTOBER 9, 2001
PREPARED BY DAVID H. DODD AND PHILIP KRAMER

EXECUTIVE SUMMARY
The Spring 2001 pilot assessment project by the nine public institutions of higher education in Utah focused on Mathematics 1050 (College Algebra) and courses meeting the state's American Institutions requirement (Economics, History, and Political Science). Tests were planned by faculty from the relevant departments and administered as a pretest (early in the semester) and a posttest (around final exam time). Results were collected from all of the institutions and for all four of the targeted areas. In Mathematics, there were enormous gains from pretest to posttest in the performance of students on the set of problems used for the assessment. In the American Institutions disciplines there were large, consistent gains across disciplines and across institutions. All tests were designed by faculty teaching the courses and were focused upon the educational goals of these general education requirements. A survey assessing the assessment was completed by 32 percent of the faculty who participated in the process (N = 18), despite very little turnaround time. Respondents indicated strong support for the process. They generally felt that the tests matched the goals of the course and that it was essential to use the same items in pretest and posttest (for Mathematics, equivalent items were considered appropriate). Many of the respondents were explicit that there were considerable costs, primarily in faculty and staff time. There were suggestions that assessment could be improved through better communication about the process and that there should be a statewide uniform test for each discipline.
The vast majority supported how assessment had been done in Utah, affirming its validity as a measurement process, the value of collaborating with other institutions, and the significance of faculty participation in the design of the tests. Several mentioned the value of the process, especially for providing information about teaching and learning. There was also some concern about maintaining confidentiality of the results.

An overall evaluation of the successes of the pilot experience provides these conclusions:
• The pilot engaged a high level of participation in planning and administration.
• The tests were linked to the goals of the courses from the perspective of the faculty who teach them.
• Test results showed consistently strong positive outcomes in terms of student learning.
• Participants in the process strongly endorsed the pilot and its major elements.

Problems were generally related to the severe time pressures encountered:
• Test items needed additional screening.
• Scoring should be conducted and reported on the same terms within disciplines.
• Results should be reported in electronic form with some consistency.

Recommendations for a future assessment process using this approach:
• Uniformity of items and procedures.
• Sampling of courses rather than all courses for every term.
• Consistent scoring; reporting in electronic files.
• Continued collaboration of administration and faculty across institutions.
• Continued anonymity of faculty and institutions participating.
• Budgetary support for faculty and staff time.

Because the Olympics make this spring logistically too difficult for many institutions, the second round of assessment will occur in Fall 2002. On 12 November 2001, faculty will again meet to review last year's effort, suggest improvements, and begin planning for the next assessment effort. The Task Force asks your support in encouraging and funding representatives from your campus to attend.
INTRODUCTION
The nine public institutions of higher education in Utah participated in a pilot project to assess student learning in Mathematics 1050 (College Algebra) and in courses meeting the state's American Institutions (AI) requirement taught in Departments of Economics (1740), History (1700), and Political Science (1100), plus an AI course unique to one institution, labeled American Institutions, Political and Economic. For all of these efforts, the assessment was planned as course embedded, in that the content and specific format of the testing was intentionally designed to be a direct part of a particular course and, as much as possible, part of normal course activity. For each participating course offering, students took a pretest at the beginning of the term and a posttest as part of the final examination process; test items were essentially the same for both pretest and posttest.

ASSESSMENT PROCESS
The general process of assessment of student learning was instigated by the State Board of Regents as part of its efforts to develop accountability data related to student learning. The Regents' Task Force for General Education, composed of representatives from all nine institutions, developed the general plan for the process and was supported by the Commissioner's Office in this effort. The specific testing program, including test items, was developed jointly by faculty from each of the disciplinary areas and many of the institutions. The test items were based on content that was central in each of the courses as judged by faculty from the relevant departments. For Mathematics 1050, the test was a uniform set of five items across the institutions; this was a well-organized effort reflecting past collaboration about the content of this course. All items were standard problems to be solved, e.g., a quadratic equation. The test was administered during the first two weeks of the semester and again around the time of the final.
The pretest was returned to students after grading, so the posttest was not identical to the pretest. The two tests, however, were nearly identical, varying only in terms of alterations in the specific numbers used, e.g., the coefficients in a quadratic equation to be solved. Individual tests were scored by instructors (or teaching assistants) for the course as such scoring would normally be performed. For each of the remaining departments, the agreement of representatives was to create a joint test bank of multiple choice items. From this bank, a specific department within an institution selected a specific subset of items representing the content of the course as locally taught. In general, scoring was performed by a scanning device in relation to a key for the specific items; the number of items used was variable across disciplines, but was generally consistent within disciplines.

The pilot intentionally reflected several important principles shared by the Task Force and the faculty who developed the tests. The central principle was that the process should be driven and developed by faculty in the specific disciplines. Included in the discussion with these faculty were issues of test items, whether tests would be identical across institutions, etc. An additional key principle was that, to prevent invidious comparisons or concerns about sanctions for low scores, anonymity of faculty and institutions would be maintained. That is, data reports would provide statewide results without any specific faculty or institutional information. The faculty in the four disciplinary groups were in general agreement that:
a) Test items were best developed within their disciplines.
b) Identical tests across institutions were preferred by some groups, but not all.
c) Identical tests (for math, identical problem types) were to be administered on a pretest at the beginning of the term and a posttest at the end of the term.
d) Anonymity of faculty and institution were essential.
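Under the pretest/posttest design just described, each student contributes a pair of scores, and the results that follow are summarized two ways: paired t tests and average per-student improvement ratios, computed as ((posttest - pretest)/pretest). A minimal sketch with made-up scores (none of these numbers are pilot data); note that the average of per-student ratios generally differs from a ratio computed from the group means:

```python
import math

def improvement_ratios(pre, post):
    """Per-student improvement ratios: (post - pre) / pre."""
    return [(b - a) / a for a, b in zip(pre, post)]

def paired_t(pre, post):
    """Paired t statistic computed on the per-student gains."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical scores for five students (not from the pilot):
pre = [2, 4, 3, 5, 4]
post = [6, 7, 8, 8, 9]
avg_ratio = sum(improvement_ratios(pre, post)) / len(pre)  # ~1.25, i.e. 125%
t = paired_t(pre, post)
```

Averaging per-student ratios weights each student equally, which is why a reported average ratio (e.g., Political Science's 62 percent below) need not equal the gain implied by comparing the group's mean pretest and posttest scores.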
After completion of the pilot, a survey was sent by e-mail to all faculty who participated in the pilot. The survey asked about the faculty member's role in the pilot, about the appropriateness of the test for the goals of their course, about the actual administration and their experience with it, about the costs (financial, time) associated with the pilot, etc.

RESULTS
Data were collected and reported from all nine institutions and from all four of the targeted areas. At the present time, complete data (pretest and posttest individual scores) have been provided for 20 of the possible set of 34. (The figure of 34 is based on nine institutions and four disciplines; one college has only one course in American Institutions rather than the typical separate courses in Economics, History, and Political Science offered at the other institutions.) The results are summarized below in sections for each of the four disciplines.

Mathematics. Results were provided by 7 of the 9 institutions for a total of 699 students; numbers of students per institution ranged from 62 to 157. Unfortunately, the scoring scales were widely discrepant across institutions; that is, maximum scores ranged from 3 to 50 depending upon the institution. Every reporting institution found similarly very strong results; data analyses showed statistically significant improvement from pretest to posttest, with all t tests highly significant (t = 8.15 to 17.8, p < .001 for all, df = 63 to 155). As an example, one institution found average pretest scores of 8.04 and posttest scores of 29.23. In view of the varying scoring scales across institutions, there is no ideal statistical comparison across institutions; in view of the general similarities of results, a common measure is percentage of improvement. Individual student improvement ratios, calculated as ((posttest - pretest)/pretest), averaged 169 percent across all students in all institutions, which means that students more than doubled (nearly tripled) their scores.

Economics. The economics tests comprised 8 multiple choice items, individually selected by institutions; the same items were repeated identically from pretest to posttest for a given institution. Complete results were provided by four institutions for 164 students (two other institutions provided partial data, one pretest only and one posttest only). The results were somewhat mixed; all showed improvement from pretest to posttest, and two of the four found statistically significant increases (for these, t = 2.76, p < .05 and t = 7.21, p < .0001). Individual student improvement ratios, calculated as indicated above, averaged 68 percent.

History. The history tests comprised 20 items common to all institutions; pretest and posttest scores were provided by seven institutions based on a total of 1,207 students. These results are remarkably consistent across institutions; that is, the average scores are comparable and all show statistically significant improvement from pretest to posttest (t = 7.10 to 15.65, p < .0001 for all). Across the institutions, the mean pretest score was 12.91 (n = 514); the mean posttest score was 17.69 (n = 500). Individual student improvement ratios, calculated as indicated above, averaged 36 percent.

Political Science. One institution provided complete data for the political science course, involving 71 students. (An additional Department of Political Science is piloting during Fall 2001; a third reported results after the statewide data set was analyzed.) The scores reported were percentage correct on a multiple choice test. The pretest average was 54 percent and the posttest average was 82 percent (t = 3.59, p < .01). Individual student improvement ratios, calculated as indicated above, averaged 62 percent.

Summary of results. All of the results showed strong gains in student performance on these tests. Note that the tests were designed to reflect the educational goals of these courses as evaluated by faculty and administrators directly involved in the courses. Thus, the gains point to student learning of material directly relevant to these courses and to course goals as general education requirements for students in the Utah System of Higher Education.

SURVEY RESULTS
Surveys were completed by 32 percent (N = 18) of participants in the process. (One response was a more general e-mail which did not respond specifically to the survey questions.) All but one of these were faculty. The remaining respondent was an administrator not directly involved in the development and administration of the test, who responded only in those terms. As to the respondents, two chaired the committees that planned and developed the tests for their disciplines, ten were involved in designing the instrument, and twelve were directly involved by administering tests in a class (or classes) under their supervision. Indeed, all except the administrator referred to above were directly involved as a disciplinary representative for test development and/or administration. All of the disciplines were represented in the survey results by at least two faculty respondents, and all of the institutions were represented by at least one faculty respondent.
On the opening item about how the test matched the goals for the class, 12 of the 13 who responded agreed that there was a match; one of these hedged by suggesting that the tests do not consider the individual background of the student. Similarly, the item about the appropriate match to general education goals was answered yes by 12 of 13 responding, with the same no from the same respondent as above. (That no represented an institutionally unique situation with regard to the relevant course; while not discounting the response, the situation should be noted.) On the survey item about administering the same items, pretest and posttest, there was near-complete agreement; the only exception was for the test in mathematics, where the view was that different but equivalent problems should be used, since the pretest is handed back for student review. The item about whether faculty had taught to the test elicited a range of responses; the most common response was no. When yes responses were explained, it was in terms of the test covering what the instructor was teaching anyway.

Faculty generally completed the item about costs (monetary and time), but the answers were variable. Four indicated that costs were minimal and/or happily borne. The answer from the strong majority of respondents was that there was considerable time invested, with specific estimates of 15 to 20 hours and the equivalent of teaching a course. Suggestions for improvement in the process were not surprising: clearer guidelines needed for reporting, a uniform test needed, a need for standard questions, and better communication regarding assessment. There was also the suggestion that a testing expert should be brought in, and a request for a demographic section to provide more information about individual students. Finally, there was an expressed concern from a single respondent about whether test administration procedures were followed; the complaint is unclear as to referent and might be relatively minor or very serious.

Most significantly, the plaudits far outweighed the complaints; the vast majority expressed support for the approach followed, feeling that this is a valid process and that it provides good information on student learning. The most commonly mentioned category of positive remarks focused on collaboration with other institutions, including associating with peers, exchanging ideas with regents (most likely referring to Commissioner's Office administrators), developing common goals, and understanding the problems encountered by other institutions. There was also a clear theme indicating the value of faculty participating in the process, e.g., that the experience of designing the test was useful to them as teachers. Further, there were several who mentioned the value of such a process for accreditation and for learning about faculty success in teaching and about student learning (e.g., good information on student learning). One respondent indicated that "we have devised a pedagogically defensible test." Fainter praise of that sort came from the respondent who doesn't like assessment but finds this better than a national test. The issue of confidentiality was also raised by some who expressed concern about the confidentiality matter generally, including one who explicitly did not want to be compared with other institutions.

In sum, it is fair to say that the pilot process was well received by the strong majority of those who participated; nearly all of the participants supported the process, feeling that there was value in what was learned from the design of tests and their administration on a pretest-posttest basis. There was also strong support for the use of standard questions and the overall process. However, the majority also expressed concern about the serious costs in faculty and staff time.
EVALUATION OF THE PROCESS
The process of developing this pilot involved a number of elements that resulted in a remarkable success in terms of the overall participation by institutions. Most notable is that the outcome, in terms of student learning, was also a success. Of course, a pilot of this sort must inevitably result in various problems.

The participation of all institutions and of the majority of possible departments from institutions is a notable success. This seems to reflect the direct engagement of faculty from the relevant departments; a meeting of these groups was followed by a flurry of e-mail activity as groups created test items and discussed issues related to those items. It should also be noted that this was not a coercive process; there was a sense among faculty that it was important to do the pilot in response to the expectations of the Board of Regents and that continuing efforts were anticipated. There was no reported contention within the disciplinary groups or toward the Regents. The promise of anonymity appeared to be important and has been maintained.

The test items were directly linked to the goals of these courses; this was the most important consideration in the development of these tests. For that reason, it was essential to have faculty who teach the courses also create the tests. Thus, it was not a canned test designed by people who knew little or nothing about the specific goals of these courses.

The test outcome is, in itself, a remarkable success; there was no clear basis for predicting in advance whether students would show notable improvements across the semester. There is, of course, the presumption that teaching and learning are taking place. However, these are required classes, taken largely by college students in their first year. In addition, these tests were, as will be discussed, developed under time pressure, and there was no opportunity for normal test development processes to be undertaken.
Thus, it is especially satisfying to find consistently positive results across institutions and disciplines.

The confidentiality of the results is seen by most as an important ingredient of the success; this principle was a strong element in producing the cooperative stance of those who participated in planning and administering the tests. An emphasis on comparing faculty or institutions carries considerable risk of creating a competitive environment that would undermine such cooperation.

It is also necessary to note certain problems associated with the nature of the pilot process. Central to most of these problems is that the process was undertaken under severe time pressure; items were created during the late Fall semester for tests that were administered in early Spring. There was insufficient time to screen test items to the degree necessary. Similarly, there was very limited time to set up the full set of classes to be tested at each institution or to engage the participation of all instructors. Scoring of tests was conducted as ordinarily done by the specific instructors involved, which means that scoring was not necessarily consistent. While that seems quite workable for purposes of regular instruction, it created difficulties for analyses of the results. Finally, the results were generally not available as electronic files, which meant that a great deal of hand scoring and entry was necessary. There was also often difficulty in matching pretests with posttests. And, given the nature of the information provided, item analysis could not be conducted.

RECOMMENDATIONS FOR FUTURE TESTING

The pilot experience can lead to the obvious conclusion that this is all possible and valuable. The results make sense, are positive, and promise to be useful. However, there are several major issues that must be addressed before any future effort can be undertaken.
Some of these are simple to suggest but carry certain costs; others require complex decisions involving faculty participation and buy-in.

Uniformity of items in tests and testing procedure across institutions. The same test should be used for each discipline across all institutions; this test should be agreed upon by all representatives within a discipline. For the American Institutions courses, the test should consist of 12-20 items that have been carefully vetted by the committee and reviewed by someone with testing expertise. In addition, the testing procedure should incorporate these items into the final examination in such a way that students perceive them as part of the final examination.

Sampling of courses. Should all courses be assessed continuously? It is recommended that the system not do so, both because of the cost of doing so and because of the ongoing burden on students, faculty, and administrators. Simple sampling procedures can provide adequate data to evaluate the effectiveness of student learning.

Electronic files, including item-by-item correctness and total score, should be created for each administration of a test at each institution. The specific parameters of the files should be developed and conveyed to administrators and faculty at the institutions.

Continuing collaboration with administrators and faculty from these disciplines at all institutions. Any system of assessment will ultimately fail if instituted without reasonable participation at all levels of administration and of the faculty involved in the courses. Everyone must understand the value of such an assessment for teaching in their department, the broader consequences for higher education, and the importance of careful (and fairly administered) assessment.

Anonymity of instructors and of institutions should be maintained.
Still, the problem of closing the loop on assessment will need to be addressed: specifically, how are the assessment results used to improve the quality of teaching and learning? In sum, the pilot of course-embedded assessment was remarkably successful, which is primarily a credit to the participating faculty from all of our institutions. The suggestions above should help make the next round even more pedagogically useful.

Appendix 2
Mathematics Calculus Pre-test, Fall 2005

In an effort to assess the effectiveness of our pre-calculus courses, students in two large sections of Math 1210 (Calculus I) took a five-question pre-test covering topics from College Algebra. The results are given below:

Total number of students who took the pre-test: 254 (average score: 3.5)
Students who completed pre-calculus at the U: 48 (average score: 3.3)
Students who completed pre-calculus elsewhere: 206 (average score: 3.5)

Students who took pre-calculus elsewhere can be further subdivided into those who completed pre-calculus at another college and those who completed it in high school:

Students who completed pre-calculus at another college: 35 (average score: 3.4)
Students who completed pre-calculus in high school: 133 (average score: 3.6)

Although at first glance it may appear that the pre-calculus courses at the University of Utah are not as effective as those at other schools, this is not the case. The differences between groups are not large enough to be statistically significant, and there are other factors that should be taken into consideration. For example, many of the students who took pre-calculus in high school had also taken AP Calculus in high school, and so have a more thorough mathematical background than students who completed pre-calculus at the University of Utah, the vast majority of whom are taking Calculus for the first time and have just completed their pre-calculus courses.
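The claim that these group differences are not statistically significant can be illustrated with a quick back-of-the-envelope check. The report does not give score standard deviations, so the value below is an assumed one, used for illustration only:

```python
import math

# Pre-test means reported above: U pre-calculus group (n = 48, mean 3.3)
# vs. students who took pre-calculus elsewhere (n = 206, mean 3.5).
n_u, mean_u = 48, 3.3
n_other, mean_other = 206, 3.5

# ASSUMED score standard deviation on the five-question test
# (not given in the report; chosen only to illustrate the scale of noise).
assumed_sd = 1.2

# Standard error of the difference between the two group means:
se_diff = assumed_sd * math.sqrt(1 / n_u + 1 / n_other)
gap = mean_other - mean_u
print(f"gap = {gap:.1f}, SE of gap = {se_diff:.2f}, z = {gap / se_diff:.1f}")
# z comes out around 1, well below the ~1.96 needed for significance
# at the 5% level, so a 0.2-point gap at n = 48 is inconclusive.
```

Under any plausible standard deviation near 1, the observed 0.2-point gap is on the order of one standard error, which is consistent with the report's conclusion.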
Another factor to consider is the small number of students taking the pre-test who took pre-calculus at the University of Utah. Approximately 800 students took Math 1050 on campus at the University of Utah during the last academic year, so the 48 pre-test takers are only about 6% of that population, which is not enough to say anything significant about the results. The conclusion we can draw from this pre-test experience is that we need a better way to measure the effectiveness of our pre-calculus courses, one that will yield significant results. We propose that giving both a pre-test and a post-test in the pre-calculus classes would be a better way of gauging those courses' effectiveness than a pre-test in the Calculus classes.

Appendix 3
Math Assessment Plan 2005-2006

1. Introduction

The service courses taught by the Mathematics Department and taken by a significant proportion of the undergraduate students at the University of Utah distinguish Mathematics from the other departments of the College of Science. These courses are taught by senior faculty, instructors, and graduate students. We are confident that our infrastructure of teacher training, course coordinators, student evaluations, and oversight by the Associate Chair for Undergraduate Studies provides a good set of checks and balances on the quality of our teaching, but we also recognize the potential utility of assembling assessment data to help us analyze and improve the effectiveness of these courses. For any assessment to be useful, it must address the following issues:

Goals: What are the objectives of our service courses?
Design: How do we assess progress towards the objectives?
Efficiency: How do we implement the assessment efficiently?
Utility: How do we ensure that our assessment is meaningful?
Analysis: What do we do with the results of our assessment?
2. The Courses

The service courses in question are:

Math 1010 Intermediate Algebra
Math 1030 Introduction to Quantitative Reasoning
Math 1040 Introduction to Statistical Thinking
Math 1050 College Algebra
Math 1060 Trigonometry
Math 1070 Introduction to Statistical Inference
Math 1090 College Algebra for Business and Social Sciences

(This list may ultimately be lengthened to include courses at the Calculus level.)

3. The Plan

Our plan this Spring is to develop a web-based system of pre- and post-tests for our service courses, administered by a member of our staff and overseen by our course coordinators. These tests will be required of all students and taken online. To remove any incentive for cheating, the scores on the individual tests will not be available to the instructors, who will simply receive a sheet indicating whether students have or have not taken the tests. The scores on the individual problems will be recorded and compared, and a statistical analysis will be performed on the data. The results will be used for assessment.

Stage 1: The course coordinators will assemble a list of objectives. (Note: Such lists already exist or can readily be extracted from the syllabi.)
Stage 2: The course coordinators and instructors of the service courses will assemble a bank of problems designed to test each objective.
Stage 3: The course coordinators will design the pre- and post-tests, and the staff member will put them online according to the specifications above. (Note: The Department has considerable experience with online grading through its "Webworks" homework assignments.)
Stage 4: Post-tests will be administered in Spring 2006 to work out the bugs.
Stage 5: Full implementation will begin in Fall 2006.
Stage 6: A statistical analysis will be performed on the data each semester, and the course coordinators will meet at the end of each semester to discuss the results, identify weaknesses and strengths, and look for ways to improve.
The data will be published and made available to the University.

This is an example of the type of pre- and post-testing done after we converted from a quarter to a semester calendar. The conversion entailed significant changes to the curricula in many disciplines. Math 1030 was developed specifically as a new component in a new set of general education requirements.

Math 1030 Review, Spring Semester 2000

The Math 1030 course (Introduction to Quantitative Reasoning) was offered in its current form for the first time in Fall Semester 1998 and has now been in place for almost two years. The purpose of the course review this semester was to determine some characteristics of the students enrolled in the course (where and when they took the prerequisite course, Intermediate Algebra, and the college they were enrolled in at the university), and to measure the impact of the course on specific quantitative reasoning skills. In addition, since the university has been asked by the Regents to assess what students gain from their general education classes, this review was an opportunity for a trial run of an assessment plan for general education mathematics courses.

The review process involved a pre-test and a post-test on quantitative reasoning skills. The pre-test also included a section on basic Intermediate Algebra concepts, and the students were asked to indicate where and when the prerequisite course was taken and what college they were enrolled in at the University of Utah. The data from the review indicate that the students did increase their scores on quantitative reasoning skills over the semester, by a mean gain of approximately 29% of the total points possible. In looking over the background of the Math 1030 students, it is not surprising that students who began the course with a stronger grasp of Intermediate Algebra material also tended to have higher scores on the quantitative reasoning questions both at the beginning and at the end of the semester.
Yet the general level of algebra skill that students demonstrated at the beginning of Math 1030 was low: the students had a mean score of 55% on the test of basic algebra skills at the beginning of the semester. Where the students took Intermediate Algebra (university, high school, community college) did not appear to have a significant impact on their grasp of this material. However, students who reported having taken Intermediate Algebra in high school did have considerably higher scores on the quantitative reasoning questions given at the beginning of the semester. This initial difference, which disappeared by the end of the semester, may be due to the fact that stronger students are more likely to have taken Intermediate Algebra in high school rather than at a university or a community college. Slightly less than half (46%) of the students reported that they took Intermediate Algebra in high school, and another 30% took the course at the University of Utah. Many students in our database reported they were enrolled in either University College (24%) or the College of Social and Behavioral Sciences (23%). The next largest enrollment was in Fine Arts and Humanities (16%). More detailed information on the results of the review and a brief summary of the review process are given below.

Results of Pre-test and Post-test on Quantitative Reasoning Skills

The pre-test was given during the second week of classes and had two parts. The first part covered prerequisite material (Intermediate Algebra) and the second part was a set of questions on specific quantitative reasoning skills. A post-test covering similar questions on quantitative reasoning skills was given as part of the comprehensive final examination at the end of the semester. Both the pre-test section on quantitative reasoning skills and the post-test on the same material, referred to below as QR1 and QR2 respectively, were designed to take approximately 25 minutes.
Since the pre-test section on quantitative reasoning skills was based on a total of 12 possible points and the post-test on a total of 24 points, the scores on the pre-test were scaled by a factor of 2 before being compared to those on the post-test.

The results below are based on data from the 312 students who took the pre-test. These 312 students represent 69% of those enrolled in Math 1030, Spring Semester 2000. Of the 312 students, 274 took both the pre-test and the post-test; of the remaining 38 students, 29 withdrew officially or unofficially during the semester (grades E, W, or EU). Although our intent was to obtain data from all 12 sections of Math 1030, Spring Semester 2000, three sections (two given by DCE, one given by the Math Dept.) were not included because the pre-test was not given as outlined in the review process. Moreover, some students in the sections that were included are not in our database because they were not present in class for the pre-test.

A quick overview of the change in the students' quantitative reasoning skills as measured by the pre-test and post-test scores shows an encouraging gain over the semester. The mean score on quantitative reasoning skills increased 7.1 points out of 24 possible points, an increase of 29.6%. The results were:

                   N    Mean (Percentile)   Median (Percentile)   Standard Deviation
QR1 pre-test     312    11.658 (48.6%)      12.00 (50%)           4.972
QR2 post-test    274    18.746 (78.1%)      19.75 (82.3%)         4.267

A more detailed picture of the change that took place in the students' quantitative reasoning skills over the semester is shown in the following two boxplots, where the box represents the middle 50% of the student scores and the line in the box indicates the median score.

[Boxplot: Pre-test (QR1), score scale 0-24]
[Boxplot: Post-test (QR2), score scale 0-24]
The previous comparison looked at the change in the scores of the students as a group from the beginning to the end of the semester. To examine how much students changed their individual scores, regardless of what their initial score was, we looked at the difference between post-test and pre-test scores, (QR2 - QR1), for each of the 274 students who took both tests. The mean gain in score was 6.9 points out of 24 possible points, again about a 29% increase. The results were:

                         N    Mean (Percent Change)   Median (Percent Change)   StDev
Difference (QR2-QR1)   274    6.909 (28.8%)           6.500 (27.1%)             4.995

[Boxplot: Individual differences (QR2 - QR1), score scale 0-18]

Background of Students in Math 1030

Of the 312 students in our database, almost all (92%) reported that they had taken the prerequisite course, Intermediate Algebra, yet many of the students (62%) took Intermediate Algebra two or more years ago. This fact has a big impact on the algebra skills of the majority of the students taking Math 1030 and contributes to low scores on the part of the pre-test covering basic algebra skills (mean score: 55%).

Int. Alg. taken at:   U of U        High School    Comm. College   Other       No response
No. of students       92 (29.5%)    144 (46.2%)    31 (9.9%)       20 (6.4%)   25 (8.0%)

Years since Int. Alg.   less than 2    between 2 and 5   more than 5   no response
No. of students         120 (38.4%)    139 (44.6%)       27 (8.7%)     26 (8.3%)

Pre-test on Intermediate Algebra prerequisite material:

                 N    Mean (total pts = 10)   Median   Standard Deviation
All students   312    5.50                    5.50     2.49

[Boxplot: Intermediate Algebra pre-test (IntAlg), score scale 0-10]
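As a check on the arithmetic, the reported group-level and individual gains can be reproduced directly from the table values. The small discrepancy against the reported 29.6% comes from rounding the gain to 7.1 points before dividing:

```python
# Reported Math 1030 means on the common 24-point scale
# (the 12-point pre-test was doubled before comparison).
qr1_mean, qr2_mean, total_pts = 11.658, 18.746, 24

# Change in the group mean from pre-test to post-test:
group_gain = qr2_mean - qr1_mean
print(f"{group_gain:.1f} points")                       # -> 7.1 points
print(f"{100 * group_gain / total_pts:.1f}% of total")  # -> 29.5% (the report's 29.6% is 7.1/24 after rounding)

# Mean individual gain (QR2 - QR1) for the 274 students who took both tests:
indiv_gain = 6.909
print(f"{100 * indiv_gain / total_pts:.2f}%")           # -> 28.79%, the table's 28.8%
```

This kind of spot-check is cheap insurance when tables are assembled by hand from instructor-scored data.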
Licensure Pass Rates, 2001-02 and 2002-03

                                       2001-02                   2002-03
Exam                            Taking  Passed  Rate      Taking  Passed  Rate
Communication Disorders             19      18  94.7%         26      24  92.3%
Foods and Nutrition                  7       6  85.7%          7       7  100.0%
Occupational Therapy                15      15  100.0%        11      11  100.0%
Physical Therapy                    34      34  100.0%        35      34  97.1%
Therapeutic Recreation              27      27  100.0%        20      16  80.0%
Engineering*                       105      91  86.7%        102      89  87.3%
Law (Utah Bar)                      96      91  94.8%        101      90  89.1%
Medicine MD                        105     104  99.0%        105     105  100.0%
Nurse RN                           120     108  90.0%        118     106  89.8%
Nurse Practitioner (nine specialties)
  Women's Health                     3       3  100.0%         3       3  100.0%
  Midwifery                          6       6  100.0%         3       3  100.0%
  Pediatrics                         6       6  100.0%         2       2  100.0%
  Family                            15      15  100.0%        15      15  100.0%
  Gerontology                        5       5  100.0%         3       3  100.0%
  Adult                              3       3  100.0%         2       2  100.0%
  Acute Care                         0       0                 1       1
  Total for report                  38      38  100.0%        29      29  100.0%
Medical Technology                  11      10  90.9%         22      19  86.4%
Physician Assistant                 32      28  87.5%         32      30  93.8%
Pharmacy                            34      34  100.0%        49      49  100.0%

* First-time test takers only; students must eventually pass the test in order to graduate. 2001-02 data are estimates based on 2002-03 actuals. The national average pass rate for engineering first-time test takers was 81.0% in 2002-03.

University of Utah Student Performance on Graduate School Entrance Exams

                                2001-02               2002-03
Exam                        UU Mean  National     UU Mean  National
MCAT
  Verbal Reasoning                9         9           9         9
  Physical Sciences              10         9          10         9
  Biological Sciences            10         9          10         9
  Writing*                        O         P           O         P
  Number of UU test scores      177                   173
LSAT
  Average Score                 153     151.9         155     152.3
  Number of UU test scores      231                   275
GRE - General Test
  Verbal                        492       476         495       470
  Quantitative                  602       615         572       582
  Analytical                    613       597          NA        NA
  Number of UU test scores      229                   334
Medical Lab Tech                489       481         498       484
  Number of UU test scores       11                    22

* An alpha scale is used; the farther down the scale the better; a score of O means that the UU is slightly below average.
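The rates in the licensure table are simply passed divided by taking. A quick sketch verifying a few of the 2002-03 rows:

```python
# Spot-check of the licensure pass-rate table: rate = passed / taking.
# (taking, passed) pairs taken from the 2002-03 columns above.
rows_2002_03 = {
    "Communication Disorders": (26, 24),
    "Physical Therapy": (35, 34),
    "Engineering": (102, 89),
}

for exam, (taking, passed) in rows_2002_03.items():
    print(f"{exam}: {100 * passed / taking:.1f}%")
# -> 92.3%, 97.1%, 87.3%, matching the table
```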
Teachers: University of Utah News Release, August 16, 2005

New Teachers Trained at the U Achieve Nearly Perfect Pass Rate on State-Mandated Test

August 16, 2005 - The 2005 graduates of the University of Utah's College of Education teacher education program performed extremely well as a class on the newly state-mandated Praxis II Principles of Learning and Teaching (PLT) exam. One hundred percent of University of Utah early childhood teachers, 98 percent of elementary teachers, and 98 percent of secondary teachers passed the test at the required state level the first time they took the examination.

"The results are most impressive," notes College of Education Dean David J. Sperry. "In addition to meeting the basic standards for performance as identified by the State Board of Education, 44 percent of elementary teachers and 33 percent of secondary teachers received 'Recognition of Excellence' status from the Educational Testing Service (ETS) for scoring in the top 15 percent in the nation on the test. This is clear evidence that teachers graduating from the University of Utah are among the finest in the nation."

ETS is the world's largest private test and measurement organization. New Utah teachers receive a Level 1 Professional Educator Teaching License and are usually allowed three years to meet Level 2 licensing requirements in order to remain teaching. Passing the Praxis II PLT exam is part of the requirements for Level 2 licensing. While the State Board of Education provides up to three years to pass the PLT, graduates of the University of Utah complete this requirement upon exiting their teacher licensure program. This was the first group of University of Utah students to take the test.

"Students coming out of the University and passing this examination provide added value to employers and the students they teach," Sperry adds.
"They not only bring high-level skills and capacity to the classroom, but the school districts that hire them will not have to worry about these new teachers meeting the testing requirements for Level 2 licensing."

Media Contacts:
David J. Sperry, U of U College of Education, 801-581-8221, david.sperry@ed.utah.edu
Mary Burbank, U of U College of Education, 801-581-6074, mary.burbank@ed.utah.edu
Ann Bardsley, U of U Public Relations, 801-587-9183, abardsley@ucomm.utah.edu

University of Utah Public Relations, 201 Presidents Circle Room 308, Salt Lake City, Utah 84112
© University of Utah - http://www.utah.edu/unews/releases/05/aug/teachers.html - 8/16/2005

2005 Student Praxis Data

The attached data summaries reflect the Praxis Pedagogy (PLT) and content-area test performance of students graduating from the Early Childhood, Elementary, Secondary, and Secondary MAT licensure programs in the Department of Teaching and Learning. The data are presented as general performance across all programs, followed by data summaries within individual content areas. Overall, students performed above the national average, with a high percentage of students receiving ETS certificates of excellence in both content area and pedagogy.
Secondary Education

Discipline        Passed Utah cut score    Passed PLT (pedagogy)   Received ETS certificate of
                  in content area          Utah cut score          excellence on PLT (pedagogy)
Art               67%                      100%                    0%
Health            80%                      100%                    40%
Humanities        57%                      100%                    35%
Math              86%                      100%                    14%
Science           73%                      100%                    36%
Social Science    76%                      100%                    35%

Elementary & Early Childhood Education

Discipline        Passed Utah cut score    Passed PLT (pedagogy)   Received ETS certificate of
                  in content area          Utah cut score          excellence on PLT (pedagogy)
Elementary        95%                      97%                     36%
Early Childhood   68%                      100%                    42%

ETS Recognition of Excellence Award
(http://www.ets.org/portal/site/ets/menuitem.c988ba0e5dd572bada20b.)

ETS has created the Recognition of Excellence program to honor test takers who achieve exceptional individual performance on select Praxis II tests. Candidates who earn the target scaled score on any of 11 Praxis II tests will receive a certificate from ETS, and the award will be noted on all Praxis score reports.

How to Qualify for the Award: Take any of the 11 Praxis tests listed below and earn the ROE scaled score, which is based on the top 15% of Praxis candidates who took each test between March 1998 and March 2003. Certificates are automatically issued for ROE scores earned on tests taken after August 2003.

Tests included in the ETS Recognition of Excellence program (with ROE score):

Elementary Education: Content Knowledge (Test Code 0014) - 181
English Language, Literature and Composition: Content Knowledge (Test Code 0041) - 192
Mathematics: Content Knowledge (Test Code 0061) - 165
Social Studies: Content Knowledge (Test Code 0081) - 184
Biology: Content Knowledge (Test Code 0235) - 179
Chemistry: Content Knowledge - 184
Physics: Content Knowledge - 177
General Science: Content Knowledge (Test Code 0435) - 185
Principles of Learning & Teaching: Grades K-6 (Test Code 0522) - 185
Principles of Learning & Teaching: Grades 5-9 (Test Code 0523) - 184
Principles of Learning & Teaching: Grades 7-12 (Test Code 0524) - 184

Note: Examinees who earned an ROE score on tests administered between September 1998 and August 2003 must download and complete the Recognition of Excellence Request Form (PDF).

The Recognition of Excellence Award is an incentive to encourage the development of highly qualified teachers and should not be used as a criterion for making decisions about state licensure, hiring, or promotions. For more information, read the ROE FAQs or contact us at 1-800-772-9476 or praxis@ets.org.

Copyright © 2005 by Educational Testing Service. All rights reserved. The ETS logo is a registered trademark of Educational Testing Service.

Executive Summary: Career Services Project Overview

In Spring 2006, Career Services conducted an internship survey on learning outcomes. The questions focused on student career decisions/university value, and skill opportunities and acquisition. A similar assessment was conducted in Spring 2003, but that survey was based on subject interest rather than learning outcomes. We surveyed students who participated in internships between Fall 2003 and Fall 2005. About 1,500 students were invited to participate in our online survey. The survey opened on November 8, 2005 and closed on February 4, 2006.
By the closing date, we had received 345 completed surveys, about a 23 percent response rate.

Data Summary

The collected data revealed that 63 percent of students worked 21 or more hours per week. The survey results also indicated that most internships were completed during the student's spring semester; of seniors who completed a single internship, 53 percent did so in their final spring semester.

The data indicated that internships could lead to higher retention rates among students: 73 percent of respondents said their internship strengthened their commitment to complete their degree and made their classroom studies more interesting. This suggests that students who participate in internship programs through the University make stronger connections with their academic studies and degree program, and thus are more likely to remain enrolled in their degree programs at the University.

When student interns worked 21 or more hours a week, their classroom performance was more likely to improve than that of students who worked 20 or fewer hours each week. Students who worked 21 or more hours a week, or who participated in two or more internships, were also more likely to have their internship lead to a full-time professional position.

[Chart: Percent of Internships Leading to a Professional Position (percentages of total)]