| Publication Type | Journal |
| Creator | Office of Undergraduate Research |
| Title | Undergraduate Research Journal 2016 |
| Date | 2016 |
| Description | Office of Undergraduate Research Journal for 2016 |
| Type | Text |
| Publisher | University of Utah |
| Subject | undergraduate; scholarship; research |
| Language | eng |
| Series | Undergraduate Research Journal |
| Rights Management | © Office of Undergraduate Research |
| ARK | ark:/87278/s6wkvh6j |
| Setname | ir_our |
| ID | 2649358 |
| OCR Text | Welcome to the 2016 issue of the University of Utah’s Undergraduate Research Journal. Congratulations to the students who have published their important work here—the Undergraduate Research Journal collects and celebrates the contributions our undergraduate students from all over campus make to scholarship in their fields. We are pleased and grateful that the Undergraduate Research Journal continues to be hosted by the Marriott Library’s e-publication system, which allows us to accommodate student publications of all shapes and sizes. This journal would not be possible without the dedication of Janet Opel, who, along with Office of Undergraduate Research team members Cindy Greaves and Stephanie Shiver, works to make the Undergraduate Research Journal the high-quality publication that you are reading today. I would, as always, like to acknowledge the dedication and support of the faculty mentors who are committed to providing high-quality research experiences for undergraduate students at the University of Utah. The work published here represents countless hours of mentoring on the part of my colleagues all over campus—thank you for all that you do. Happy reading, all! Sincerely, Rachel Hayes-Harb, Director, Office of Undergraduate Research; Associate Professor, Department of Linguistics

SENSORY EXPERIENCE IN SPACE: AN ANALYSIS OF PHENOMENOLOGY AND WINERIES
Drew Emeney (Ole W. Fischer), School of Architecture

Architects commonly design spaces for primary human needs based on design necessities, but dwelling is much more than just being sheltered; it is a subjective human experience. What you see, hear, touch, smell, or even taste can create a certain impression of a space. Therefore, the perceptions of those experiencing the space should be considered when designing that built space.
This is especially important with respect to wineries, not only because wine is a significant part of human culture and history, but also because the sensory experiences of smell and taste are essential components of wine appreciation. In this thesis, I aim to emphasize the importance of sensory experience in different architectural spaces and to show how it has influenced, and should influence, architectural design in general. To support my points, I will examine three case studies of wineries in Napa Valley, California, which emphasize and explore the principal aspects of sensory experience. I aim to explore this thesis further by proposing a design for a winery in Park City which showcases this experience of space through the senses.

BEYOND HOGANS: A SUSTAINABLE FRAMEWORK PLAN FOR DESIGNBUILDBLUFF
Erika Longino (Jose Galarza), Department of Architecture

DesignBuildBLUFF is a development program wherein graduate architecture students apply custom design and construction skills to make homes for Navajo families in need. Thirty-nine percent of the reservation’s population lives below the poverty line, and the region is ecologically vulnerable. I conducted a triangulated case study of DesignBuildBLUFF covering qualitative and quantitative aspects of the program’s function. This research helped create a whole-system framework plan for DesignBuildBLUFF’s sustainable growth. It was also an active exercise in case-studying a nonprofit organization and planning for sustained improvement of the Navajo Nation. The assessment helped inform the program of areas needing improvement and provided suggestions for best practices in education, environmental stewardship, and increasing capacity. The suggestions included a site map for the DesignBuildBLUFF campus with more educational and demonstrative living facilities. By creating this analysis and listening to the voices of all parties involved, the potential for social contribution by DesignBuildBLUFF could blossom.
Streamlining the project and establishing a campus master plan will enable the project to better serve the Navajo community, the University, the surrounding neighborhood, and the regional ecology.

Figure: A DesignBuildBLUFF construction project for the chapter house being installed with electric.
Figure: Overhead site map plan for DesignBuildBLUFF’s 2.2-acre campus with a proposal for water management, residential, and public spaces.

THE INFLUENCE OF OPERATIONAL CHANGES ON EMPLOYEE ENGAGEMENT AND CUSTOMER SATISFACTION: AN EXPLORATORY STUDY
McKell W. Denna (Don Wardell), Department of Operations and Information Systems

Due to intense financial pressure, companies are increasingly adopting operational changes to cut costs by increasing efficiency. While such changes have been proven to provide some competitive advantage for organizations that implement them effectively, they may also have a negative impact on an organization’s employees. Organizations are also increasingly recognizing the value of a highly engaged workforce. William Kahn first introduced the term “employee engagement” in an article published in the Academy of Management Journal in 1990. For decades, researchers at Gallup, Inc. have studied employee engagement and have found that it is highly correlated with an organization’s effectiveness and profitability. My research suggests that operational changes may have a negative impact on employee engagement levels where such changes have occurred. This study may also suggest that operational changes have a negative impact on customer satisfaction. Using data from Ameritech College, I found that employee engagement and customer satisfaction levels changed in the same direction from 2015 to 2016 and that these changes were statistically significant. Employee engagement increased from 82% to 86%, and the percentage of extremely satisfied customers increased from 60% to 72%.
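Changes in proportions like these are typically assessed with a two-proportion z-test. As a minimal sketch of that calculation (the abstract does not report survey sample sizes, so the counts below are hypothetical; a rise from 60% to 72% satisfied customers is modeled as 600 of 1,000 one year and 720 of 1,000 the next):

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: did the proportion change
    between period 1 (x1 successes of n1) and period 2 (x2 of n2)?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail
    return z, p_value

# Hypothetical counts: 1,000 surveys per year, 60% -> 72% satisfied.
z, p = two_proportion_z_test(600, 1000, 720, 1000)
```

With samples of this size, the shift from 60% to 72% yields a p-value well below conventional thresholds, consistent with the very small p-values the study reports.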
The p-values for the changes mentioned above were 0 and 5.21 × 10^-9, respectively, making it very unlikely that such changes happened by chance. Although further testing is required to validate this research, the findings from this study suggest that some operational changes may possibly have a negative impact on the financial performance of the employee-customer encounter as well.

PERUVIAN IMMIGRANT PARADOX: EXPLORING DIFFERENCES IN LEARNING CULTURES AND MATHEMATICS CURRICULUM
Giulia Soto (Susie Porter), Department of History and Gender Studies Program

Math instruction for immigrant students often begins with the assumption that this student population lacks the math knowledge and skills necessary for academic success in the United States. Immigrant students in the U.S. are often placed in lower-level math classes due to the misconception that this will help them acquire English language skills. This deficit view of Latino immigrant students disregards the immigrant paradox: that first-generation immigrants have better educational outcomes than individuals born in the United States, despite similarly disadvantaged circumstances. This study seeks to understand the immigrant paradox through Peruvian immigrant youths’ educational experiences by: 1) a comparison of Peruvian and United States math curricula; and 2) interviews with Peruvian immigrants who attended school in both Peru and Utah. My research indicates that a large number of Peruvian immigrants in Utah have a vast knowledge of math skills and language learned in their home country, Peru, which helps these students connect concepts and succeed in math courses in the U.S.
ATTITUDES TOWARDS FEMALE BREADWINNERS WITH SAME-SEX AND OPPOSITE-SEX PARTNERS AND HOUSEWORK RESPONSIBILITIES
Kit Camarillo (Wanda Pillow, Ming Wen), Department of Education, Culture & Society, Sociology

In this project, I examine the social expectations for female breadwinners in lesbian and opposite-sex households to gain a better understanding of how gender shapes the ways couples negotiate working and household roles. The primary research question is: How do expectations of housework contributions differ among male breadwinners with female partners, female breadwinners with male partners, and female breadwinners with female partners? Existing research is limited: studies of breadwinners’ housework performance have mostly focused on male breadwinners. There is little research on female breadwinners, and even less is known about female breadwinners who have female partners. Lesbian households make up a small portion of households but are theoretically interesting. Given the gendered nature of household labor, how do two women negotiate who does the household chores? It is important to research how female breadwinner roles and the division of household labor differ between same-sex and opposite-sex couples because doing so offers a better understanding of nontraditional roles in our society and may bring positive change. Existing gender roles are harmful for women in both non-traditional and traditional households. The division of labor in the household is one issue that stems from gender role expectations and contributes to inequality and female disadvantage. Men still contribute less time to household chores than their female partners regardless of their financial contribution (Deutsch, 2007). Some argue that the problem with the gender system is the unfair distribution of power that favors men. Because of gender inequalities, men have more status and leisure, and are paid more (Deutsch, 2007).
Ridgeway and Correll theorize the following ways to change oppressive gender systems: equal pay and comparable worth; affirmative action; open information about wages; bureaucratic accountability for work-related evaluations; and family-friendly workplace policies. These structural changes could bring about change at both work and home (Ridgeway and Correll, 2004). However, women who earn more in the household still tend to do more housework than their male partners. Benjamin and Sullivan (1999) found that women with high communication skills and high pay were able to negotiate a more equal division of household labor within their home, but that if they lacked either high material resources or communication skills, a change in the distribution of household labor was unlikely. Compared with heterosexual couples, lesbian couples share childcare tasks more equally and report higher satisfaction with their domestic arrangements (Goldberg, 2013). The division of household labor among lesbian couples thus seems to be more equal, though not completely so. Peplau and Fingerhut (2007) explained that social exchange theory predicts that the partner with more education, money, or social status has more power in the relationship, but that lesbian women want neither to dominate nor to be dominated in their relationships. Do lesbian couples divide housework more equally than opposite-sex couples? Do we expect lesbian couples to be better at equality in the household? A seven-question survey was developed to gauge social expectations of female breadwinners in same-sex and opposite-sex couples. The survey was completed by 300 women and men aged 18 and over, living in the United States, and currently married or living as married. Results were available immediately after each participant completed the survey.
Results from the surveys indicate that attitudes concerning the division of household labor are changing and that society seems to view lesbian relationships as inherently more equal than heterosexual relationships. This study points to the continued need for research on household labor and gender differences, specifically research that includes a range of households, including lesbian households. In addition, the study raises questions about whether shifts in attitudes toward gender and household labor actually affect what occurs in households.

THE RELATIONSHIP BETWEEN PRESCHOOLERS’ READING ENGAGEMENT AND EXECUTIVE FUNCTIONING
Aalia Fields (Seung-Hee Claire Son), Department of Educational Psychology

Engagement in literacy-related tasks strongly predicts the development of literacy skills and achievement for elementary-age children (Guthrie & Wigfield, 2004). Engagement in learning tasks is a multidimensional concept, encompassing behavioral components such as involvement in activities and following directions; cognitive components such as self-regulation of attention and commitment to the learning process; and emotional components such as affective reactions to teachers, peers, and activities (Fredericks, Blumenfeld, & Paris, 2004). Executive functioning (EF) skills—self-regulation of attention, working memory, and inhibitory control—may predict reading engagement, as they are strong predictors of classroom behaviors associated with engagement (McClelland et al., 2007). Our research questions explicitly examine dimensions of engagement: (1) how preschoolers’ reading engagement develops over time; and (2) whether reading engagement and EF scores are associated. Participants were low-income preschoolers (N=175, aged 3-5) from three Head Start sites in urban areas of the Mountain West region. Preschoolers’ skills were assessed during the fall and spring of the school year.
Teacher surveys evaluated engagement during storybook reading (Son, 2014), and a series of behavioral regulation tasks measured EF (Ponitz et al., 2008). Fall and spring measures of reading engagement indicated that children’s reading engagement improved on average over the school year (t = 11.407, p < .001). Though reading engagement is associated with EF skills, development of EF skills over a school year did not predict changes in engagement in book reading over time. However, increased reading engagement over time significantly predicted development of EF skills. Correlation analysis showed that fall EF scores and fall engagement scores (r = .442, p < .001), as well as spring EF and spring reading engagement scores (r = .475, p < .001), were associated. Ultimately, results indicate that engagement in book reading and EF skills share a bidirectional relationship.

References
Fredericks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59-109. doi:10.3102/00346543074001059
McClelland, M. M., Cameron, C. E., Wanless, S. B., & Murray, A. (2007). Executive function, self-regulation, and social-emotional competence: Links to school readiness. In O. N.
Son, S.-H. C., Kwon, K.-A., Jeon, H.-J., & Hong, S.-Y. (2014). Head Start classrooms and children's school readiness benefit from teachers' qualifications and ongoing training. Child & Youth Care Forum, 42(6), 525-553. doi:10.1007/s10566-013-9213-2
Wigfield, A., & Guthrie, J. (2004, August). Children’s motivation for reading: Domain specificity and instructional influences. The Journal of Educational Research, 97(6).

USE OF CRISPR GENOME-EDITING TECHNOLOGIES TO PROMOTE OSTEOGENIC DIFFERENTIATION IN HUMAN ADIPOSE-DERIVED MESENCHYMAL STEM CELLS
Bryton Davis (Robby Bowles, PhD), Department of Bioengineering

Spinal fusion surgery is used to treat an array of diseases and conditions.
Spinal fusion removes the intervertebral disc between two vertebrae and fuses the vertebrae together. This is done to reduce excess vertebral motion via bone formation that bridges the vertebral bodies. With recovery times of up to a year or more and a success rate as low as 70%, advancements to this treatment option are necessary. The injection of osteogenically differentiated human adipose-derived mesenchymal stem cells (hAD-MSCs) into the fused vertebrae may help increase the success rate by enhancing the procedure’s main goal of bone formation. hAD-MSCs are abundant, easily accessible, multipotent stem cells. However, due to their heterogeneity, naïve hAD-MSCs alone have not shown the potential for adequate osteogenesis. CRISPR (clustered regularly interspaced short palindromic repeats) genome-editing technology may be employed to enhance the osteogenic potential of hAD-MSCs and reduce their inherent heterogeneity. CRISPR technology harnesses bacterial adaptive immunity to produce sequence-specific targeting of genes in various cell lines. To promote osteogenic differentiation of hAD-MSCs, our goal is to replace a bone morphogenetic protein (BMP) antagonist with a BMP signaling agonist. BMP-2 is a BMP signaling molecule that has been shown to promote the osteogenic potential of hAD-MSCs. Noggin is a BMP inhibitor that blocks the binding sites of BMP receptors and other signaling factors in the transforming growth factor-β (TGF-β) superfamily, rendering them ineffective. This inhibits the potential of hAD-MSCs to undergo osteogenesis. We proposed that, through the use of CRISPR technology, we could replace the noggin gene in hAD-MSCs with BMP-2 under noggin promotion. To do this, we first replaced noggin with green fluorescent protein (GFP). GFP expression under noggin promotion was induced via BMP-2 dosing, which, at large concentrations, drives noggin expression.
After these changes, BMP-2 was shown to induce GFP expression under control of the noggin promoter, but these cells were difficult to isolate. We then added to the GFP edit a UbC promoter, which drives constant gene expression, and a nuclear localization signal (NLS). This allowed for consistent GFP expression in the nucleus. We then sorted these cells via fluorescence-activated cell sorting (FACS), which demonstrated successful edits in 7% of the cell population. To date, we have shown successful edits of the noggin gene and an ability to replace noggin with the gene of our choice. Ongoing work will replace GFP with BMP-2. Again, we will sort via FACS, but will select for non-fluorescence instead. Following these edits, we will assess the osteogenic potential of the edited cells by exposing them to a range of BMP-2 doses in culture and measuring the outcome by alkaline phosphatase/alizarin red staining and by qPCR of the Runx-2, osteopontin, and osteocalcin genes.

Figure 1: FACS data for replacing noggin with GFP. Percentages of GFP+ cells for the transfection of different guide combinations relative to the donor template-only transfection control.
Figure 2: GFP-expressing cells after flow cytometry. hAD-MSCs showing nuclear-localized GFP expression.
Figure 3: BMP-2 dosing on GFP expression in noggin-edited cells. The fold-change of GFP expression in noggin-edited cells with BMP-2 dosing versus without.
Figure 4: Fold-change in noggin expression in naïve cells vs. polyclonal and monoclonal-derived edited hAD-MSCs.

DECELLULARIZING TISSUE THROUGH GENE CONTROL
Connor Healy (Tara Deans), Department of Bioengineering

By removing immunogenic cells, tissue decellularization could turn off-the-shelf tissue scaffolds into a viable means of treating patients with severe tissue damage. Despite numerous advances in tissue decellularization technology, donor-derived tissue scaffolds are plagued with microstructural damage and immunogenic contamination.
This prevents decellularized tissues from being used clinically as tissue substitutes. To address this, we set out to develop a novel gene-based approach to tissue decellularization that uses a genetic switch (LTRi) to induce apoptosis in cells, ultimately leading to decellularization. The results show that LTRi is able to control gene expression in embryonic stem (ES) cells and is therefore a viable means of controlling gene expression in terminally differentiated cells like those found in tissues being considered for replacement therapy. Furthermore, these results demonstrate that gene control may be a feasible means of decellularization.

TOWARD INTERACTIVE VISUALIZATION OF CONNECTOME PATHS
Danielle “Dasha” Pruss (Miriah Meyer), Scientific Computing & Imaging Institute

Scientists have worked for generations to understand neuronal communications between cells in the brain and eye, but imaging technology is only now approaching the point where reverse engineering these connections is feasible. Gaining an intimate understanding of cellular connections is important because most neurologic diseases are caused by abnormal cell-to-cell communications, but reasoning about these pathways is very challenging. In typical samples of retinal tissue, there are hundreds of thousands of cells, millions of connections, and hundreds of paths connecting any two cells, making it difficult to identify dominant neuronal pathways. Ambiguity in the scientific labeling of cells’ communications compounds this problem and is difficult to overcome through traditional computational techniques. Visualization is essential to understanding this problem because the complexity and ambiguity of the data necessitate a human component, not only for finding important trends in the data, but also for identifying new research questions and correcting mislabeled data.
Developing a tool to effectively visualize and analyze the pathways between cells in the retina would greatly help researchers grappling with these data and minimize misconceptions about the data by allowing researchers to dynamically verify their hypotheses, reducing the time needed to understand neurologic diseases. This tool will allow researchers to find salient paths between types of cells, search for paths meeting certain criteria, and ultimately identify major neuronal pathways in the eye.

REARFOOT VARUS: A STUDY OF THE QUANTIFICATION OF KINEMATICS, KINETICS, AND STRENGTH DISPARITIES AMONG COLLEGE ATHLETES
Justin Allen, Eric Battista, Seth Garvin (Charlie A. Hicks-Little), Department of Exercise and Sports Science

Rearfoot varus is defined as a medial inversion of the rearfoot at the subtalar joint.1 Individuals with this condition will often show a lateral protrusion of the ankle. Many elite athletes perform at a high level with this condition, suggesting that it is not an extreme limiting factor in movement. However, because of the lack of research in the area, no concrete conclusion can be made regarding the effects of rearfoot varus. The purpose of this study is to analyze the kinematics, kinetics, and lower-limb strength of collegiate athletes with the condition. We hypothesized that athletes with rearfoot varus would have a lower range of motion, a higher degree of distal tibial varum, decreased horizontal ground reaction force, and greater strength in the peroneal muscles involved in eversion. This study is intended to add to the knowledge of rearfoot varus and serve as a resource for future research. The subjects were studied using motion capture, force-plate technology, and Biodex testing. For the motion capture analysis, subjects were fitted with 34 reflective markers and instructed to perform a full-speed lateral step onto a force plate.
The Biodex was used to conduct isokinetic testing at maximum effort for plantar flexion and dorsiflexion, as well as inversion and eversion of the ankle. Lower limbs with rearfoot varus showed a mean peak distal tibial varum 1.20° higher than the athletes’ limbs without the condition (p<0.05), and showed a total ankle range of motion 1.91° higher than the control limbs (p<0.05). The mean force-plate contact time was 0.2085 seconds longer for the varus limbs (p<0.05, see Figure 3). The mean horizontal and vertical ground reaction forces showed no significance; however, they did show a trend toward lower horizontal and vertical ground reaction force in the varus limbs. The eversion and inversion tests were the only statistically significant Biodex tests. In particular, for eversion at 30 degrees per second, peak torque, peak torque by body weight (see Figure 3), average peak torque, and average power were greater in the varus group (p<0.05). Additionally, peak torque for inversion at 60 degrees per second was statistically significant (p=0.026); however, when adjusted for body weight, the peak torque no longer showed significance (p=0.062). The plantar and dorsal flexion trials showed no significance. The data suggest that the athletes with rearfoot varus had stronger eversion, increased tibial varum, and apparently decreased horizontal ground reaction force, supporting our hypothesis and the current literature. Interestingly, the athletes with rearfoot varus showed an increase in range of motion throughout the lateral step. This contradicts our original hypothesis, but can be explained by noting that subjects with rearfoot varus must rotate the ankle through a larger range because they strike more laterally with the foot than subjects with neutral ankle anatomy.
This provides further insight into the increase in eversion strength, because the peroneal muscle group must work harder to evert the ankle and bring the foot to a mechanically advantageous position flat on the ground. In the future, a larger study population could be used to reduce error due to limited sample size. Furthermore, EMG testing could be implemented to further verify and characterize any increased peroneal muscle activity in rearfoot varus athletes.

Reference: 1. Rearfoot varus. (n.d.). Retrieved November 6, 2015, from http://medical-dictionary.thefreedictionary.com/rearfoot varus

COLLEGIATE ATHLETE WITH AN EXTRA VERTEBRA - SI JOINT DYSFUNCTION: A CASE STUDY
Hailey Augustine (Charlie Hicks-Little), Department of Exercise and Sports Science

Background: A 19-year-old female freshman collegiate volleyball player presents with lower back pain and lower back muscle spasms. The athlete had a history of hip flexor strains, first noticed during two-a-day volleyball camps earlier in the year. Two weeks after camps, at an away tournament, the athlete was participating in a moderately intense pre-game drill requiring quick, reflexive dives to the ground to keep the volleyball from touching the floor. As she dove for a ball, she made hard, direct contact with the ground and failed to stand back up. She had multiple muscle spasms and sharp, constant pain in the lumbar region of her spine. On the Visual Analog Scale she rated the pain 9/10, and the pain caused her to cry. The athlete was removed from the game, given medication to help with her pain, and referred to the team physician. Because the team was at an away tournament, the athlete was scheduled to see the team physician as soon as she returned.
Differential Diagnosis: Iliopsoas (hip flexion) pathology; hamstring or gluteus maximus (hip extension) pathology; rectus femoris pathology; osteochondral defects; labral tear; nerve root impingement; herniated disk; osteophyte along the spinal column; athletic pubalgia; pressure from an intervertebral disk on a lumbar nerve root; irritation of the dural sheath or of the meninges; sciatic nerve irritation/compression; facet joint pathology; SI joint dysfunction; pars interarticularis pathology; sacroiliac pathology; rotated ilium. Treatment: The team physician saw the athlete the day after she returned from travel. X-ray imaging revealed an extra vertebra, L6, which was partially sacralized on the left side, leading to sacroiliac (SI) joint dysfunction. The athlete was prescribed a long-term rehabilitation plan and returned to play two days after seeing the physician, working up to full-speed play within a week. Rehabilitation exercises included: hip bridges and hamstring curls, planks, supermans, ball squeezes with bridge, opposite arm/leg ball roll-ins, arm/leg donkey kicks, pelvic tilts, hip extension rotations with bands, dead bugs, and lawnmowers. These exercises were designed to target core strengthening. The athlete performs 5-8 of these exercises each day. As the rehab progresses, the repetitions and difficulty also progress. Treatments include electric stimulation before and after practice with heat or ice for 20 minutes, lidocaine patches, and hip alignment adjustments. Graston and cupping techniques were also added to her treatment plan during the second half of the season. The athlete plans to continue the season and her collegiate eligibility in the following years with this condition, but will continue her intensive rehabilitation and treatment plan. Uniqueness: Only 10% of adults have this congenital anomaly; the most common form is the presence of a 6th lumbar vertebra. Rarely does this 6th vertebra cause back pain.
It is less rare for the 6th lumbar vertebra to be sacralized, which does cause pain due to motion stress. Sacralization could lead to a herniated disc, a bulging disc, spinal stenosis, degenerative disc disease, or osteoarthritis. Conclusion: A 19-year-old female NCAA Division I volleyball defensive specialist was diagnosed with an extra vertebra that is partially sacralized. The athlete was cleared to return to play two days after diagnosis and has now completed three months of rehab. This case is important because it provides information regarding rehabilitation plans designed to target this unique condition and will help athletic trainers who may have collegiate athletes with similar conditions. Continued, focused rehabilitation and treatment are imperative for this athlete to continue to play at her elite level without pain.

COMMINUTED PROXIMAL PHALANGEAL FRACTURE IN A DIVISION I FOOTBALL PLAYER
Kelly Reese Farmer (Charlie Hicks-Little), Department of Athletic Training

Background: A 22-year-old healthy male Division I center sustained an injury to his right hand. After finishing a series, the athlete went to the head athletic trainer and team physician to have his finger evaluated. He stated that he did not recall exactly how he injured his hand. When his glove was removed, his fingers immediately swelled and filled tightly with fluid. The team physician and head athletic trainer evaluated the player by testing ligamentous strength and grasp strength and performing a percussion test; they did not suspect a fracture, and thought the injury was presenting more as a contusion. They taped his finger around the joints for stability and had the player ice his finger when he was not on the field playing. The player did not complain of finger pain but noticed a loss of range of motion in his fingers. After the game, the player was provided ice bags to keep on his hand and fingers to help control and decrease the swelling.
The day after the game, the athlete came to the athletic training facility to receive treatment and have his finger re-assessed. Differential Diagnosis: Fracture of the proximal phalange; dislocation of the proximal phalange at the metacarpophalangeal joint; acute compartment syndrome of the second and third fingers; contusion of the posterior side of the hand. Treatment: Upon re-assessment of his finger by the head athletic trainer and team physician, the player was taken to the hospital for x-rays. X-rays revealed a comminuted fracture at the proximal end of the second proximal phalange. The fractures were located at the proximal end of the bone and at the metacarpophalangeal joint. The athlete was placed in a finger immobilizer splint that kept his finger in 45 degrees of flexion in order to stabilize the joint. The athlete was held from any football activity for two weeks and had to wear his finger brace at all times. The athlete received treatment every day, consisting of a slush bucket of ice and water, diathermy, and Hivamat. Progressively the swelling in his fingers decreased, and the player slowly started regaining sensation and range of motion in his finger. After week two, the athlete was sent for more x-rays, which revealed that the bones were reuniting and healing properly. The athlete was cleared to play as long as he wore his finger brace, primarily used his left hand to snap the ball, and had his hand taped before practices and games. In order to return to his position of center, the athlete practiced snapping the ball left-handed with the athletic training staff.
Uniqueness: Comminuted finger fractures are not uncommon, but what makes this case unique is that the fractures are at the proximal end of the bone, at the joint capsule, and not along the long shaft of the bone. Due to the fracture location, the player was withheld from activity for two weeks, whereas normally a player would be able to continue activity with a finger splint.

Conclusion: A 22-year-old male football athlete sustained a comminuted fracture of the second proximal phalange. His finger was placed in a brace that kept it flexed at 45 degrees in order to protect the metacarpophalangeal joint. The athlete was held out of activity for two weeks and received constant treatment. After two weeks and follow-up x-rays, he was cleared to play with a brace protecting his hand. This case is important because it provides a basis for rehabilitation of comminuted finger fractures located at the joint. It also reiterates the importance of obtaining x-rays for suspected bone injuries.

STANDING POSTURAL SWAY AND BALANCE CONFIDENCE IN PERSONS WITH MULTIPLE SCLEROSIS AT FALL RISK AS COMPARED TO CONTROLS
Austin S. Gamblin (Hina Garg, Lee Dibble, Eduard Gappmaier) Department of Physical Therapy

Imbalance and falls are common symptoms in persons with Multiple Sclerosis (MS). Force platforms have frequently been used to assess postural stability, but detailed characteristics of standing postural sway are not well documented for persons with mild-to-moderate MS. In addition, the relationship between objective postural sway measures and subjective reports of balance confidence is unknown. PURPOSE. This study aimed to investigate differences in standing postural sway parameters and subjective balance confidence in persons with MS as compared to age-matched controls. The relationships between postural sway parameters and balance confidence were also examined. METHODS.
Nineteen ambulatory persons with MS at fall risk (mean ± SD: age = 53.4 ± 11.7 years, EDSS = 4.9 ± 1.0, disease duration = 16.0 ± 11.4 years) and 14 age-matched Healthy Controls (HC) (age = 54.6 ± 11.9 years) were recruited. Participants were asked to stand still for 25 seconds with their eyes open on an in-ground force platform for 10 trials, and center of pressure was recorded. Postural sway parameters included sway velocity, sway frequency, Medio-Lateral (ML) and Anterior-Posterior (AP) sway amplitude, and total sway path. Balance confidence was assessed with the self-reported Activities-specific Balance Confidence (ABC) Scale. Between-group differences were assessed with Mann-Whitney U tests, and associations were examined with Spearman rank correlations. RESULTS. Compared to HC, persons with MS demonstrated significantly (p<0.01) increased sway velocity, sway frequency, ML and AP sway amplitude, and total sway path, and decreased ABC scores. Moderate-to-strong negative correlations were observed between all postural sway parameters and ABC scores. CONCLUSIONS. Persons with MS demonstrated widespread impairments in standing postural sway and balance confidence, suggesting poor postural control during quiet standing as well as during activities of daily living. The objective postural sway parameters were also correlated with subjective balance confidence. These findings support the utility of both laboratory and self-report postural stability measures for balance assessment in individuals with mild-to-moderate MS at known fall risk. Future research should examine the effect of therapeutic interventions on postural stability measures in persons with MS.

INTEGRATIVE HEALTH THEORY
Jesse Myrick Peery (Glenn Richardson) Department of Health Education and Promotion

Much of the literature examining 'Integrative Medicine' resigns itself to a discussion of Complementary and Alternative Medicine (C.A.M.) or a multi-disciplinary approach.
It is the purpose of this paper to urge a deeper context for wellness in the form of Integrative Health (I.H.). The model's emphasis on health, not medicine, enriches our understanding of how thought, emotion, ritual behavior, and even spirituality play a significant role not only within the context of illness but also in general wellness. With the possible exception of a few nursing subsets within fields such as oncology, western medicine has viewed the practices of mindfulness, meditation, and prayer as harmless, but not inherent to recovery or general health.1 Sacred word and ritual were an acceptable and expected part of the healing process from the time of Aristotle and Hippocrates to the apex of the 19th century. Over time, with the advancement of drugs, surgery, and radiation, consecrative immersion (a communing of mind, body, and spirit) was no longer considered part of an active treatment.1 This trend continued until the 1950s, when a renewed interest in mind-body medicine emerged. From early pioneers such as Dr. Herbert Benson, whose work on relaxation and meditation showed positive responses in health,2 to Pert's psychoimmunologic discovery of "molecules of emotion,"3 to the modern-day work of physicians such as Harold Koenig,4 research continues to find promise in the neurophysiologic changes brought on not only by prayer but also by treatments such as music immersion and pet therapy, among many others. It is with this recent research momentum and the re-emergence of an understanding of the healing power inherent in a connection to nature and self that the Integrative Health model must become the central framework for theories and applications of health psychology. There are three driving principles within the model which act as a foundation for any approach to wellness, no matter the modality or health provider. The first is found in the patient's embracing of actions of virtue and personal skill. Virtue in this context has a wider meaning than in our present-day usage.
In ancient Greece, 'virtue' meant any form of excellence. This first precept unifies internal values of growth and gratitude while encompassing the basics of health such as diet and exercise. The second principle of Integrative Health, all too neglected by western medicine, is coherence, or life-purpose. Whether examining day-to-day informal social interactions or religious and sacred belief systems, the I.H. model explores a universality in the transmission and meaning of human experience. Finally, the third principle of this model is a focus on the influence of mind-body medicine from the viewpoints of indigenous belief, eastern qi and prana, western theism, neuroscience, and even quantum physics, all mechanisms of the singular life-force responsible for biological, emotional, and socio-spiritual well-being. There is an emergence of positive thought and behavior as an individual finds harmony through nature and the senses (body), contemplation and reflection on belief and purpose (mind), or virtuous acts of service and aptitude (spirit).

ADDRESSING THE EMOTIONAL HEALTH OF COLLEGE STUDENTS THROUGH A THERAPEUTIC COOKING CLASS
Chaniece Pollock and Julie Metos, Department of Nutrition and Psychology

This study hypothesized that a unique cooking class, focusing on cooking and mindful eating to reduce stress, had the potential to improve the emotional health of college students. Cooking meals has been shown to improve physical health, but there is limited research tying cooking skills to emotional wellness (Hartmann, Dohle, & Siegrist, 2013). This study evaluated the effectiveness of two lessons focusing on emotional health through mindful eating and cooking as a mechanism to cope with stress. The aim was to determine whether students' emotional states would improve after each lesson. Participants were recruited into an experimental group or a control group.
The Depression Anxiety Stress Scale (DASS) and the Oxford Happiness Questionnaire (OHQ) were distributed to the control and experimental groups during the first week of the semester of a nutrition course to evaluate confounding variables in the experimental group. The experimental group took the Positive Affect and Negative Affect Schedule (PANAS) before and after each cooking class to evaluate the effect of the lessons on participants' mood. Results were not significant for any of the measures when comparing differences between the control and experimental groups, though they suggest participants may experience a decrease in negative affect after each lesson. Further research with adequate sample sizes is needed to understand the potential role of cooking classes in addressing emotional health.

SYNCRETISM AND PHILOSEMITISM: A CASE STUDY OF "JEWISH MORMONS"
Joshua Lipman (Elizabeth Cashdan) Religious Studies Program

Over the last fifty years, a faction of the LDS Church has begun to identify as Jewish Mormons. A large proportion of these people have Jewish ancestry but do not understand their own or their ancestors' conversion to Mormonism as a replacement of their Judaism; rather, they hold that the two faiths can cohabitate. This paper (1) contextualizes the current phenomenon within historical Mormon-Jewish relations as discussed in the writings of theologians Richards and Epperson and historians Goldberg and Glanz, (2) describes current Jewish Mormon practices and rituals, and (3) explores how the theological position of Jewish Mormons can be understood in relation to both Philosemitism and syncretism. This primarily ethnographic research entailed participant observation in the Jewish Mormon movement, B'nai Shalom, and the BYU Passover Seder, in addition to semi-structured interviews with prominent members.
From this research, I discuss several facets of Jewish Mormon culture and religion: the composition of the Mormon Passover Seder, Jewish Mormon understandings of the Holocaust, sub-identities such as Ashkenazi and Sephardic, and the use of Jewish food and music. Furthermore, this paper discusses the theological factors that led to this faith syncretism and how this case differs from traditional syncretic processes. I posit that out of a deep fascination, admiration, and perceived connection to Judaism, Jewish Mormons and some traditional Mormons have built a relatable caricature of the average Jew, and that this perception of similarity significantly influences community relations.

LEARNING STYLES OF TEACHERS AND STUDENTS IN A SECOND LANGUAGE CLASSROOM
Jessica Loveland (Mary Ann Christison) Department of Linguistics

The general research questions for this study concern learning styles and whether differences in student and teacher learning styles negatively impact students' perceived grades in second and foreign language classrooms. Participants were asked to take a 30-minute online questionnaire on Qualtrics to determine their perceptual learning styles, group orientation, extraversion, and tolerance for ambiguity. They came from 11 different classes: two English language (ESL) classes and nine foreign language classes offered at the University of Utah. Participants were split into two groups, students and teachers. A t-test determined that there was no statistical difference between students' and teachers' learning styles in any subset measured. Thus, the negative perception that students have of their grades cannot be attributed to differences in learning styles alone.
For students who had participated in a study abroad experience or an LDS mission, the responses were overwhelmingly positive, suggesting that immersion in the foreign language may be important in developing a positive orientation toward language learning and may also prepare students to adapt to change.

AFRICAN AMERICANS, WOMEN, AND THE 1910 FLEXNER REPORT: PROGRESSIVE MEDICAL REFORM AND PROFESSIONAL EXCLUSION
Samantha T. Pannier (Nadja Durbach) Department of History

Between the Civil War and the turn of the twentieth century, the American medical profession expanded greatly, both in size and in the attention paid to scientific knowledge. During this time African Americans, women, and even African American women gained access to medical education through the proliferation of new medical schools. But this period of unprecedented access was, in the end, short-lived. The Flexner Report of 1910 was the culmination of years of effort on the part of the medical establishment to restrict entrance to the profession. Like much Progressive reform of the time, the Flexner Report held efficiency and standardization to be paramount, and in the process it left behind one of the best parts of professional expansion: the diversification of medical students and doctors in terms of sex and race. While most schools were technically coeducational by the time of its publication, within a few years of Flexner two of the three women's schools were closed and all but two of the seven African American schools were shuttered. The Flexner Report marked the beginning of a concerted effort to raise the standards of medical education in the United States and Canada, but it had far-reaching consequences for women and African American students and physicians, as well as implications for the care of their future patients.

HOW DO NUCLEAR SCIENTISTS AND ENGINEERS TALK INTERNALLY AMONG THEMSELVES ABOUT THE FUKUSHIMA ENERGY CRISIS?
Haoran Yu (Danielle Endres) Department of Communication

This project examines how scientists and engineers researching low-carbon energy technologies talk among themselves about the social, political, and cultural implications of their research. It is part of Professor Endres's National Science Foundation (NSF) Collaborative Research Project, The Influence of Low-Carbon Energy Technology Scientists and Engineers on the Composition of Energy Policy. That project examines expert-to-expert discussions among scientists and engineers about low-carbon energy technologies, particularly within two distinct but related energy technology sectors: wind and nuclear. Within this research project and under the direction of Dr. Endres, I am working on a subproject that examines how nuclear scientists and engineers talk about the implications of the Fukushima disaster for their industry. My research question is: How do nuclear scientists and engineers talk internally among themselves about the Fukushima crisis? I am particularly interested in examining the effect the Fukushima crisis has had on the way energy scientists and engineers talk about the future of nuclear technologies in the context of climate change and the need for new energy policy. This research is significant for two reasons: first, as climate change has become an important topic, it is important to see how scientists talk about nuclear energy as a sociopolitical issue in addition to its technical viability; second, there is a gap in rhetoric-of-science research about how scientists talk among themselves about the sociopolitical aspects of their research, and this project fills it. In this paper, I analyze a subset of the data collected by the research team. The methods are rhetorical and qualitative. Qualitative research was used to collect the data, which is based on participant observation and interviews with key scientists and engineers at the American Nuclear Society conference.
Other members of the research team collected this data, which has been entered into NVivo qualitative analysis software. Rhetorical methods, which analyze strategies of persuasive discourse such as narration, description, exposition, and argumentation, will be used to analyze the internal expert-to-expert rhetoric of nuclear energy scientists and engineers. Using a coding scheme called Socio-Political Elements of Energy Development (SPEED), developed by one of the project co-PIs (Dr. Tarla Rai Peterson), I will examine which sociopolitical aspects are important to scientists and engineers when they talk among themselves about the Fukushima crisis. Our potential findings are twofold: first, a description of the ways scientists are talking about Fukushima is valuable not only because it has not been researched before but also because it will add to scholarship in the rhetoric of science about how scientists and engineers combine technical and sociopolitical forms of reasoning; second, there is potential to contribute to our understanding of the role that scientists and engineers play in the development of energy policy. This research is part of a larger collaborative research project that involves the PI (Professor Endres), a co-PI (Professor Peterson at UTEP), a post-doc (at UTEP), two graduate students (at Utah), and myself. This project represents an analysis of one part of the larger data set, through which I am able to contribute to the larger project. The results of this analysis, once completed, will be incorporated into the larger research project and, hopefully, into a collaborative presentation or publication. I began as a research assistant on Dr. Endres's project in Spring 2015 and presented this project three times during the Spring 2016 semester.
I gave a poster session at the Utah Conference on Undergraduate Research in February, an oral presentation at the National Conference on Undergraduate Research, and another poster session at the Utah Research Symposium. Through this project, I learned more about humanities research and learned how to use several different software packages. Additionally, I learned a great deal about how to code research, establish intercoder reliability, analyze the coded results, and write up the results in both poster and presentation form. The presentations also improved my public speaking skills and gave me valuable feedback from many different perspectives.

THE ROLE THAT LYSOSOMES AND AUTOPHAGY PLAY IN ALVEOLAR SOFT PARTS SARCOMA, CLEAR CELL SARCOMA AND SYNOVIAL SARCOMA
Sarmishta Diraviam Kannan, Jared Barrott (Kevin B. Jones) Department of Orthopaedics, Center for Children's Cancer Research at the Huntsman Cancer Institute

Sarcoma, a cancer of the connective tissue, is a very deadly type of cancer. Two of the sarcoma types, alveolar soft parts sarcoma (ASPS) and clear cell sarcoma (CCS), have a unique morphology in which there are clear spaces around the nucleus. This morphology is absent in synovial sarcoma (SS). We believe that the unusual morphology in ASPS and CCS is caused by the presence of abundant lysosomes. Lysosomes are the cell's digestive system and also play a crucial role in the programmed cell death process autophagy. Autophagy can also help tumor cells survive under stressful conditions. We hypothesize that ASPS and CCS upregulate autophagy-related genes and use autophagy as a survival mechanism. Analysis of RNA sequencing data for the three sarcomas showed that the LAMP1, LAMP2A, Beclin, and Cathepsin D genes are expressed. Western blot data showed that the protein levels for all four genes were highest in ASPS and lowest in SS; CCS showed moderate levels of the proteins.
Immunohistochemistry showed that the proteins translated from LAMP1, LAMP2, Beclin, and Cathepsin D are localized outside the nucleus and in concentrated pockets. This supports the theory that ASPS and CCS have abundant autophagy-related lysosomes. There was significantly lower staining in SS, which supports the theory that SS cells do not contain abundant lysosomes in comparison to CCS and ASPS. The next step is to determine whether autophagy is critical to the survival of these cancer cells and, if it is, whether it might provide a therapeutic target for these cancers.

The Effect of Induced Galactose-1-Phosphate Uridylyltransferase (GALT) Deficiency on the Growth of Liver Cancer Cells
Eno-abasi Etokidem (Dr. Kent Lai) Department of Pediatrics

Deficiency of Galactose-1-phosphate Uridylyltransferase (GALT), an essential component of the Leloir pathway, results in a disease called classic galactosemia. Previously, Dr. Kent Lai's lab showed that knocking down GALT gene expression in human liver cancer cells using siRNA could significantly constrict their proliferation. The objective of this study is to see whether reducing GALT activity using shRNA will lead to a reduction in the number of liver cancer cells. Several methods were employed to create this shRNA, including recombinant DNA technology, which produced a plasmid encoding, among other things, our shRNA of interest, ampicillin and puromycin resistance, and a tetracycline-on promoter. Genotyping was used to confirm that our gene of interest was successfully incorporated into the plasmid. Afterward, the shRNA construct was transfected into cancer cells using a lentiviral vector. The transcription rate of the cancer cell DNA coding for the shRNA was controlled using 0 µl, 0.01 µl, 0.1 µl, and 1 µl of tetracycline for four separate groups of cancer cells. The amount of mRNA present in each group of cancer cells was quantified via qPCR, specifically using TaqMan™ Real-Time PCR Assays.
Genotyping showed that our designed sequence was successfully incorporated into the plasmid. Moreover, we observed that as the amount of tetracycline increased, the quantity of mRNA detected by qPCR decreased. Moving forward, we will quantify the change in the number of cancer cells present using cell counting. Further in the future, we will observe the effect of reduced GALT activity in vivo using nude mice.

AXL INHIBITORS FOR THE TREATMENT OF PANCREATIC DUCTAL ADENOCARCINOMA
Camila Esposito (Jill Shea) Department of Surgery

Background: In the United States more than 46,000 people are diagnosed with pancreatic cancer each year. Although it is relatively rare, pancreatic cancer is the 4th leading cause of cancer death in men and women. Gemcitabine, the most common treatment for pancreatic cancer, has a partial response rate of less than 10%, and gemcitabine resistance is common in pancreatic cancer patients. Therefore, there is a clinical need to improve outcomes for patients diagnosed with pancreas cancer. Many cancers, including pancreas cancer, overexpress Axl, a receptor tyrosine kinase, which may provide a survival advantage for the cancer cells. Aim: The aim of the study was to determine the efficacy of an Axl inhibitor in treating patient-derived pancreatic cancer xenograft tumors in a mouse model. Efficacy was evaluated by determining whether the inhibitor could curb primary and metastatic tumor progression. Methods: A pancreatic adenocarcinoma tumor was obtained from a patient undergoing a resection at the Huntsman Cancer Institute and was propagated as subcutaneous tumors in female SHO immunocompromised mice (IRB and IACUC approved). The expanded tumor was then implanted orthotopically into the pancreas of 40 SHO female mice. Two weeks later, after the tumors had established, the mice were randomly assigned to the following treatment groups: 1) control; 2) gemcitabine + abraxane; 3) Axl inhibitor, low dose; and 4) Axl inhibitor, high dose.
The mice were treated for 4 weeks and then sacrificed. At harvest the primary tumor was weighed and areas of metastasis were identified. Results: The gemcitabine + abraxane group (0.03±0.02g) had statistically smaller tumors than the control group (2.0±1.6g), the Axl low-dose group (1.1±0.75g), and the Axl high-dose group (1.2±1.0g) (p=0.008). The gemcitabine + abraxane group (0%) also had a lower incidence of metastasis than the control (70%), Axl low-dose (30%), and Axl high-dose (40%) groups (p=0.04). Discussion: The greatest efficacy was observed in the tumors treated with gemcitabine + abraxane. However, there was an inhibitory effect of treatment with the Axl inhibitor on both primary tumor growth and metastasis. Further studies are needed to determine whether the efficacy of treating with an Axl inhibitor could be improved with a combination treatment approach, such as an Axl inhibitor along with gemcitabine or abraxane.

The Effect of Induced Galactose-1-Phosphate Uridylyltransferase (GALT) Deficiency on the Growth of Liver Cancer Cells
Eno-abasi Etokidem, Manshu Tang (Dr. Kent Lai) Department of Pediatrics

Deficiency of Galactose-1-phosphate Uridylyltransferase (GALT), an essential component in the metabolic breakdown of galactose, results in a disease called classic galactosemia. Previously, Dr. Kent Lai's lab showed that knocking down GALT gene expression in human liver cancer cells using siRNA could significantly constrict their proliferation. The objective of this study is to see whether reducing GALT activity using shRNA will also lead to a reduction in the number of liver cancer cells by reducing the amount of GALT-producing mRNA that is transcribed. Several methods were employed, including recombinant DNA technology, used to create a plasmid encoding, among other things, our shRNA, ampicillin and puromycin resistance, and a tetracycline-on promoter. Additionally, genotyping was used to confirm that our gene of interest was successfully incorporated into the plasmid.
Afterward, the shRNA construct was transfected into cancer cells using a lentiviral vector. The transcription rate of the shRNA-encoding DNA was controlled using a tetracycline-on promoter. We added 0 µl, 0.01 µl, 0.1 µl, and 1 µl of tetracycline to four separate groups of cancer cells to vary the concentration of shRNA produced and thus the amount of mRNA produced. The mRNA concentration in each group of cancer cells was quantified via qPCR, specifically using TaqMan™ Real-Time PCR Assays. Genotyping showed that our designed sequence was successfully incorporated into the plasmid. Moreover, we observed that as the amount of tetracycline increased, the quantity of mRNA detected by qPCR decreased. This shows that the shRNA construct effectively reduced the transcription rate of the GALT gene. Moving forward, we will quantify the change in the number of cancer cells present using cell counting. Further in the future, we will observe the effect of reduced GALT activity in vivo using nude mice.

METHYLATION OF PEG10 ALLELES IN EWING SARCOMA
Kiera L. Jorgensen (Jamie Gardiner, Rosann Robinson, Joshua Schiffman) Department of Oncological Sciences

Ewing Sarcoma (ES) is the second most common bone cancer (1) and is described as a small, round, blue cell tumor. ES is usually characterized by a translocation and thus creation of the EWS-FLI1 fusion protein (2). However, other chromosomal aberrations have been implicated in ES, such as trisomy 8 (3). Our lab has shown that a subset of tumors with trisomy 8 have an upregulation of PEG10. PEG10 is an imprinted, paternally expressed gene: the paternal allele is expressed, while the maternal allele's promoter is methylated, leaving the maternal allele silent and unexpressed. PEG10 is also expressed in and essential for placental development. The mechanism by which PEG10 is upregulated in the trisomy 8 subset is unknown.
We hypothesized two possibilities resulting in upregulation of PEG10: 1) the maternal allele promoter remains methylated, and the paternal allele is upregulated alone; or 2) the maternal allele promoter becomes un-methylated, and thus both paternal and maternal alleles are expressed, producing the upregulation pattern. In order to quantify methylation of the PEG10 alleles, bisulfite sequencing will be used. Bisulfite sequencing is a common technique used to evaluate methylation of CpG islands (4). In theory, sequencing reads should show us which of the two hypotheses is representative of the PEG10 upregulation pattern. A 1:1 ratio of un-methylated to methylated reads of the promoter regions would suggest that hypothesis 1 is the mechanism involved, whereas entirely un-methylated reads would point towards the mechanism in hypothesis 2. Bisulfite sequencing works by converting all un-methylated Cs to Ts while leaving the methylated Cs unchanged; when the obtained sequences are compared to the original sequence, areas of methylation can be determined. We chose 3 islands in the promoter region of PEG10 to bisulfite-convert, amplify via PCR, and Sanger sequence (Fig. 1)(5).

Figure 1: Promoter region of PEG10 with island locations amplified via PCR.

Sequencing read ratios were analyzed for cell lines with and without expression of PEG10 (A673, + expression; CHLA9, – expression; TC252, + expression; Utes1, + expression; HepG2, + expression, positive control; placenta, PEG10 expression undetermined). Island 3 seems to be completely methylated across both alleles and therefore may not be involved in imprinting patterns (Fig. 2A). Island 2 appears to follow the canonical imprinting pattern (Fig. 2B). Island 1.2, however, does not seem to have a clear pattern (Fig. 2C), and this may be the site of control of differing PEG10 expression in our cell lines.

Figure 2: Methylation of PEG10 promoter CpG islands in various cell lines. A) Island 3 seems methylated in all alleles.
B) Island 2 looks representative of canonical imprinting. C) Island 1.2 varies in methylation and may be the site of upregulation.

We were curious whether the levels of PEG10 mRNA matched the methylation patterns predicted by Island 1.2. Using qRT-PCR, PEG10 mRNA levels were quantified (Figure 3). HepG2 is a hepatocellular carcinoma line that overexpresses PEG10 (6), and we used it as our positive control.

Figure 3: PEG10 mRNA expression levels determined via qRT-PCR, normalized to HepG2.

PEG10 mRNA expression and protein expression levels in our cell lines are consistent with each other, yet they contradict what we would expect based on our methylation data. This could be explained by PEG10 being upregulated by some other mechanism. Alternatively, the methylation patterns observed could be methylation lost between cell divisions and may not necessarily reflect the separate alleles. We must determine which of these scenarios is seen in the methylation results. If the methylation results do reflect two alleles, then another mechanism of upregulation must be at play and needs to be determined. Other research suggests that there are other transcription factors that upregulate PEG10 in hepatocellular carcinoma (7), and these transcription factors can be upregulated in Ewing Sarcoma (8).

REFERENCES
1. Riggi N, Stamenkovic I. The biology of ewing sarcoma. Cancer Letters. 2007;254(1):1-10.
2. May WA, Gishizky ML, Lessnick SL, Lunsford LB, Lewis BC, Delattre O, Zucman J, Thomas G, Denny CT. Ewing sarcoma 11;22 translocation produces a chimeric transcription factor that requires the DNA-binding domain encoded by FLI1 for transformation. Proc. Natl. Acad. Sci. USA. 1993;90:5752-56.
3. Maurici D, Perez-Atayde A, Grier HE, Baldini N, Serra M, Fletcher JA. Frequency and implications of chromosome 8 and 12 gains in ewing sarcoma. Cancer Genetics and Cytogenetics. 1998;100(2):106-10.
4. Li Y, Tollefsbol TO. DNA methylation detection: bisulfite genomic sequence analysis. Methods in Molecular Biology.
2011;791:11-21.
5. Li LC, Dahiya R. MethPrimer: designing primers for methylation PCRs. Bioinformatics. 2002 Nov;18(11):1427-31.
6. Tsou A, Yu-Chi C, Jin-Yuan S, Chu-Wen Y, Yu-Lun L, Wei-Kuang L, Jen-Hwey C, Chen-Kung C. Overexpression of a novel imprinted gene, PEG10, in human hepatocellular carcinoma and in regenerating mouse livers. Journal of Biomedical Sciences. 2003 Nov/Dec;10(6):625-635.
7. Wang et al. PEG10 directly regulated by E2Fs might have a role in the development of hepatocellular carcinoma. FEBS Letters. 2008 Jul;582:2793-98.
8. Schwentner et al. EWS-FLI1 employs an E2F switch to drive target gene expression. Nucleic Acids Research. 2015 Feb;43(5):2780-89.

APPLYING THE THEORY OF PLANNED BEHAVIOR TO AGGRESSIVE TREATMENTS AT THE END-OF-LIFE
Sara Mann (Stephen C. Adler) Department of Family and Preventive Medicine

Physicians and other health care experts are continuously studying the efficacy of treatments and their long-term effects. For instance, research has shown that aggressive end-of-life care may be more harmful to the patient than previously known. Naturally, this has led to a wave of criticism of aggressive treatments for terminally ill or dying patients. I investigated the dynamics involved in end-of-life care and why patients and their physicians seek aggressive treatment near death. Using the Theory of Planned Behavior, I deconstruct the process by which aggressive end-of-life care is pursued and show how this health behavioral model can also guide better approaches to end-of-life care in our society. The Theory of Planned Behavior breaks down the factors that influence a given behavior into three facets: attitude, social norms, and perceived behavioral control.
By integrating several studies that analyze which factors influence quality of life at the end of life and several that examine the effectiveness of end-of-life chemotherapy, I offer evidence of the need for patients and physicians to focus more on comfortable and empathetic end-of-life care rather than aggressive treatments. I also examine the use of end-of-life discussions between patients and physicians as a tool to improve end-of-life care. Ultimately, I argue that the dying process needs to be re-evaluated and that more efforts should be made to improve end-of-life care to give patients more control over their own deaths. CHARACTERIZING THE ROLE OF THE OSMO-SENSITIVE CATION CHANNEL TRPV4 IN RETINAL MICROGLIA Edin Mustafic, Sarah Redmon, Andrew Jo, Monica Lakk, David Krizaj Department of Ophthalmology and Visual Sciences Severe disturbances in homeostatic ionic gradients contribute to pathological reactive gliosis. It is unclear, however, how osmotic changes are transduced within microglia to initiate either protective or degenerative mechanisms. We hypothesize that retinal microglia express an osmo-sensitive protein, TRPV4, capable of detecting osmotic changes. To test this, retinal microglial cells were exposed to hypotonic stimuli (HTS), antagonists specifically targeting TRPV4 (HC-067), or endogenous metabolites known to activate TRPV4 (pBPB). Calcium dynamics were monitored with the calcium indicator dye Fura-2 AM. We found that retinal microglial cells responded to the TRPV4 agonist GSK101 and to hypotonic saline with significant [Ca2+]i elevations. HTS evoked a significant increase of 2030.42% (p < 0.0001). The amplitudes of hypotonic-evoked [Ca2+]i signals were significantly reduced by 67.22 ± 3.97% by the selective TRPV4 antagonist HC-067 (p = 0.05). To examine the phospholipase A2 pathway in microglial cells we used a selective inhibitor, pBPB, and found that hypotonic-evoked [Ca2+]i signals were reduced by 54.26 ± 9.64% (p = 0.05). 
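The abstract reports [Ca2+]i changes as percentages derived from Fura-2 AM ratio imaging but does not describe its conversion procedure. Ratiometric Fura-2 data are conventionally converted to calcium concentrations with the Grynkiewicz equation; the sketch below is an illustrative Python version with hypothetical calibration constants (Kd, Rmin, Rmax, Sf2/Sb2), not the authors' analysis pipeline.

```python
# Grynkiewicz et al. (1985) calibration for ratiometric Fura-2:
#   [Ca2+] = Kd * (Sf2/Sb2) * (R - Rmin) / (Rmax - R)
# where R is the background-corrected 340/380 nm fluorescence ratio.
def fura2_ca(R, Kd=224.0, Rmin=0.2, Rmax=8.0, sf2_sb2=5.0):
    """Estimate [Ca2+]i (nM) from a Fura-2 340/380 ratio R.

    Kd, Rmin, Rmax, and sf2_sb2 are placeholder calibration
    constants; real values come from in-situ calibration.
    """
    return Kd * sf2_sb2 * (R - Rmin) / (Rmax - R)

def percent_change(baseline_R, peak_R):
    """Percent change in estimated [Ca2+]i between two ratios,
    e.g. baseline vs. peak of a hypotonic-evoked response."""
    base = fura2_ca(baseline_R)
    peak = fura2_ca(peak_R)
    return 100.0 * (peak - base) / base
```

A hypotonic stimulus that raises the 340/380 ratio from a resting value to a peak value would register as a large relative [Ca2+]i increase under this conversion; antagonist pre-treatment would shrink that percentage.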
Furthermore, we found that microglial cells were able to reduce their volume after swelling in the presence of HTS. The extent of cell swelling was reduced by pre-treatment with the TRPV4 antagonist HC-067 and by the phospholipase A2 inhibitor pBPB. We show that TRPV4 is present in retinal microglial cells and mediates hypotonic-evoked [Ca2+]i signaling. By inhibiting TRPV4 channels in dissociated microglia, we limited the extent of swelling under anisosmotic conditions. Our findings further our understanding of retinal microglial reactivity to mechanotransduction and osmoregulation, and provide a mechanistic framework for developing new therapeutic strategies involving mechanical stress and cellular morphology. QUALITY OF LIFE IN TRANSGENDER FEMALE-TO-MALE PATIENTS UNDERGOING CHEST WALL MASCULINIZATION Kimberly Vargas, Andy Rivera (Cori Agarwal) Division of Plastic Surgery Background: Approximately 700,000 transgender-identified individuals live in the United States, representing roughly 0.3% of all adults. Transgender is an umbrella term for individuals whose gender identity or expression differs from that which is typically associated with their biological sex. Our research focuses on female-to-male (FTM) transgender individuals; an FTM person is born female but identifies with the male gender. Purpose: The purpose of this study is to assess preoperative and postoperative quality of life (QOL) in transgender FTM patients undergoing chest masculinization. FTM chest masculinization is a surgical procedure to reduce breast size and reconstruct the chest. Methods: QOL is measured in our study through the use of the Body Uneasiness Test A (BUT-A) and BREAST-Q surveys; both are validated survey tools. The BUT-A measures overall body dysphoria, breaking down into the following sub-categories: weight phobia, body image concerns, avoidance, and compulsive self-monitoring. 
The BREAST-Q is used to measure the impact and efficacy of a breast surgery from the patient's perspective. Our study surveyed FTM-identified patients, pre- and postoperatively, of a single surgeon at the University of Utah Hospital. Patients were emailed an anonymous online survey up to 2 weeks prior to their date of surgery (DOS) and were issued an identical follow-up survey 6 months postoperatively. Survey data is tracked using REDCap, a computer-based survey platform. Results: Possible scores for the BUT-A survey range from 0-5, with higher scores indicating greater body uneasiness. Overall there was a decrease in the Global Severity Index (GSI) (average difference of 2 points), body image concerns (average difference of 2.31), avoidance (average difference of 1.77), compulsive self-monitoring (average difference of 0.42), and depersonalization (average difference of 1.68). Possible scores for the BREAST-Q survey range from 0-100, with higher scores indicating greater satisfaction. Overall there was an increase in satisfaction with breasts (average difference of 55 points), psychosocial well-being (average difference of 47 points), physical well-being (average difference of 11.64), and sexual well-being (average difference of 41.11). Conclusion: Preliminary results indicate that top surgery can be a useful option to increase the QOL of FTM individuals. Use of the BUT-A and BREAST-Q surveys helps identify satisfaction, efficacy, and quality of life for FTM patients before and after chest wall masculinization surgery. This is still an ongoing study; therefore, the results are not conclusive. EVALUATION OF AN INEXPENSIVE OZONE SENSOR FOR MOBILE MEASUREMENTS Luke Leclair-Marzolf (John Horel) Department of Atmospheric Sciences With only two ozone monitoring sites and a population of over a million people spread over 280 square kilometers, it is difficult to determine an individual's exposure to air pollutants in the Salt Lake Valley. 
Ozone is one such pollutant; it occurs during the summer season and can have detrimental health effects, including respiratory and cardiovascular issues. With the EPA standard revised to 70 ppb, it is even more crucial to understand individual exposure. Inexpensive microsensors may be the key to solving this issue: affordable, easy-to-operate instruments would allow more of the public to see what their individual exposures are. However, these microsensors have proven to have lag times and calibration issues. They will never replace the expensive, EPA-regulated instruments, but they can provide a general picture of an individual's exposure. Another benefit of microsensors is their potential use as mobile units. Since pollution is distributed unevenly throughout the Salt Lake Valley, being able to record mobile measurements is very beneficial. Figure 1 shows a test of the microsensor on a bike ride circumnavigating the University of Utah's campus. During this transect it is evident that there is a wide distribution of ozone on the spatial scale of just the University. FROM HEADWATERS TO THE CITY: CHANGES IN SOIL NITROGEN CYCLING AND STREAM-SOIL CONNECTIONS ALONG A RIPARIAN GRADIENT, RED BUTTE CREEK, UT Christina R. Woltz (Gabriel Bowen) Department of Geology and Geophysics Urbanization increases reactive nitrogen (N) deposition to ecosystems, yet the effects of these inputs along wildland-to-urban riparian gradients are poorly understood. Relationships between soil nutrients and stream hydrodynamic transport in semiarid systems may play a role in the distribution of urban N deposition. The focus of this study was to examine concentrations and isotope signatures of soil N along a montane-to-urban stream gradient, and to identify links between riparian soil nutrients and stream hydrodynamics. 
We hypothesized that N concentrations and δ15N values would gradually increase from the montane to the urban zone due to anthropogenic inputs, and that stream hydrodynamics (gaining vs. losing reaches) would exert second-order control on riparian soil N dynamics. Nine sites, varying in land cover (urban or montane) and hydrology (gaining or losing water), were sampled along Red Butte Creek, which is sourced in a protected area of the Wasatch Mountains and flows into Salt Lake City, UT. Contrary to our hypothesis, montane sites showed a decreasing trend of soil inorganic nitrogen with proximity to the urban zone. However, urban sites did have higher inorganic N, which increased as the creek flowed into the city. Additionally, the δ15N of soil nitrate increased with proximity to the urban zone, suggesting the importance of new, heavier N sources as the stream enters the urban zone. Sites in gaining stream sections had higher % soil C and N, indicating more organic production and/or retention, as well as higher soil moisture. However, local hydrology did not affect inorganic N concentrations or N isotope signatures. Together, these data suggest that anthropogenic N sources increase in importance in riparian systems near the urban zone, but other factors (such as climate) may still regulate the more distant wildland areas. Moreover, local hydrodynamics appear to have a strong effect on the organic versus inorganic components of the riparian soil nutrient system. ANTIBODY SPLIT-ENZYME CONJUGATES: COUPLING HIGH DRUG DELIVERY WITH LOW OFF-TARGET TOXICITY Elham Hatami (Shawn C. Owen) Department of Pharmaceutical Chemistry The objective of this project is to propose a methodology to answer the following question: how can a potent cytotoxic drug be delivered specifically to tumor cells? 
To solve the problem of delivering high concentrations of a potent drug only at the tumor site and not at off-target sites, we propose to use inactive enzyme fragments that become active when they are located next to each other at the tumor site (Figure 1). Each enzyme fragment is fused to one of two separate Fabs (antibody fragments). After binding to the target, the enzyme halves refold to regenerate enzymatic activity. The enzyme can then catalyze inactive prodrugs into active drugs specifically at the tumor site. This method could target many solid tumors; in this study we chose HER2+ breast cancer. Our objective is to demonstrate the activation of β-lactamase on HER2+ breast cancer cells for prodrug treatment. Figure 1: Illustration of the proposed mechanism. 1 & 2) Fabs fused to inactive enzyme fragments (EF1/2). 3) Activation of the enzyme by complementation when both Fabs bind the target receptor (R). THE EFFECTS OF CLEAN ELECTION LAWS IN MAINE AND ARIZONA Morgan Cassidy (Matthew Burbank) Department of Political Science The clean election laws of Maine and Arizona were instituted to counteract the amount of time a candidate would spend on fundraising during his or her campaign. The idea was that if the state provided the funds, then the candidate would be less dependent on lobbyists and able to devote more time to other portions of his or her campaign (Campion, 1996). It was also hoped that candidate dependence on corporations, and thus corruption as well, would be cut down in the process. Upon implementation, those in favor of the clean election laws also saw them as a chance to increase competition by giving qualified candidates a way to enter a race without requiring them to take such massive out-of-pocket hits. With this in mind, the drafters of the Clean Election Act went to work. In order to implement these laws, those writing them had to stay within the confines of the First Amendment, as laid out by Buckley v. 
Valeo (Campion, 1996). The clean election laws had to avoid the major pitfalls associated with the Federal Election Campaign Act of 1971 (FECA) if they had any hope of being upheld in the courts. Though FECA had been an attempt to institute public control over national elections, it ultimately ran into issues after it was amended in 1974 regarding a candidate's right to raise and spend money. These amendments were brought about by public outrage over the Watergate scandal and were hoped to bring democracy back to the voters. However, the United States Supreme Court ultimately ruled against a majority of the bill in Buckley v. Valeo. The Supreme Court made it clear that nearly every means of mass communication requires money, and thus, a candidate has the right to exercise his or her free speech through the use of his or her money. The Supreme Court did, however, decide to uphold the voluntary acceptance of funds. It determined that a system in which candidates could voluntarily opt in did not violate any of their rights (Lazarus, 2000). Lazarus states that the Supreme Court noted the benefits of such a system, saying that it could reduce the influence of interest groups, increase candidate interaction with the electorate, and "it frees candidates 'from the rigors of fundraising,'" thus allowing them to spend more time on other portions of their campaigns. Thus, while other portions of FECA may have been struck down, by upholding the voluntary opt-in system, the Court gave those writing the clean election laws a place to start. Working from there, the Maine Clean Election Act needed to be very careful not to violate candidates' First Amendment rights. To do this, the clean election laws made the program voluntary, so that candidates could choose whether or not they wished to participate. Because candidates had the option to opt out, they were free to spend their own money, and thus their First Amendment rights were not infringed upon. 
In doing this, the Maine Clean Election Act succeeded where the Federal Election Campaign Act of 1971 failed. The Maine Clean Election Act was passed in 1996 by public ballot. Arizona's clean election law was also passed by public ballot, in 1998, and it established a system very similar to Maine's (Miller, 2008). Because these laws had such similar systems, court decisions affecting one very easily affected the other, as we'll see later with the matching funds system. In addition to complying with the Constitution, those writing the Maine and Arizona Clean Election Acts also wanted to make sure that participating candidates were serious about running. Within the laws, there is a process that ensures a candidate must be committed to the program in order to receive public funding. First, he or she must declare an intent to run for office while being supported by public funds, and then he or she must collect a set number of small $5 donations from voters (Maine Commission on Government Ethics and Election Practices; Citizens Clean Elections Commission). These donations are deposited into the Clean Election fund and serve as the money awarded to participating candidates. Arizona also collects a 10% surcharge from civil and criminal penalties and fines to supplement the $5 donations collected by candidates. The number of small donations a candidate must obtain depends on the office sought: in Maine, House candidates need 60, Senate candidates 175, and gubernatorial candidates 3,200. A candidate may also accept $100 donations for campaign seed money any time before requesting certification. However, there are limits to how much seed money each candidate can accept, once again depending on the office he or she is running for. 
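The qualification rules described above reduce to simple arithmetic: a per-office threshold of $5 qualifying contributions, plus fund revenue from those contributions (and, in Arizona, a 10% surcharge on penalties). The following Python sketch illustrates that logic using the Maine thresholds stated in the text; the function names and structure are hypothetical, not official commission procedure.

```python
# Maine Clean Election Act qualifying thresholds as described in the text:
# number of $5 qualifying contributions required, by office sought.
QUALIFYING_THRESHOLDS = {
    "house": 60,
    "senate": 175,
    "governor": 3200,
}

def is_qualified(office, qualifying_contributions):
    """Return True if a candidate has collected enough $5
    qualifying contributions for the given office."""
    needed = QUALIFYING_THRESHOLDS[office.lower()]
    return qualifying_contributions >= needed

def fund_deposit(qualifying_contributions, surcharge_revenue=0.0):
    """Dollars deposited into the Clean Election fund: $5 per
    qualifying contribution, plus (in Arizona) surcharge revenue
    from a 10% levy on civil and criminal penalties."""
    return 5 * qualifying_contributions + surcharge_revenue
```

For example, a Maine House candidate with 60 qualifying contributions would be eligible and would have deposited $300 into the fund; a Senate candidate with only 100 would not yet qualify.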
Once the donations have been collected and the candidate certified, the state determines the amount of money the candidate will receive based on whether the candidate is running in a primary or general election and whether or not the race is contested. In return for this money provided by the state, the candidate must agree not to accept private funds and not to spend more than the amount provided to him or her. There have been some changes made since the laws were first used in 2000. When the clean election laws were first instituted, both Maine and Arizona had a system of matching funds. In the event that a participating candidate was drastically outspent by an opponent, the state would match the amount spent, up to three times the initially provided amount. This portion of the clean election law was put in place to make sure that a privately funded candidate could not win simply by outspending the publicly funded candidate. The measure also encouraged both sides to become publicly funded, as being publicly funded would equalize the amount of money both sides would have (Campion, 1997). However, in the 2011 court case Arizona Free Enterprise Club's Freedom Club PAC v. Bennett, the United States Supreme Court ruled that the matching funds system burdened free speech and was not sufficiently justified to be upheld (Maine Commission on Government Ethics and Election Practices). Arizona was forced to terminate the matching funds system, and Maine quickly followed suit after the U.S. District Court of Maine ordered it to be struck down. Despite this setback, on November 3, 2015, Maine citizens voted to restore the Clean Election Act by what Maine Citizens for Clean Elections (2016) called a "landslide victory." However, despite this victory, the Maine Clean Election Act is still having problems. 
Recently, advocates of the Act have been pushing Maine legislators to repay the money, totaling $1.7 million, that was taken from the Maine Clean Election Act fund to pay for other programs (Mistler, 2016). Mistler states that if Maine lawmakers do not return the funding soon, the remaining funds will run out by November. Even though 201 House candidates and 67 Senate candidates were publicly funded, Republicans in the Maine legislature have been delaying restoring the funds that the government has raided, as Clean Elections funding was not part of the budget (Mistler, 2016). Many of these Republicans are unsure of the benefits provided by the program and thus do not wish to fund it. Most of them dislike the continued use of PACs by candidates, and they note that spending by lobbyists and other groups has increased. One Republican member of the Maine Legislature, Senator Eric Brakey, has been very outspoken against the program. Brakey notes that the state is already having difficulty paying for welfare programs, so there is no reason it should continue paying for politicians' "campaign signs and robocalls." Despite the Maine citizens' vote to keep the Clean Election Act, its future nonetheless seems to be in a precarious position. The Republicans do raise a good point, and there are several questions that now need to be answered regarding the effectiveness of these laws. Do these laws actually reduce the amount of time and effort that candidates and campaigns spend on raising money? Have these laws increased access to state offices by more candidates? Have these laws affected the competitiveness of elections? And on a related note, do these laws reduce or increase the number and/or competitiveness of third-party candidates? Both Maine and Arizona have had the system in place for a few election cycles now, and there has been research conducted on just how effective these clean election laws have been. 
Through a thorough examination of the available research, each of these questions will help fill in the picture of what is happening with these laws. To begin with, do these laws actually reduce the amount of time and effort that candidates and campaigns spend on raising money? One of the main goals of these laws was to be able to answer this question with a definitive yes, and thus it is important to examine whether or not these laws have cut down on the amount of time spent fundraising and on candidates' dependency on private money. Overall, the answer seems to be that these laws were successful in this area. While the GAO deemed the outcome inconclusive in its study, other researchers believe that the GAO used a number of unorthodox indicators, as well as looking only at primaries (GAO, 2003; Mayer, Werner, & Williams, 2006). In order to test whether or not candidates spent less time fundraising, Francia and Herrnson (2003) sent out a survey to a random sample of state legislative candidates. They then narrowed the responses down to major-party candidates running against other major-party candidates. This study was careful to control for various factors, such as incumbency, political experience, and whether or not the seat was open; all of these were variables that the researchers believed would affect a candidate's ease of pulling in campaign contributions. After several more controls and other data processing steps, the study presented its multivariate results. The study used the dependent variable "The Percentage of Time State Legislative Candidates Devote to Fundraising" and various independent variables that fell into categories: public finance laws, candidate characteristics, election conditions, and state-level conditions. 
Francia and Herrnson found that candidates who accepted public funding spent 15% less time on fundraising than candidates who did not accept public funds. This is a rather strong relationship between the two variables and gives the researchers firmer footing for their conclusion. It is also important to note from the same study that the independent variable "legislative professionalism," used here to mean how much political experience the candidate has, had an unstandardized coefficient of 13.98 when controlling for other variables, which the study notes indicates that as a candidate gains more political experience, he or she spends more time raising money. Overall, the study comes to the conclusion that candidates participating in full public funding spend less time raising money for their campaigns. The authors do point out, however, that candidates who receive only partial funding still spend just as much time fundraising as their privately funded competitors. The study indicates that there is a drop-off in the effectiveness of clean election laws at cutting down fundraising time in states that provide only partial funding. The next question that needs to be addressed is whether these laws have increased access to state offices by more candidates and, related to that, whether they have reduced the overall amount of spending on political campaigns. Briffault (1999) concludes in his research that there are two main factors that prevent candidate participation: legal constraints and resource constraints. It is the resource constraint that the clean election laws of Maine and Arizona seek to abolish, so as to provide increased access to the system for qualified candidates. Briffault concludes that the incumbent in an election generally starts out ahead of the challenger, as those donating money want access to the winner. 
In his article, Briffault states: "The real impact of campaign donations appears to be far subtler than the direct exchange of contributions for the votes of elected officials. Donors emphasize, and officeholders agree, that what contributions produce is 'access'-the required entry ticket for getting something done" (1999, p. 580). One of the major goals of the clean election laws was to give candidates without deep pockets access to the political arena, and while Briffault may conclude this was a success, other researchers think studies in this area are more guesswork than solid evidence of effectiveness (Mayer, Werner, & Williams, 2006). Aside from providing access to candidates from the major parties, it is also important to examine the effects that the clean election laws have on third-party candidates: do these laws reduce or increase the number and/or competitiveness of third-party candidates? Lazarus (2000) concludes that though Maine's system might be unfairly geared toward the candidates of the major parties, the system that triggers an increase in campaign funding strongly encourages third-party candidates' participation. Because third parties do not compete in primaries, under the Maine Clean Election Act they are unable to obtain funding until the general election. While this puts them at a disadvantage, Lazarus points out that the level of funding they can obtain through the public financing option "likely coerces third party candidates to accept public funding." In addition to candidates having access to the system, it is also important to assess how voters have access to the candidates. In his research, Briffault (1999) delved further into this idea of voter access. He highlighted how voter access to candidates changed based on whether or not they accepted campaign contributions from larger donors. 
While those who donate large sums of money account for a high percentage of campaign funding, they are not representative of the constituents the elected official is supposed to represent. This creates an inequality in voter access based on wealth, where those with large sums of money can have more of a voice than those who cannot make as large a contribution. While Briffault does not claim that public funding will solve all of the problems faced by the current campaign system, it does give voters a more level playing field on which they can have a more equal voice regardless of their wealth. Have these laws affected the competitiveness of elections? While the other questions may have had fairly one-sided answers, the conclusions on increased competitiveness in these clean election states are more mixed. Briffault states that "If an election is financially uncompetitive, it is usually politically uncompetitive too" (1999, p. 570). While financially these elections might give more access, that does not necessarily mean that all of these elections have become more competitive. Briffault ultimately concludes that public funding promotes competitiveness, and Malhotra's (2008) research agrees with this conclusion. However, some research that has examined the clean election laws concludes that competitiveness does not come hand in hand with public funding. When Malhotra looked at whether competitiveness was created by the clean election laws in Maine and Arizona, he started by determining how to measure this dependent variable. Because many common indicators can be misleading, Malhotra chose to use two dependent variables: the inverse Herfindahl-Hirschman Index (HHI-1) and the margin of victory. Malhotra chose the HHI-1 in order to "assess the robustness of the findings" because he saw the more commonly used indicator, margin of victory, as having limitations. 
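The two competitiveness measures just described can be made concrete. For candidate vote shares s_i, the Herfindahl-Hirschman Index is the sum of squared shares, and its inverse (often read as the "effective number of candidates") rises as a race becomes more competitive, while the margin of victory is simply the gap between the top two finishers. A minimal Python sketch (an illustration of the standard formulas, not Malhotra's actual code):

```python
def inverse_hhi(vote_shares):
    """Inverse Herfindahl-Hirschman Index (HHI-1): the 'effective
    number of candidates'. Higher values = more competitive race."""
    hhi = sum(s * s for s in vote_shares)
    return 1.0 / hhi

def margin_of_victory(vote_shares):
    """Gap between the top two finishers. Lower = more competitive."""
    top_two = sorted(vote_shares, reverse=True)[:2]
    return top_two[0] - top_two[1]

# A 50/50 two-way race: inverse HHI = 2.0, margin = 0 (maximally
# competitive). A 90/10 race: inverse HHI ~= 1.22, margin = 0.8.
```

The inverse HHI rewards races where votes are spread across several viable candidates, which is why Malhotra used it to check the robustness of conclusions drawn from margin of victory alone.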
After examining Maine and Arizona separately, Malhotra found that the effects of the two systems parallel each other. He believes that when a challenger participates in the system, he or she is able to mount a viable campaign using the money provided through public financing. Brogan and Mendilow (2012) replicated Malhotra's analysis using data from the Senate. Using the same system of randomly assigning candidates, Brogan and Mendilow achieved the same results. However, the researchers also wished to check for self-selection bias. They found, after changing the original randomly assigned candidates to randomly assigned non-incumbents, that the results were not statistically significant in both estimates (they do not state whether this refers to the Herfindahl-Hirschman Index or the margin of victory). In the middle ground is Mayer, Werner, and Williams' (2006) research, which came to the conclusion that clean election laws appear to increase competition, though perhaps it is too soon to tell for sure. The researchers raised issues with the GAO's study as perhaps missing data and using incorrect indicators. When examining their data from both Maine and Arizona and comparing it to partial public funding states like Wisconsin and Hawaii, the researchers see the results of their study as "a mixed picture." While the data seem to show that Maine and Arizona have become much more competitive states since the clean election laws were instituted, the researchers became more confident in their conclusions once the Maine 2004 election data remained at the same level. Even if there might be a lack of competition in some districts, Mayer, Werner, and Williams point out that if the current candidate in a district is representing the voters properly, then competition might not be necessary. If the current officeholder is adequately fulfilling the needs of the voters, then there is no reason for that district to receive an increase in competition. 
While not all agree that the clean election laws of Maine and Arizona create competition, Mayer, Werner, and Williams believe that if there are good candidates that the voters are already happy with, the district might not need competition. However, there are still those who hold a contradictory opinion. Mayer, Werner, and Williams (2006, p. 263) note a conflict of views with colleagues publishing in the same journal. According to their article, researchers Primo, Milyo, and Groseclose 'argue that "the jury is very much still out on clean election laws,"' pointing out that it is entirely possible the changes noted by Mayer, Werner, and Williams are a temporary result. In their own article, Primo and Milyo (2006) make the argument that there is not sufficient scientific evidence to justify the adoption of these laws. They argue that there is no "systematic impact of existing funding programs." Their argument is that there is no scientific evidence supporting any claim regarding the effectiveness of these clean election laws, and they state that Mayer, Werner, and Williams' research had mixed results (pp. 8-9). Though Primo and Milyo largely contradict many other research articles, it is worth taking their findings into account. Clean election laws seem to work as more candidates take advantage of the opportunity. This frees up time for candidates to pursue other methods of campaigning, partly frees them from the control of lobbyists, and encourages them to take more interest in the values of constituents. In its report, the GAO found that between 2000 and 2002, the number of candidates/legislative members funded by clean election laws increased in Maine and Arizona. However, overall, the GAO (2003) had inconclusive findings about whether or not the clean election laws were effective. 
Several issues were raised with the GAO's findings, as many of the articles argued that the GAO significantly underestimated the amount of competitiveness created by the clean election laws, as well as using "unorthodox" indicators (Mayer, Werner, & Williams, 2004). The researchers also found that the GAO might have erred in not evaluating both the primaries and the general elections, looking only at primary elections. On the other hand, Miller (2011), in his article After the GAO Report: What Do We Know About Public Election Funding?, thought that the GAO report was a good opportunity for researchers to compare notes, even though he agrees that the GAO inadequately evaluated the effectiveness of the clean election laws. Ultimately, we have looked at the research that might answer the questions posed here, but in the end, it is hard to know for certain whether the clean election laws in Maine and Arizona have had their intended effect. While the two laws have been in effect since 2000, there are still reports from the GAO, Primo and Milyo, and other political scientists who do not believe they can say with conviction whether the data found by other researchers are solid. At the same time, many researchers believe that the clean election laws have been effective in many areas. Francia and Herrnson came to the conclusion that clean election laws allow candidates to spend less time fundraising as long as they receive full, rather than partial, public funding. Briffault believed that clean election laws gave candidates and voters alike increased access to the system. Because candidates were not held back by their lack of deep pockets, they were better able to enter the political arena. This access in turn is believed to also create competition within these states, as seen from the research conducted by Mayer, Werner, and Williams, as well as Malhotra. 
Maine and Arizona have been at the forefront of many changes in the campaign system, and there are those who would like to continue research into these laws. For states looking to follow in their footsteps, it is essential that researchers extend the findings in the areas discussed above, fleshing out the research and perhaps identifying flaws that can be corrected. Two areas in particular would benefit from further study: candidate access to the system and the findings of the GAO. At this time, and with the research currently available, it seems reasonable to conclude that the clean election laws have been successful in accomplishing their goals. While the laws may not be a perfect solution that resolves every problem in the current system, they are a good start toward solving those issues and developing adjustments that would improve the United States' current process.

Works Cited

Briffault, R. (1999). Public funding and democratic elections. University of Pennsylvania Law Review, 148, 563-590.
Brogan, M. J., & Mendilow, J. (2012). The telescoping effects of public campaign funding: Evaluating the impact of clean elections in Arizona, Maine, and New Jersey. Politics & Policy, 40(3), 492-518.
General Accounting Office (GAO). (2003). Campaign finance reform: Early experience of two states that offer full public funding for political candidates. Washington, DC: United States General Accounting Office.
Campion, M. E. (1997). The Maine Clean Election Act: The future of campaign finance reform. Fordham Law Review, 66, 2391.
Citizens Clean Elections Commission. Retrieved April 9, 2016, from http://www.azcleanelections.gov/en/about-us/what-is-clean-elections
Francia, P. L., & Herrnson, P. S.
(2003). The impact of public finance laws on fundraising in state legislative elections. American Politics Research, 31(5), 520-539.
Lazarus, T. (2000). The Maine Clean Election Act: Cleansing public institutions of private money. Columbia Journal of Law and Social Problems, 34, 79.
Levin, S. M. (2006). Keeping it clean: Public financing in American elections. National Civic Review, 95(4), 8-27.
Maine Citizens for Clean Elections (2016). Retrieved April 7, 2016, from https://www.mainecleanelections.org/
Maine Commission on Government Ethics and Election Practices. Seed money contributions. Retrieved April 7, 2016, from http://www.maine.gov/ethics/mcea/seed.htm
Malhotra, N. (2008). The impact of public financing on electoral competition: Evidence from Arizona and Maine. State Politics & Policy Quarterly, 8(3), 263-281.
Mayer, K. R., Werner, T., & Williams, A. (2006). Do public funding programs enhance electoral competition?
Miller, M. G. (2011). After the GAO report: What do we know about public election funding? Election Law Journal, 10(3), 273-290.
Mistler, S. (2016, April 6). Clean Elections advocates appeal for replenishment of raided funds. Retrieved April 7, 2016, from http://www.pressherald.com/2016/04/06/maine-clean-election-advocates-demandrepayment-of-raided-funds/
Primo, D. M., & Milyo, J. Public financing of campaigns: A statistical analysis. Engage, 96.
Saxl, M., & Maloney, M. (2004). The Bipartisan Campaign Reform Act: Unintended consequences and the Maine solution. Harvard Journal on Legislation, 41, 465.

EVOLUTION, LIVING PATTERNS, AND MITOCHONDRIAL GENETIC VARIATION IN CHIMPANZEES

Bryce R. Christensen (Leslie A. Knapp) Department of Anthropology

ABSTRACT

The study of genetic variation in chimpanzees allows researchers to determine evolutionary origins, population dynamics, living patterns, and more; the focus of this thesis is the Pan troglodytes verus subspecies of chimpanzees of western Africa.
After reviewing the literature, I set out to test two hypotheses: first, males within a community of western chimpanzees in Senegal will share mitochondrial haplotypes with each other that they do not share with the females; second, Pan troglodytes verus will have mitochondrial DNA similar to that of other chimpanzee subspecies, implying a similar evolutionary origin. Fecal samples were noninvasively collected from the Fongoli area near Niokolo Koba National Park in eastern Senegal and transported to the University of Utah's Department of Anthropology for analysis in an evolutionary genetics laboratory. DNA extraction was attempted for six of the twelve samples received. After the conditions were optimized, mitochondrial DNA derived from these extractions was replicated using PCR and checked for successful amplification via agarose gel electrophoresis. Amplification was successful for four of the six samples: LI and TM (female), and SI and FO (male); PCR purification was performed on these samples before they were sent to the HSC DNA Sequencing Core Facility at the University of Utah. DNA sequences were analyzed using MEGA; the sequences were compared with each other and with DNA sequences for the Pan troglodytes verus mitochondrial genome already in the literature. Results were also compared with DNA sequences from other chimpanzee subspecies, from bonobos, and from humans to show evolutionary trends and relatedness. Results from my experiments were mostly inconclusive due to the poor quality of the DNA sequences. Future research can build on the PCR conditions optimized through my experiments.

Final optimization results.* Fecal extraction results.**
*Gel lanes from left to right: 100 bp ladder, 8 samples (6 yielding significant DNA), and a negative control.
**Gel lanes from left to right: 100 bp ladder, the four successful samples, a negative control, two unsuccessful samples, another negative control, and a positive control.
TRANSITIONING CONTROL

Micah Lee Crapo (Jason M. Watson, Ph.D.) Department of Psychology, University of Utah

Attention is a distinguishing component in recognizing the complexity of our short-term memories, and thus the need for a more explanatory model (i.e., Working Memory). Working Memory (WM) is a hierarchy of multiple functions, in which short-term memory storage components subserve a domain-free, limited-capacity controlled attention. In addition, WM is also a non-storage process of controlled decision making, whereby certain behaviors, schemas, and competing information can be analyzed, acquired, and essentially learned. It is within this construct that our research in cognitive control, as displayed through attention-focused responses to competing stimuli, was quantified using data obtained from the voluntary participation of over 400 psychology students. This competition between incoming information is commonly referred to as interference, and serves as a "spotlight lens" for more accurately observing when higher-order cognition takes place, along with how well it performs. Congruent and incongruent trials, and the proportion in which each is displayed, play a large role in studying automatic and controlled processes. Using high-congruency proportions encourages participants to respond on the basis of the most salient aspect of the stimulus, which increases the conflict when rare incongruent trials are presented (Kane & Engle, 2003; Miller, 2014).

Simon Task

Figure 1. Partial representation of the Simon Task: Congruent trial [automatic & controlled compatibility]; Incongruent trial [suppression of automaticity]; Fixation trial [neutral].
(Castel et al., 2007). Our research used a WM span task in order to filter out high- and low-span differences between individual performances (performance is usually categorized by speed and accuracy in problem-solving and recall abilities; high-spans are those with higher speed/accuracy scores, as opposed to low-spans). Creating, manipulating, and measuring interference is an integral part of studying WM and attentional control processes. Performance on the working memory task (a math/memory combination) successfully predicted the ability to transition in and out of cognitive attentional control during the Simon Task.

FEASIBILITY OF THE LAKE POWELL PIPELINE DEVELOPMENT ACT AND PROPOSED WATER CONSERVATION ALTERNATIVES

Kyle L. Criddle (Leslie P. Francis) Department of Philosophy and College of Law

After four years of drought, the Utah State Legislature was tasked with addressing increasingly limited water supplies. At first, state agencies decided that the construction of a pipeline from Lake Powell to Washington and Kane counties would be required to meet the water needs of a growing population in southwestern Utah. However appealing the project first appeared on paper, many in the academic and scientific community are now skeptical about the feasibility of the project and its economic and ecological impacts. In addition, alternative measures to meet future water needs are numerous, inexpensive, and feasible, which puts the proposed policy in question, especially in a fiscally conservative state such as Utah. Using current data projections, legislative audits, and a range of reports, this paper will determine the economic and ecological feasibility of the Lake Powell Pipeline Project. Next, the paper will review the strengths and weaknesses of three alternative policies: instream flows, rainwater harvesting, and agricultural water conversion.
Based on a comparison of these various policies, the paper will then conclude with some suggestions on how to reach the best overall outcome for the state of Utah.

GENE DIVERSITY IN THE MAJOR HISTOCOMPATIBILITY COMPLEX OF WILD AND CAPTIVE GORILLAS

Tsivya Devereaux (Leslie A. Knapp) Department of Anthropology

The major histocompatibility complex (MHC) controls a major part of the immune system. The MHC is highly variable, which leads to gene variation. This variation allows for resistance to certain pathogens, such as Ebola. While humans were severely impacted by the Ebola outbreak, so were western lowland gorilla populations. Previous research examined two populations of western lowland gorillas: one had been severely affected by the Ebola outbreak in 2004, and one had been secluded and was not affected. In one of the affected populations, the mortality rate was 95%. The second population was hit in two waves, the first killing 91% of the individually known gorillas and the second killing 95.8%. Fecal matter was collected before and after each of the outbreaks, with an average time span of three years between samples. DNA was then extracted from the samples and genotyped for neutral regions of the genome (microsatellites). The results showed that microsatellites near MHC genes were associated with Ebola pathogenesis. Over the summer and fall of 2015, research was conducted to identify the best primers to use for PCR and DNA amplification, thus optimizing the samples. The goal of the fall research was to use the optimized techniques to obtain DNA sequences and, primarily, to align the DNA sequences from captive, and some wild, gorillas. These unpublished sequences will contribute to the gorilla genome, specifically in the MHC-DRB region. The research used gel electrophoresis and DGGE to analyze PCR products and to move beyond the raw data, fecal matter. DNA sequences were analyzed using the computer program MEGA.
The samples were compared against each other to look at diversity in this region of DNA. While the research is still ongoing, we have found some variation in the MHC-DRB between individual wild gorillas, as well as between captive and wild gorillas.

References:

Knapp, L. A. (2005). Denaturing gradient gel electrophoresis and its use in the detection of major histocompatibility complex polymorphism. Tissue Antigens, 65(3), 211-219.
Knapp, L. A. (2005). The ABCs of MHC. Evolutionary Anthropology: Issues, News, and Reviews, 14(1), 28-37.
Le Gouar, P. J., Vallet, D., David, L., Bermejo, M., Gatti, S., et al. (2009, December 18). How Ebola impacts genetics of western lowland gorilla populations. PLoS ONE, 4(12), e8375.

DOES SOCIAL RANK BUFFER THE NEGATIVE EFFECTS OF ANXIETY ON SLEEP QUALITY?

Jennifer H. Ellis, Jeremy L. Grove (Timothy W. Smith) Department of Psychology

*This research was conducted from 9/2013 to 5/2014 while I was an undergraduate student at the University of Utah.

Anxiety is well known to be associated with poor sleep quality (SQ; Jansson-Fröjmark & Lindblom, 2008). However, prior research has shown that psychosocial factors may influence this relationship (e.g., social support; Baglioni, Spiegelhalder, Lombardo, & Riemann, 2010). One psychosocial variable that has not been investigated in this regard is social rank. Social rank has been linked with a variety of health outcomes (e.g., hypertension; Rivers & Joseph, 2010), and thus could also have an important association with SQ. Specifically, few if any studies have investigated whether social rank could moderate the effect of anxiety on sleep. As such, we examined dominance and prestige (two components of social rank; Cheng, Tracy, Foulsham, Kingstone, & Henrich, 2013) as potential moderators of the relationship between trait-level anxiety and poor SQ.
Methods: In a campus computer lab, undergraduate students (N = 188, 63% female) completed a computer-based survey that included measures of anxiety, SQ, dominance, and prestige. Trait-level anxiety was measured using the State-Trait Anxiety Inventory (STAI; Spielberger, 1989), SQ was measured using the Pittsburgh Sleep Quality Index (PSQI; Buysse, Reynolds, Monk, Berman, & Kupfer, 1989), and dominance and prestige were measured with the Dominance-Prestige Scale (Buttermore, 2006).

Results: Multiple linear regressions were performed with SQ as the dependent variable. Results indicated a significant main effect for anxiety (β = .384, p < .001), such that higher levels of anxiety were independently associated with poorer sleep quality. While there was no significant main effect for dominance, analyses revealed a significant interaction between anxiety and dominance in predicting sleep quality (β = -.157, p = .023). Simple slopes illustrated that at low levels of anxiety, those with low dominance reported the best sleep quality; at high levels of anxiety, those with low dominance had the worst sleep quality. No significant effects were found for prestige.

Discussion: These results suggest that one's dominance level moderates the relationship between trait-level anxiety and SQ, while one's prestige level does not. Further, while low dominance was associated with the worst SQ when combined with high anxiety, it was also associated with the best SQ when combined with low anxiety. This variance in sleep quality may be due, in part, to the high levels of stress experienced by people in subordinate positions, which may make them more vulnerable to the effects of anxiety (Rivers & Joseph, 2010). Overall, these findings provide preliminary evidence for low dominance as a potential risk factor for poor SQ in individuals with high levels of anxiety.
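The moderation analysis described above can be sketched numerically. The following is an illustrative simulation only, on synthetic data with hypothetical coefficients chosen to echo the reported signs (positive anxiety main effect, negative anxiety × dominance interaction); it is not the study's dataset or analysis code. It shows how such an interaction model is fit and how simple slopes at low and high moderator values are read off the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized predictors (not the study's data).
anxiety = rng.normal(size=n)
dominance = rng.normal(size=n)

# Generating model echoes the reported pattern: higher PSQI = worse sleep,
# with a negative anxiety x dominance interaction (assumed coefficients).
sq = 0.38 * anxiety - 0.16 * anxiety * dominance + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an interaction term.
X = np.column_stack([np.ones(n), anxiety, dominance, anxiety * dominance])
b, *_ = np.linalg.lstsq(X, sq, rcond=None)
intercept, b_anx, b_dom, b_int = b

# Simple slope of anxiety at -1 SD and +1 SD of dominance:
slope_low_dom = b_anx + b_int * (-1.0)   # steeper: anxiety hurts sleep more
slope_high_dom = b_anx + b_int * (+1.0)  # flatter: dominance buffers anxiety
```

With a negative interaction coefficient, the anxiety slope is steeper for low-dominance individuals, matching the simple-slopes pattern reported in the abstract.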
Future research should examine this relationship further, using more objective measures that rely less heavily on self-report methods.

CORRESPONDENCE BETWEEN MOTHERS' AND DETAINED YOUTHS' REPORTS OF TRAUMA EXPOSURE AND POSTTRAUMATIC STRESS SYMPTOMS: THE ROLE OF RELATIONSHIP QUALITY

Kristina Holman, Shannon Chaplo, Crosby Modrowski (Patricia Kerig) Department of Psychology

Ninety percent of juvenile justice-involved youth report at least one traumatic event in their lifetime (OJJDP, 2015). Hence, it comes as no surprise that rates of posttraumatic stress disorder (PTSD) and posttraumatic stress symptoms (PTSS) are disproportionately high in this population (Abram et al., 2004; Kerig & Becker, 2010). Previous research shows the importance of caregivers in buffering young people from the negative effects of trauma exposure (Bal et al., 2004; Kaufman et al., 2004). However, if parents are not aware of their child's exposure to trauma or PTSS, they may be unable to support their child effectively. Parent-child communication about trauma is likely affected by the emotional closeness between youth and parent. Although past work has examined the correspondence between parent and youth reports of trauma exposure (Johnson, 2013; Oransky et al., 2013; Smith et al., 2010), these patterns have yet to be examined among juvenile justice-involved youth. In addition, no studies have systematically examined the quality of parent-youth attachment as a mediator of this association. To address these gaps in the literature, we examined the correspondence between mother and youth reports of PTSS and PTSD in a sample of families recruited from a detention facility in the western United States. Participants consisted of 374 youth ages 12 to 19 (M = 16.09, SD = 1.29); 45.5% identified as ethnic minorities.
Mothers and youth were administered self-report measures of youth trauma exposure and PTSS, and of their perceptions of the quality of attachment (e.g., trust, communication, and bonding) in the mother-youth relationship. Results indicated that rates of agreement on PTSS and PTSD between youth and parent were low (Table 1). Based on youth report of PTSS, 32.5% of youth met full or partial criteria for PTSD, versus 2.1% when based on mother report of symptoms. Next, we used a series of regressions to examine (1) whether gender and attachment predicted the discrepancy between youth and parent reports of PTSS (conceptualized as a difference score), and (2) whether attachment, by either mother or youth report, statistically mediated the relation between the difference score and PTSD. Results indicated that gender, age, ethnicity, and the mother's perception of attachment were non-significant predictors of the difference score, p > .05. However, difference scores between parent and youth reports of PTSS were lower when youth perceived themselves as having stronger attachments with their mothers, b = -.05, t = -2.69. Results of bootstrapped mediation indicated that the youth's perception of attachment partially mediated the relation between the difference score and youth's report of PTSS (Figure 1). This pattern did not emerge when using the mother's perception of attachment or reports of PTSS. These results illustrate the significance of the differing perspectives of youth and mothers on youth trauma exposure and PTSS in detained samples. Our results illustrate that differences in mother and youth reports of PTSS are related to whether or not youth meet criteria for PTSD. Hence, interventions designed to improve caregiver-youth relationships and communication about PTSS may be especially important for this population of at-risk youth (Berkowitz, 2010). Table 1.
Rates of agreement between mother and youth reports of trauma exposure, PTSS, and PTSD

                        Mother    Youth    Kappa    p value
Trauma Exposure          41.8%    99.2%     .01      .39
Intrusions               18.6%    47.2%    -.02      .78
Avoidance/Numbing         5.4%    23.9%     .02      .58
Hyperarousal              8.4%    49.4%     .03      .44
Full PTSD                 1.0%    13.2%     .02      .36
Full or Partial PTSD      2.1%    32.5%     .02      .36

Note. Youth and mothers were asked whether the youth had been exposed to # potentially traumatic events. For youth with a trauma history, youth and mothers were asked to report whether the youth had experienced 17 symptoms of the DSM-IV criteria for PTSD. Full PTSD is defined as endorsement of Criterion A (trauma exposure) as well as the required number of symptoms for Criteria B, C, and D, whereas Partial PTSD is defined as endorsement of Criterion A as well as the required number of symptoms for any three other symptom clusters.

Figure 1. Mediation of the association between the mother-youth PTSS discrepancy score and PTSD by youth perception of attachment. [Path coefficients from the figure: -0.02* (.01); -1.13* (.45); 0.02 (.02)*; direct effect of the difference score on youth report of PTSD, 0.07 (.05).]

Note. The discrepancy score was calculated by subtracting the mother report of PTSS from the youth's report of PTSS, with a possible range of 0-17, where 0 indicates perfect agreement between youth and mother reports. Unstandardized B coefficients are displayed with standard errors in parentheses. Indirect effect 95% CI [.01, .08]. * denotes p < .05.

MARGIN OF LICENSED DOG AND CAT POPULATIONS AND ADOPTIONS FROM ANIMAL SHELTERS IN UTAH COUNTIES IN 2013-2014

Marli Stevens (Tom Cova) Department of Geography

Different counties and cities in Utah require dog and cat licensing. Of the 29 counties, 3 do not require licensing at any level. The licensing requirements can help track dog and cat populations in Utah and keep a record of dog and cat population growth or decline over time.
If compared with animal shelter adoption numbers, these records can help determine where these animals are coming from. If the population is growing but adoption numbers from shelters over the same period do not match that growth, then it can be assumed that the majority of the animals are coming from pet stores, breeders, migration, or puppy/kitten mills. Especially in the rural areas and counties of Utah, these records can potentially indicate whether puppy/kitten mills are in business, and investigations can then determine whether these businesses are abiding by business and animal rights laws. Many pet owners do not license their pets even when required, but licensing is the only way to get an exact count of pet dogs and cats living in an area. Using GIS, a thematic map of Utah has been made to depict the counties that require licensing at various levels. Since licensing requirements vary across Utah and many counties lack the tools to access licensing numbers, it is very difficult to track dog and cat population numbers per year. This research can help improve our understanding of pet licensing in Utah. The subject of dogs and cats as pets, where they come from, and what their lives entail is also an educational outcome of this research.

Keywords: GIS, geography, population, dog, cat, Utah, puppy, kitten, mill, pets, shelter, adoption, map, education

EXAMINING THE DBT WAYS OF COPING CHECKLIST AND THERAPIST EXPECTANCIES AS PREDICTORS OF SUCCESS IN DBT GIFT GROUP PARTICIPANTS

Paige Malia, Julia Chandler (Sheila Crowell) Department of Psychology

Researchers have long focused on which variables play a role in managing the stress-illness relationship and, more specifically, emotion dysregulation (Linehan, 1993; McCrae, 1984).
The current study examined psychologically dysregulated individuals (n = 22) who had been recommended by their primary therapists to participate in an 18-week outpatient DBT skills group. We were interested in determining whether participants' coping strategies changed from maladaptive to adaptive over the course of the group, specifically between the pre-intervention waitlist (Initial Assessment) and completion of the first module of the skills group (Reassessment 2). The DBT Ways of Coping Checklist (DBT-WCCL; Neacsiu et al., 2010) was used to measure change across time, and a DBT Deficiencies measure was sent to participants' therapists every six weeks. Correlations between the DBT Deficiencies measure and the DBT-WCCL Dysfunctional subscale revealed moderately significant findings, suggesting a possible relationship between therapists' predictive assessments of client deficiencies and their subsequent reports on those clients after a period of time. Analyses of the DBT-WCCL subscales suggested moderate improvement from Initial Assessment to Reassessment 2, with a 9.7% increase in reported DBT coping skill use and a 7.6% decrease in dysfunctional coping skill use. Despite discouraging attrition rates, the study's implications include added support for the DBT-WCCL in monitoring adaptive and maladaptive skill use in clinical populations. Finally, the data demonstrated moderate changes in skill use after only six weeks of the DBT skills group, which supports the utility of the DBT-WCCL measure, as well as the structure of the DBT group as it pertains to positive changes in participants.

POLY-CYTIDINE MICROSATELLITE SEQUENCES ADOPT STABLE I-MOTIF FOLDS UNDER PHYSIOLOGICAL CONDITIONS

Ashlee Danielle Burton (Cynthia Burrows) Department of Chemistry

It is well established that DNA is capable of forming a double-helical structure; in addition, unique DNA sequences are also capable of forming alternative secondary structures, such as G-quadruplexes and i-motifs.
These more complex DNA structures may be important in the regulation of gene transcription. G-quadruplexes are single-stranded DNA structures in which four guanine bases form a tetrad, with two or more tetrads stacked on top of each other. DNA strands capable of forming G-quadruplex structures are well documented, but little is known about their complementary cytosine-rich strands. I-motifs fold into an intercalated structure formed by two parallel duplexes whose strands are held together by hemi-protonated cytosine-cytosine base pairs. Because of these hemi-protonated base pairs, i-motifs normally require acidic conditions to fold. Cytosine-rich strands are found in gene promoters, including oncogene promoters. Microsatellites longer than C17 in the promoter regions of genes have higher mutation rates than other homopolymer sequences. If stable i-motif structures can form at pH values relevant to biological conditions, this will support the hypothesis that these runs of cytosine may fold and possibly regulate biological processes. This research aimed to determine which pH values are conducive to the formation of stable i-motifs in poly-cytidine sequences found in microsatellite regions of the human genome. I used three methods of analysis while working with synthetic poly-C oligomers ranging in length from 12 to 30 nucleotides. These were purified by HPLC, and their CD spectra were then examined at pH values from 5.0 to 8.0 in 0.25 pH increments. UV difference spectra were examined, as was thermal melting behavior. CD spectroscopy revealed pH-dependent transitions for the i-motif sequences. A plot of chain length versus transition pH shows a periodicity of 4 nucleotides. C13, C14, C15, C17, C18, C19, C21, C22, C23, C27, and C30 all had pH transitions above 7.0, which is physiologically relevant. The UV difference spectra all showed the characteristic negative peak at 320 nm indicative of folded i-motif strands.
Thermal melting analysis indicated denaturing temperatures showing that folded poly-C strands persist at temperatures found in the human body (37 °C). On the basis of these three lines of experimental data, we conclude that the transition of folded i-motifs to their unfolded form occurs at pH values comparable to those of living cells. These data support the conclusion that poly-C strands longer than C17 can adopt stable structures and possibly affect genetic processes.

FLUID DYNAMICS AND TRAFFIC FLOW

Stephen McKean (Don Tucker) Department of Mathematics

The flow of liquids and gases is well explained by physical laws and theories. However, in terms of mathematical theories and predictions, traffic flow is not understood to the same extent as the physics of matter flow. In this project, we establish and evaluate assumptions that allow traffic to be treated as a fluid. Then, following the law of conservation of mass for fluids, we derive a conservation law for traffic flow [1]. Finally, we seek to loosen the assumptions and improve the model. Initially, we make the following assumptions: (i) there is only one lane and no overtaking is allowed; (ii) there are no "sources" or "sinks" for cars, that is, no cars may enter or exit the system under consideration; (iii) the average speed of traffic is not constant and depends solely on the density of the traffic. While the first assumption is not directly involved in the derivation of a conservation law, it keeps our model simple. The latter two, however, are fundamental to the derivation. Most traffic phenomena of interest occur with multiple parallel lanes, similar to laminar flow in fluid dynamics [2]. We now hope to remove the first assumption, thereby enabling a model of multiple lanes of traffic.
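To make the derivation concrete: under assumptions (ii) and (iii), counting cars on a stretch [a, b] of road gives the integral balance below, which yields the standard scalar conservation law for single-lane traffic (a sketch of the Lighthill-Whitham-Richards form, with ρ the traffic density and v(ρ) the density-dependent speed; the notation here is illustrative, not necessarily the paper's):

```latex
% Cars in [a,b] change only through the flux q = \rho\, v(\rho) at the endpoints:
\frac{d}{dt}\int_a^b \rho(x,t)\,dx \;=\; q(a,t) - q(b,t),
\qquad q(x,t) = \rho(x,t)\,v\!\bigl(\rho(x,t)\bigr).

% Since the interval [a,b] is arbitrary, the integrand must vanish pointwise:
\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}\bigl(\rho\, v(\rho)\bigr) = 0.
```

Assumption (ii) justifies the absence of source terms on the right-hand side, and assumption (iii) is what allows the flux to be written as a function of density alone.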
We preserve assumptions (ii) and (iii) and establish the following new assumptions: (iv) given two parallel lanes, fix a lane L such that the density of L never exceeds that of its adjacent lane, L'; (v) cars flow from L' to L until their densities are equal, and these lane changes are assumed to occur instantaneously. Assumption (iv) simplifies our model, and assumption (v) establishes a relationship between L and L'. By formalizing a relationship between these adjacent densities, a linked system of differential equations can be generated to model this two-lane system and, by extension, traffic systems with arbitrarily many adjacent lanes. By assumption (ii), the total number of vehicles in the system must remain constant; the density of the entire traffic system over a given interval must therefore remain constant as well. As of yet, we have not found a closed-form solution to this higher-order problem: our proposed differential equations relating the densities of L and L' are not readily solvable. Addressing this problem is a possible topic for further research. Once a solvable relation is established, we can derive a new conservation law for the two-lane model and study the ensuing model behaviors, such as equilibrium between the adjacent lanes.

[1] Salsa, Sandro. Partial Differential Equations in Action. Milan, Italy: Springer-Verlag, 2008.
[2] Russell, George. Hydraulics. New York: Henry Holt and Company, 1942.

THE ROLE OF ENDOPLASMIC RETICULUM OXIDOREDUCTIN1 IN THE FOLDING OF CONOTOXIN

Henrik Yde O'Brien (Helena Safavi-Hemami, Pradip Bandyopadhyay, Baldomero M. Olivera) Department of Biology

Cone snails (Conus) are a genus comprising approximately 700 species of venomous marine mollusks. Each snail contains a variety of conotoxins, most of which are specific to that species. Conotoxins are small peptide neurotoxins that are highly specific ligands for ion channels and receptors of the nervous system.
Most conotoxins contain disulfide bonds, which are critical for their biological activity. Protein disulfide isomerase (PDI) is an enzyme responsible for the formation of disulfide bonds during protein folding in the endoplasmic reticulum of eukaryotes. As PDI introduces disulfides, it becomes reduced and must be reoxidized to continue forming disulfides. Another enzyme, endoplasmic reticulum oxidoreductin 1 (Ero1), re-oxidizes PDI, allowing disulfide formation to continue. Ero1 uses the cofactor flavin adenine dinucleotide (FAD), which allows it to donate electrons directly to molecular oxygen. This study aims to monitor the ability of Ero1 isolated from Conus geographus, a species of cone snail, to aid PDI in forming the correct disulfides, using conotoxins as substrate. cDNA was synthesized from RNA isolated from the venom gland of Conus geographus. DNA encoding Ero1 was amplified by PCR using specific primers designed from previous transcriptome data. The amplified DNA was cloned in E. coli and its sequence determined. Sequence analysis confirmed that the DNA encoded Ero1. The encoded protein was 432 amino acids long and contained 16 cysteine residues. It was then aligned with sequences from 100 other species, including vertebrates and invertebrates, to determine conserved cysteine residues. Vertebrates are known to have two isoforms of the Ero1 enzyme that differ slightly in their cysteine residues. Cone snail Ero1 was compared with both isoforms to determine whether it was more closely related to either of the vertebrate isoforms. The alignment showed that some key regions, such as the FAD-binding domain, were conserved, but the enzyme as a whole was not. When compared with the two vertebrate isoforms, cone snail Ero1 contained the essential cysteine residues of both. After sequence analysis, Ero1 was expressed in E. coli using the pGEX-6P-2 vector.
The vector expresses Ero1 with a glutathione S-transferase (GST) tag, creating a GST-Ero1 fusion protein, which was purified by affinity chromatography with glutathione beads. PreScission Protease was then used to cleave the GST from Ero1. Ero1 was further purified, first by repeating the affinity chromatography to remove free GST and then by size-exclusion chromatography to remove Ero1 dimers. A folding assay with Ero1 and PDI from Conus geographus was performed by adding synthetic linear conopeptides to a mixture of the two enzymes and following the time course of the reaction; samples were quenched with formic acid and analyzed by HPLC to quantify properly folded toxin. The folding assay revealed more properly folded toxin when Ero1 was present in addition to PDI. This project determined that cone snail Ero1 is an active enzyme and that it interacts with PDI to fold conotoxins more efficiently.

ENDOSOMAL TRAFFICKING IN DAMAGED OR SEVERED AXONS IN C. ELEGANS
Joe Thomas (Michael Bastiani) Department of Biology

When an axon is subject to severe damage, the neuron must compensate by engaging in a series of stress-induced biochemical responses. These responses up-regulate transcription factors that help the axon mount a successful regenerative response, which must include increased synthesis of materials for the growth cone and the newly formed axon; these materials must be transported to the growing tip of the regenerating axon. This up-regulation of transport can be visualized as an increase in organelles trafficking from the cell body into the axon, including organelles such as the Golgi apparatus, endoplasmic reticulum, lysosomes, and endosomes. A recent RNAi screen completed in 2014 by Nix et al. identified at least 50 genes of Caenorhabditis elegans that have either growth-promoting or growth-inhibiting functions.
One of these genes was unc-16, which encodes the homolog of mammalian JIP3, a JNK-interacting protein that has been found to bridge the activity of JNK-1 and JNK kinases (Byrd et al., 2001). It has been proposed that the unc-16 gene could serve as a 'gatekeeper' for organelles at the axon initial segment (AIS). The AIS serves as a regulatory junction between the axon and the neuron cell body and plays a vital role in maintaining the axonal organelle composition that could support the axon during stressful biological events such as axonal injury. A study using unc-16 loss-of-function mutants showed that axons lacking this gene accumulated endosomal organelles at a rate 5-7 times higher than in wild-type worms (Edwards et al., 2013). In this study we addressed the relationship between the quantity of endosomes transported to the site of injury and the likelihood of successful axonal regeneration. We used still-imaging and time-lapse imaging techniques to determine whether there was a significant difference in endosomal count between wild-type and unc-16 worms and whether increased organelle trafficking contributes to successful axon regeneration. Our findings suggest that there is a substantial increase in endosomal accumulation in the AIS of severed axons in unc-16 mutants compared to wild-type. However, although regeneration improves strongly in unc-16 animals compared to wild-type animals (90% compared to 70%), we could not correlate this improved regeneration with increased endosomal trafficking in each specific wild-type and unc-16 regenerating axon.

References: Byrd DT, Kawasaki M, Walcoff M, Hisamoto N, Matsumoto K, Jin Y. UNC-16, a JNK-signaling scaffold protein, regulates vesicle transport in C. elegans. Neuron. 2001;32:787-800. Edwards SL, Yu S-C, Hoover CM, Philips BC, Richmond JE, Miller KG. An Organelle Gatekeeper Function for Caenorhabditis elegans UNC-16 (JIP3) at Axon Initial Segment. Genetics.
2013;194(1):143-161. doi:10.1534/genetics.112.147348

EXAMINING HUMAN TRAFFICKING DOMESTIC SERVITUDE CASES IN CALIFORNIA
Cristina Aguayo Romero (Annie Fukushima) College of Social Work

Domestic workers are a significant part of the U.S. labor force, working in private households and providing services that allow American families to function in their daily lives. They often work without clear terms of employment, are isolated from other workers, and are typically excluded from labor protection laws. The hidden nature and intimacy of their work leaves many domestic workers vulnerable to physical and psychological abuse. Domestic workers who are trafficked into the U.S. are especially vulnerable to abuse, as most survivors are foreign-born and unfamiliar with U.S. laws and customs. Using human trafficking cases in which individuals were trafficked into the state of California as domestic servants, this study identified common elements to understand what factors lead survivors to be recognized as victims when a case is prosecuted. These elements include physical/sexual abuse, threats of deportation, visa fraud, isolation, and debt manipulation. Drawing on scholarly research, this study also discusses the challenges in prosecuting human trafficking cases. Current data find that police officers' perceptions of human trafficking do not support the identification of a broad range of cases as human trafficking. This research also compares human trafficking policies in California with those in Utah as they relate to domestic work, and provides recommendations for future research and policy initiatives.

RESETTLEMENT EXPERIENCES OF CHILDREN WHO ENTERED THE UNITED STATES AS REFUGEES
Elizabeth Katherine Gamarra (Joanna Bettmann Schaefer, PhD LCSW) College of Social Work

Approximately 19.5 million refugees exist globally, and nearly half are children (United Nations High Commissioner for Refugees, 2014). As families become acclimated to U.S. culture, they face numerous challenges.
This study found that children experience a significant number of stressors during resettlement, which affect them within their family structure, among their peers, and in other social interactions. This qualitative study sought to answer the question: What are the core issues confronting children ages 8-14 with a refugee background as they resettle in the United States? Participants were recruited from Sudanese, Somali, Bhutanese, and Karen communities. In addition, others involved with these children, such as service providers and parents, were interviewed. The study identified and explored six core themes: school, emotional health, cultural identity, social interactions, laws and safety, and changed family dynamics. These themes indicated that although there are positive aspects of resettlement for families, more needs to be done to support parents and children in their adaptation and transition to the United States. Findings indicated that unfamiliar cultural and social systems created considerable post-migration stress. For instance, several parents cited their lack of knowledge of U.S. laws and legal systems. Furthermore, stakeholders highlighted significant challenges related to adapting to and understanding social and cultural norms within the school system. The findings therefore suggest that additional time and resources should be devoted to communities, social agencies, and schools to facilitate a smoother transition for children and their parents. These findings contributed to the development of a cultural orientation curriculum for children with refugee status, ages 8-14. Furthermore, the study acknowledged “refugee status” as a more inclusive and accurate term when referring to children with a refugee background. This is the beauty of a community-based research approach: it is as meaningful to the community as it is to future research. |
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6wkvh6j |



