| Title | Evaluating the effectiveness of orientation indicators with an awareness of individual differences |
| Publication Type | dissertation |
| School or College | College of Engineering |
| Department | Computing |
| Author | Ziemek, Tina Renee |
| Date | 2010-08 |
| Description | Understanding how users perceive three-dimensional (3D) geometric models can provide a basis for creating more effective tools for visualization in applications such as CAD or 3D medical imaging. This dissertation examines how orientation indicators affect users' accuracy in perceiving the shape of a 3D object shown as multiple views. Multiple views force users to infer the orientation of an object and recognize corresponding features between distinct vantage points. These are difficult tasks, and not all users are able to carry them out accurately. A cognitive experimental paradigm is used to evaluate the effectiveness of four types of orientation indicators on a person's ability to compare views of objects presented in different orientations. The orientation indicators implemented were colocated, noncolocated, static, and dynamic. The study accounts for additional factors including task, object complexity, axis of rotation, and users' individual differences in spatial abilities. Results show that a colocated orientation indicator helps users the most in comparing multiple views, and that the effect is correlated with a person's spatial ability. Besides the main finding, this dissertation helps demonstrate the application of a particular experimental paradigm and analysis as well as the importance of considering individual differences when designing interface aids. |
| Type | Text |
| Publisher | University of Utah |
| Subject | Orientation indicators |
| Subject LCSH | Three-dimensional imaging; Orientation |
| Dissertation Institution | University of Utah |
| Dissertation Name | PhD |
| Rights Management | © Tina Rene Ziemek |
| Format | application/pdf |
| Format Medium | application/pdf |
| Source | Original in Marriott Library Special Collections, QA3.5 2010 .Z54 |
| ARK | ark:/87278/s6m04kz6 |
| DOI | https://doi.org/doi:10.26053/0H-5M8W-PMG0 |
| Setname | ir_etd |
| ID | 192830 |
| OCR Text | EVALUATING THE EFFECTIVENESS OF ORIENTATION INDICATORS WITH AN AWARENESS OF INDIVIDUAL DIFFERENCES

by Tina Renee Ziemek

A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, School of Computing, The University of Utah, August 2010. Copyright © Tina Renee Ziemek 2010. All Rights Reserved.

The University of Utah Graduate School. STATEMENT OF DISSERTATION APPROVAL. The dissertation of Tina R. Ziemek has been approved by the following supervisory committee members: William B. Thompson, Co-Chair (date approved 6-9-2010); Sarah H. Creem-Regehr, Co-Chair (6-8-2010); Christopher R. Johnson, Member (6-8-2010); P. Thomas Fletcher, Member (6-8-2010); Mary Hegarty, Member (6-4-2010); and by Martin Berzins, Chair of the School of Computing, and by Charles A. Wight, Dean of The Graduate School.

ABSTRACT

Understanding how users perceive three-dimensional (3D) geometric models can provide a basis for creating more effective tools for visualization in applications such as CAD or 3D medical imaging. This dissertation examines how orientation indicators affect users' accuracy in perceiving the shape of a 3D object shown as multiple views. Multiple views force users to infer the orientation of an object and recognize corresponding features between distinct vantage points. These are difficult tasks, and not all users are able to carry them out accurately. A cognitive experimental paradigm is used to evaluate the effectiveness of four types of orientation indicators on a person's ability to compare views of objects presented in different orientations. The orientation indicators implemented were colocated, noncolocated, static, and dynamic.
The study accounts for additional factors including task, object complexity, axis of rotation, and users' individual differences in spatial abilities. Results show that a colocated orientation indicator helps users the most in comparing multiple views, and that the effect is correlated with a person's spatial ability. Besides the main finding, this dissertation helps demonstrate the application of a particular experimental paradigm and analysis as well as the importance of considering individual differences when designing interface aids.

"Oh, the places you'll go! There is fun to be done! There are points to be scored. There are games to be won." - Dr. Seuss

CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS

CHAPTERS

1. INTRODUCTION
1.1 Three-dimensional visualizations of geometric objects
1.1.1 Increasing user accuracy with orientation indicators
1.1.1.1 Different types of orientation indicators
1.1.2 Variables that may influence the effectiveness of a visualization
1.2 Evaluation via a cognitive paradigm
1.2.1 Motivation
1.2.2 Mental rotation paradigm
1.3 Contributions
1.4 Organization

2. BACKGROUND AND RELATED WORK
2.1 Three-dimensional visualizations
2.1.1 Overview of visualization
2.1.1.1 Scientific visualization applications
2.1.2 Using external representations to facilitate internal cognition
2.1.3 Are visualizations effective for all users?
2.1.4 Spatial reference frames
2.1.4.1 Object-based and viewer-based reference frames
2.1.4.2 Reference frames in virtual environments
2.2 Increasing effectiveness of visualizations through cognitive support
2.2.1 Tasks where orientation indicators could benefit users
2.2.1.1 Mechanical CAD
2.2.1.2 Medical visualizations
2.3 Variables that may affect 3D visualizations
2.3.1 Task, stimuli, axis of rotation, and level of interactivity may affect task-performance with a 3D visualization
2.3.2 Individual differences may affect task-performance with a 3D visualization

3. EXPERIMENTAL DESIGN FOR EVALUATING ORIENTATION INDICATORS
3.1 Orientation indicators evaluated
3.1.1 Colocated or noncolocated
3.1.2 Static or dynamic
3.2 Stimuli
3.3 Subjects' spatial abilities
3.4 Experimental design and procedure
3.4.1 Choose-two-of-four task
3.4.2 Same/different task
3.4.3 Subjects and research setting

4. EVALUATING ORIENTATION INDICATOR EXPERIMENTS
4.1 Results and discussion of choose-two-of-four experiments
4.1.1 Accuracy score
4.1.1.1 Colocated vs. noncolocated indicators
4.1.1.2 Individual differences in spatial ability
4.1.1.3 Class of objects
4.1.1.4 Axis of rotation
4.2 Results and discussion of same/different experiments
4.2.1 Accuracy score
4.2.1.1 Colocated vs. noncolocated indicators
4.2.1.2 Individual differences in spatial ability
4.2.1.3 Class of objects
4.2.1.4 Axis of rotation
4.2.2 Response time
4.2.2.1 Response time and spatial ability
4.3 Comparison and contrast of the accuracy results of the two tasks

5. DISCUSSION AND CONTRIBUTIONS
5.1 Summary of this research
5.1.1 Type of orientation indicator and spatial ability
5.1.2 Dynamic vs. static orientation indicators
5.1.3 Factors that influence task-performance with a colocated static indicator
5.1.3.1 Time pressure
5.1.3.2 Axis of rotation
5.1.3.3 Spatial ability
5.1.3.4 Ceiling and floor effects
5.1.4 Object complexity
5.2 Future work
5.2.1 Object space
5.2.2 Room space
5.2.3 Environment space
5.2.4 Evaluation of cognitive support
5.3 Contributions
5.3.1 Theoretical contributions
5.3.2 Practical contributions

APPENDIX: EXPERIMENTAL INSTRUCTIONS
REFERENCES

LIST OF FIGURES

1.1 The term visualization can describe internal visualizations that occur in the mind, or external visualizations such as those used in scientific visualization and computer-aided design.
1.2 Visualization of heart and lungs shown as multiple views. The user must establish a correspondence between the different points of view. Images courtesy and copyright of Scientific Computing Institute, University of Utah.
1.3 Noncolocated static orientation indicator on left, colocated static orientation indicator on right.
1.4 Theories and methodologies from cognitive science can be used to systematically evaluate 3D computer applications. Controlled experimentation also allows us to account for individual differences of users such as spatial ability, profession, gender, and age.
2.1 Some users may think relevant information can be seen from a back projection even if it can only be viewed from a side projection. Image courtesy of Johnson et al. [1].
2.2 ViewCube orientation indicator implemented in Autodesk products. The ViewCube displays the orientation of the 3D scene in each view.
2.3 Colocated orientation indicator similar to the one implemented by Stull et al. [2]. Stull and colleagues found that the orientation indicator helped students learn anatomy.
2.4 Computer-aided design is often done using multiple views of a 3D model. Noncolocated orientation indicators are used to indicate an object's orientation.
2.5 Visualization application 3D Slicer is used for surgical planning, image-guided intervention, and clinical studies. Image courtesy and copyright of David Gering.
2.6 Students view 3D structures shown at various orientations to learn anatomy. Images courtesy and copyright of Primal Pictures Ltd.
3.1 Example trials: Choose which two of the four objects on the right match the target object on the left. Noncolocated orientation indicator on top, colocated orientation indicator below.
3.2 Ten stimuli used in experiment. Mechanical parts on top, anatomical structures below. Each stimulus shown in 0° orientation.
3.3 Examples of paper-and-pencil tests used to measure an individual's spatial ability. Paper folding task shown on top, Vandenberg and Kuse [3] mental rotation task shown on bottom.
3.4 Stimuli used in practice trials.
3.5 Four axes of rotation were assessed. Clockwise from top left: horizontal axis, oblique axis one, oblique axis two, vertical axis. All objects are shown rotated 45° from initial position.
3.6 Example choose-two-of-four trials with mechanical stimuli rotated about oblique axis two.
3.7 Example same/different trials: Are the objects the same object shown in different orientations, or are they different objects? Subjects were presented with one type of aid; all subjects had trials where no aid was present.
3.8 Example trials. These two objects are different objects.
3.9 Research setting where subjects took the experiment.
4.1 Mean score on Experiment 2, with and without colocated static orientation indicator with vertical and horizontal rotations, by spatial ability.
4.2 Mean score on Experiment 3, with and without colocated static orientation indicator with oblique one rotation, by spatial ability.
4.3 Mean score on same/different task with and without noncolocated static orientation indicator by spatial ability.
4.4 Mean score on same/different task with and without colocated static orientation indicator by spatial ability.
4.5 Mean score on same/different task with and without colocated static orientation indicator by axis of rotation.
4.6 Mean response time on same/different task with and without noncolocated static orientation indicator by spatial ability.
4.7 Mean response time on same/different task with and without colocated static orientation indicator.
5.1 Three sizes of spaces to analyze in future research. Application areas stated, as well as additional variables to evaluate.

LIST OF TABLES

3.1 Number of subjects in each experiment by spatial ability and gender. Female (F), Male (M), Total (T).
4.1 Accuracy results for the choose-two-of-four experiments. 40 subjects per experiment.
4.2 Accuracy results for the same/different experiments. Rotation always about horizontal axis or vertical axis. 40 subjects per experiment.
4.3 Response time (RT) results in seconds for Experiments 4 and 5.

ACKNOWLEDGEMENTS

I came to graduate school because I didn't know any other avenue that would allow me to study perception. I leave graduate school knowing more about perception, but also more about myself. I would like to thank my advisors, Dr. William Thompson and Dr. Sarah Creem-Regehr, for their guidance. I am grateful to Bill for teaching me to account for the details, to have concrete arguments for my ideas, and to fly fish.
I am grateful to Sarah for helping me with the details, opening up the world of cognitive science to me, and showing me ways to measure people's perception of the world. I also thank my family. I give gratitude to Sandi, Norbert, Todd and family, Troy and family, Terry and family, and Tracy and family for their love and light. I especially thank my father for working extra hard so I could attend school. I am also grateful for the friendships I made in Utah. I especially thank Scott Alfeld, Jason Beck, J. Dean Brederson, Jeremy Archuleta, Daniel Murphy, Justin Polchlopek, Dr. Scott Kuhl, Ben Kunz, Patrick Kelley, J. Dylan Lacewell, Subodh Sharma, Manu Awasthi, Amlan Ghosh, and Mina Jeong. I also would like to thank my yogi friends, break dancing crew, and snowboard posse. I am also grateful for having such wonderful colleagues. I thank my committee members, Dr. Mary Hegarty, Dr. Chris Johnson, and Dr. Tom Fletcher. I also thank Dr. Drew Davidson, Dr. Pete Shirley, and my SIGGRAPH friends. I give special thanks to Dr. Alyn Rockwood for introducing me to the world of computer graphics, art, and interactive techniques. Lastly, I am thankful for the support I received while attending the University of Utah. The School of Computing has wonderful administrative support; thank you to Karen Feinauer and Jessica Johnson. Special thank you to Dr. Martin Berzins and Dr. Erin Parker for supporting outreach efforts, and allowing me to be a part of them. I am also grateful for financial support provided by the National Science Foundation through grants 0745131 and 0914488. I also thank Google for awarding me a scholarship and encouraging underrepresented groups to participate in computer science.

CHAPTER 1

INTRODUCTION

Until recently, the term visualization referred to the construction of visual or mental models that are represented in a person's mind.
In computer science the term has now adopted a second meaning that refers to external graphical representations of data or concepts. Such visualizations are external artifacts and can aid in performing a task by offloading some of the mental processing associated with the task [4, 5]. Visualizations and three-dimensional (3D) models are now being used in engineering, architecture, science, and medicine to comprehend large amounts of data, observe the attributes of data, enable patterns to become apparent, and form hypotheses [5, 6, 7]. Medical education has already made a dramatic shift toward using 3D visualizations and digital representations of anatomy in academic curricula. Since visualizations are created to depict data and communicate information, it is critical that people accurately perceive the computer-generated 3D geometric representations. See Figure 1.1 for example visualizations.

However, extracting important, relevant information in 3D applications such as computer-aided design (CAD) and visualization tools is a difficult task for some users, and the literature shows that not all users may benefit from the advantages of a 3D environment [8, 6]. The present work focuses on the problem of accurately perceiving visualizations that are shown in multiple, simultaneous views. Multiple views are both common and useful [9]. Previous research has evaluated multiple views for information visualization [10, 9], whereas the present research specifically addresses multiple views of 3D geometric entities. Multiple views allow users to simultaneously view an object from different viewpoints and allow features to be seen that would otherwise be occluded from view [11]. However, multiple views force users to establish a correspondence between perspectives, keep track of an object's features between views, and potentially recognize changes in features across vantage points.
For some users, especially users with low spatial abilities, these tasks may be difficult or not carried out accurately, and they may distract from the primary task for which the 3D application is intended. The goal of this work is to increase a user's ability to compare and comprehend multiple views of a 3D visualization. See Figure 1.2 for an example of a visualization displayed as multiple views.

Figure 1.1. The term visualization can describe internal visualizations that occur in the mind, or external visualizations such as those used in scientific visualization and computer-aided design.

Figure 1.2. Visualization of heart and lungs shown as multiple views. The user must establish a correspondence between the different points of view. Images courtesy and copyright of Scientific Computing Institute, University of Utah.

We achieve this goal by evaluating the effectiveness of a selection of orientation indicators, which are in-scene graphical aids that illustrate rotational changes of an object. Prior research has raised awareness of the difficulties users may have when working in a 3D virtual environment [12, 11, 13], and orientation indicators are one solution to help users maintain orientation in a virtual space [8, 2]. Orientation indicators may provide users with cognitive support, which can be defined as assistance from an artifact that helps a user think and solve problems [14], and may free up cognitive resources that modeling and visualization applications would otherwise unnecessarily consume. We specifically examine the effectiveness of orientation indicators that can be colocated with the target object or noncolocated (displaced) from the object, and those that are static or dynamic. See Figure 1.3 for noncolocated static and colocated static orientation indicators. To date, there is no guarantee that users will benefit from even the most well-intentioned and technically developed tools [15].
Thus, this work presents an evaluation of orientation indicators with an established cognitive experimental paradigm. Perceiving shape and spatial relationships are fundamental aspects of visualization tasks [16], and appropriate visual cues are necessary to accurately perceive spatial relationships in computer-generated images [17]. Decades of work in spatial cognition have demonstrated that visuospatial thinking and mental representation can be systematically evaluated [18, 19, 20, 21]. We evaluate users' abilities to perceive the orientation of 3D objects with the mental rotation paradigm [20, 3]. Mental rotation tasks are most commonly used to evaluate the mechanisms underlying spatial reasoning and the internal construct of mental imagery; however, many researchers have had success in using the mental rotation paradigm to evaluate the perception of computer graphics (e.g., [22, 23, 24, 25, 26, 27, 17]). This methodology allows us to use objective, controlled experimentation to evaluate the influence of an orientation indicator on the perceived orientation of a 3D object. This paradigm also allows us to test several factors that may influence the effectiveness of an orientation indicator.

Figure 1.3. Noncolocated static orientation indicator on left, colocated static orientation indicator on right.

The present work seeks to answer two questions about users' task-performance when viewing visualizations:

1. Can orientation indicators increase a user's task-performance with a 3D visualization presented as multiple static views? There are several variables that may affect the influence of orientation indicators on object-orientation judgments of 3D visualizations. In the work presented here, we vary the task, complexity of an object, axis of rotation, and presence of dynamic information to determine the effectiveness of an orientation indicator.
In particular, we use two mental rotation tasks, the choose-two-of-four task [3] and the same/different task [20], to assess users' performance. The choose-two-of-four task measures accuracy, while the same/different task measures both accuracy and speed. These tasks allow us to understand 3D applications where the user's accuracy is key and 3D applications where the user's performance may be influenced by time pressure.

2. Do individual differences in visuospatial abilities influence the effectiveness of an orientation indicator? Individual differences between users may affect the extent to which a user benefits from an orientation indicator. Users with high spatial ability may benefit more or less from an orientation indicator than users with low spatial ability.

This dissertation seeks to test several scientific hypotheses regarding a user's task-performance with visualizations shown as multiple static views. First, we hypothesize that orientation indicators will help users perform more accurately on two tasks that assess a user's ability to maintain the orientation of 3D virtual objects shown on a desktop display. We believe that different types of orientation indicators will have different effects on a user's accuracy; aids that are colocated with an object may be more effective than aids that are not colocated with an object. Furthermore, the complexity of the 3D model and the axis of rotation may impact the effectiveness of an orientation indicator. Users may benefit more from an aid when the 3D object is abstract or when the axis of rotation is an arbitrary oblique axis. Finally, we hypothesize that a user's spatial ability will impact whether he or she benefits from an orientation indicator. Users with high spatial ability may benefit more from an orientation indicator than users with low spatial ability. Conversely, users with low spatial ability may benefit more from an orientation indicator than users with high spatial ability.
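The distinction between the two measures can be made concrete with a small scoring sketch. The trial records, names, and values below are hypothetical illustrations rather than the dissertation's actual data or analysis code; the point is simply that the choose-two-of-four task yields an accuracy score, while the same/different task yields both accuracy and response time:

```python
from statistics import mean

# Hypothetical trial records: (indicator condition, correct?, response time in s).
trials = [
    ("colocated", True, 3.0),
    ("colocated", True, 3.5),
    ("noncolocated", False, 4.0),
    ("noncolocated", True, 4.5),
]

def score(trials, condition):
    """Accuracy and mean response time of correct trials for one condition."""
    cond = [(ok, rt) for c, ok, rt in trials if c == condition]
    accuracy = sum(ok for ok, _ in cond) / len(cond)
    rt = mean(rt for ok, rt in cond if ok)
    return accuracy, rt

print(score(trials, "colocated"))     # -> (1.0, 3.25)
print(score(trials, "noncolocated"))  # -> (0.5, 4.5)
```

For a pure accuracy task such as choose-two-of-four, only the first component would be reported; the same/different task would report both.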
1.1 Three-dimensional visualizations of geometric objects

Although there are several types of visualizations, such as information visualization and flow visualization, the present work examines scientific visualizations of 3D geometric entities. In these visualizations the user is viewing 3D geometric shapes rendered from numerical data and computer-generated models.

1.1.1 Increasing user accuracy with orientation indicators

Orientation indicators have the potential to increase users' accuracy in perceiving the 3D structure of an object presented as multiple views. These in-scene graphical aids illustrate rotational changes between views and may compensate for ambiguous information about an object's orientation. For instance, without an orientation indicator users may incorrectly assume which way the object is positioned; they may think they are looking at the "top" of an object when they are actually looking at the "bottom". With an indicator the user does not have to rely solely on the object itself to infer the object's orientation in space. It has been shown that individuals can benefit from additional information about an object's orientation in a mental rotation task. Hinton et al. [28] found that participants benefit from advance information about an object's orientation: participants saw an arrow indicating the orientation of an object prior to its appearance. Pani et al. [29] found that participants were more accurate in rotating an object when it was presented in a wooden box than when it was presented by itself. However, males tended to be able to use the information the box provided more efficiently than females. The present work builds on these previous findings by examining the relative effectiveness of different types of indicators for orientation in an abstract virtual space.
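The experiments described later present stimuli rotated about horizontal, vertical, and oblique axes. As a minimal sketch of the underlying geometry (not the dissertation's rendering code; the axis and angle below are arbitrary examples), a rotation matrix about any axis can be constructed with the axis-angle (Rodrigues) formula:

```python
import math
import numpy as np

def rotation_matrix(axis, angle_deg):
    """3x3 rotation about an axis (normalized here) by angle_deg, via Rodrigues' formula."""
    u = np.asarray(axis, dtype=float)
    u /= np.linalg.norm(u)
    a = math.radians(angle_deg)
    # K is the skew-symmetric cross-product matrix of the unit axis.
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + math.sin(a) * K + (1.0 - math.cos(a)) * (K @ K)

# Rotate a vertex 45 degrees about an oblique axis (here, the x=y diagonal).
R = rotation_matrix([1.0, 1.0, 0.0], 45.0)
print(R @ np.array([0.0, 0.0, 1.0]))
```

Applying such a matrix to every vertex of a model produces the rotated view shown to a subject; the oblique axes in the experiments are simply unit axes that are not aligned with the horizontal or vertical.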
1.1.1.1 Different types of orientation indicators

Various orientation indicators have been implemented in computer-aided design (CAD) and medical imaging software applications. The ViewCube is one orientation indicator that is implemented in Autodesk, Inc. 3D modeling packages [8]. The ViewCube is an iconic in-scene aid; the ViewCube's position in space reflects the 3D model's position in space. The user can also click on the "front" face of the cube to view the front of the 3D model. Prior to the ViewCube, Autodesk, Inc. implemented other aids to facilitate orientation, including the user coordinate system icon and the ViewCompass. The user coordinate system icon displayed the orientation of the major coordinate system axes, x, y, and z [30]. The ViewCompass provided direct viewpoint selection [8]. Orientation indicators have been implemented in medical imaging software in the forms of bounding boxes, virtual human figures, and aids that depict the left, right, anterior, and posterior sides of an object [31]. To our knowledge these orientation indicators have not been quantitatively assessed.

We analyze orientation indicators that are either colocated or noncolocated with the object, and either static or dynamic. This design led to four different types of orientation indicators: colocated static, colocated dynamic, noncolocated static, and noncolocated dynamic. In all instances the object stimuli were static. As shown in Figure 1.3, the noncolocated orientation indicator is placed above the 3D object, and the colocated orientation indicator shares a center point with the 3D object. An orientation indicator that is placed apart from, and not attached to, the object may cause users difficulty because they have to mentally transfer information from the aid to the object. A user may therefore benefit more from a colocated indicator, which does not require this extra transfer step.
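The colocated/noncolocated distinction is purely one of placement: both indicator types mirror the object's rotation and differ only in where they are drawn. A minimal sketch of this idea follows; the function name, offset value, and coordinates are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def indicator_transform(object_rotation, object_center, colocated, offset=2.0):
    """4x4 model matrix for an orientation indicator.

    Both indicator types copy the object's rotation; a colocated indicator
    shares the object's center, while a noncolocated one is displaced
    above it by `offset` units along the view's up axis.
    (Illustrative sketch; the offset value is an assumption.)
    """
    T = np.eye(4)
    T[:3, :3] = object_rotation
    center = np.asarray(object_center, dtype=float)
    T[:3, 3] = center if colocated else center + np.array([0.0, offset, 0.0])
    return T

R = np.eye(3)              # object's current rotation
center = [0.0, 0.0, -5.0]  # object's center in view space
print(indicator_transform(R, center, colocated=False)[:3, 3])  # [ 0.  2. -5.]
```

Because the indicator always carries the same rotation as the object, comparing two indicators is geometrically equivalent to comparing the two object views, which is what makes the aid informative.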
The static aids show the orientation of each 3D object; the dynamic aids show the path of rotation between two objects.¹ The motion from the dynamic orientation indicator may affect task-performance. A dynamic orientation indicator may facilitate cognitive processes better than a static orientation indicator.

1.1.2 Variables that may influence the effectiveness of a visualization

There are a variety of possible factors that influence task-performance with a 3D application. Visualizations could be more or less effective depending on the task being performed and the complexity of the rendered 3D model. The present work analyzes users' task-performance with two versions of a cognitive paradigm and two classes of objects that vary in complexity. A user's spatial abilities may also influence the effectiveness of a visualization. Kozhevnikova et al. [32, 33] and Blazhenkova et al. [34] suggest that different people might have different preferences for how visual imagery is represented. We predict that a subject's performance with an orientation indicator will correlate with his or her visuospatial abilities. Visuospatial abilities are necessary for many common activities and have also been linked to job performance in occupations such as engineering, aircraft piloting, and surgery [18]. For these reasons the present work takes visuospatial abilities into account. Lastly, the level of interactivity may affect how the visualization is used. There are various types or levels of interactivity with a 3D visualization. Some 3D tools do not permit the user to interact with the 3D visualization; they only present information. Three-dimensional visualizations can be static (i.e., traditional print and maps), animated (dynamic motion), or interactive (responds directly to user input). This dissertation analyzes the utility and ease of use of static visualizations.

¹Examples of the dynamic indicator can be viewed at http://www.cs.utah.edu/~tziemek/dissertation
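The crossed factors just described (indicator placement, indicator motion, task version, and object complexity) form a small factorial design. The sketch below enumerates the cells of such a design; the factor names and level labels are paraphrased from the text for illustration and are not taken from the dissertation's actual experiment materials.

```python
from itertools import product

# Hypothetical factor levels paraphrased from the text; the dissertation's
# actual condition labels may differ.
indicator_placement = ["colocated", "noncolocated"]
indicator_motion = ["static", "dynamic"]
task_version = ["accuracy-stressed", "time-stressed"]
object_complexity = ["simple", "complex"]

# Fully crossing the four two-level factors yields every experimental cell.
conditions = list(product(indicator_placement, indicator_motion,
                          task_version, object_complexity))
print(len(conditions))  # 2 x 2 x 2 x 2 = 16 cells
```

Enumerating the cells this way makes explicit how quickly fully crossed designs grow, which is one reason controlled studies limit each factor to a few levels.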
We evaluate static visualizations in order to utilize objective evaluation criteria, and suggest attributes of visualizations that may cause difficulty for users of static, animated, and interactive visualizations. The present work provides a foundation for future work which could examine noninteractive dynamic and interactive visualizations. Furthermore, as outlined in Section 2.2.1, there are a variety of applications which present visualizations as static images. In addition, some visualizations cannot be presented as dynamic or interactive because of the complexity of the underlying data. Also, it has been suggested that interactivity is not essential for a visualization to be effective (e.g., Keehner et al. [7]). We do take into account the effectiveness of dynamic orientation indicators, although the underlying visualization is static.

1.2 Evaluation via a cognitive paradigm

1.2.1 Motivation

Computer applications display information visually in order to communicate it to users. However, computer users may fail to extract relevant information from a display. Despite the designer's efforts to make an intuitive and effective interface, users often do not see a vast amount of information, and this problem is worsened because users are not aware that they are not seeing all of the information the designer has made available [35]. It may be that designers think users can process more visual information than they actually can. Research on the visual information a user attends to within an interface can be used as motivation to evaluate scientific visualizations. It has been found that users often do not see useful information within an interface, that a user does not always attend to all of the locations on the screen that contain important information, and that the user wrongly assumes that he or she has an accurate representation of the information that is presented [35].
We cannot assume a user will process and benefit from all of the visual cues within a visualization. Furthermore, it may be that the user will misinterpret the information shown within a visualization. For these reasons, it is imperative that we evaluate a user's experience in a 3D environment with controlled experimentation. Methodologies from cognitive psychology can be used to carry out this experimentation, and the results can be used to inform the design of 3D computer applications. By objectively measuring a user's task-performance we can reduce biases and complexities that would otherwise be introduced if 3D applications were used to measure performance. See Figure 1.4 for the types of 3D computer applications we can evaluate using ideas, methodologies, and theories from cognitive science. Furthermore, evaluating a user's perception of information with controlled investigation allows for the analysis of several factors that may impact a user's experience with a visualization in a systematic way. It also allows for testing a user's own individual differences such as gender, spatial ability, age, and profession to determine whether these variables influence how a user perceives a 3D application. For these reasons we have chosen to utilize a class of response measures with which the perceptual psychology community has much experience, extending this prior research in ways that are useful in understanding 3D applications.

1.2.2 Mental rotation paradigm

We evaluate the effectiveness of orientation indicators with the mental rotation paradigm. The tasks we use are similar to the Vandenberg and Kuse [3] and Shepard and Metzler [20] mental rotation studies. The established methodology and body of research on mental rotation provides a basis for its use to evaluate the influence of an orientation indicator on the perceived orientation of a 3D object.
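A mental rotation trial of the kind used in these studies can be sketched as a loop that records a same/different judgment and its response time. The snippet below is a minimal illustration with a simulated responder; the function and field names are assumptions for the sketch, not the dissertation's actual experiment software.

```python
import time
import random

def run_trial(stimulus_pair, respond):
    """Present one pair of views, collect a same/different judgment, and time it."""
    start = time.perf_counter()
    answer = respond(stimulus_pair)      # would be a keypress in a real study
    rt = time.perf_counter() - start
    return {"correct": answer == stimulus_pair["same"], "rt": rt}

# Simulated session: trials are random same/mirrored pairs, and a stand-in
# responder answers correctly about 80% of the time.
random.seed(0)
trials = [{"same": random.choice([True, False])} for _ in range(100)]
respond = lambda pair: pair["same"] if random.random() < 0.8 else not pair["same"]
results = [run_trial(t, respond) for t in trials]

# Accuracy and mean response time are the two dependent measures the
# paradigm provides.
accuracy = sum(r["correct"] for r in results) / len(results)
mean_rt = sum(r["rt"] for r in results) / len(results)
```

In an actual study these per-trial records would be aggregated per condition before analysis.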
Through a series of experiments we assess how orientation indicators can help users understand the orientation of a 3D object in an abstract virtual space. We measure accuracy and response time to determine the effect the orientation indicator has on user performance in a 3D desktop environment. The mental rotation paradigm also allows us to examine variables that may affect the utility of an orientation indicator, including the difficulty of the task (accuracy or time pressure), complexity of the 3D objects (simple or complex), a user's spatial ability (high or low spatial ability), and the presence of dynamic information (static or dynamic).

1.3 Contributions

There are two main goals of this dissertation. The first is to demonstrate a systematic evaluation of visualizations. The second is to demonstrate the benefits of cognitive support within visualizations. Using a cognitive experimental paradigm, we illustrate the effectiveness of orientation indicators on visualizations presented as multiple static views. We found that orientation indicators that are colocated with the 3D object are more effective than orientation indicators that are noncolocated with the 3D object. Furthermore, the presence of dynamic information does not increase the utility of an orientation indicator. Finally, a person's individual differences in spatial ability are likely to affect the usefulness of an orientation indicator. These results can inform the design of 3D applications and are important for four particular reasons. First, if an individual has difficulty with a 3D application, we show that a colocated orientation indicator can be used to help alleviate problems.

Figure 1.4. Theories and methodologies from cognitive science can be used to systematically evaluate 3D computer applications. Controlled experimentation also allows us to account for individual differences of users such as spatial ability, profession, gender, and age.
Second, we found that a noncolocated orientation indicator has a smaller impact on task-performance than a colocated orientation indicator. Therefore, this research can help engineers make objective decisions regarding the type of orientation indicator to include in 3D software packages. Third, this work highlights the importance of evaluations based on controlled experimentation using theories and methodologies from cognitive psychology. Finally, we demonstrate the need to take into account individual differences and find ways in which all users can benefit from 3D applications. We intend to show that 3D visualizations can be improved with controlled investigation of how users perceive the information in a visualization. By identifying the difficulties users may have when working with a visualization and the benefits of additional information, engineers can implement methods to make 3D visualizations more effective. The implications from this work extend to 3D CAD and medical visualization applications, as these applications could be made more accessible to a broad population of users through the use of in-scene cognitive aids. At the same time, this work can inform our understanding of the processing of complex imagery and assess whether human performance can be improved through the use of a cognitive aid. It has been shown that there is a wide range of people's spatial abilities not only in the general population, but also within specialized populations such as practicing surgeons [36, 18]. Differences in task performance between high spatial and low spatial users may be interpreted as a "superiority" of high spatial learners. An alternative interpretation is that the two groups rely on different aspects of spatial processing to solve the same tasks, leading to apparent behavioral differences.
By understanding these differences we can provide low spatial users with cognitive aids that allow them to solve a task using a different method than a high spatial user would. By identifying users' difficulties with 3D navigation and the benefits of additional information we can make 3D environments more effective. We aim to illustrate methods by which 3D visualizations could be made more usable. We hope this work will encourage additional research on other ways in which 3D visualizations can be improved.

1.4 Organization

Chapter 2 provides a comprehensive summary of related previous work, variables that may influence the effectiveness of a visualization, and how these variables may influence the effectiveness of an orientation indicator. Chapter 3 describes the 3D object stimuli, methodology, and experiment procedures which were used to measure subjects' task-performance. Chapter 4 presents the results of the mental rotation experiments in the present research. Finally, Chapter 5 discusses the mental rotation results as well as the practical and theoretical contributions of this work.

CHAPTER 2

BACKGROUND AND RELATED WORK

This chapter introduces previous work that evaluated the effectiveness of 3D visualizations. First, I begin with an overview of 3D visualizations and why visualizations may not be effective for all users. A discussion on frames of reference is included. Second, I discuss how users might gain information from an in-scene cognitive aid and research that has implemented techniques to support effective navigation in 3D tools. Specific applications that can benefit from the present work are given. Third, I describe variables that could affect users' experience and task-performance with 3D visualizations. In particular, I describe how a user's spatial ability may affect how he or she benefits from a visualization.
2.1 Three-dimensional visualizations

Advances in computer graphics such as sophisticated rendering methods and hardware have led to the ability to create complex 3D graphics and visualizations.

2.1.1 Overview of visualization

Visualization includes the areas of computer graphics, image processing, high performance computing, information visualization, and scientific visualization. The present work focuses on scientific visualizations, which can be defined as 3D graphical representations which are used to gain understanding of and insight into data [5]. Scientific visualization does not include presentation graphics, which communicate information and results in ways that are easily understood (such as a bar chart). The motivation behind scientific visualization is to allow users to comprehend data in ways that are not feasible with the raw data. There are four stages of visualization: the data are collected and stored; the data are preprocessed into a form we can understand; display hardware and graphics algorithms produce an image; and the human cognitive system perceives the image [5]. This dissertation focuses on evaluating how accurately the human perceiver comprehends the visualization. There are three stages of the perceptual processing of a visualization. First, the viewer extracts the low-level properties of the scene such as features, orientation, color, texture, and movement patterns. Second, the viewer uses contours and regions of the same color, texture, and motion to recognize patterns. Lastly, the viewer carries out sequential, goal-directed processing [5]. For instance, the viewer uses visual search strategies to extract information he or she is seeking. The present work focuses on the low-level property of object orientation.

2.1.1.1 Scientific visualization applications

Many disciplines are using visualizations to analyze data.
These areas include engineering, fluid dynamics, electronic design, medical imaging, geospatial information sciences, the military, meteorology, and geology. Applications with 3D visualizations give users the experience of viewing real 3D objects, and enable both expert and nonexpert users to visually explore data [37]. Visualizations provide data analysis without the need to formally train users, since shapes can be readily perceived [5]. Some practitioners use visualizations to reveal correlations in the data over space and time (see [38, 37, 5]). Visualizations can also be used in clinical studies (see [39, 40]) and in pedagogy (see [41, 42, 43]).

2.1.2 Using external representations to facilitate internal cognition

External representations such as visualizations are a visual aid to cognition [44]. A useful framework for understanding how external visualizations facilitate internal cognition is distributed cognition [6]. Distributed cognition is the theory that certain tasks require the processing of information that is distributed across both the internal mind and an external representation [45]. In order to evaluate a distributed task, we must consider both the internal processing and the external representation, because each facilitates cognition. Distributed cognition can help us better understand human-computer interaction by putting the focus on what users do in virtual environments and how they perform activities in them [46]. Because the attributes of external representations influence users' cognition, designers of visualizations should consider the user's moment-to-moment actions in a virtual space [6]. Effective interfaces will facilitate cognition by helping the user decide which action to do next [47]. The theories of embodied cognition assume that people will minimize internal cognitive processes by utilizing perceptual-motor processes [48].
For example, instead of imagining an object from a specific viewpoint, users will instead manually rotate the object to that perspective as a means of simplifying the problem-solving task. Kirsh and Maglio [49] found that players of the game Tetris, in which falling block shapes must be rotated and horizontally translated to fit as compactly as possible with already fallen blocks, would use external rotations and transformations to uncover information that was difficult to compute mentally. Furthermore, people may use the environment to solve problems in situations that demand fast responses, because the time to mentally compute processes would be costly [48]. These theories predict that scientific visualizations will aid cognition by offloading inefficient internal processes onto more efficient perceptual-motor processes, such as externally rotating an object and observing the changes [6, 50, 51, 47]. However, distributed cognition is typically examined using simpler tasks than those typically performed by practitioners using scientific visualizations. Therefore, the present work can contribute to the body of literature on distributed cognition and also benefit from this theoretical framework.

2.1.3 Are visualizations effective for all users?

Despite the enthusiasm regarding the use of 3D digital representations, research on how users perceive 3D models and the information users gain from a 3D tool is to date limited and inconsistent [4, 6]. Even though designers create visualizations to be aesthetically appealing and intuitive, some users may not understand how to effectively use them. Some research has shown the presence of a 3D visualization is beneficial [52, 6], whereas other research shows 3D visualizations do not provide extra information [53, 4]. Knowing whether a user will benefit from a visualization is a troubling problem for designers of 3D visualizations.
The information being displayed in a visualization may be very beneficial to users, but users may not be able to comprehend all of the information shown because the visualization is too complex. For instance, previous research has indicated that when individuals from a broad population were assigned a shape-related task that entailed interaction with a 3D visualization, failure correlated with the inability to find an appropriate orientation from which to view the data [6]. In this research the visualization contained useful and relevant data; however, the user did not attend to this information. It may be that users have trouble finding important information within a visualization, or that they think they have discovered all of the important information within a visualization when they have not. It is possible that some users cannot access information from a visualization because they get disoriented when working in an abstract 3D virtual environment. The concepts and tools needed to maintain orientation in a 3D scene may be difficult to learn, and some users have even rejected using 3D tools [12, 8]. Keehner, Khooshabeh, and Hegarty [6] found that not all users were able to find the most "informative view", i.e., the view that gives key information within a visualization. Similarly, Velez, Silver, and Tremaine [54] reported that some individuals thought the most "informative view" was always the back projection of the object even if it was a side or bottom view. See Figure 2.1 for example orientations in which visualizations are presented. It is also possible that some users have difficulty orienting themselves in 3D desktop environments because many of the cues commonly used to maintain a frame of reference in the real world are absent in these virtual spaces. There is often no sense of an "up" direction in an abstract data space, and this can be confusing [55].
In the real world we can orient ourselves via cues from our bodies and from the environment, including the horizon, lighting, and objects in the environment. In 3D virtual environments objects are often presented in a vacuum of space, and users may become easily disoriented by camera perspectives that are from unfamiliar points of view. Previous research indicates that imagining an object's rotation is difficult when only the object's initial position is given and no other information is provided [56, 57]. Ware and Arsenault [55] found that frames of reference can impact the task-performance of making two virtual objects parallel (i.e., rotating one object until it matches the orientation of a target object). Much of the research conducted on perceived direction has been done in the context of space research, to help us understand how people can best orient themselves in a gravity-free environment [55, 58]. Howard et al. [59] found that the presence of familiar objects with a known normal orientation, such as a chair, can influence which direction is perceived as up.

Figure 2.1. Some users may think relevant information can be seen from a back projection even if it can only be viewed from a side projection. Image courtesy of Johnson et al. [1].

2.1.4 Spatial reference frames

Environments that allow for users or objects to move through space are often defined in terms of a spatial coordinate system. This coordinate system can be defined as three axes of translation (e.g., X, Y, Z coordinates in 3D space) and three axes of orientation (yaw, pitch, roll) [60, 61]. The position and orientation of objects in an environment can be specified by that system's frame of reference. For some tasks, the user may need to use and transform multiple frames of reference.
For example, a construction worker operating a tractor shovel may need to transform the orientation of the shovel (an angular coordinate system) to the location of the tractor on the ground (a two-dimensional (2D) Euclidean system). Transformations of visuospatial mental images depend on multiple spatial reference frames and are important for many reasoning problems, including navigation, understanding of the structure of data, and the making and using of tools [21, 60].

2.1.4.1 Object-based and viewer-based reference frames

It is necessary to use a frame of reference to adopt a specific viewpoint of an object or scene [28]. There are two visuospatial transformations that are often dissociated: object-based transformations, in which individual objects are updated relative to the object's spatial representations, and viewer-based transformations, in which one's personal perspective is updated [57, 62]. When someone performs an imagined rotation or translation using an object-based reference frame, the update is done using the object's intrinsic coordinate frame. For example, a car may be represented as having an up-down axis, a front-back axis, and a left-right axis, while an object such as a water bottle may be represented as only having a major (up-down) axis running from the top to the base. People appear to rapidly and automatically assign a major axis, and hence a top, to objects [21], and such axes play an important role in how we perceive their orientations in space [63, 64, 56]. Studies have shown that relationships are updated differently in viewer-based transformations than in object-based transformations and that some tasks may be more easily solved using one transformation over the other [65, 66, 55, 62, 67, 61]. Object-based reference frames can help a person define the relationships between various parts of an object and can also be used to locate an object relative to another object (i.e., "on the stove").
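The object-based and viewer-based transformations discussed above can be illustrated with plain rotation matrices. The sketch below is a toy under stated assumptions: it picks one common axis convention (yaw about the vertical axis, pitch about the horizontal axis) and shows that the same change of view can be expressed either by rotating the object or by applying the inverse rotation to the viewer.

```python
import math

def rot_yaw(a):
    """Rotation about the vertical (y) axis; one common convention, not universal."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_pitch(a):
    """Rotation about the horizontal (x) axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Object-based transformation: rotate the object's coordinates directly.
R_object = matmul(rot_yaw(math.pi / 2), rot_pitch(math.pi / 4))

# Viewer-based transformation: the same view change expressed as moving the
# observer instead; for a rotation matrix the inverse is simply its transpose.
R_viewer = transpose(R_object)
```

Composing R_object with R_viewer recovers the identity, which is one way to see that the two descriptions encode the same relative change of view even though they update different frames.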
Zacks and Michelon [21] hypothesized that for a same-different task, where subjects make judgments regarding whether two pictures were identical or mirror images (a comparison task), subjects would use an object-based transformation to rotate the reference frame. Research has also found that individuals are able to quickly rotate objects around the vertical axis perpendicular to the line of sight, suggesting they are maintaining a "gravitational vertical" or object-based frame of reference [64, 68]. Conversely, Zacks et al. [21] hypothesized that for left-right tasks, where subjects make judgments regarding which arm (left or right) of a pictured figure was extended (a classification task), subjects would use a perspective transformation. Individual differences may also impact the frame of reference a person maintains, since different coordinate systems can lead to different strategies to solve a visuospatial task. It has been suggested that the ability to manipulate an imagined object with an object-based transformation and the ability to reorient the imagined self with a viewer-based transformation are separate abilities [69]. Research has indicated that individuals may prefer to use either viewer-based or object-based representations in learning a large-scale environment [70, 71]. Furthermore, high spatial individuals may be more flexible in the coordinate systems that they are able to maintain. For example, in solving a mental rotation task, high spatial ability subjects were able to use a frame of reference that included a nonstandard axis of the world, but low spatial ability subjects were not able to use such an axis [72].

2.1.4.2 Reference frames in virtual environments

One frame of reference particularly relevant to virtual spaces is the display frame. The display frame, such as the computer screen, is used to define the orientation and movement of information on a display.
This frame of reference might be analogous to the environmental frame of reference used in the real world. The environment reference frame is based on the orthogonal directions and planes from floors, walls, and ceilings [29]. Kozhevnikova et al. [68] found that individuals might maintain different frames of reference in a virtual environment depending on the display. When performing a mental rotation task, subjects were likely to use a display frame of reference when viewing objects on a 2D monitor, and a viewer-based frame of reference when viewing objects in a 3D immersive display [68]. A distinct difference between 3D desktop environments and the real world is that in the real world objects rarely rotate in space in front of us; instead we often change our location and move our head to get a different viewpoint [73]. In 3D desktop environments objects can be arbitrarily rotated in space, and the user cannot discriminate whether the view of the object changed because of the motion of the object or because of a change in the observer's position in space. A user may interpret all changes to the view of an object as a change due to the object moving, since the cues used to maintain body orientation in the real world are absent. One goal for the designer of a visualization is to ensure the interface does not create unnecessary transformations of information from one spatial reference frame to another. These transformations are cognitively demanding and could increase time, error rate, and mental workload [60]. While a viewer-based graphical aid is worth inquiry in future research, we will focus on an object-based graphical aid. Current industry software packages implement orientation indicators that provide an object-based reference frame (see [12]). It is likely that many 3D visualizations require users to rely on object-based transformations.
When using a visualization users may be prone to interpreting changes to the view of an object as a change resulting from object-based movement, since the viewer-based cues used in the real world are absent. Furthermore, many visualizations require users to do comparison tasks, which rely on the user attending to an object-based frame of reference.

2.2 Increasing effectiveness of visualizations through cognitive support

Research has shown that techniques can be implemented to address the challenges users may have when using 3D tools [74, 11, 75, 76, 77, 12, 8, 2]. Brooks et al. [74] developed haptic displays; Tory and Swindells [75] assessed how multiple viewpoints aided a user; Feibush et al. [77] designed a viewer for navigating terrain; Fitzmaurice et al. [12] augmented existing navigation tools; and Khan et al. [8] implemented an orientation indicator called the ViewCube (see Figure 2.2). However, Khan et al. [8] did not compare task-performance between conditions when the indicator was present and when it was not present. Stull et al. [2] found that orientation references helped individuals learn anatomy from a 3D visualization. See Figure 2.3 for an indicator similar to the one implemented by Stull et al. [2]. Orientation indicators have been implemented in medical imaging software in the forms of bounding boxes, virtual human figures, and aids that depict the left, right, anterior, and posterior sides of an object [31, 78]. To our knowledge these orientation indicators have not been quantitatively assessed.

2.2.1 Tasks where orientation indicators could benefit users

2.2.1.1 Mechanical CAD

An important trend in mechanical CAD is the move towards 3D solid modeling systems. Prior to the advent of CAD software, mechanical designs of individual parts and objects were typically specified by drafting on paper a set of orthographic views (sometimes called multi-view drawings), representing the parallel projection of the object from various viewing directions.
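The orthographic views just described amount to parallel projections that simply drop the coordinate along the viewing direction. The sketch below makes this concrete for a made-up box-shaped part; the view names and axis conventions are illustrative assumptions, not any CAD package's API.

```python
# Vertices of a hypothetical 2 x 1 x 3 box-shaped part (made-up data).
vertices = [(x, y, z) for x in (0, 2) for y in (0, 1) for z in (0, 3)]

def front_view(points):
    """Look along -z: the parallel projection keeps (x, y)."""
    return [(x, y) for x, y, z in points]

def top_view(points):
    """Look along -y: keeps (x, z)."""
    return [(x, z) for x, y, z in points]

def side_view(points):
    """Look along -x: keeps (z, y)."""
    return [(z, y) for x, y, z in points]

# The three viewing directions are mutually separated by 90 degrees,
# matching the layout of a traditional multi-view drawing.
views = {"front": front_view(vertices),
         "top": top_view(vertices),
         "side": side_view(vertices)}
```

Because each projection discards one coordinate, a reader of such drawings must mentally recombine the views, which is exactly the multi-view comprehension task the dissertation studies.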
Viewing directions were typically separated by 90° and aligned in some natural way with the object. Research has suggested that orthographic views provide sufficient information for people to create a full 3D mental representation of an object [79]. Modern mechanical software automates this drafting process, and the electronic representation can now be a full 3D description of the object shape (as shown in Figure 2.4). These applications allow users to create, manipulate, and view 3D geometry and scenes on traditional 2D displays [8]. Viewpoints that are difficult to achieve in the real world, such as a bird's eye view, are easily attainable in 3D applications. Users can view objects from any angle and orient the part in any position. However, controlling the virtual viewpoint and understanding the position of the virtual camera in relation to an object is a challenging task for users new to virtual 3D environments [8]. Another consequence is that designers may have to maintain an association between object features in multiple views. This problem may become even more complicated when designers are working with complex objects. Multiple views are used in CAD and modeling software such as Autodesk's AutoCAD, Maya, and 3D StudioMax [80]. See Figure 2.4 for one example of how Autodesk implements multiple views and orientation indicators.

Figure 2.2. ViewCube orientation indicator implemented in Autodesk products. The ViewCube displays the orientation of the 3D scene in each view.

Figure 2.3. Colocated orientation indicator similar to the one implemented by Stull et al. [2]. Stull and colleagues found that the orientation indicator helped students learn anatomy.

2.2.1.2 Medical visualizations

Certain subsets of the medical community have adopted 3D visualizations into clinical practice [39, 40]. Several medical imaging and visualization software packages allow for multiple views of data.
Multiple views are used in visualization software such as OsiriX, 3D Slicer, Anatomy Browser, Seg3D, and ImageVis3D [81, 82, 83, 84, 85]. These tools can be used for education, image-guided therapy, and also pre-surgical planning and reference [41, 40, 86, 87, 88]. For instance, the application 3D Slicer is used in image-guided surgery and allows the surgeon to view 3D surface models of key anatomical and functional structures [78] from preoperative data in the interventional context. Figure 2.5 shows one way a volume can be oriented in 3D Slicer [39]. Medical education has already made a dramatic shift towards using 3D visualizations and digital representations of anatomy in academic curricula [89, 43]. Educators are recommending digital representations for the study of anatomical structure, function, and spatial relationships [42]. Medical professionals rely on a detailed understanding of spatial structures in the human body [90], but medical students have difficulties achieving this level of understanding [42]. It is believed that realistic 3D models will enhance a student's learning experience [53]. Dynamic visualizations may provide additional depth cues and convey 3D shape to users better than traditional static 2D representations. The motor commands used in interactive visualizations may also benefit users because of the correspondence between their commands and the resulting changes in the object's orientation [6]. There have been major initiatives such as the Visible Human Project to acquire spatial data from human organs and create 3D models which are used for teaching and learning gross anatomy [43].

Figure 2.4. Computer-aided design is often done using multiple views of a 3D model. Noncolocated orientation indicators are used to indicate the object's orientation.

Figure 2.5. Visualization application 3D Slicer is used for surgical planning, image-guided intervention, and clinical studies. Image courtesy and copyright of David Gering.
Figure 2.6 shows images from a digital learning DVD produced by Primal Pictures, Ltd [91]. In this application users are shown multiple views of 3D models to learn anatomy. Spatial cognition is critical for users to be able to interpret medical images [90]. It has been shown that there is a wide range of individual spatial abilities not only in the general population, but also within specialized populations such as practicing surgeons [18, 36]. Prior research has shown that spatial understanding of 3D models by low spatial individuals can be improved to near that of high spatial individuals with the use of cognitive aids [2]. Researchers have encouraged the assessment of students' spatial understanding of 3D anatomical structures [92, 6, 53].

Figure 2.6. Students view 3D structures shown at various orientations to learn anatomy. Images courtesy and copyright of Primal Pictures Ltd.

2.3 Variables that may affect 3D visualizations

Published results on the effectiveness of 3D applications are inconsistent. There are a variety of possible factors that influence task-performance with a 3D application.

2.3.1 Task, stimuli, axis of rotation, and level of interactivity may affect task-performance with a 3D visualization

It may be easier to comprehend information within certain visualizations than others, and this variation may be influenced by four factors: the difficulty of the task, the complexity of the 3D model, the axis about which the object is rotated, and the level of interactivity given to a user. For example, recalling a view of a familiar object may involve different cognitive processes compared to a task where the user has to find a specific feature of a complex object. Recalling a feature of a known object may be an easier task for a user than finding a particular piece of a complex object. Furthermore, it may be difficult for users to benefit from a visualization when they cannot make informed decisions.
For instance, even if users know they need to find a specific feature, they may not know the path to take that will lead them to that feature, and they may not be able to maintain an internal representation of all of the locations where they have already looked for the feature. In short, it might be difficult for users to benefit from a visualization when the visualization forces them to make decisions to which they do not know the answers. The axis of rotation may also impact task-performance with a visualization. The literature shows that people are quickest and most accurate in determining the position of objects when the objects are oriented around one of their own natural axes [63]. In particular, people tend to be most efficient at rotating objects when the axis of rotation is vertical in the environment [63, 19]. In contrast, comparing objects with oblique or diagonal orientations is much more difficult, and people are more prone to make errors [63, 56, 93, 94]. Furthermore, the angular disparity at which an object has been rotated will likely increase the time necessary to make an orientation judgment [20]. Larger angles of rotation will lead to longer response times. This increase in time may be interpreted as increased time needed to rotate an object, or could be the result of an increase in difficulty (see Rock et al. [95] for discussion). Lastly, 3D applications are either static or dynamic, and some permit the user to interact with the 3D model. While some studies have found that interactivity helps users achieve faster recognition times of objects, other studies have found that individuals with interactive control do not perform better than individuals with an animated 3D model which they cannot control [6, 96]. Furthermore, it has been found that an animated diagram did not lead individuals to a greater understanding of a dynamic process compared to a static diagram [97].
The quality of the information a user gains depends not only on whether they are permitted to interact with a visualization, but on how they interact with it [7]. Moreover, there have been instances where individuals using an interactive 3D model performed worse than individuals who were not given this control [7]. There are limited principles on how to design effective dynamic interactive visualizations for instructional use [98]. Several factors may affect the claim that interactive 3D desktop environments improve task-performance (see Hegarty [99] for discussion).

2.3.2 Individual differences may affect task-performance with a 3D visualization

Khan et al. [8] stated that navigation in a 3D environment may especially burden users who have little experience with 3D interaction and visualization. Individual differences in spatial ability may also affect how a user benefits from a 3D application [6]. The term spatial abilities refers to a broad range of skills involving the mental representation and manipulation of information about geometric entities. Research has shown there is a natural variation between people in their spatial abilities; individual differences have been found in a variety of tests of visuospatial abilities [18]. There are three possibilities for how spatial ability could affect the usefulness of a 3D tool for a person [99]. First is the "ability-as-enhancer" hypothesis, which states that high spatial ability is a necessary prerequisite to using a 3D tool and that only high spatial ability learners will benefit from 3D models because they have enough cognitive capacity to use them. Second, the "ability-as-compensator" hypothesis indicates that 3D models could be particularly effective for low spatial learners; if low spatial learners have trouble constructing their own internal model, they might benefit if an external model is given to them. The third hypothesis is that 3D models will benefit everyone equally [4, 6].
Studies have shown evidence for the "ability-as-enhancer" hypothesis. Findings show that high spatial ability is correlated with accuracy with a 3D visualization [54]. A 3D tool improved learning for high spatial ability individuals [52], but put low spatial ability individuals at a significant disadvantage [53]. Huk [4] found that only students with high spatial abilities benefited from 3D models. Low spatial ability learners have also had more difficulty than high spatial ability learners with complex geometric objects. Velez et al. [54] reported that low spatial ability participants could only solve tasks involving simple geometric objects such as cubes and cones. There is limited research on whether dynamic spatial abilities, the abilities that are needed to reason about moving stimuli [18], are required for a user to make accurate inferences with animated 3D models. Research has shown that low spatial ability individuals had more difficulty extracting information from a 2D dynamic animation than high spatial ability individuals [100]. It is important to examine ways in which all users can benefit from 3D applications. Three-dimensional graphics are being used in more fields, and there is a growing population of people who need to learn 3D navigation to perform their job [12]. Moreover, it has been found that features added to make 3D tools more accessible are popular not only with novice users but with experienced 3D users as well [12]. Several researchers have argued for the importance of considering individual differences in the design of human-computer interaction systems (e.g., [101, 102, 103, 104, 105, 106, 107, 108, 109]). One of the few investigations of gender differences in 3D user interfaces concluded that the purported poor performance of women compared to men in navigating virtual environments disappeared if users were provided with a wide field of view display [110]. Hubona et al.
[111, 112] examined performance on several spatial tasks relevant to visual interfaces and found a male advantage in the mental rotation of abstract objects, the use of motion-related cues, and a task that involved moving and positioning objects. Females were found to be better at estimating relative size. Notably, this work was conducted on professional engineers and computer scientists, who may already be experienced at such operations. Work to date on designing other forms of software with an awareness of the effects of individual differences is also limited (see [113, 114, 115]).

CHAPTER 3

EXPERIMENTAL DESIGN FOR EVALUATING ORIENTATION INDICATORS

This chapter provides a complete discussion of the experimental methodology used in this work. The experiments were designed to answer four specific questions about a user's ability to maintain an understanding of a 3D model when viewed from distinct orientations. The first and most important goal of the experiments was to quantitatively measure the effects of four types of orientation indicators on users' ability to make object-orientation judgments of 3D objects. To achieve this goal, we used an experimental paradigm that is well established in the psychology community. Each subject was presented with one type of orientation indicator, and they completed the task both in the presence of the indicator and in its absence. Our hypothesis is that the graphical aids will improve a user's ability to make same/different judgments on 3D objects shown in different orientations. We predict that colocated orientation indicators will help individuals more than noncolocated orientation indicators in determining the orientation of an object in space. Second, we considered the possibility that objects of varying complexity may affect task performance and the effectiveness of an orientation indicator.
We used two classes of 3D objects: mechanical parts that were composed of distinct pieces, and anatomical parts that were composed of abstract parts. We predict that anatomical objects will be more difficult for individuals than mechanical objects, and that the orientation indicators will help more with anatomical objects than with mechanical objects. Third, we examined the influence of individual differences in visuospatial abilities on the effectiveness of the orientation indicator. Since spatial ability has been a predictor in prior research regarding the effectiveness of 3D visualizations, it may be correlated with the effectiveness of an orientation indicator. A user may prefer one type of aid over another depending on his or her spatial ability. It may be that low spatial learners need different cues to aid with orientation than high spatial learners. If high spatial learners outperform low spatial learners when using 3D tools, an orientation indicator may help close the performance gap between groups. Finally, we used two tasks to measure user performance. The two tasks give converging evidence on the effectiveness of an orientation indicator. These tasks vary in difficulty and time pressure. They also provide different quantitative information for data analysis.

3.1 Orientation indicators evaluated

Orientation indicators were either colocated or noncolocated with the object, and either static or dynamic. This implementation led to four different types of orientation indicators: colocated static, colocated dynamic, noncolocated static, and noncolocated dynamic. In all instances the object stimuli were static. We used an orientation indicator that could serve as either a colocated indicator or a noncolocated indicator in order to maintain a controlled experiment and not introduce biases.
We based the look of the orientation indicator on the coordinate system icons often used in CAD programs, but felt additional colored markers would help users who are not experienced with 3D CAD and visualization systems. Subjects were not given instruction on how the aid could help to solve the tasks; they were only told the aid rotated the same amount and direction as the object. In practice, orientation indicators in the style of bounding boxes, glyphs, or aids labeled anterior, posterior, superior, and inferior could be used; our goal, however, was to evaluate differences between colocated and noncolocated indicators.

3.1.1 Colocated or noncolocated

The noncolocated orientation indicators were placed above the stimuli. The colocated orientation indicators were placed such that the object and the aid shared a center point. See Figure 3.1 for examples. Each indicator was shown rotated about the same axis and by the same amount as the object stimuli. Each indicator rotated as an object-based transformation; in other words, it rotated in the same coordinate frame as the object shown. It could be that an orientation indicator that is placed apart from and not attached to the object leads the user to solve the task first for the aid, and then transfer the information about rotation to the object. In this step the user may have difficulty recovering the information from the aid and translating it to the object. The colocated indicator was a larger scaled version of the noncolocated indicator. The indicator had six markers, each with a unique color.

Figure 3.1. Example trials: Choose which two of the four objects on the right match the target object on the left. Noncolocated orientation indicator on top, colocated orientation indicator below.

3.1.2 Static or dynamic

The dynamic indicators showed the path of rotation between two objects as opposed to only the two endpoints of rotation. Cues from motion are very prominent visual cues.
Structure from motion is the theory that an object's shape and spatial relationships can be recovered from motion through cues such as optical flow [116, 117]. For instance, when an object is rotating, the viewer can use features of the object along with cues from the object's direction and velocity to track the movement of the object over time. The dynamic indicator could help by providing cues to the user as to how the structure of the object would look from one point in time to the next. This information may help the user construct an accurate representation of an object's shape. Additionally, motion may assist a user in mentally rotating an object because it is hypothesized that there is a relationship between the representation/processing of an object in mental rotation and the representation/processing of an object that is seen visually rotating [118]. A person may find it easier to determine whether two objects are the same object if they are given a visual rotation. This visual rotation may provide them with information such that they do not have to create a path of rotation between the two objects on their own. Instead, they can use the path of rotation given to them to determine whether the two objects are the same object. The dynamic aid started in the orientation of one object, rotated into the position of the second object, then rotated back into the original position.¹ Subjects were able to watch this path of rotation three times before the indicator stopped in the position of the object on the left. The speed of the indicator was held constant. To account for varying degrees of rotation between two objects, the distance the indicator traveled was a function of the degree of rotation: the indicator traveled longer distances for larger degrees of rotation. The minimum amount of time of dynamic movement was 2 seconds; the maximum amount of time of dynamic movement was 10 seconds.
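The timing behavior described above (constant angular speed, with total movement time bounded between 2 and 10 seconds) can be sketched as follows. This is an illustration only: the dissertation does not report the indicator's actual angular speed, so the `deg_per_s` value here is a hypothetical parameter chosen for demonstration.

```python
def indicator_duration(angle_deg, min_s=2.0, max_s=10.0, deg_per_s=15.0):
    """Duration of one leg of the dynamic indicator's rotation.

    With speed held constant, duration grows linearly with the angular
    disparity and is clamped to the stated 2-10 second range.
    NOTE: deg_per_s is an assumed value, not taken from the dissertation.
    """
    return max(min_s, min(max_s, angle_deg / deg_per_s))
```

Under this sketch, small disparities hit the 2-second floor, large disparities saturate at the 10-second ceiling, and intermediate angles scale linearly in between.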
3.2 Stimuli

We used two classes of objects, since object complexity has been shown to affect task-performance with 3D visualizations. One class of objects was mechanical parts constructed of distinct pieces; the other class was anatomical structures representing blood vessels, an aneurysm, or an organism composed of abstract parts. These two classes of objects stem from the 3D object perception experiment conducted by Cole et al. [119]. The authors of that experiment used models whose shape people could easily infer, that did not have a lot of self-occlusion, that were not too familiar to subjects, and that were somewhat simple without much fine scale detail [119]. We believe these criteria are well suited for both the mental rotation paradigm and the application areas of 3D CAD software and medical visualizations. See Figure 3.2 for the ten 3D object stimuli. All stimuli were limited to an object manipulation space in which the viewer could see the entire silhouette of the object. The anatomical structures were assembled using digital embryos [120]. All models were modified and rendered with Autodesk's Maya 3D software version 8.5. The scene was lit with area lights and rendered with Blinn shading. The image size of each choose-two-of-four trial was 1530 × 448 pixels. The image size of each same/different trial was 608 × 448 pixels.

3.3 Subjects' spatial abilities

We predict that a subject's performance with an orientation indicator will correlate with his or her visuospatial abilities. In each of these experiments, subjects will be given paper-and-pencil spatial abilities tests. Subjects with high spatial visualization abilities may use the orientation indicator more or less than subjects with low spatial visualization abilities.
Although a high spatial visualization ability subject may be able to do the task well without the orientation indicator and not necessarily benefit from the static aid, the dynamic aid may facilitate performance because the motion can confirm his or her own mental rotation of the object. Conversely, a subject with low spatial visualization abilities may benefit equally, or more, from the static aid compared to the dynamic aid. The low spatial visualization subject may not benefit from a dynamic aid if he or she does not understand the motion of the aid. In other words, the path between two objects may not correspond to how the subject thought the rotation occurred, since there are an infinite number of ways to rotate two objects into congruence with one another.

¹Examples of the dynamic indicator can be viewed at http://www.cs.utah.edu/~tziemek/dissertation

Figure 3.2. Ten stimuli used in experiment. Mechanical parts on top, anatomical structures below. Each stimulus shown in 0° orientation.

Subjects' spatial visualization ability was measured using two paper-and-pencil tests: the Paper Folding Test [121] and the Mental Rotation Test [3]. In the paper folding test, each question illustrated a piece of paper being folded and a hole being punched in it, and the subject was to identify what the piece of paper would look like when it was unfolded. The subject was to correctly identify the answer from a series of five possible answers. See Figure 3.3 for an example paper folding task. In the mental rotation test, each question had a target object and four consecutive objects. The subject was to correctly identify which two of the four objects matched the target object but were shown in different orientations. All objects were cubes pieced together to form block-like objects. See Figure 3.3 for an example mental rotation task. Each test had 20 questions and consisted of two parts that were timed for 3 minutes each.
The paper folding test was scored by awarding one point for every correct answer minus a fraction of a point for every incorrect answer. The mental rotation test was scored by awarding two points for every correct answer minus two points for every incorrect answer. Standardized scores (z-scores) were calculated for the two paper-and-pencil tests, and these were combined to create an aggregate measure for each subject (280 total: 151 females, 129 males). Subjects were classified as high spatial ability or low spatial ability based on a natural break in the distribution of scores that was very close to the median.

Figure 3.3. Examples of paper-and-pencil tests used to measure individual's spatial ability. Paper folding task shown on top, Vandenberg and Kuse [3] mental rotation task shown on bottom.

3.4 Experimental design and procedure

To assess whether orientation indicators affect subjects' performance we created a series of seven computer-based experiments. In each experiment we varied object type, axis of rotation, and the presence/absence of an orientation indicator. We tested whether performance would change as a function of the orientation indicator and whether effects would differ depending on spatial ability. Two different designs were used, both of which have been employed extensively in past studies of mental rotation. The first of these, which we refer to below as the choose-two-of-four task, presented a target object and four possible matches. Participants had to pick the two correct matches from the four possibilities [3]. The second design was a same/different task [20] in which participants decided on each trial whether a pair of objects was the same or different. These two designs were used to inform the design of 3D applications that vary depending on whether a user's task-performance is based on his accuracy, or his ability to work quickly and accurately.
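The aggregation of the two spatial-ability test scores described in Section 3.3 — z-scoring each test, summing, and splitting subjects into high/low groups — can be sketched as follows. One simplifying assumption: the dissertation splits at a natural break near the median, whereas this sketch splits exactly at the median.

```python
import statistics

def classify_subjects(paper_folding, mental_rotation):
    """Combine two test-score lists into aggregate z-scores and label
    each subject "high" or "low" spatial ability.

    Splitting exactly at the median is an assumption; the study used a
    natural break in the score distribution close to the median.
    """
    def z_scores(xs):
        mu, sd = statistics.mean(xs), statistics.pstdev(xs)
        return [(x - mu) / sd for x in xs]

    aggregate = [p + m for p, m in zip(z_scores(paper_folding),
                                       z_scores(mental_rotation))]
    cutoff = statistics.median(aggregate)
    return ["high" if a > cutoff else "low" for a in aggregate]
```

The aggregate measure weights the two tests equally after standardization, so neither test's raw point scale dominates the classification.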
Some applications allow the user to respond at his own pace, and task-performance is judged solely on accuracy. For instance, a student learning anatomy may be able to take his time learning from a 3D anatomy tool. Other applications, however, may restrict the amount of time a user has to respond or may be used in circumstances where the user is under time pressure. For instance, a surgeon may need to act as quickly and accurately as possible while performing an operation using image-guided therapy. Although the choose-two-of-four task is time-limited overall, the instructions emphasize accuracy and there is no time limit on individual trials. The same/different task, however, measures response time on individual trials, and thus can provide additional evidence that people are performing mental transformations of the orientations of objects. It also allows for the evaluation of a dynamic orientation indicator. Together these two designs provide converging evidence on the effectiveness of an orientation indicator.

3.4.1 Choose-two-of-four task

In this task participants were shown four objects and were to decide which two of the four objects matched a target object (see Figure 3.1). Two of the four objects were mirror images of the target object and thus were not congruent in shape to the target object. Experiments 1 and 2 were identical except for the orientation indicator. Experiment 1 assessed a noncolocated static indicator; Experiment 2 assessed a colocated static indicator. Experiments 2 and 3 were identical except for the axes of rotation. Experiment 1 had rotations about the vertical axis parallel to the image plane, hereafter vertical axis, and rotations about the horizontal axis parallel to the image, hereafter horizontal axis. In trials with rotation about the vertical axis, mirror objects were made by reflecting the object about the horizontal axis such that the left and right of the object were reversed.
In trials with rotation about the horizontal axis, mirror objects were made by reflecting the object about the vertical axis such that the top and bottom of the object were reversed. Mirror objects were made in this manner to prevent subjects from being able to use strategies other than mental rotation to solve the task. Figure 3.2 shows each object in its original 0° position, from which reflections and rotations were made. Mirrored objects were also rotated from the initial position. Specifically, there were 40 trials total; 20 of these trials showed the orientation indicator and the other 20 did not. Trials with the orientation indicator were set up using the same rotational disparities between the target image and the object choices as the trials without the orientation indicator. The degrees of disparity between the four choices were also the same between orientation indicator and no indicator conditions. Within a condition, 10 trials had rotations about the horizontal axis and 10 trials had rotations about the vertical axis. Of these 10 trials, 5 were mechanical parts and 5 were anatomical structures. The target objects were always shown in either the 0°, 15°, 345°, 30°, or 330° orientation. The four objects to choose from were shown at 0°, 15°, 345°, 30°, 330°, 45°, 315°, 60°, 300°, 75°, or 275° orientations. Objects and the mirror distractors were rotated between 15° and 75° in 15° increments from the target object. Each degree of disparity between the target and four objects was used the same number of times across all conditions (i.e., presence of aid, class of object, and axis of rotation). Thus there was no change to the level of difficulty of a trial between conditions. Subjects were given four blocks of trials; two blocks were with the orientation indicator, and two blocks did not have the indicator present. Two blocks were mechanical parts and two blocks were anatomical structures.
We counterbalanced the order of the aid condition and object type condition across subjects and gender to prevent performance differences attributable to practice effects. Subjects were given four minutes to complete each block of trials, with three short breaks in between blocks. Each block had 10 trials. It was possible for a participant to time out and not finish a block of trials. Subjects were also permitted to skip a trial if it was too difficult, and if time allowed they were given another chance to answer skipped trials. Instructions emphasized the importance of accuracy over response time. To ensure subjects understood the task, they were given written and oral instructions. The experimenter verbally explained the task with two example trials. Subjects then had a practice period. They were given two blocks of trials that each had 3 trials; stimuli used in practice periods were not used in the real experiment. See Figure 3.4 for objects used in practice trials. See Section A.1 for instructions. The task was scored by giving two points for every correct answer and subtracting two points for every incorrect answer. This scoring method corrects for guessing and follows the conventional scoring method for Vandenberg and Kuse mental rotation tests [3, 2]. These scores were then normalized on a scale of 0 to 1. Our main goal was to test the effectiveness of the orientation indicator. This variable, along with the class of objects and axis of rotation, was varied within subjects to test for differences within the individual. Spatial ability was a between subjects variable. Experiment 3 assessed whether colocated static orientation indicators improved subjects' performance when object stimuli were rotated about oblique axes of rotation. Two oblique axes were evaluated; see Figure 3.5 for example rotations. See Figure 3.6 for example trials.
Mirror objects were made by reflecting the object about the vertical axis such that the top and bottom of the object were reversed.

Figure 3.4. Stimuli used in practice trials.

Figure 3.5. Four axes of rotation were assessed. Clockwise from top left: horizontal axis, oblique axis one, oblique axis two, vertical axis. All objects are shown rotated 45° from initial position.

Figure 3.6. Example choose-two-of-four trials with mechanical stimuli rotated about oblique axis two.

3.4.2 Same/different task

In this task participants were shown two objects and were to decide whether these two objects were the same object shown in different orientations, or whether they were different objects (see Figure 3.7). If they were different objects, one object was a mirror image of the other (see Figure 3.8). All same/different experiments were identical except for the orientation indicator implemented. Each experiment assessed whether an orientation indicator improves subjects' performance on a same/different task with static objects. Each experiment used rotations about the vertical axis and rotations about the horizontal axis. Mirror objects were made in the same manner as in Experiment 1. There were 160 trials total; 80 of these trials showed the orientation indicator and the other 80 did not. Trials with the orientation indicator were set up using the same rotational disparities between the two objects as trials without the orientation indicator. Within a condition, 40 trials had rotations about the horizontal axis and 40 trials had rotations about the vertical axis. Of these 40 trials, 20 were mechanical parts and 20 were anatomical structures. Of these 20 trials, 10 were same objects and 10 were different objects. The objects were shown at 0°, 15°, 345°, 30°, 330°, 45°, 315°, 60°, 300°, 75°, or 275° orientations. Objects and the mirror distractors were rotated between 15° and 75° in 15° increments from each other.
The same object stimulus was used for a given disparity, axis of rotation, and same/different condition. Additionally, one of the objects was shown in the same orientation across the aid and no aid conditions. For example, for a 15° disparity using an anatomical object rotated about the vertical axis for a same pair, anatomical object number five was shown in the aid condition at orientations 45° and 30° and in the no aid condition at 45° and 60°. Each degree of disparity was used the same number of times across all conditions (i.e., presence of aid, class of object, axis of rotation, same/different pair). Thus there was no change to the level of difficulty of a trial between conditions. Subjects were given two blocks of trials, and within these two blocks trials were presented randomly. We counterbalanced the two blocks across subjects and gender to prevent performance differences due to presentation order. Subjects were given 12 seconds per trial; if they exceeded this time limit they were not given a chance to respond and were presented with the next trial. Subjects were given one short break between blocks of trials. Subjects were not allowed to skip a trial. Instructions emphasized the importance of both accuracy and response time. The same/different task was scored by awarding one point for every correct answer. These scores were then normalized on a scale of 0 to 1. The orientation indicator, class of objects, and axis of rotation were varied within subjects. Spatial ability was a between subjects variable. To ensure subjects understood the task, they were given written and oral instructions. The experimenter verbally explained the task with two example trials.

Figure 3.7. Example same/different trials: Are the objects the same object shown in different orientations, or are they different objects? Subjects were presented with one type of aid; all subjects had trials where no aid was present.

Figure 3.8. Example trials. These two objects are different objects.
Subjects then had a practice period. They were given 10 practice trials; stimuli used in practice periods were not used in the real experiment. See Figure 3.4 for objects used in practice trials. See Section A.2 for instructions.

3.4.3 Subjects and research setting

Subjects had short breaks during the computer portion; during these breaks they read articles from the popular press to prevent them from devising cognitive strategies to solve the task. At the end of the computer portion of the experiment subjects were given a written survey regarding the experimental task similar to the one given in Peters et al. [93]. The survey asked questions regarding strategies the subject used to solve the task, whether the subject was concerned about time pressure, and whether the subject felt more confident when the indicator was present. See Section A.3 for the complete survey. Subjects' spatial visualization ability was measured using the two paper-and-pencil tests mentioned previously. See Table 3.1 for the number of participants in each experiment by spatial ability and gender. All subjects were University of Utah students who were given either psychology course credit or compensation of 10 dollars for their participation. All subjects read and signed Institutional Review Board consent forms prior to the experiment. Subjects were not allowed to participate in multiple experiments. Subjects performed the experiment individually in a controlled experiment room where lighting was held constant. The experiment was run on a Windows machine using E-Prime software with a 19-inch monitor. See Figure 3.9 for a picture of the research setting. Viewing position was also held constant, with the observer's head located approximately 31 inches from the monitor. Although subjects were instructed to remain seated in one location, head movement was not controlled for. Subjects responded with a button box.
For the choose-two-of-four task, buttons were spatially mapped to the object choices on the monitor.

Table 3.1. Number of subjects in each experiment by spatial ability and gender. Female (F), Male (M), Total (T).

        Low Ability     High Ability    All Subjects
        F   M   T       F   M   T       F   M   T
Exp 1   13  4   17      8   15  23      21  19  40
Exp 2   15  4   19      9   12  21      24  16  40
Exp 3   14  3   17      7   16  23      21  19  40
Exp 4   17  4   21      4   15  19      21  19  40
Exp 5   13  8   21      9   10  19      22  18  40
Exp 6   15  8   23      6   11  17      21  19  40
Exp 7   16  8   24      5   11  16      21  19  40

Figure 3.9. Research setting where subjects took the experiment.

CHAPTER 4

EVALUATING ORIENTATION INDICATOR EXPERIMENTS

This chapter describes the results of the orientation indicator experiments, which were conducted using the procedures described in Chapter 3. Three experiments used the choose-two-of-four task and four experiments used the same/different task. Note that the choose-two-of-four experiment scores should not be directly compared to same/different experiment scores because of the intrinsic difference in how the two tasks are scored.

4.1 Results and discussion of choose-two-of-four experiments

A 2 (orientation indicator) × 2 (class of objects) × 2 (axis of rotation) × 2 (spatial ability) ANOVA was performed on the mean scores for each experiment. Cohen's d was calculated as a measure of effect size for the presence/absence of the aid, defined as the difference between the two group means divided by the pooled standard deviation of the two groups. Cohen's d can be indicative of a small effect (.2), a medium effect (.5), or a large effect (.8). All three experiments presented static orientation indicators because of the nature of the choose-two-of-four task.

4.1.1 Accuracy score

4.1.1.1 Colocated vs. noncolocated indicators

The colocated orientation indicator increased subjects' accuracy in Experiment 2; this experiment used rotations about the vertical and horizontal axes.
Subjects' increase in task performance is shown by a statistically significant overall effect of the indicator. Participants showed an increase in accuracy with the colocated aid (.76) versus without the aid (.73). The effect size of the indicator is .25, indicating a small effect. See Table 4.1 for statistics associated with the main effect of the indicator. Neither Experiment 1, using a noncolocated orientation indicator with rotations about the vertical and horizontal axes, nor Experiment 3, using a colocated indicator with oblique rotations, showed a main effect of the orientation indicator. In Experiment 1, participants scored nearly the same with the aid (.77) versus without the aid (.76).

Table 4.1. Accuracy results for the choose-two-of-four experiments. 40 subjects per experiment.

Exp 1: noncolocated static (horizontal and vertical rotations)
  no overall effect: .76 without aid, .77 with aid; F(1,38) = .5, p = .5; d = .08
  significant effect for horizontal: .73 without aid, .77 with aid; F(1,38) = 8.4, p < .01; d = .35

Exp 2: colocated static (horizontal and vertical rotations)
  significant overall effect: .73 without aid, .76 with aid; F(1,38) = 8.5, p < .01; d = .25
  significant effect by spatial ability: high spatial .83 both without and with aid, low spatial .62 without aid and .69 with aid; F(1,38) = 10.1, p < .01; d = .04 (high), .73 (low)
  significant effect for horizontal: .71 without aid, .79 with aid; F(1,38) = 18.3, p < .01; d = .64

Exp 3: colocated static (oblique one and oblique two rotations)
  no overall effect: .76 without aid, .79 with aid; F(1,38) = 1.7, p = .2; d = .27
  significant effect by spatial ability (oblique one): high spatial .76 without aid and .85 with aid, low spatial .70 without aid and .73 with aid; F(1,38) = 3.0, p < .10; d = .90 (high), .26 (low)
  significant effect for object type: mechanical objects .82 without aid, .87 with aid; F(1,38) = 4.9, p < .05; d = .47
  significant effect for oblique one: .73 without aid, .80 with aid; F(1,38) = 7.5, p < .01; d = .56
In Experiment 3, participants scored slightly higher with the aid (.79) versus without the aid (.76). The effect sizes for these two experiments were .08 and .27, respectively, which indicates the aids did not have a strong influence on users' accuracy. The results from these three experiments suggest that a colocated aid is more effective than a noncolocated aid, especially when objects are rotated about the vertical and horizontal axes. See Table 4.1 for statistics associated with the main effect of the indicator.

4.1.1.2 Individual differences in spatial ability

Individuals' spatial abilities did impact the extent to which they benefited from an orientation indicator. The two experiments with colocated indicators (Experiments 2 and 3) both showed facilitatory effects of spatial ability and orientation indicator. As Figure 4.1 shows, the effect of the indicator in Experiment 2 was driven by low spatial learners. The low spatial group showed an increase in accuracy with the aid (.69) versus without the aid (.62), whereas the high spatial group showed no change (.83 in both conditions). The effect size was much higher for the low spatial group (.73) versus the high spatial group (.04), indicating the aid had a strong effect for the low spatial group and no effect for the high spatial group. The results from Experiment 2 indicate that low spatial ability users can benefit from a colocated aid when objects are rotated about the vertical and horizontal axes. See Table 4.1 for statistics associated with the interaction between spatial ability and indicator. We also found that high spatial learners can benefit from a colocated aid. In Experiment 3, a statistical interaction among spatial ability, aid, and axis indicated that the aid was particularly beneficial for high spatial learners for rotations about oblique axis one. As Figure 4.2 shows, the effect of the indicator and axis of rotation in Experiment 3 was driven by high spatial learners.
The high spatial group showed an increase in accuracy with the aid (.85) versus without the aid (.76), whereas the low spatial group showed a smaller increase with the aid (.73) versus without the aid (.70). The effect size was much higher for the high spatial group (.90) versus the low spatial group (.26), indicating the aid had a strong effect for the high spatial group and a small effect for the low spatial group. The results from Experiment 3 indicate that high spatial ability users can benefit from a colocated aid when objects are rotated about an oblique axis. See Table 4.1 for statistics associated with the interaction between spatial ability, indicator, and axis of rotation. Lastly, in all three experiments there was an overall difference in accuracy between the high spatial ability and low spatial ability groups, p < .01. On average, the high spatial group scored higher (.82) than the low spatial group (.69).

Figure 4.1. Mean score on Experiment 2, with and without colocated static orientation indicator with vertical and horizontal rotations, by spatial ability.

Figure 4.2. Mean score on Experiment 3, with and without colocated static orientation indicator with oblique one rotation, by spatial ability.

4.1.1.3 Class of objects

Each experiment also showed a significant effect of the class of objects, p < .01. Objects that were mechanical parts were easier for subjects to visualize than objects that were anatomical parts. On average, subjects scored higher on trials that presented mechanical objects (.82) versus anatomical objects (.71). Only one experiment, Experiment 3 using a colocated aid with oblique rotations, showed an interaction between class of objects and orientation indicator. This result indicated that the orientation indicator helped more with mechanical parts than with anatomical parts. For mechanical parts, participants showed an increase in accuracy with the aid (.87) versus without the aid (.82).
The effect size is .47, indicating the aid had a medium-sized effect for mechanical objects. For anatomical parts, participants scored nearly the same with the aid (.70) versus without the aid (.71). These results indicate that if a distinct object is rotated about an oblique axis, users may benefit from a colocated static indicator. See Table 4.1 for statistics associated with the interaction between class of objects and indicator. Finally, the effect of the class of objects was modulated by the axis of rotation in Experiments 1 and 2, which used noncolocated and colocated aids with vertical and horizontal rotations, respectively. In each experiment the results indicated that vertical rotations were easier than horizontal rotations for anatomical parts. Participants in Experiment 1 showed an increase in accuracy with anatomical objects rotated about the vertical axis (.75) versus the horizontal axis (.69), F(1,38) = 3.7, p < .1. For mechanical parts, participants scored nearly the same with objects rotated about the vertical axis (.80) versus the horizontal axis (.81). Participants in Experiment 2 showed an increase in accuracy with anatomical objects rotated about the vertical axis (.71) versus the horizontal axis (.68), F(1,38) = 6.6, p < .05. For mechanical parts, participants showed an increase in accuracy for objects rotated about the horizontal axis (.81) versus the vertical axis (.77). These results suggest that people have more difficulty when mentally rotating anatomical parts than mechanical parts. It may be that people have trouble creating a mental representation of an object that is composed of abstract pieces, and are more accurate at creating a mental representation of an object that is composed of distinct pieces. Furthermore, if an individual has difficulty mentally rotating a complex object, it may be easier for him or her to perceive the object rotating about the vertical axis versus the horizontal axis. In contrast, if an individual can efficiently mentally rotate a simple object,
he or she may be able to perceive the object rotating about the vertical axis with the same ease as the object rotating about the horizontal axis.

4.1.1.4 Axis of rotation

As indicated by the previous results, the axis about which an object is rotated may influence task performance. We found that the axis of rotation influenced a user's accuracy in two experiments. Experiments 1 and 3, which used noncolocated and colocated aids, each showed a significant effect of the axis of rotation on task performance. In Experiment 1, which assessed vertical and horizontal axes of rotation, participants showed an increase in accuracy with objects rotated about the vertical axis (.78) versus the horizontal axis (.76), F(1,38) = 3.3, p < .01. In Experiment 3, which assessed two oblique axes of rotation, participants showed an increase in accuracy with objects rotated about oblique axis two (.79) versus oblique axis one (.77), F(1,38) = 4.1, p < .1. These results suggest that individuals may have an easier time perceiving the structure of an object when it is rotated about certain axes. The axis of rotation may impact a user's ability to effectively use a visualization. In particular, people may find it easier to rotate an object about the vertical axis or an axis which produces rotations that are familiar to them, and more difficult to rotate an object about the horizontal axis or an axis which produces rotations that are unfamiliar to them. Finally, the orientation indicator effect was modulated by the axis of rotation in all three experiments. For Experiments 1 and 2, which involved horizontal and vertical axes of rotation, the presence of the indicator led to increased accuracy for objects rotated about the horizontal axis, but no difference for the vertical axis.
In Experiment 1, for rotation about the horizontal axis, participants scored higher with the aid (.77) versus without the aid (.73). The effect size for the horizontal axis was .35, indicating a small-to-medium effect. For rotation about the vertical axis, participants scored higher without the aid (.79) versus with the aid (.76). In Experiment 2, for rotation about the horizontal axis, participants scored higher with the aid (.79) versus without the aid (.71). The effect size for the horizontal axis is .64, indicating a medium-to-large effect. For rotation about the vertical axis, participants scored higher without the aid (.75) versus with the aid (.73). In Experiment 3, which involved two different oblique axes, the indicator helped performance in one axis of rotation, but not the other. For rotation about oblique axis one, participants scored higher with the aid (.80) versus without the aid (.73). The effect size for oblique axis one is .56, indicating a medium-sized effect. For rotation about oblique axis two, participants scored higher without the aid (.79) versus with the aid (.78). See Table 4.1 for statistics associated with the interaction between axis of rotation and indicator. The axis about which an object is rotated may influence whether a user will benefit from an orientation indicator. The results indicate that when an object is rotated about an axis that is familiar to a user, such as the vertical axis, the user may not need the cognitive support provided by the indicator. However, when an object is rotated about an axis that is difficult for a user, such as the horizontal axis, the user may benefit from the cognitive support provided by the indicator.

4.2 Results and discussion of same/different experiments

A 2 (orientation indicator) × 2 (class of objects) × 2 (axis of rotation) × 2 (spatial ability) ANOVA was performed on the mean scores for each experiment.
As before, Cohen's d was calculated as a measure of effect size for the presence/absence of the aid, defined as the difference between the two group means divided by the pooled standard deviation of the two groups. Two experiments presented static orientation indicators and two presented dynamic orientation indicators.

4.2.1 Accuracy score

4.2.1.1 Colocated vs. noncolocated indicators

All four experiments showed effects of the orientation indicator. The static experiments showed stronger effects than the dynamic experiments, and the colocated experiments showed stronger effects than the noncolocated experiments. In Experiment 4, using a noncolocated static indicator, participants showed an increase in accuracy with the aid (.74) versus without the aid (.69). The effect size for the noncolocated static aid was .51, indicating a medium-sized effect. In Experiment 5, using a colocated static indicator, participants showed an increase in accuracy with the aid (.75) versus without the aid (.67). The effect size for the colocated static aid was .78, indicating a large effect. In Experiment 6, using a noncolocated dynamic indicator, participants showed an increase in accuracy with the aid (.69) versus without the aid (.67). The effect size for the noncolocated dynamic aid was .19, indicating a small effect. In Experiment 7, using a colocated dynamic indicator, participants showed an increase in accuracy with the aid (.73) versus without the aid (.69). The effect size for the colocated dynamic aid was .44, indicating a small-to-medium effect. See Table 4.2 for statistics associated with the main effect of the indicator. These results suggest that the effectiveness of an orientation indicator depends on the type of orientation indicator implemented. Our results show that static indicators are more effective than dynamic indicators.
We also found that colocated indicators are more effective than noncolocated indicators. Accordingly, a colocated static indicator was the most helpful to users, and a noncolocated dynamic indicator was the least helpful.

4.2.1.2 Individual differences in spatial ability

Individuals' spatial ability may impact task performance with a noncolocated static orientation indicator. When individuals' spatial ability was taken into account, Experiment 4, using a noncolocated static indicator, showed facilitatory effects of the indicator. As seen in Figure 4.3, the effect of the noncolocated static indicator was driven by high spatial learners. The high spatial group showed an increase in accuracy with the aid (.81) versus without the aid (.73), whereas the low spatial group showed a smaller increase with the aid (.67) versus without the aid (.65). The effect size was much higher for the high spatial group (1.05) versus the low spatial group (.27), indicating the aid had a strong effect for the high spatial group and a small effect for the low spatial group. See Table 4.2 for statistics associated with the interaction between spatial ability and indicator. In contrast, there was no statistically significant interaction between spatial ability and indicator in Experiment 5, which used a colocated static indicator. As Figure 4.4 shows, the colocated static indicator helped both spatial ability groups. The experiments with dynamic indicators also did not result in a statistically significant interaction between spatial ability and indicator. These results confirm the need to evaluate individual differences in spatial ability. It may be that high spatial ability users are able to use a noncolocated static aid more effectively than low spatial ability users. However, both spatial ability groups are able to effectively use a colocated static aid. There was also an interaction between class of objects and spatial ability in both noncolocated indicator experiments.
In Experiment 4, F(1,38) = 9.7, p < .01; in Experiment 6, F(1,38) = 4.4, p < .05. These two experiments indicated high spatial learners scored significantly higher on trials that presented mechanical objects versus trials that presented anatomical objects. On average, the high spatial group showed an increase in accuracy with mechanical objects (.82) versus anatomical objects (.69), whereas the low spatial group showed a smaller increase in accuracy with mechanical objects (.68) versus anatomical objects (.63). These results suggest that people with high spatial ability will be more accurate at perceiving distinct objects than people with low spatial ability.

Table 4.2. Accuracy results for the same/different experiments. Rotation always about the horizontal axis or vertical axis. 40 subjects per experiment.

Exp 4: noncolocated static
  significant overall effect: .69 without aid, .74 with aid; F(1,38) = 18.1, p < .01; d = .51
  significant effect by spatial ability: high spatial .73 without aid and .81 with aid, low spatial .65 without aid and .67 with aid; F(1,38) = 6.1, p < .05; d = 1.05 (high), .27 (low)

Exp 5: colocated static
  significant overall effect: .67 without aid, .75 with aid; F(1,38) = 41.5, p < .01; d = .78
  significant effect for object type: anatomical objects .60 without aid, .71 with aid; F(1,38) = 12.4, p < .01; d = 1.04
  significant effect for horizontal axis: .64 without aid, .75 with aid; F(1,38) = 5.4, p < .05; d = .90

Exp 6: noncolocated dynamic
  significant overall effect: .67 without aid, .69 with aid; F(1,38) = 5.5, p < .05; d = .19

Exp 7: colocated dynamic
  significant overall effect: .69 without aid, .73 with aid; F(1,38) = 13.6, p < .01; d = .44

Figure 4.3. Mean score on same/different task with and without noncolocated static orientation indicator by spatial ability.

Figure 4.4. Mean score on same/different task with and without colocated static orientation indicator by spatial ability.
Furthermore, in Experiment 7, using a colocated dynamic indicator, there was a significant interaction between spatial ability and axis of rotation, F(1,38) = 8.6, p < .01. The high spatial ability group showed an increase in accuracy with objects rotated about the vertical axis (.77) versus objects rotated about the horizontal axis (.74), whereas the low spatial ability group showed a smaller increase in accuracy with objects rotated about the vertical axis (.70) versus objects rotated about the horizontal axis (.69). This result may imply that objects rotated about the vertical axis will be easier for high spatial ability users than low spatial ability users. Lastly, in all four experiments there was an overall difference in accuracy between the high spatial ability and low spatial ability groups, p < .01. On average, the high spatial group scored higher (.76) than the low spatial group (.65).

4.2.1.3 Class of objects

Each experiment also showed a significant effect of the class of objects, p < .01. Objects that were mechanical parts were easier for subjects to visualize than objects that were anatomical parts. On average, subjects scored higher on mechanical objects (.75) versus anatomical objects (.65). Only one experiment, Experiment 5 using a colocated static indicator, showed an interaction between class of objects and orientation indicator. This result indicated that the orientation indicator helped more with anatomical parts than with mechanical parts. For anatomical parts, participants showed an increase in accuracy with the aid (.71) versus without the aid (.60). The effect size is 1.04, indicating the aid had a large effect for anatomical objects. For mechanical parts, participants showed a smaller increase with the aid (.79) versus without the aid (.74). See Table 4.2 for statistics associated with the interaction between class of objects and indicator. Users may have more difficulty when mentally rotating anatomical parts than mechanical parts.
The results also indicate that if a complex object is rotated about the vertical and horizontal axes, users may benefit from a colocated static indicator. If a less complex object is rotated about the vertical and horizontal axes, users may benefit less from a colocated aid.

4.2.1.4 Axis of rotation

The axis of rotation that an object is rotated about influenced task performance in three experiments. Experiments 4, 5, and 6 each showed a significant effect of the axis of rotation on task performance, p < .10. All experiments used rotations about the vertical axis and horizontal axis, and the objects that rotated about the vertical axis were easier for users than the objects that rotated about the horizontal axis. On average, participants showed an increase in accuracy with objects rotated about the vertical axis (.72) versus the horizontal axis (.69). Furthermore, the effect of the colocated static indicator was modulated by the axis of rotation. As shown in Figure 4.5, the presence of the indicator made a larger impact for objects rotated about the horizontal axis than for objects rotated about the vertical axis. The effect size is .90, indicating the aid had a large effect for objects rotated about the horizontal axis. See Table 4.2 for statistics associated with the interaction between axis of rotation and indicator. These results suggest that users may have an easier time perceiving the structure of an object when it is rotated about the vertical axis and a harder time when it is rotated about the horizontal axis. Accordingly, a colocated static indicator may be more beneficial to users when objects are rotated about a difficult axis such as the horizontal axis and less beneficial when objects are rotated about a more familiar axis such as the vertical axis.

Figure 4.5. Mean score on same/different task with and without colocated static orientation indicator by axis of rotation.
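The effect sizes reported throughout this chapter use the pooled-standard-deviation form of Cohen's d defined in Section 4.1: the difference between the two group means divided by the pooled standard deviation. A minimal sketch (the function name and sample accuracy scores are illustrative, not data from the experiments):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference of group means divided by the pooled
    standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Made-up accuracy scores for five subjects with and without an aid
with_aid = [0.80, 0.75, 0.78, 0.72, 0.76]
without_aid = [0.70, 0.68, 0.74, 0.66, 0.72]
d = cohens_d(with_aid, without_aid)  # roughly 2.0 on these made-up numbers
```

By the conventions used in this chapter, d near .2 is a small effect, .5 a medium effect, and .8 a large effect.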
4.2.2 Response time

Response time was analyzed to determine if users took longer to respond on trials when the indicator was present than on trials when the indicator was absent. The analysis of response time also allows us to compare the response time functions to those of prior research on mental rotation. Typically, researchers find response times for mental rotation to be linear: subjects take longer to respond with greater degrees of disparity between objects. Response time was analyzed from Experiments 4 and 5 because participants benefited the most from static aids. A 2 (orientation indicator) × 5 (degree of rotation) × 2 (spatial ability) ANOVA was performed on response time from trials that participants got correct and that presented subjects with two objects that were the same. Data from 36 subjects (18 low ability, 18 high ability) were analyzed from Experiment 4, which used a noncolocated aid, because four subjects did not get at least one trial correct per orientation indicator and degree of rotation. Data from all 40 subjects were analyzed from Experiment 5, which used a colocated aid. Class of objects and axis of rotation could not be analyzed because the majority of subjects did not get at least one of these trials correct for each degree of rotation. In both experiments the orientation indicator had a statistically significant effect on response time. In Experiment 4, subjects had increased response time with the aid (4.8 seconds) versus without the aid (4.2 seconds). In Experiment 5, subjects had increased response time with the aid (5.6 seconds) versus without the aid (4.5 seconds). The orientation indicators could increase response time for three reasons. One, users may decide on a response without using the aid and then validate their response with the aid. Two, users may use the aid as features of the object and thus take longer to respond because there are more features to compare.
Or three, users may use the aid to develop another strategy to solve the task, such as using the aid to eliminate incorrect responses. See Figures 4.6 and 4.7 for graphs of response times in each condition. See Table 4.3 for statistics associated with effects of the indicator. Each experiment also showed increased response times with greater degree of rotation between the two objects. This finding is typical of mental rotation experiments that present same/different tasks. In Experiment 4, response times were higher for disparities of 75° (4.9 seconds) versus disparities of 15° (4.3 seconds). In Experiment 5, response times were higher for disparities of 75° (5.4 seconds) versus disparities of 15° (4.8 seconds). These results could be interpreted as increasing time needed to mentally rotate one object to match the other, or as an indication of an overall increase in difficulty with greater disparity. See Figures 4.6 and 4.7 for graphs of response times for each degree of disparity. See Table 4.3 for statistics associated with effects of the degree of disparity on response time.

Figure 4.6. Mean response time on same/different task with and without noncolocated static orientation indicator by spatial ability.

Figure 4.7. Mean response time on same/different task with and without colocated static orientation indicator.

Table 4.3. Response time (RT) results in seconds for Experiments 4 and 5.

Experiment 4, noncolocated indicator: 4.2 seconds without aid, 4.8 seconds with aid; F(1,34) = 30.7, p < .01
Experiment 5, colocated indicator: 4.5 seconds without aid, 5.6 seconds with aid; F(1,38) = 147.5, p < .01
Experiment 4, degree of rotation: 4.3 seconds at 15°, 4.8 seconds at 75°; F(1,136) = 7.8, p < .01
Experiment 5, degree of rotation: 4.7 seconds at 15°, 5.5 seconds at 75°; F(1,152) = 9.2, p < .01
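Under the linear response-time account, the mean response times reported in Table 4.3 at the two disparity extremes imply a rough mental-rotation rate. A two-point sketch using the Experiment 4 means (a real estimate would fit all five disparity levels; the function name is illustrative):

```python
def rt_slope(deg_a, rt_a, deg_b, rt_b):
    """Two-point estimate of the mental-rotation rate: extra seconds of
    response time per degree of angular disparity."""
    return (rt_b - rt_a) / (deg_b - deg_a)

# Experiment 4 means from Table 4.3: 4.3 s at 15 degrees, 4.8 s at 75 degrees
slope = rt_slope(15, 4.3, 75, 4.8)
print(round(slope * 1000, 1))  # ~8.3 ms of additional response time per degree
```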
4.2.2.1 Response time and spatial ability

Furthermore, Experiment 4, which used a noncolocated static indicator, showed a main effect of spatial ability. High spatial ability participants showed increased response times (4.9 seconds) versus low spatial ability participants (4.1 seconds). Low spatial ability users took less time to respond than high spatial ability users, both when the noncolocated aid was present and when it was absent. See Figure 4.6 for response time by spatial ability. This result could stem from low spatial ability subjects using different strategies to solve the task than high spatial ability subjects. Qualitative results from written surveys (see Section A.3) showed that 58% of subjects used various strategies to solve the task, and 25% used a specific approach. There was no difference between the low and high spatial groups in whether they tried various approaches or a specific approach. There was, however, a difference between spatial ability groups in whether users mentally rotated the whole figure or a section of the figure when making a comparison. Low spatial users preferred to mentally rotate the whole figure (86%) versus a section of the figure (10%). High spatial users did not have as strong a preference for mentally rotating the whole figure (58%) versus a section of the figure (37%). Additionally, subjects from both spatial groups reported using verbal strategies and visual strategies to solve the task. Verbal strategies involve solving the task verbally in the mind (e.g., "shorter part up and longer part down"). Visual strategies rely mainly on visualizing the figures; users do not talk themselves through the steps. Both low spatial users (48%) and high spatial users (42%) reported that they thought through the steps verbally in their minds. Low spatial users were less likely to visualize the figures (43%) versus high spatial users (58%).
These results indicate that both spatial groups may process information verbally, but high spatial users are slightly more likely to process information visually than low spatial users. Lastly, Experiment 5, which used a colocated static indicator, did not show a main effect of spatial ability. Recall that the colocated static aid increased all users' accuracy. Qualitative results from written surveys (see Section A.3) showed that high spatial ability subjects were slightly more likely to use various approaches to solve a task (47%) compared to low spatial ability subjects (38%). Both high spatial ability subjects (37%) and low spatial ability subjects (33%) stated they used a specific strategy to solve the task. More low spatial ability subjects said they did not have a specific strategy to solve the task (29%) versus high spatial ability subjects (16%). There was also a difference between spatial ability groups in whether users mentally rotated the whole figure or a section of the figure. With the colocated static aid,
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6m04kz6 |



