| Title | Design and implementation of a mobile wireless sensor network testbed |
| Publication Type | thesis |
| School or College | College of Engineering |
| Department | Computing |
| Author | Johnson, David Michael |
| Date | 2010-04-15 |
| Description | Network simulation continues to be the dominant method of experimental evaluation in wireless networking. However, much research has established the failure of simulator models to adequately express wireless signal propagation. These shortcomings can lead to incomplete evaluation of wireless protocols and applications. When wireless research includes mobility, real evaluation becomes still more difficult due to the difficulty of creating and controlling mobile nodes in a real environment. The primary goal of Mobile Emulab, the testbed presented in this thesis, is to encourage real mobile wireless research in the wireless community and provide a sound, usable testbed platform for experimentation. Mobile Emulab is both software designed to control and monitor a mobile wireless testbed, and a testbed providing access to mobile wireless resources. The testbed consists of several robots, each with a small computer and small wireless devices called motes, maneuverable in an area surrounded by fixed motes. Through a variety of interfaces, remote researchers can control these robots interactively over the Web. Mobile Emulab provides an overhead tracking system that localizes the robots to within 1 cm, providing repeatable positioning and valuable knowledge to researchers studying how signal propagation affects their experiments. Additional software tools that were developed to ease the evaluation process for wireless sensor network researchers can be used by Mobile Emulab experimenters. Finally, the testbed extends Emulab, which provides researchers with well-known experimental interfaces and automation capabilities. This thesis presents Mobile Emulab's design and implementation, and establishes its usability and utility through several experiments. |
| Type | Text |
| Publisher | University of Utah |
| Subject | Network simulation; Mobile Emulab |
| Dissertation Institution | University of Utah |
| Dissertation Name | MS |
| Language | eng |
| Relation is Version of | Digital reproduction of "Design and implementation of a mobile wireless sensor network testbed" J. Willard Marriott Library Special Collections TK7.5 2010 .J64 |
| Rights Management | © David Michael Johnson. To comply with copyright, the file for this work may be restricted to The University of Utah campus libraries pending author permission. |
| Format | application/pdf |
| Format Medium | application/pdf |
| Format Extent | 108,434 bytes |
| Identifier | us-etd2,155670 |
| Source | Original: University of Utah J. Willard Marriott Library Special Collections |
| Conversion Specifications | Original scanned on an Epson GT-30000 at 400 dpi to PDF using ABBYY FineReader 9.0 Professional Edition. |
| ARK | ark:/87278/s6ks760v |
| DOI | https://doi.org/10.26053/0H-CWPJ-HN00 |
| Setname | ir_etd |
| ID | 192088 |
| OCR Text | DESIGN AND IMPLEMENTATION OF A MOBILE WIRELESS SENSOR NETWORK TESTBED

by David Michael Johnson

A thesis submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. School of Computing, The University of Utah, May 2010.

Copyright © David Michael Johnson 2010. All Rights Reserved.

THE UNIVERSITY OF UTAH GRADUATE SCHOOL SUPERVISORY COMMITTEE APPROVAL of a thesis submitted by David Michael Johnson. This thesis has been read by each member of the following supervisory committee and by majority vote has been found to be satisfactory. Chair: Jay Lepreau (signed by: Martin Berzins); Sneha Kumar Kasera.

THE UNIVERSITY OF UTAH GRADUATE SCHOOL FINAL READING APPROVAL. To the Graduate Council of the University of Utah: I have read the thesis of David Michael Johnson in its final form and have found that (1) its format, citations, and bibliographic style are consistent and acceptable; (2) its illustrative materials including figures, tables, and charts are in place; and (3) the final manuscript is satisfactory to the Supervisory Committee and is ready for submission to The Graduate School. Jay Lepreau (signed by: Martin Berzins), Chair, Supervisory Committee. Approved for the Major Department: Martin Berzins, Chair/Dean. Approved for the Graduate Council: Charles A. Wight, Dean of The Graduate School.

ABSTRACT

Network simulation continues to be the dominant method of experimental evaluation in wireless networking.
However, much research has established the failure of simulator models to adequately express wireless signal propagation. These shortcomings can lead to incomplete evaluation of wireless protocols and applications. When wireless research includes mobility, real evaluation becomes still more difficult due to the difficulty of creating and controlling mobile nodes in a real environment. The primary goal of Mobile Emulab, the testbed presented in this thesis, is to encourage real mobile wireless research in the wireless community and provide a sound, usable testbed platform for experimentation. Mobile Emulab is both software designed to control and monitor a mobile wireless testbed, and a testbed providing access to mobile wireless resources. The testbed consists of several robots, each with a small computer and small wireless devices called motes, maneuverable in an area surrounded by fixed motes. Through a variety of interfaces, remote researchers can control these robots interactively over the Web. Mobile Emulab provides an overhead tracking system that localizes the robots to within 1 cm, providing repeatable positioning and valuable knowledge to researchers studying how signal propagation affects their experiments. Additional software tools that were developed to ease the evaluation process for wireless sensor network researchers can be used by Mobile Emulab experimenters. Finally, the testbed extends Emulab, which provides researchers with well-known experimental interfaces and automation capabilities. This thesis presents Mobile Emulab's design and implementation, and establishes its usability and utility through several experiments.

For my parents

CONTENTS
ABSTRACT iv
LIST OF FIGURES ix
LIST OF TABLES x
ACKNOWLEDGEMENTS xi

CHAPTERS
1. INTRODUCTION 1
1.1 Motivation 1
1.2 Goals 3
1.3 Contributions 4
1.4 Structure 4
2. BACKGROUND AND RELATED WORK 6
2.1 Emulab 6
2.2 Mobile and Fixed Wireless Testbeds 7
3. SYSTEM OVERVIEW 9
3.1 Software Design 10
3.1.1 Component Initialization and Dataflow 10
3.1.2 Communication: mtp 12
3.1.3 Robot Control: robotd 12
3.1.4 Robot Localization: visiond 13
3.1.5 Motion Models 13
3.1.6 User Interfaces 14
3.2 Environment 14
3.3 Hardware 15
4. LOCALIZATION 17
4.1 Possible Localization Methods 18
4.2 Design Issues 19
4.2.1 Recognition Software 19
4.2.2 Fiducial Patterns 20
4.2.3 Vision Hardware 20
4.2.4 Deployment 21
4.3 Localization Software 21
4.3.1 Mezzanine 21
4.3.2 vmc-client 22
4.3.3 visiond 22
4.3.3.1 Unique Identification 23
4.3.3.2 Track Maintenance 24
4.3.3.3 Scalability 24
4.3.3.4 Jitter Reduction 25
4.4 Dewarping Improvements 26
4.5 Validation 27
4.5.1 Location Estimate Precision 27
4.5.2 Jitter Analysis 28
5. USABILITY TOOLS 36
5.1 Wireless Characteristics 36
5.1.1 Connectivity 36
5.1.2 Active Frequency Detection 37
5.2 Sensor Network Application Management 38
5.2.1 Mote Management and Interaction 39
5.2.2 Key Interfaces 40
5.2.3 Message Handling 41
5.2.4 Stock Plugins 42
5.2.4.1 EmulabMoteControl Plugin 42
5.2.4.2 PacketHistory Plugin 43
5.2.4.3 PacketDispatch Plugin 43
5.2.4.4 Location Plugin 44
5.3 Mote Data Logging 45
5.4 User-available Location Information 47
6. CASE STUDIES 49
6.1 Environment Analysis 49
6.1.1 Experiment Basis and Methodology 49
6.1.2 Results 50
6.2 Mobility-enhanced Sensor Network Quality 55
6.2.1 The Rationale for Mobility 56
6.2.2 Design and Implementation 56
6.2.3 Assumptions 58
6.2.4 Evaluation 59
6.2.5 Lessons for Mobile Testbeds 60
6.3 Heterogeneous Sensor Network Experimentation 61
6.3.1 Scenario 61
6.3.2 Analysis 63
7. CONCLUSION 65
7.1 Users 65
7.2 Analysis of Component Modularity 66
7.2.1 mtp 67
7.2.2 robotd and pilot 67
7.2.3 visiond and vmc-client 67
7.2.4 embroker 68
7.3 Future Work 68
7.3.1 Localization 68
7.3.2 Mobile Control 69
7.3.3 Mote Application Management 70
REFERENCES 71
LIST OF FIGURES

3.1 Mobile Emulab software architecture 11
3.2 Garcia robot with two-circle, two-color fiducial and antenna extender 16
4.1 Location errors with cosine dewarping 28
4.2 Location errors with cosine dewarping and error interpolation 29
4.3 Average jitter (error bars show minimum and maximum) in x component 30
4.4 Average jitter (error bars show minimum and maximum) in y component 30
4.5 Average jitter (error bars show minimum and maximum) in θ component 31
5.1 Screenshot of wireless connectivity applet showing received packet statistics 38
5.2 Screenshot of wireless connectivity applet showing RSSI statistics 39
5.3 Screenshot of the EmulabMoteManager application 40
5.4 Screenshot of logged data in a MySQL database 46
6.1 ns-2 code that moves a robot through a grid and logs output 50
6.2 Packet reception at power level 0xff 51
6.3 Packet reception at power level 0x03 52
6.4 Average RSSI at power level 0x03 53
6.5 Packet reception ranges at two power levels 54
6.6 Average RSSI ranges at two power levels 54
6.7 Routing messages sent by each mote 60
6.8 Visualization of the heterogeneous topology generated by Emulab 64
LIST OF TABLES

4.1 Location error measurements 27
4.2 Location estimate jitter, x coordinate 32
4.3 Location estimate jitter, y coordinate 32
4.4 Location estimate jitter, θ coordinate 33
4.5 Jitter in location estimate message interarrival times 34
4.6 Jitter for linear motion location estimates and message interarrival times 35
6.1 Packet reception range statistics 55
6.2 Average RSSI range statistics 55

ACKNOWLEDGEMENTS

I'd like to thank my advisor, Jay Lepreau, for giving me an interesting research project, a chance to build a real collection of systems software, and valuable advice, friendship, and support. By including me in his research group, Jay altered the direction of my career and life. He provided wonderful opportunities to learn about and experience systems software design and development, and I am very thankful to have been part of his life. I also thank Sneha Kumar Kasera and John Carter, members of my committee, for their advice, time, and patience! I owe Karen Feinauer, who wears the Graduate Coordinator hat in the School of Computing, a massive debt. She has patiently steered me through requirements, answered questions, and prodded and encouraged me to completion of my degree. In another department, or in another university, I might not have graduated.
I am grateful for the significant contributions, ideas, and support from many members of the Flux Research Group, especially Dan Flickinger, Tim Stack, Russ Fish, Leigh Stoller, Mike Hibler, Robert Ricci, Eric Eide, Kirk Webb, and Mark Minor. More generally, the opportunity to work with the members of the Flux Group has helped me increase my knowledge of the computer systems field and given me the ability to participate in its research community. Most importantly, my experiences in the Flux Group have taught me how to build better systems software! Finally, I thank my parents for the direction they gave to my life, the encouragement and expectation they provided for my path through higher education, and the care and love with which they have blessed me. This thesis is dedicated to them. This material is based upon work supported by the National Science Foundation under Grant Nos. 0520311, 0321350, and 0335296. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

CHAPTER 1

INTRODUCTION

Researchers experimenting with mobility in wireless sensor networks face a range of evaluation methods to test their work. The choice of simulation, emulation, or testing on live networks (or a combination thereof) heavily influences evaluation time, cost, and quality. Each type of evaluation method is useful at different times during research and development. For instance, a simulator can be an excellent debugging aid for early development and helpful for testing application scalability beyond what is possible on limited research hardware. Emulated networks provide experimenters with access to real hardware, generally under controlled conditions.
Emulated environments provide a measure of repeatability while still allowing researchers to test on real hardware. Finally, evaluation on live networks is an important prerequisite to declaring that an application or protocol really works "in the real world," but it is hard or impossible to control the conditions in a live network.

1.1 Motivation

In the mobile and wireless research community, many research evaluations are performed only in simulation. An increasing number of papers also provide experimental data from live or emulated network tests, but the majority evaluate most thoroughly in simulation. There are many reasons that simulation dominates. First and foremost, it tends to be the easiest way to evaluate a protocol or application. Simulations may be as complex or as simple as the researcher deems necessary. Many simulators come prepackaged with different models of wireless signal propagation, basic environment effects, and mobility models. Furthermore, judging by the preponderance of papers that evaluate only by simulation, results from simulation alone are still an accepted means of establishing system quality in the wireless research community. Simulation tends to be a simpler alternative to real-world experimentation, but it can lack the fidelity that real devices and environments provide, especially when considering wireless and mobile systems. Sometimes this loss of real-world modeling accuracy can significantly affect the evaluation of the protocol or application. Judging from personal interaction with researchers from around the world, the mobile wireless research community generally agrees that simulation is insufficient for many kinds of network experimentation. Simulation simply cannot effectively model many of the physical intricacies of wireless signal propagation [2, 18, 28, 32, 33].
Wireless communication is often inhibited by obstacles or nonuniformities in an environment, which produce multipath and fading effects, or even simple interference, that can be difficult to model in simulation. Slight defects in manufacturing processes or differences in antenna placement may influence transmission characteristics. These real-world effects can significantly impact the behavior of wireless routing and MAC protocols [33]. Simulation can mask these effects from experimenters, leading to incomplete or even flawed evaluation. When researchers add mobility to their wireless applications and protocols, the situation can only become worse. If simulation cannot accurately model signal propagation in a static environment, how can anyone reasonably expect it to model propagation under motion? Motion simply increases the difficulty of modeling all signal propagation effects in simulation. There is a clear, strong need for inclusion of evaluation based on emulated or real hardware in real wireless signal propagation environments. Unfortunately, experimentation in these environments presents many difficulties not found in simulation. Researchers should prefer to deploy a dedicated experimental network (a testbed) rather than experiment on production networks, since this avoids external interference that may disrupt experimentation. However, designing and implementing a testbed is difficult and reduces the time available for research and evaluation. If software systems are not in place to maintain and quickly reconfigure basic properties of the testbed, the testbed quickly degenerates into a single-use system, lacking flexibility and means to easily extract data from the network. Wireless testbeds present additional difficulties for researchers. External wireless sources may disrupt experimentation in ways that are hard to observe and understand; removing external sources of wireless interference is not always possible.
Repeatability is much more difficult to obtain in wireless testbeds, due to the complex nature of the physical media. There may also be limited channel availability and external wireless sources that interfere with ongoing experiments. The introduction of sensor network devices to a wireless testbed further increases the burden on the researcher. Because sensor networks are relatively immature, especially when compared to IP networks, they lack the variety of standard toolchains and application interface software that has existed for many years for IP networks. Many tools, such as netcat [13], nmap [10], and tcpdump [11], exist for IP networks and are invaluable to researchers trying to understand or capture behavior of network protocols. The software that is currently available for use in sensor networks is still maturing, and when combined with the difficulties inherent in dealing with small, resource-constrained embedded devices, protocol and application development and debugging can easily become a nightmare.
Experimentation with real mobile wireless devices presents a final set of issues. Precise, repeatable control and placement of mobile devices (in both time and position) is difficult to achieve. If the experimenter is unconcerned with repeatability, it may be easy to place devices on mobile objects such as people or automobiles. However, if positioning itself, or relative positioning with respect to time, is important, these methods are insufficient. On the other end of the spectrum are systems that emulate mobility by routing packets through the real device nearest the emulated sending location; however, this does not provide exposure to the effects of real motion and will lack location precision. To achieve repeatable motion, the researcher must develop software and hardware infrastructure (including location and guidance services) to track and control the mobile nodes. Despite the difficulties inherent in real, mobile wireless experimentation, such experimentation is a valuable part of demonstrating claimed and proper functionality of new network protocols and applications. In this thesis, we demonstrate that real, mobile wireless sensor network experimentation can be made both practical and useful by creation of an emulation testbed, Mobile Emulab, that provides real wireless devices and mobility.

1.2 Goals

Several important goals influenced the design of Mobile Emulab. First, it must provide simple, expressive, and precise motion to experimenters. Precise motion enables fine-grained experimental analysis and is a prerequisite for repeatability. The combination of real-world motion and wireless devices provides researchers with a useful platform for experimentation. Second, Mobile Emulab's design should keep both hardware and software costs low. This will enable other research groups to more easily create their own
This will enable other research groups to more easily create their own 011 ~mch netcat tcpdump exi!-:lt uwlerstallcl captnre :-Joftware tha.t i:-J seIl:-Jor lletwork!-:l i!-:l !-:ltill when combined with the difficultie!-:l inherent in dealing with :-Jmall, re!-:lource-constrained embedded devices, protocol and application development and debugging can easily become a nightmare. lllay sHch autolllobiles. i:-J thefie immfficient. systell1!-:l !-:lending however, this cloe!-:l not provide expo::mre to the effect!-:l of real motion and will lack location preci!-:lion. To achieve repeatable motion, the researcher UlUst dcvdop software and infrastrnctnre includ i Ilg !-:lervices) a.nd the mobile nodes. Despitc thc difficultiefi wirclefis claillled protocol!-:> a.nd demunstnLte mobile 'useful creation crrl,ulation JVfobile RTfI,ulab, mobility. 1.2 Goals Sevcral illlportcmt iufiucnccd desigu EUlIllab. Fir!-:lt, lllU!-:lt expres!-:livc, preci!-:le expcrilllcnter!-:l. Preci!-:le finegra. ined ifi 1l10tiOll dcvicc!-:l u!-:leful experilllcntation. Mobilc Elllulab's dC!-:lign !-:lllOuld kcep Thi!-:l enablc group!-:l eC1!-:lily mobile testbeds, although the software must also be easily adapted to different hardware. By fostering easy testbed creation, we can encourage many groups to place testbeds in radically different radio environments. The testbed must be remotely accessible from the Internet and should provide interfaces for controlling and observing motion and collecting experimental data. Finally, it should specifically ease sensor network application testing. Interactive sensor network control interfaces provide experimenters with much greater control, debugging, and exploration capabilities, which already exist for older network environments, such as TCP/IP. 1 . 3 C o n t r i b u t i o ns The focus of the research described in this work is the design and implementation of Mobile Emulab, a mobile wireless network testbed for use by remote researchers. 
Mobile Emulab extends the Emulab [30] network testbed, allowing it to leverage Emulab's powerful capabilities and well-known interfaces. We added wireless devices attached to small computers, in turn mounted atop mobile robotic nodes. To allow researchers to control and dynamically position these nodes from remote sites, we developed control and tracking software for the robots. The control software includes simple path planning and obstacle avoidance algorithms; this support allows us to present simple motion models to researchers and abstracts details of low-level motion control. We track the robots via a computer vision-based localization system. This allows precise positioning and motion, and provides researchers with detailed location information for use in evaluation. Finally, we implemented several applications that enable experimenters to easily explore the wireless characteristics of our environment, and manage and interact with sensor network devices. In this thesis, I present my contributions to the Emulab mobile sensor network testbed. I contributed substantially to the overall system design and localization subsystem, and wrote application software and libraries to improve testbed utility for experimenters. I made only minor contributions to robot control and monitoring software, although I discuss key aspects of their implementation for clarity.

1.4 Structure

Chapter 2 provides background information about Emulab and discusses several testbeds that relate to this work. Chapter 3 presents the design of the Mobile Emulab software system and implementation of key services and network protocols. Chapter 4 details the design and development of a computer vision-based localization system. Chapter 5 discusses additional software designed to provide experimenters with more information, control, and logging facilities.
Chapter 6 presents several case studies that illustrate experiment interaction with testbed facilities and evaluate various sensor network applications, demonstrating that the testbed is a viable and valuable tool for researchers. Finally, Chapter 7 discusses the testbed's impact on the research community and suggests future work.

CHAPTER 2

BACKGROUND AND RELATED WORK

First, this section provides a brief introduction to the Emulab network testbed software, on which Mobile Emulab is based, and highlights its important and useful features. We then discuss a variety of related testbeds.

2.1 Emulab

Emulab is itself a well-known and widely used network testbed, but more generally is a software framework for controlling testbeds. Emulab enables remote researchers to develop and evaluate network protocols and applications on real hardware. The software automatically instantiates custom network topologies on physical resources present in the testbed, with none of the pain associated with configuration of a single-use testbed in a lab. Emulab software enables full space sharing so that many researchers can run concurrent, but separate, experiments. The framework's ability to handle a variety of hardware platforms with slight modification, together with its experiment paradigm and web interface, provides a strong base platform for supporting the mobile extensions. Emulab provides an intuitive methodology and interface for researchers. They create experiments interactively or via the well-known network simulation language, ns-2. When an experiment is "swapped in," Emulab software reserves the requested resources and sets up the network topology along with any custom network parameters. Once the experiment is fully configured, Emulab's event system begins processing any scheduled events, such as running user programs, tweaking link (or other network) parameters, reporting results, etc.
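As a concrete illustration, Emulab experiments are typically described in a short ns-2 (Tcl) file augmented with testbed-specific "tb-" commands that Emulab layers on top of the simulation language. The sketch below shows only the general style, not a script from this thesis: the node name, the "garcia" hardware type, and the use of an ns-2 setdest event to express robot motion are illustrative assumptions.

```tcl
# Sketch of an Emulab-style ns-2 experiment file (illustrative only).
source tb_compat.tcl            ;# Emulab's testbed extensions to ns-2
set ns [new Simulator]

# Request one node; the hardware type "garcia" is an assumed name
# for the testbed's mobile robot class.
set robot1 [$ns node]
tb-set-hardware $robot1 garcia

# Schedule an event: 10 s after swap-in, move toward (2.5 m, 3.0 m)
# at 0.1 m/s. The exact motion command Mobile Emulab accepts may differ.
$ns at 10.0 "$robot1 setdest 2.5 3.0 0.1"

$ns run
```

On swap-in, Emulab parses such a file, reserves matching physical resources, and hands timed events like the one above to its event system, which matches the workflow described in this section.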
Emulab's web interface provides the researcher with the current status of nodes and other general information about the experiment. When finished, the researcher may "swap out" the experiment, perhaps saving state for later experimentation via disk imaging.

2.2 Mobile and Fixed Wireless Testbeds

Significant research and development effort has been invested in wireless network testbeds, but few have attempted to add mobility to static nodes. One that has is MiNT [6], a testbed designed to be deployed in small environments. For its wireless platform, MiNT uses desktop PCs containing 802.11 wireless cards with external, attenuated antennae. The antennae extend to mobile robots running on a table. However, the mobile nodes are each confined to a sector in which they can move, lest the wires connecting them to the PCs become entangled. Finally, MiNT does not provide a localization system, other than presumably the onboard robot odometry. In comparison, our testbed provides experimenters with the ability to move robots anywhere in the experimental area, and to receive ground-truth location information.

The ORBIT testbed [26] provides emulated mobility in a multihop network of several hundred 802.11 wireless nodes. Experimenters can run code on PCs that binds to network interfaces on the wireless nodes. Thus, the testbed can emulate mobility by changing the program-interface bindings, providing a rough approximation of motion. In our testbed, we provide real node mobility and thus have no need to proxy packet streams through various emulated devices. We also provide wireless sensor network devices instead of 802.11 nodes.

TWINE [34] is a software framework that provides hybrid simulation, emulation, and live network support for wireless networks. However, TWINE differs from other approaches in that its emulation does not route packets to real interfaces.
Instead, the MAC and PHY layers are simulated at a very detailed level so as to provide better modeling realism while retaining repeatability. Both the emulation and simulation facilities provide several different mobility simulation models, including random, group, and trace-based models.

Finally, there are several fixed wireless sensor network testbeds. MoteLab [29] is a software framework providing access to a building-scale sensor network testbed. Experimenters may request time-slotted reservations over the Internet. MoteLab provides automatic mote programming at experiment swapin and logging of packets sent out of the serial port to a MySQL database. The mote logger application in our testbed (described later in Section 5.3) is inspired by MoteLab's mechanism in that both use the information stored in TinyOS-generated Java Active Message wrapper classes to extract information from packets and store them in a MySQL database. However, our method also presents this information in multiple, human-readable ways in the database, instead of as a raw byte array.

EmStar [14] is a software framework that provides hybrid emulation on real motes, potentially combined with simulation. In emulated mode, code runs on PCs, but physical radios provide real communication effects. EmStar also provides a hybrid mode of operation in which some motes may run the code natively, while others operate in emulation mode.
Although the software is not strictly designed for testbed management, it can manage hybrid networks of simulated and emulated motes. The Emulab testbed, on which our software is based, provides numerous tools for controlling and managing testbed resources. The Re-Mote Testbed [9] provides novel logging into a MySQL database from motes attached to a control network. The TWIST [16] sensor network testbed provides mote reprogramming and data logging over a control network, and also includes remote power control that selects between battery and wall power.

CHAPTER 3 SYSTEM OVERVIEW

Creating a testbed for real, mobile wireless experimentation as discussed in Chapter 1 requires solutions to several subproblems. First, in order to control and maneuver the mobile nodes precisely, the testbed must provide a powerful localization service capable of providing fast, high-precision position information for each robot. Precise localization data may also aid experimenters in developing applications or protocols that require node location as an input, or in evaluating location services themselves. Second, to provide usable and accessible motion to users, the testbed must have algorithms in place to aid in control of the mobile nodes. These algorithms must, at minimum, perform obstacle avoidance and rudimentary following of user-specified paths so that experimenters can work without monitoring all details of motion. In mobility simulation, the most often used models are derivatives of simple waypoint models. However, support for full path-based mobility models is also valuable, since conformity to specific paths may increase experiment repeatability. Finally, there are several aspects of hardware and environmental control that must be considered when building a system for mobile wireless experimentation. In Mobile Emulab, mobile nodes are surrounded by fixed wireless nodes, since hybrid systems are often of interest in wireless research.
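To make the waypoint family of models concrete: it amounts to little more than drawing destination points inside the area of motion and driving to them in turn. A minimal sketch in Python (parameter names are ours, for illustration only; this is not testbed code):

```python
import random

def random_waypoint_path(bounds, n_waypoints, seed=None):
    """Generate (x, y) waypoints uniformly inside a rectangular area --
    the simple waypoint family referred to above.

    bounds: (x_min, y_min, x_max, y_max) of the area of motion, meters.
    """
    rng = random.Random(seed)
    x_min, y_min, x_max, y_max = bounds
    return [(rng.uniform(x_min, x_max), rng.uniform(y_min, y_max))
            for _ in range(n_waypoints)]

# A robot following this model drives a straight line to each waypoint
# in turn, pausing (and, in a waypoint-only system, pivoting) between legs.
path = random_waypoint_path((0.0, 0.0, 6.0, 10.0), 5, seed=42)
assert all(0.0 <= x <= 6.0 and 0.0 <= y <= 10.0 for x, y in path)
```

Path-based models differ in that the entire trajectory between points, not just the endpoints, is specified, which is what makes them attractive for repeatability.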
Adding fixed nodes also increases testbed scalability and requires fewer mobile nodes. Since this testbed was deployed in a small, public space, it was necessary to choose fixed node placement and the mobile nodes' physical radio and antenna setup with care. To create interesting multihop topologies in such a small environment, experimenters will need to use low-power transmissions. At low power, antenna position has a greater effect on signal reception. To ease the transition to real-world mobile experimentation, we designed and built our system as an extension to the Emulab network testbed. By extending Emulab, we provide researchers with a well-known network experimentation interface. This effort involved building a backend that translates motion requests from users and acts as an information broker. We discuss the system architecture and several of the important design aspects of these subproblems below.

3.1 Software Design

Emulab provides researchers quick access to automatically-instantiated custom network topologies. Researchers access Emulab primarily through a web interface, through which they can configure and run experiments. The core of Emulab is a large database that maintains state for current experiments, both those currently running on physical resources and others that have been created in the past but are not running. A large number of configuration programs, scripts, and management daemons perform experiment swapin and swapout and aid the researcher in experiment control, primarily using information stored in this database. To leverage Emulab's extensive testbed management services, we link the Mobile Emulab subcomponents to the database and the web interface. This largely happens in the embroker daemon, which functions as an information broker through which user motion requests and status data flow.
Other subcomponents of the mobile testbed are robotd, which executes requested robot motion commands through instances of the pilot program running on each robot, and visiond, which tracks all robots via computer vision localization techniques, using information gathered by per-camera instances of the vmc-client program. Several new Java applets and other enhancements to the web interface expose the new mobile and sensor network functionality to Emulab experimenters. The software architecture of our system appears in Figure 3.1.

Figure 3.1. Mobile Emulab software architecture.

3.1.1 Component Initialization and Dataflow

When a mobile Emulab experiment swaps in, the swapin process spawns instances of embroker, robotd, and visiond. embroker reads in a simple configuration file, generated from the Emulab database, which specifies the robots the experimenter has requested, static obstacles in the area of motion, and the bounds in which the robots can move; it then listens on a Unix socket for connections. visiond and robotd both immediately connect to this socket and receive configuration information. Once these three daemons have successfully initialized, visiond identifies the robots and begins tracking and sending location information to embroker. Once the system has reached this point, it is available to accept user motion requests.
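The swapin handshake just described can be sketched roughly as follows. The configuration fields and the JSON encoding here are illustrative stand-ins; the real embroker configuration file format is generated from the Emulab database and differs in detail:

```python
import json
import socket

# Hypothetical stand-in for the configuration embroker reads at swapin.
CONFIG = {
    "robots": ["garcia1", "garcia2"],
    "obstacles": [{"x": 2.0, "y": 3.0, "w": 0.5, "h": 0.5}],
    "bounds": {"x_min": 0.0, "y_min": 0.0, "x_max": 6.0, "y_max": 10.0},
}

def serve_config(conn, config):
    """Broker side: push the experiment configuration to a newly
    connected daemon (robotd or visiond in the real system)."""
    conn.sendall(json.dumps(config).encode() + b"\n")

def receive_config(conn):
    """Client side: read the newline-terminated configuration blob."""
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
    return json.loads(buf)

# socketpair() stands in for accepting on the Unix-domain socket.
broker_end, client_end = socket.socketpair()
serve_config(broker_end, CONFIG)
assert receive_config(client_end)["robots"] == ["garcia1", "garcia2"]
broker_end.close()
client_end.close()
```

The essential point is that embroker is the single source of configuration truth at startup, so robotd and visiond never read the database directly.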
As mentioned above, embroker is responsible for passing data between Emulab (and by extension, the users) and the other components that comprise the mobile testbed extensions. embroker is connected to Emulab through the event system. The Emulab event system is a publish-subscribe service that supports event generation and reception by testbed objects (i.e., nodes, shaped links). embroker communicates with Emulab primarily through this system, and transforms events into commands for robotd and visiond. Communication between embroker, robotd, and visiond (and also pilot and vmc-client) takes place over mtp, the Mobile Testbed Protocol.

When an experimenter sends a motion request to Emulab, either through the web interface or from a script, it is passed to embroker, which performs bounds-checking on the destination to ensure the robots do not leave the area in which they can be localized. The request is then passed to robotd, which requests the latest location data for the robot to be moved and then plans a path to the specified destination. robotd breaks this path up in the manner required by the motion model being employed, and passes incremental, relative motion requests to pilot. Throughout this process, visiond tracks each robot and reports positions when requested, or streams position data, depending on the motion model in use.

3.1.2 Communication: mtp

Communication between the components comprising the mobile extension to Emulab takes place over mtp. mtp is a message-based protocol that defines the set of messages that testbed components require. For instance, all components must understand robot location updates.
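As a rough illustration of such a message, a packed location update might look like the following. The opcode and field layout are invented for illustration; the real mtp wire format (and its later XDR encoding) differs:

```python
import struct

# Hypothetical layout for an mtp-style robot location update:
# opcode (u16) | robot id (u16) | x, y, theta, timestamp as doubles.
# Network byte order ("!") gives the platform independence the
# original packed format had to provide by hand.
MTP_UPDATE = struct.Struct("!HHdddd")
OP_LOCATION_UPDATE = 3  # made-up opcode

def pack_update(robot_id, x, y, theta, ts):
    return MTP_UPDATE.pack(OP_LOCATION_UPDATE, robot_id, x, y, theta, ts)

def unpack_update(buf):
    op, robot_id, x, y, theta, ts = MTP_UPDATE.unpack(buf)
    assert op == OP_LOCATION_UPDATE
    return robot_id, x, y, theta, ts

msg = pack_update(1, 2.5, 4.25, 1.571, 1234.0)
assert unpack_update(msg) == (1, 2.5, 4.25, 1.571, 1234.0)
```

Wrapping such a hand-packed format with XDR, as was later done, trades a little encoding overhead for a standardized, self-describing serialization.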
Each mobile subcomponent responds appropriately to the commands that it supports. Originally, mtp used a custom packed data format that was interpreted in a platform-independent manner; it was later wrapped with XDR routines by a colleague.

3.1.3 Robot Control: robotd

The two primary components that implement motion control and guidance are robotd, which runs on a central control computer, and pilot, an instance of which runs on each robot. robotd handles motion requests at a high level, while pilot implements low-level motion directly on the robot. If the waypoint motion model is used, robotd plans a path to the destination consisting of linear segments that efficiently avoids known, static obstacles. robotd sends pilot relative motion commands, which pilot executes through a robot platform-specific API that translates requests into commands understood by the microcontrollers governing the drive wheels. Due to limitations in this API, pilot calls back to robotd after the robot has traveled a single segment to ensure its heading is correct. The heading may become incorrect due to slippage between the drive wheels and the floor, or to unreliable wheel odometry-based methods for calculating the distance covered. When pilot estimates that it has reached the destination position, it refines the position to correct for potential drift error as instructed by robotd, which requests location data from visiond to obtain the most recent "ground truth" robot position.

The continuous path motion model eliminates the cumbersome nature of the segment-based approach. In this model, robotd generates a velocity profile for the path, computed from waypoints or specified directly by the experimenter. This "profile"
consists of wheel speeds that ensure the robot stays on the path. By sending low-level wheel speed commands to pilot instead of high-level straight-line moves and pivots, execution times for lengthy moves are reduced. embroker streams robot location data directly to robotd so that any deviations from the path are discovered and corrected immediately by modifying the wheel speed commands based on the original velocity profile. Flickinger, the implementer of this subsystem, provides significantly more detail about continuous motion in Mobile Emulab in [8] and [22].

3.1.4 Robot Localization: visiond

Mobile Emulab identifies and tracks the robots through a computer vision-based tracking system called visiond, using ceiling-mounted video cameras aimed directly down at the plane of robot motion. As described in greater detail in Chapter 4, we improved Mezzanine [21], an open-source object tracking software package, to transform the overhead camera video into x, y coordinates and orientation for detected objects. The system consists of six cameras covering an approximately 60 m² area. An instance of Mezzanine processes the video from each camera, translating pixel locations of objects into x, y coordinates in meters. An instance of vmc-client connects to the shared IPC segment used by a Mezzanine instance, extracts the position and orientation of each object, and converts the local coordinates to globally-understood coordinates.
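The per-camera-to-global conversion performed by vmc-client can be illustrated as a 2D rigid transform. This is a plausible reconstruction under the assumption of a calibrated camera pose, not the exact visiond/vmc-client math:

```python
import math

def camera_to_global(x_local, y_local, theta_local, cam_pose):
    """Convert a per-camera track (meters in the camera's local frame)
    into the testbed's global frame.

    cam_pose = (cx, cy, ctheta): the camera's calibrated position and
    rotation in the global frame -- an illustrative model.
    """
    cx, cy, ctheta = cam_pose
    cos_t, sin_t = math.cos(ctheta), math.sin(ctheta)
    gx = cx + x_local * cos_t - y_local * sin_t
    gy = cy + x_local * sin_t + y_local * cos_t
    gtheta = (theta_local + ctheta) % (2 * math.pi)
    return gx, gy, gtheta

# A camera mounted at (3, 5), rotated 90 degrees relative to the
# global frame: a point 1 m along the camera's local x axis lands
# 1 m along global y.
gx, gy, gt = camera_to_global(1.0, 0.0, 0.0, (3.0, 5.0, math.pi / 2))
assert abs(gx - 3.0) < 1e-9 and abs(gy - 6.0) < 1e-9
```

Because each camera has its own calibrated pose, tracks from adjacent cameras land in a single shared coordinate system, which is what lets visiond stitch them together.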
Each vmc-client forwards individual robot location data to visiond for global processing. The individual robot locations from the vmc-client instances are aggregated by visiond into a single, canonical set of tracks that can be used by the other components. These tracks are reported at 30 frames per second to embroker, since queries from robotd require low-latency replies and high precision. embroker in turn reports snapshots of the data (one frame per second) to the Emulab database, for use by the user interfaces. This reduction in data rate is an engineering tradeoff intended to reduce the communication bandwidth with, and the resulting database load on, the Emulab core.

3.1.5 Motion Models

Mobile Emulab's initial design supported only a simple waypoint motion model. In this model, experimenters can specify lists of destination points for each robot, but the path to these positions is nondeterministic from the experimenter's point of view; even the straight-line path between waypoints is not guaranteed, since the area of motion is open to people, work carts, etc. These dynamic obstacles may force a path that is different than expected. Finally, the waypoint model is implemented using a primitive API that supports straight-line motion and pivots. Consequently, the robot must stop and reorient itself to move to the next waypoint, which leads to slower motion execution times. To improve the execution time of a user-supplied path, and to provide much more expressive motion, we designed a continuous motion model that generates a velocity profile containing wheel speeds that are then sent to and set on each moving robot. This model can take as input a B-spline with enforced motion constraints, or more simply a set of waypoints. Flickinger describes the theory and implementation behind this model in [8].
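Velocity profiles for a differential-drive robot like the Garcia ultimately resolve to per-wheel speeds. The standard kinematic relation, shown here as an illustrative sketch rather than Mobile Emulab code (the axle width is a made-up figure):

```python
def wheel_speeds(v, omega, axle_width_m):
    """Convert a desired forward velocity v (m/s) and turn rate omega
    (rad/s) into (left, right) wheel speeds for a differential-drive
    robot. This is the textbook kinematic relation; a velocity profile
    is essentially a time-indexed sequence of such pairs.
    """
    half = axle_width_m / 2.0
    return v - omega * half, v + omega * half

# Straight-line segment: both wheels run at the forward speed.
assert wheel_speeds(0.2, 0.0, 0.18) == (0.2, 0.2)

# Pivot in place (the waypoint model's stop-and-turn primitive):
# the wheels turn in opposite directions.
left, right = wheel_speeds(0.0, 1.0, 0.18)
assert left == -right
```

The continuous model's advantage is visible here: by blending nonzero v and omega it can round a corner without ever dropping to the v = 0 pivot case.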
This model allows the robots to move continuously along a specified path to their destination, significantly improving motion request execution times and providing advanced experimenters with much greater potential for motion control.

3.1.6 User Interfaces

Mobile Emulab inherits many useful features of Emulab's web interface. However, many more are necessary to allow experimenters to make full use of both the mobile and wireless sensor network aspects of the testbed. For instance, a Java-based motion control applet allows experimenters to interactively position robots and monitor motion in real time. Another Java applet displays connectivity information for all static motes at different power levels, aiding experimenters in choosing which motes to include in experiments to create topologies with different connectivity properties, without inspecting the space themselves. Finally, we developed an application called SNAP-M that is specifically designed to support mobile, wireless sensor device experimentation. Although this application is useful in a standalone manner apart from Emulab, some of its subcomponent features rely heavily on Emulab. These interfaces and others are described in Chapter 5.

3.2 Environment

Currently, the mobile testbed is deployed in an L-shaped area of approximately 60 m² that is 2.5-2.7 m high. This area is "live": people and work carts may move through the area at any time. This adds a definite aspect of realism to wireless experiments; however, it potentially reduces repeatability. Uncontrolled movement by objects in the area of robot motion can force robots to halt, both by physically impeding their progress and by preventing visiond from maintaining a track by obscuring the line of sight between a robot and one or more video cameras.
This deployment environment also contains potentially damaging wireless interference across the 900 MHz spectrum in which the mote radios communicate. While the physical properties of the environment may help experimenters enhance the quality of application evaluation, external wireless sources may interfere enough with testbed node communication to ruin analysis. Section 6.1 provides a characterization of our environment.

3.3 Hardware

To increase the types of experimentation we can support, we deployed 25 Crossbow Mica2 motes [3] in fixed locations surrounding the area of motion. All motes are attached to serial programming boards, and each serial line is attached to a console machine. As is the case for most other Emulab node types, each mote's serial port can be exported over the network, or accessed on an Emulab PC functioning as a proxy for direct, programmatic access to several mote serial devices. Testbed software makes mote reprogramming easy. The fixed motes are deployed in a partial grid (subject to the "L"-shaped constraint), spaced approximately 2 m apart. Several motes with attached Crossbow MTS310 sensor boards [4] are distributed evenly around the area of motion and are placed at the height of the robots. Since the robot-mounted motes also have attached sensor boards, the sensors are all in the same horizontal plane. We hope that some applications may be tested by using the robots as emulated, sensor-detectable devices. Other motes, without sensors, cover the ceiling. The overall deployment is such that it is possible to create a variety of different multihop networks.
However, since the area is small, experimenters will need to reduce radio power to enable multiple hops. We have found this to work reasonably well in practice, although the maximum number of hops is not large.

Mobile Emulab uses Acroname Garcia robots [1] (pictured in Figure 3.2) as courier devices. Each Garcia carries an Intel Stargate with a 400 MHz XScale processor [5], and attached to one of the Stargate's serial ports is a Mica2 mote. Thus, experimenters can remotely log in to the robot and run programs that connect directly to the mote's serial port, or they can access the serial devices at an Emulab PC or over the network. Sensor boards are attached to all robot-mounted motes. The robots are controlled remotely via an 802.11 card plugged into the Stargate.

Figure 3.2. Garcia robot with two-circle, two-color fiducial and antenna extender.

We have also added antenna extenders that place the mote's antenna approximately 1 m above the ground. We hope that these devices will allow experimenters to emulate human-carried wireless devices. However, in our environment, this lowers the number of static motes with which the mobile mote can communicate. When the antenna is lower to the ground (i.e., closer to the mote itself), a ground-capture effect increases reception capability from the ceiling-mounted motes. When the antenna is raised, this effect is lessened and the robot has more difficulty communicating with the ceiling motes.
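For intuition about why reducing transmit power produces multihop topologies even in a small space, a free-space path-loss estimate is useful. Free space grossly understates indoor losses, so treat the numbers only as a lower bound on loss; the 916 MHz figure is an assumption (one common Mica2 band), not a measured testbed parameter:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis). Real indoor propagation in
    the testbed is far worse-behaved; this is intuition only."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Every doubling of distance costs about 6 dB of link budget, so a
# mote transmitting near its minimum power reaches far fewer of the
# ~2 m-spaced grid neighbors than one at full power.
loss_2m = fspl_db(2.0, 916e6)
loss_4m = fspl_db(4.0, 916e6)
assert abs((loss_4m - loss_2m) - 6.02) < 0.01
```

In practice the usable range at low power shrinks much faster than this model suggests, which is exactly what makes multiple hops achievable in the deployment area.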
At the same time, many mobile sensor devices have much more power than static motes, so increasing the power of the mobile radios may be permissible for many applications.

CHAPTER 4 LOCALIZATION

To enable accurate and repeatable experiments, Mobile Emulab must guarantee that all mobile devices and associated antennae are at the specified positions and orientations within a small tolerance. To do this, we need to accurately determine the location (position and orientation) of, or localize, each robot. The system is designed to achieve subcentimeter localization accuracy to ensure that wireless experimenters can clearly understand environmental effects on radio signals.

Although clearly important for positioning guarantees, precise localization is also important for efficient robot motion. An incorrect location estimate at the beginning of a motion can subtly alter the robot's initial trajectory, increasing the time required to correct the path during execution and when homing in on the final destination. When dealing with advanced kinematic controllers that require stable and on-time data, this precision becomes even more important: any amount of jitter in the localization data will adversely affect controller performance.

Another important design goal is that the localization system should be flexible across a variety of quality and cost specifications. Because we wanted to create a mobile testbed that other universities could duplicate to create their own mobile wireless testbeds, the localization software should deliver acceptable levels of quality depending largely on the available hardware. In this implementation, we attempted to minimize cost while still meeting the goal of precise localization. This was necessary to ensure scalability for future expansions of the testbed motion area. A scalable system is also necessary so that robots may be localized across a sufficiently large area to enable interesting multihop wireless experiments.
While this implementation of the localization system lowers costs, other adopters of this testbed could use either higher- or lower-priced hardware to obtain the desired location precision.

Finally, the localization system must not interfere with user experiments. Although the initial system was primarily designed with specific robots and sensor network gear attached, we eventually added external antenna risers and contemplated additional sensor attachments. The system we have implemented enables expansion of, and adjustment to, the mobile platform and the devices that it carries.

As is typically the case in robotic systems, the robots' on-board odometry could not localize the robots with sufficient accuracy for our purposes. Consequently, we developed a computer vision-based localization system to track devices throughout our experimental area. Vision algorithms process image data from video cameras mounted above the plane of robot motion. These algorithms recognize markers with specific patterns of colors and shapes, called fiducials, on each robot, and then extract position and orientation data. This chapter discusses many of the decisions made while designing the mobile testbed's localization system, describes aspects of its implementation, and presents an analysis of its effectiveness.

4.1 Possible Localization Methods

Because we must have precise, low-cost localization for motion control and experimenter data analysis, we considered several different methods of localization. First, the precision requirements almost immediately rule out any form of GPS, since ordinary GPS provides accuracy only to within 1-3 m [15]. Differential GPS [7] raises this accuracy to the centimeter level, but may be difficult to use indoors due to multipath effects. Finally, GPS does not provide target orientation.
One could use GPS to find orientation by placing multiple devices onboard a single tracked object, but the tracked object would need to be fairly large to prevent orientation estimates from being lost in the error of the GPS system itself.

Many schemes have been devised for localization in wireless sensor networks using motes and sensor boards, such as Cricket [25] and the Active BAT system [17]. Primarily, these methods use acoustic or ultrasonic ranging in TDOA (Time Difference of Arrival) techniques to find internode ranges. Nodes send wireless signals at the same time as they generate sonic or ultrasonic waves, and receiver nodes can determine range based on how far apart in time these signals are received. However, since we must keep each mote's sensors and radio free for experimentation, we cannot use these techniques without causing potential experiment disruption. The system could avoid radio usage conflicts by adding a second mote and sensor board to each robot, but the experimenter would still need to avoid the resulting radio and sensor interference.

There are many different robot self-localization schemes. Meltzer et al. present a SLAM (Simultaneous Localization and Mapping) algorithm [24] in which a video camera mounted on a mobile robot records environment features, and the robot can localize itself when it sees these features again. Other methods such as [23] use an omnidirectional video camera to recognize specific landmarks and estimate the robot's position given the current line of sight to these known landmarks.
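The TDOA ranging idea behind systems like Cricket reduces to a simple computation: the radio signal arrives essentially instantaneously, so the lag of the accompanying acoustic pulse is proportional to range. An illustrative sketch (the speed-of-sound value is a nominal room-temperature figure):

```python
def tdoa_range_m(rf_arrival_s, sound_arrival_s, v_sound=343.0):
    """Estimate inter-node range from the gap between receiving a radio
    pulse and the accompanying acoustic/ultrasonic pulse -- the TDOA
    idea used by Cricket and Active BAT, reduced to its simplest form.
    Radio propagation delay is neglected (it is ~6 orders of magnitude
    smaller over room-scale distances).
    """
    return (sound_arrival_s - rf_arrival_s) * v_sound

# A 10 ms gap between hearing the radio message and the chirp
# corresponds to roughly 3.4 m of separation.
assert abs(tdoa_range_m(0.000, 0.010) - 3.43) < 0.01
```

This also makes the cost of the technique obvious for our setting: the mote's sounder, microphone, and radio are all consumed by the ranging protocol, which is precisely the interference with experiments that ruled it out.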
Computer vision-based robot self-localization, while desirable for some applications, is inappropriate for our system because it often requires large amounts of onboard image processing to localize the robot. We would be forced to use low-resolution camera models to reduce the CPU processing, and such cameras cannot be used because we cannot extract high-precision data from them. Additional hardware and processing on the robots also increases the required power and hence decreases the amount of time an experiment can run uninterrupted.

After studying these and other systems, we chose a computer vision-based localization technique that does not interfere with robot and mote experiments. The resulting system utilizes open-source computer vision software, makes a number of simplifying design assumptions that improve quality, and permits the use of higher- or lower-quality hardware to adapt to other testbed implementers' needs.

4.2 Design Issues

To obtain high-precision data while limiting hardware costs, we made a number of design choices that tend to simplify the system and reduce cost while still attaining the desired scalability and localization precision. These design points are discussed further below.

4.2.1 Recognition Software

One of the design goals for this localization system was low development time. Consequently, we investigated a variety of software packages, ranging from computer vision libraries to object tracking products. Few libraries that we found provided enough high-level functionality to quickly build a highly accurate localization system. Furthermore, many commercially-available object recognition and video processing tools and libraries cost several thousand dollars to license. Due to these prohibitive costs and time constraints, we attempted to find a relatively complete open-source tool that would provide most of the functionality we needed.
We eventually used an open-source tool, called Mezzanine, that provides basic object recognition from live video streams and complies with our other design choices.

4.2.2 Fiducial Patterns

Since Mobile Emulab's localization system does not use self-localization or wireless identification techniques, it must identify each robot uniquely, by using a family of patterns or by other means. Complex fiducial patterns may be more difficult to detect with low-quality video cameras, which may have lower resolution and produce noisier image data. Furthermore, more complex software is needed to detect anything more complex than a simple system of lines (i.e., a bar code). Also, there is no guarantee that there is enough physical space atop the robot hardware to mount a pattern family that can support enough patterns to uniquely identify all robots in the testbed.
Since we originally planned to support 50 to 100 robots in a large empty room, we were forced to choose a much simpler pattern. Finally, pattern selection also impacts future flexibility for hardware and sensor modifications to the robots. By using a simple pattern, many future modifications remain possible. Mezzanine's default pattern is compatible with these constraints, and we use the simple two-circle fiducial it supports. The circles are colored with two different colors widely separated in color space. By using a two-color circle pair, Mezzanine can determine both the position and orientation of the object bearing the fiducial. By placing the fiducial on a raised platform, behind the robot's pivot axis, we can add numerous sensors near the front of the robot in the future. This placement, combined with the simple nature of the fiducial, simplified the addition of antenna extenders to the robot. The antenna extenders only slightly worsen the precision of the localization data because they obscure primarily a small portion of one of the circles at any one time. Refer to Figure 3.2 for an illustration of a testbed robot with antenna extender and fiducial. Mezzanine does not support multiple fiducial color pairs by default, and there is little reason to extend it to create this capability. Due to light variance, the number of unique fiducial pairs is not large enough to support a large number of robots, as might be desired in the future. Consequently, each robot bears the same fiducial and is uniquely identified through motion algorithms (see Section 4.3.3).

4.2.3 Vision Hardware

We use video cameras and lenses that combine to produce high-precision localization, yet are not prohibitively expensive. However, digital cameras with resolutions higher than 640x480 pixels all exceeded our cost constraints.
We evaluated standard analog security cameras, and found that the analog resolution produced is too low to extract sharp fiducial outlines. Standard security cameras also lack manual controls for light and color settings, which are needed in our environment to combat the effects of lighting variability. After extensive evaluation, we chose the Hitachi KP-D20A analog CCD camera [20], which provides sufficient analog resolution and good, manual control of light and color settings. The camera cost was $460. To cover the testbed with as few cameras as possible, we used wide-angle lenses. Such lenses produce barrel distortion, which can be partially accounted for in software, but which decreases our system's precision. Since low-distortion wide-angle lenses can cost many thousands of dollars, we used inexpensive lenses and corrected for distortion in software, using better camera geometry models and interpolative error correction. We are using Computar 2.8-6.0 mm varifocal lenses set at focal lengths of 2.8 mm, each costing approximately $60.
4.2.4 Deployment

Mobile Emulab's localization software utilizes video cameras mounted above the plane of robot movement, looking down, instead of installing one on each robot. This solution scales better than robot self-localization for dense deployments, in which there would otherwise be at least a one-to-one robot-to-camera ratio, since each overhead camera can track many robots. This method also removes processing requirements from the robots. Furthermore, since the video cameras are pointed straight down, perpendicular to the plane of robot movement, the geometry of the system is greatly simplified. We describe the resulting improvements to precision in Section 4.4.

4.3 Localization Software

4.3.1 Mezzanine

We use Mezzanine [21], an open-source computer vision package that recognizes colored fiducials on objects and extracts position and orientation data for each recognized fiducial. Each fiducial consists of two 2.7 in circles that are widely separated in color space, placed next to each other on top of a robot. Mezzanine's key functionality includes a video image processing phase; a "dewarping" phase, which attempts to eliminate barrel distortion in wide-angle images; and an object identification phase. During the image processing phase, Mezzanine reads an image from the frame grabber and classifies each matching pixel into user-specified color classes. Each color class must be specified by the user so that all of the pixels in one of the circles in a fiducial can be classified into that class. This can be problematic because observed color can be distorted by environmental lighting conditions. To operate in an environment with nonuniform and/or variable lighting conditions, the user must specify a wider range of colors to match a single circle on a fiducial.
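The classification-and-blob pipeline described above can be sketched as follows. This is an illustrative reimplementation, not Mezzanine's actual code: the per-channel color-class bounds and the 4-connected flood fill are assumptions.

```python
# Sketch of Mezzanine-style color classification and blob centroid
# extraction. Color classes are assumed to be per-channel (lo, hi) bounds;
# the real Mezzanine configuration differs.

def classify_pixels(image, color_classes):
    """Label each pixel with the index of the first color class whose
    per-channel bounds contain it, or -1 if no class matches."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = image[y][x]
            for i, ((rl, rh), (gl, gh), (bl, bh)) in enumerate(color_classes):
                if rl <= r <= rh and gl <= g <= gh and bl <= b <= bh:
                    labels[y][x] = i
                    break
    return labels

def blob_centroids(labels):
    """Group 4-connected pixels of the same class into blobs and return
    (class, centroid_x, centroid_y) tuples in image coordinates."""
    h, w = len(labels), len(labels[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            cls = labels[y][x]
            if cls < 0 or seen[y][x]:
                continue
            stack, pixels = [(x, y)], []
            seen[y][x] = True
            while stack:
                px, py = stack.pop()
                pixels.append((px, py))
                for nx, ny in ((px + 1, py), (px - 1, py),
                               (px, py + 1), (px, py - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and labels[ny][nx] == cls:
                        seen[ny][nx] = True
                        stack.append((nx, ny))
            # Centroid of the blob, for the object identification phase.
            cx = sum(p[0] for p in pixels) / len(pixels)
            cy = sum(p[1] for p in pixels) / len(pixels)
            blobs.append((cls, cx, cy))
    return blobs
```

Widening a class's bounds (to cope with variable lighting, as described above) directly reduces the number of nonoverlapping classes available, which is why the fiducial colors must be widely separated in color space.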
This obviously limits the total number of colors that can be recognized, and consequently, we cannot uniquely identify robots through different fiducials. We obtain unique identification by commanding and detecting movement patterns for each robot (the "wiggle" algorithm), and thereafter maintain an association between a robot's identification and its current location as observed by the camera network. Mezzanine then combines adjacent pixels, all of which are in the same color class, into color blobs. Finally, each blob's centroid is computed in image coordinates for later processing (i.e., object identification).

4.3.2 vmc-client

As specified in Chapter 3, an instance of vmc-client runs for each video camera. vmc-client connects to the shared memory segment in which Mezzanine writes extracted location data, and registers itself to be notified whenever Mezzanine has processed a new video frame. When notified, vmc-client reads the shared memory and passes the location data for any detected objects to any connected visiond daemons. Each vmc-client obtains location data for objects in Mezzanine's local x, y coordinates, in which (0, 0) is approximately at the center of the camera's focal axis. vmc-client converts these locations to global coordinates that cover all vmc-client instances in the testbed. Furthermore, since the robot fiducials are raised off the ground by approximately 20 cm and are offset from the robot's pivot axis, vmc-client modifies the global location data to account for these factors. vmc-client accepts parameters for these values, but since robots are not uniquely identified, using different types of robots would necessitate multiple vmc-client and Mezzanine instances.

4.3.3 visiond

An instance of visiond is spawned for each mobile experiment. visiond connects to all vmc-clients provided by embroker's configuration messages, and immediately begins the identification process for all robots reserved in this experiment (see "Unique Identification" below).
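The vmc-client coordinate corrections described above (local-to-global translation, the raised fiducial, and the pivot-axis offset) can be sketched roughly as follows. This is a minimal sketch: the constants and parameter names are assumptions, and the real vmc-client reads these values from its configuration rather than hard-coding them.

```python
import math

# Illustrative sketch of vmc-client's coordinate corrections. All numeric
# constants here are assumptions except the ~20 cm fiducial height
# mentioned in the text.

CAMERA_HEIGHT = 2.5      # m, camera lens above the floor (assumed)
FIDUCIAL_HEIGHT = 0.20   # m, fiducial platform above the floor
FIDUCIAL_OFFSET = 0.15   # m, fiducial behind the robot's pivot axis (assumed)

def local_to_global(local_x, local_y, camera_x, camera_y, theta):
    """Convert a Mezzanine-local position (origin at the camera's focal
    axis) into global testbed coordinates, correcting for the raised
    fiducial and its offset from the robot's pivot axis."""
    # A fiducial raised above the floor appears displaced radially from
    # the focal axis; scale back by similar triangles.
    scale = (CAMERA_HEIGHT - FIDUCIAL_HEIGHT) / CAMERA_HEIGHT
    floor_x = local_x * scale
    floor_y = local_y * scale
    # Translate into the global frame shared by all vmc-client instances.
    gx = camera_x + floor_x
    gy = camera_y + floor_y
    # Shift from the fiducial's center to the pivot axis, using the
    # robot's orientation theta (radians); the sign convention here is
    # an assumption.
    gx += FIDUCIAL_OFFSET * math.cos(theta)
    gy += FIDUCIAL_OFFSET * math.sin(theta)
    return gx, gy
```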
As soon as a robot is matched to an object location forwarded by one or more vmc-clients, visiond constructs a track for it and updates the track on subsequent location data from a vmc-client.

4.3.3.1 Unique Identification

Because we use the same fiducial on each robot, visiond must perform robot identification; that is, it must map objects identified by Mezzanine to robots controlled by the testbed. When visiond is initialized by embroker, or loses track of a robot, it must re-identify any unidentified robots. Since the testbed design must account for the possibility that robots will be moving in an area in which they may be briefly obscured from the video cameras, visiond must also re-identify a robot if such obscuration continues for too long a time. When a per-experiment visiond is launched by embroker, embroker configures it with the node identifiers that the user has reserved in the experiment. visiond immediately begins a serialized identification process. For each robot, visiond sends a "wiggle request" message to embroker, where it is forwarded to robotd, and then to the appropriate pilot for motion execution. A "wiggle request" typically pivots the robot by 180°. This motion is easily detectable, and very unlikely to create tracking confusion, since at no time can a user request a motion resulting in a stationary pivot of more than 180°. This occurs because robotd always minimizes the arc through which it must turn to execute a pivot.
visiond saves the current state of all tracked objects and waits for robotd to signal that the wiggle has finished. When signaled, visiond compares the current set of tracks with the saved set. Whichever track has remained nearly stationary, and whose orientation differs by 180°, is matched to the robot that was commanded to wiggle. There are a number of cases in which, strictly speaking, Emulab does not need to re-identify a robot. For instance, at the end of each experiment, each robot returns to a parked location that is stored in the database. Consequently, Emulab could assume that each robot is at its parked location at the beginning of each experiment. However, if a robot is obscured at experiment swapin, is at a location slightly different than that stored in the database, or has been mistakenly switched to another robot's parked location by a testbed operator, the wiggle algorithm will prove invaluable to avoid mistaking robots. Such mistakes could potentially result in user motion request execution errors. Finally, if a robot's fiducial is ever obscured during experiment runtime, visiond will need to reacquire the robot through the wiggle process. Therefore, it is in the best interests of the system as a whole to always wiggle to discover true robot-to-object mappings.
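The post-wiggle matching step described above ("nearly stationary, orientation differs by 180°") can be sketched as follows. The tolerances and the track data layout are assumptions for illustration, not the testbed's actual values.

```python
import math

# Sketch of matching a saved track set against the post-wiggle track set.
# POS_TOLERANCE and ANGLE_TOLERANCE are assumed values.

POS_TOLERANCE = 0.05                 # m, "nearly stationary" (assumed)
ANGLE_TOLERANCE = math.radians(20)   # slack around a 180-degree pivot (assumed)

def angle_diff(a, b):
    """Smallest absolute difference between two angles, in radians."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def match_wiggled(saved, current):
    """saved/current: {track_id: (x, y, theta)}. Return the id of the
    track consistent with a stationary 180-degree pivot, or None."""
    for tid, (x0, y0, th0) in saved.items():
        if tid not in current:
            continue
        x1, y1, th1 = current[tid]
        moved = math.hypot(x1 - x0, y1 - y0)
        pivot = angle_diff(th1, th0)
        if moved <= POS_TOLERANCE and abs(pivot - math.pi) <= ANGLE_TOLERANCE:
            return tid
    return None
```

Because robotd never pivots through more than 180°, a user-commanded motion cannot mimic this signature, which is what makes the wiggle unambiguous.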
4.3.3.2 Track Maintenance

Once visiond constructs tracks for all recognized robots, object location updates from all vmc-client instances are matched against the current set of known tracks. New object locations match a track if the new location is within a small distance of the latest location in the track, and if the heading is likewise similar. If a track remains unmatched for three subsequent updates, it is cleared and visiond no longer associates the robot with the track. Generally, this occurs due to extended obscuration of the line of sight from one or more cameras to a tracked robot. Once visiond loses track of a robot, it immediately attempts to re-identify the lost robot. If the first re-identification fails, subsequent attempts are spaced at approximately 20 seconds. Since we require a multicamera localization system, visiond also performs track aggregation across multiple cameras. Since the fiducial atop the robots is larger and more complex than a simple, single LED dot, wherever a robot can cross a video camera boundary, there must be an overlap zone in the cameras' coverage. This is unfortunate and leads to a reduction of system scalability, but it is necessary to maintain stability of data and reduce jitter. For instance, when a robot moves into another camera's view, visiond only begins using the position reported by that camera once the robot has left the original camera. This reduces jitter because even adjacent cameras and their vmc-clients may have slightly different parameterizations and offsets for calculating global coordinates of objects. Were a robot to move back and forth on the camera boundary, the jitter could increase significantly if visiond constantly selected the other camera's reported location.

4.3.3.3 Scalability

The visiond process for each experiment connects directly to each video camera's vmc-client process.
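The track-maintenance rule described above (match on nearby position and similar heading; drop a track after three consecutive unmatched updates) can be sketched as follows. The thresholds and the Track class are illustrative assumptions, not visiond's actual implementation.

```python
import math

# Minimal sketch of visiond-style track maintenance. MAX_DIST and
# MAX_HEADING are assumed thresholds; MAX_MISSES = 3 follows the text.

MAX_DIST = 0.3                    # m (assumed)
MAX_HEADING = math.radians(30)    # (assumed)
MAX_MISSES = 3

class Track:
    def __init__(self, x, y, theta):
        self.x, self.y, self.theta = x, y, theta
        self.misses = 0

def heading_diff(a, b):
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def update_tracks(tracks, observations):
    """Match each observation (x, y, theta) to at most one track, then
    age and prune tracks that went unmatched. Returns survivors."""
    matched = set()
    for ox, oy, oth in observations:
        for t in tracks:
            if t in matched:
                continue
            if math.hypot(ox - t.x, oy - t.y) <= MAX_DIST and \
                    heading_diff(oth, t.theta) <= MAX_HEADING:
                t.x, t.y, t.theta = ox, oy, oth
                t.misses = 0
                matched.add(t)
                break
    survivors = []
    for t in tracks:
        if t not in matched:
            t.misses += 1
        if t.misses < MAX_MISSES:
            survivors.append(t)
    return survivors
```

Once a track is pruned here, the system would fall back to the wiggle-based re-identification described in Section 4.3.3.1.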
To properly eliminate duplicate objects seen by multiple video cameras, visiond waits until it has a full frame's worth of object locations from each vmc-client. Consequently, visiond will very likely not scale well beyond an estimated 30 to 50 cameras. System phase lag will begin to increase, harming advanced robotic motion controllers. Although the bandwidth used should not become a problem on high-speed networks, it scales poorly. Due to the small size of the initial implementation of the testbed and time constraints, a permanent localization system scalability increase was beyond the scope of this thesis. However, we have considered how to modify visiond and vmc-client to increase system scalability, and provide suggestions below. As stated, the scalability problems arise due to the need to remove duplicate object locations from adjacent cameras. One simple way to solve this problem is to use a processing hierarchy in which an aggregator instance is allocated to each m x n grid of vmc-client processes. The aggregator would remove duplicates discovered in this grid, and pass on the resulting set of object positions to another aggregator instance. By constructing an aggregation hierarchy in this manner, we can remove duplicates, but also reduce the amount of network bandwidth required, since each visiond instance will now only need to connect to the aggregator root.
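One level of the suggested aggregation hierarchy might look like the sketch below: it merges the per-camera object lists of one m x n grid of vmc-clients, collapsing near-duplicate positions that arise in the overlap zones. This is a hypothetical sketch, not implemented in the testbed; the merge radius is an assumption.

```python
import math

# Hypothetical aggregator level for the proposed hierarchy. Objects from
# overlap zones are assumed to fall within MERGE_RADIUS of each other.

MERGE_RADIUS = 0.15  # m (assumed)

def aggregate(frames):
    """frames: per-camera object lists, each [(x, y), ...] in global
    coordinates. Returns a deduplicated list, averaging duplicates."""
    merged = []  # each entry is [sum_x, sum_y, count]
    for frame in frames:
        for x, y in frame:
            for entry in merged:
                cx, cy = entry[0] / entry[2], entry[1] / entry[2]
                if math.hypot(x - cx, y - cy) <= MERGE_RADIUS:
                    # Same physical object seen by an adjacent camera.
                    entry[0] += x
                    entry[1] += y
                    entry[2] += 1
                    break
            else:
                merged.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in merged]
```

An upper-level aggregator would apply the same merge to the outputs of its child aggregators, so visiond need only connect to the root.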
Furthermore, by running the aggregator instances on as few machines as possible, we can limit the extra latency caused by network communication with aggregators higher up in the hierarchy. Unfortunately, every additional processing and communication step adds lag to the system. It is possible that when scaling to hundreds or thousands of cameras, phase lag would begin causing more severe problems for robotic controllers.

4.3.3.4 Jitter Reduction

Due to light variance (fluorescent lights, combined with outdoor light), ceiling vibration, etc., there is an amount of jitter in the data reported by visiond. We discussed designing a Kalman filter for our system, but a Kalman filter would require significant tuning (perhaps in each environment in which the testbed would be used) and implementation time. Consequently, we implement two different types of smoothing functions. We first implemented a simple moving window smoothing function. This alleviated difficulties in the initial system, where embroker would be deceived by a large enough difference in the reported orientation (± several degrees), decide that the robot had moved from its currently assigned position, and attempt to generate motion commands to drive it back. In certain, isolated areas of particularly variable lighting conditions, this resulted in system-generated motion loops, thus acting as a denial of service to the user. When using the moving window estimator with a window size of five, we found that these loops did not appear. Furthermore, since the initial motion control implementation in robotd did not require constant vision data at 30 Hz and stopped every 1.5 m, the resulting phase lag introduced into the data did not harm motion.
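The moving-window (SMA) smoother just described, together with the EWMA filter discussed next, can be sketched as below. This is a minimal sketch of the two filter types, not the testbed's code; the default alpha rule alpha = 2/(N + 1) follows the text.

```python
from collections import deque

# Sketch of the two smoothing options: a simple moving average (SMA) over
# a window of N samples, and an exponentially weighted moving average
# (EWMA) whose alpha defaults to 2/(N + 1) when the user supplies none.

class SMAFilter:
    def __init__(self, window=5):
        # deque with maxlen drops the oldest sample automatically.
        self.samples = deque(maxlen=window)

    def update(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

class EWMAFilter:
    def __init__(self, window=5, alpha=None):
        # Default alpha mirrors the rule in the text: alpha = 2/(N + 1).
        self.alpha = alpha if alpha is not None else 2.0 / (window + 1)
        self.value = None

    def update(self, value):
        if self.value is None:
            self.value = value  # seed with the first sample
        else:
            self.value = self.alpha * value + (1 - self.alpha) * self.value
        return self.value
```

The SMA delays every sample equally by roughly half the window, while the EWMA weights recent samples more heavily, which is why it was expected to impose less phase lag.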
Unfortunately, the moving window average proved incompatible with the later re-implementation of robotd with continuous, path-based motion. robotd required data at as fast a rate as possible (only 30 Hz with the testbed video cameras, without any interpolation or predictive filtering). The phase lag introduced by the moving window average, coupled with data that was relatively noisy (relative to the newer robotd controller's needs), made it much more difficult for the controller to operate successfully. Thus, we implemented an EWMA filter that we hoped would reduce jitter at least as well as the SMA filter, but with a smaller impact on localization data phase lag. All aspects of smoothing are configurable by the user; however, if the user does not provide a specific α parameter for the EWMA filter, we calculate it using α = 2/(N + 1), where N is the window size.

4.4 Dewarping Improvements

The original Mezzanine detected blobs quickly and effectively, but the supplied dewarping transform did not provide nearly enough precision to position robots as exactly as we required. The dewarping transform is computed by a calibration phase in which grid points in the plane of motion are provided to the application. The supplied dewarping algorithm is a global function approximation.
An approximating function does not need to match the provided data points exactly, whereas an interpolating function must. Although Mezzanine's approximation method worked well for us with slightly wide-angle lenses, it began to exhibit strange data discontinuities in reported position estimates as the angle of view increased. For instance, moving a fiducial 1-2 cm resulted in position estimate jumps of 10-20 cm. We enhanced Mezzanine with a different dewarping transformation that takes advantage of the fact that our overhead cameras point directly downwards. My colleague, Russ Fish, noticed that the barrel distortion pattern could be very closely modeled by a cosine, and developed a mathematical basis for our model. Thus, we can transform image position estimates to real-world coordinates by dividing the image coordinate vector by the cosine of the angle between the vertical camera optical axis and a line from the optical center of the camera to the fiducial. An additional multiplier inside the cosine, the "warp factor," corresponds to the amount of distortion in the image. Results indicate that these improvements have removed the strange discontinuities and jumps observed under Mezzanine's original approximation transform. Furthermore, the vision system with these changes reduces error to 1-2 cm. With additional interpolative error correction modifications from Russ Fish, error is reduced to subcentimeter levels.
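The cosine dewarping model described above can be sketched as follows. The camera height, pixel scale, and warp factor here are illustrative stand-ins for per-camera calibration values, and the exact form of the testbed's transform may differ.

```python
import math

# Sketch of cosine dewarping: divide the image-plane offset from the
# optical center by cos(warp * angle), where angle is between the
# vertical optical axis and the ray to the fiducial. All constants are
# assumed calibration values.

CAMERA_HEIGHT = 2.5        # m above the plane of motion (assumed)
METERS_PER_PIXEL = 0.005   # image scale at the optical center (assumed)
WARP_FACTOR = 1.2          # "warp factor": amount of distortion (assumed)

def dewarp(px, py):
    """Map image coordinates (pixels, origin at the optical center) to
    floor coordinates (meters, origin directly under the camera)."""
    # First-pass floor position, ignoring distortion.
    x = px * METERS_PER_PIXEL
    y = py * METERS_PER_PIXEL
    r = math.hypot(x, y)
    # Angle between the vertical optical axis and the ray to the point.
    angle = math.atan2(r, CAMERA_HEIGHT)
    # Dividing by the cosine (warp factor inside) undoes the barrel
    # compression, which grows toward the image edges.
    correction = 1.0 / math.cos(WARP_FACTOR * angle)
    return x * correction, y * correction
```

Because the correction is a smooth radial function, it cannot produce the centimeter-scale discontinuities that the original global approximation exhibited; the residual error is then small enough for interpolative correction to remove.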
4.5 Validation

In this section, we validate our localization system by comparing location estimates generated by it for over two hundred grid points, spaced at half-meter intervals around the area of motion. We also examine location estimate jitter, or how much estimates vary at the 30 Hz camera rate.

4.5.1 Location Estimate Precision

To obtain as much precision as possible, before modifying Mezzanine's dewarping algorithm, we measured a half-meter grid over the mobile area. Consequently, we could calibrate the algorithm and measure its effectiveness with high precision. Using simple measuring tools and surveying techniques, we set up a grid accurate to 2 mm. Table 4.1 shows the results of applying these algorithms to a fiducial located by a pin at each of the 211 measured grid points and comparing the result to the surveyed world coordinates of these points. (Points in the overlap between cameras are gathered twice.) The original column contains statistics from the original approximate dewarping function, gathered from only one camera. Data for the cosine dewarping and cosine dewarping with error interpolation columns were gathered from all six cameras. Figures 4.1 and 4.2 graphically compare location errors at grid points before and after applying the error interpolation algorithm. Figure 4.1 shows measurements of the cosine dewarped grid points and remaining error vectors across all cameras. The circles are the grid points, and the error vectors, magnified by a factor of 50, are shown as "tails." Since the half-meter grid points are 50 cm apart, a tail one grid-point distance long represents a 1 cm error vector. Points with two tails are in the overlap zones covered by two cameras.
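The grid-point evaluation described above reduces to computing the magnitude of each error vector and summarizing it with the statistics reported in Table 4.1. A minimal sketch of that computation (the function name and data layout are assumptions):

```python
import math

# Sketch of the Table 4.1-style evaluation: compare estimates against
# surveyed grid-point coordinates and report max, RMS, mean, and standard
# deviation of the error magnitudes.

def error_stats(surveyed, estimated):
    """surveyed/estimated: parallel lists of (x, y) positions in meters."""
    errors = [math.hypot(ex - sx, ey - sy)
              for (sx, sy), (ex, ey) in zip(surveyed, estimated)]
    n = len(errors)
    mean = sum(errors) / n
    rms = math.sqrt(sum(e * e for e in errors) / n)
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    return {"max": max(errors), "rms": rms, "mean": mean, "std": std}
```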
Figure 4.2 shows the location errors after applying the error correction and interpolation algorithm.

Table 4.1. Location error measurements.

  Metric       original   cosine dewarp   cosine dewarp + error interp
  Max error    11.36 cm   2.40 cm         1.02 cm
  RMS error    5.17 cm    1.03 cm         0.34 cm
  Mean error   4.65 cm    0.93 cm         0.28 cm
  Std dev      2.27 cm    0.44 cm         0.32 cm

Figure 4.1. Location errors with cosine dewarping.

Figure 4.2. Location errors with cosine dewarping and error interpolation.

4.5.2 Jitter Analysis

Although the basic linear waypoint motion controller is unaffected by location estimate jitter, the advanced kinematic controller discussed briefly in Section 3.1.3 is extremely sensitive to noise in location estimates. Even slight amounts of jitter as shown in our data could result in controller instability, leading to stale wheel speed commands sent to the robot. We collected location estimates generated by visiond for a single robot by capturing, timestamping, and logging the messages at embroker. In this section, we analyze deltas between subsequent location estimates for the x, y, and θ components of location. In addition, we analyze message interarrival times (the time between reception of two subsequent messages). The advanced controller requires both smooth and timely data. Although we are primarily interested in jitter behavior while a tracked robot is moving,
we first present jitter results from monitoring a single, unmoving robot. This provides a baseline estimate of system noise (at least at the time we monitored; recall that the localization system is sensitive to different and varying lighting conditions, and these are nearly uncontrollable in our deployment), which is helpful in analyzing filter performance. Figures 4.3, 4.4, and 4.5 show jitter data (the figures show the x, y, and θ components, respectively) collected from logged location estimates for a nonmoving robot in one location (location "P3" in subsequent data tables) in the motion area. Tables 4.2, 4.3, and 4.4 provide statistical data for location estimates for a nonmoving robot in three different locations. For each location, the data are parameterized by filter window size and type ("-" if no filter was applied). Filters were only applied to real data streams; we did not filter the logged unfiltered data stream after collecting it to specifically analyze filter performance alone.

Figure 4.3. Average jitter (error bars show minimum and maximum) in x component.

Figure 4.4. Average jitter (error bars show minimum and maximum) in y component.

It is important to measure end-to-end system jitter to
analyze data timeliness as well as location component jitter: since the location estimate messages must cross at least one network link, there is a chance for the network or OS to add message delivery latency, in addition to any jitter introduced by Mobile Emulab software. Obviously, it would be ideal if the location estimates exhibited no jitter, but this was not possible in the Mobile Emulab deployment due to variable lighting conditions and low-cost equipment, among other factors.

Figure 4.5. Average jitter (error bars show minimum and maximum) in θ component.

Aside from the θ component, the figures for location P3 are representative of smoothing trends for positions P1 and P2. Somewhat surprisingly, the SMA filter performs very slightly better than the EWMA filter. However, we still suspect that the SMA filter, while providing slightly less noisy data, is introducing more phase lag. The choice of which filter to use may then be based on real-world testing and the requirements of the particular kinematic controller driving the robot wheels: perhaps the controller requires very smooth data so that it does not make radical and sudden course corrections, or perhaps it requires very precise data and can cope with slightly more noise. The smoothing results for the θ component indicate that the EWMA filter performs slightly better at low window sizes than the SMA filter. This is likely occurring because
Table 4.2. Location estimate jitter, x coordinate.

  Position  Window  Filter  Mean      Stddev    Variance  Min       Max
  P1        -       -       0.001382  0.001751  0.000003  0.000000  0.013400
            3       sma     0.000568  0.000493  0.000000  0.000000  0.003300
                    ewma    0.001034  0.000802  0.000001  0.000000  0.006400
            5       sma     0.000339  0.000304  0.000000  0.000000  0.002300
                    ewma    0.000587  0.000453  0.000000  0.000000  0.003000
            10      sma     0.000203  0.000179  0.000000  0.000000  0.001200
                    ewma    0.000378  0.000300  0.000000  0.000000  0.001900
  P2        -       -       0.001048  0.000897  0.000001  0.000000  0.007000
            3       sma     0.000498  0.000382  0.000000  0.000000  0.002000
                    ewma    0.000633  0.000507  0.000000  0.000000  0.003800
            5       sma     0.000219  0.000182  0.000000  0.000000  0.001200
                    ewma    0.000308  0.000262  0.000000  0.000000  0.002500
            10      sma     0.000102  0.000096  0.000000  0.000000  0.000700
                    ewma    0.000210  0.000170  0.000000  0.000000  0.001100
  P3        -       -       0.001158  0.000928  0.000001  0.000000  0.007200
            3       sma     0.000436  0.000335  0.000000  0.000000  0.002100
                    ewma    0.000549  0.000400  0.000000  0.000000  0.002200
            5       sma     0.000216  0.000171  0.000000  0.000000  0.001100
                    ewma    0.000338  0.000260  0.000000  0.000000  0.001200
            10      sma     0.000114  0.000100  0.000000  0.000000  0.000600
                    ewma    0.000178  0.000143  0.000000  0.000000  0.000700

Table 4.3. Location estimate jitter, y coordinate.
  Position  Window  Filter  Mean      Stddev    Variance  Min       Max
  P1        -       -       0.004007  0.004497  0.000020  0.000000  0.033800
            3       sma     0.001548  0.001312  0.000002  0.000000  0.010600
                    ewma    0.003042  0.002334  0.000005  0.000000  0.022800
            5       sma     0.000972  0.000799  0.000001  0.000000  0.006700
                    ewma    0.001644  0.001310  0.000002  0.000000  0.009600
            10      sma     0.000557  0.000483  0.000000  0.000000  0.002700
                    ewma    0.001043  0.000869  0.000001  0.000000  0.005800
  P2        -       -       0.003002  0.002422  0.000006  0.000000  0.016000
            3       sma     0.001427  0.001071  0.000001  0.000000  0.005900
                    ewma    0.001944  0.001555  0.000002  0.000000  0.011300
            5       sma     0.000673  0.000571  0.000000  0.000000  0.003400
                    ewma    0.000944  0.000793  0.000001  0.000000  0.006500
            10      sma     0.000328  0.000264  0.000000  0.000000  0.002500
                    ewma    0.000631  0.000489  0.000000  0.000000  0.003100
  P3        -       -       0.003610  0.002682  0.000007  0.000000  0.013400
            3       sma     0.001453  0.001140  0.000001  0.000000  0.007100
                    ewma    0.001828  0.001323  0.000002  0.000000  0.007700
            5       sma     0.000656  0.000521  0.000000  0.000000  0.002900
                    ewma    0.001124  0.000802  0.000001  0.000000  0.004700
            10      sma     0.000359  0.000285  0.000000  0.000000  0.001600
                    ewma    0.000560  0.000428  0.000000  0.000000  0.002500

Table 4.4. Location estimate jitter, θ coordinate.
Table 4.4. Location estimate jitter, θ coordinate.

Position  Window  Filter  Mean      Stddev    Variance  Min       Max
P1        -       -       0.033818  0.039660  0.001573  0.000000  0.278100
          3       sma     0.012940  0.010670  0.000114  0.000000  0.085000
                  ewma    0.017922  0.014358  0.000206  0.000000  0.101300
          5       sma     0.008314  0.006775  0.000046  0.000000  0.054500
                  ewma    0.009132  0.007144  0.000051  0.000100  0.043100
          10      sma     0.004745  0.004075  0.000017  0.000000  0.024400
                  ewma    0.006000  0.004656  0.000022  0.000000  0.028500
P2        -       -       0.048632  0.042012  0.001765  0.000000  0.313200
          3       sma     0.023737  0.017557  0.000308  0.000000  0.091000
                  ewma    0.023599  0.018493  0.000342  0.000100  0.116500
          5       sma     0.010949  0.009655  0.000093  0.000000  0.055000
                  ewma    0.010076  0.008860  0.000079  0.000000  0.066300
          10      sma     0.005188  0.004273  0.000018  0.000000  0.043100
                  ewma    0.006823  0.005423  0.000029  0.000000  0.027600
P3        -       -       0.030757  0.023086  0.000533  0.000100  0.114000
          3       sma     0.012566  0.009948  0.000099  0.000000  0.060700
                  ewma    0.011506  0.008169  0.000067  0.000000  0.044900
          5       sma     0.005536  0.004317  0.000019  0.000000  0.026300
                  ewma    0.006425  0.004861  0.000024  0.000000  0.027000
          10      sma     0.003044  0.002439  0.000006  0.000000  0.014400
                  ewma    0.003085  0.002283  0.000005  0.000000  0.011500

The θ component is so much noisier than the x and y components that the exponential weighting the EWMA filter applies to the early window data does not appear to "add" more noise than the SMA filter. Consequently, if the kinematic controller is sensitive to θ jitter, the EWMA filter will be more appropriate to use in visiond.

Table 4.5 shows jitter statistics for message interarrival times for a nonmoving robot. Naturally, interarrival times are not affected by filter choice, since filter computation runtime should be a very tiny portion of wall-clock interarrival time. However, the minimum, maximum, and standard deviation statistics are of particular interest. First, the means show that messages typically arrive approximately 0.033 seconds after the preceding message. However, the minimum and maximum interarrival times vary from almost 0 seconds to around 0.066 seconds.
The maxima could occur if the robot is not tracked in a single frame, but is found again in the next frame (the tracking code allows this to occur). Clearly the code could be extended with an option to always send the latest position for a robot even if it was not found in one frame, as long as it had been found in a recent frame.

There are two potential solutions for reducing message interarrival times. First, the tracking code in visiond could produce updates according to a nearly real-time schedule,

Table 4.5. Jitter in location estimate message interarrival times.
Position  Window  Filter  Mean      Stddev    Variance  Min       Max
P1        -       -       0.033404  0.001188  0.000001  0.026700  0.065800
          3       sma     0.033378  0.000563  0.000000  0.030200  0.046500
                  ewma    0.033382  0.001080  0.000001  0.019900  0.060900
          5       sma     0.033377  0.000507  0.000000  0.030900  0.044800
                  ewma    0.033404  0.001792  0.000003  0.028500  0.068100
          10      sma     0.033379  0.000520  0.000000  0.028400  0.042100
                  ewma    0.033101  0.001448  0.000002  0.024400  0.075700
P2        -       -       0.033367  0.000431  0.000000  0.027600  0.039400
          3       sma     0.033367  0.000329  0.000000  0.026800  0.040000
                  ewma    0.033368  0.000813  0.000001  0.025000  0.046200
          5       sma     0.033401  0.000890  0.000001  0.026000  0.053000
                  ewma    0.033366  0.000351  0.000000  0.032500  0.034500
          10      sma     0.033429  0.001922  0.000004  0.031800  0.090900
                  ewma    0.033367  0.000378  0.000000  0.025000  0.039100
P3        -       -       0.033375  0.001025  0.000001  0.028500  0.045000
          3       sma     0.033404  0.001122  0.000001  0.031400  0.066700
                  ewma    0.033381  0.000464  0.000000  0.032300  0.046300
          5       sma     0.033376  0.000949  0.000001  0.018800  0.051600
                  ewma    0.033367  0.001594  0.000003  0.000400  0.066000
          10      sma     0.033367  0.001495  0.000002  0.002300  0.064000
                  ewma    0.033367  0.001132  0.000001  0.026700  0.039700

always sending an update for each tracked robot at every time quantum. This may well require a real-time OS underneath the Mezzanine and vmc-client instances. Alternatively, robotd or pilot could be enhanced with a predictive filter, such as an EKF, that predicts based on the real estimates from visiond (and perhaps wheel odometry as well, or any other location data that can be gathered). This filter could certainly operate at speeds much greater than 30 Hz, perhaps matching the execution speed of the algorithm producing the drive wheel speeds for the robot. The best solution is clearly the real-time solution, if possible given available hardware and the number of camera frames to process per quantum, since it nearly eliminates the issue of visiond-caused phase lag in the data (except for the case when a robot is temporarily lost for one or more frames).
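The predictive-filter idea can be illustrated with something far simpler than a full EKF: a constant-velocity dead-reckoning predictor that extrapolates between 30 Hz visiond estimates. All names and rates here are illustrative assumptions, and a real implementation would fuse wheel odometry as suggested above.

```java
// Sketch of a constant-velocity predictor that upsamples 30 Hz visiond
// position estimates to a faster control rate. A real implementation would
// use an EKF fusing wheel odometry; names and rates are assumptions.
public class Predictor {
    private double x, y, vx, vy; // last corrected state and velocity
    private double lastUpdate;   // timestamp of last visiond estimate (s)

    // Called at ~30 Hz when a new visiond estimate (mx, my) arrives.
    public void correct(double t, double mx, double my) {
        double dt = t - lastUpdate;
        if (dt > 0) { vx = (mx - x) / dt; vy = (my - y) / dt; }
        x = mx; y = my; lastUpdate = t;
    }

    // Called at the (faster) control rate: extrapolate from the last fix.
    public double[] predict(double t) {
        double dt = t - lastUpdate;
        return new double[]{x + vx * dt, y + vy * dt};
    }

    public static void main(String[] args) {
        Predictor p = new Predictor();
        p.correct(0.000, 0.00, 0.00);
        p.correct(0.033, 0.10, 0.00);     // moving ~3 m/s in x
        double[] est = p.predict(0.0495); // halfway to the next camera frame
        System.out.printf("x=%.3f y=%.3f%n", est[0], est[1]);
    }
}
```

Such a predictor can run at whatever rate the drive-wheel algorithm needs, trading visiond's phase lag for the risk of over-extrapolating when the robot turns between frames.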
Table 4.6 provides statistical data for location estimates during a 1 m linear move. The data are parameterized by filter window size and type ("-" if no filter was applied).

Table 4.6. Jitter for linear motion location estimates and message interarrival times.

Data Type  Window  Filter  Mean      Stddev    Variance  Min       Max
x          -       -       0.004431  0.003850  0.000015  0.000000  0.023400
           3       sma     0.001660  0.001288  0.000002  0.000000  0.006700
                   ewma    0.002269  0.001888  0.000004  0.000000  0.013600
           5       sma     0.000923  0.000734  0.000001  0.000000  0.005000
                   ewma    0.001231  0.000896  0.000001  0.000000  0.004200
           10      sma     0.001066  0.000619  0.000000  0.000000  0.002900
                   ewma    0.001326  0.000756  0.000001  0.000000  0.003600
y          -       -       0.005880  0.002511  0.000006  0.000400  0.015200
           3       sma     0.005751  0.001953  0.000004  0.000900  0.009800
                   ewma    0.005868  0.002083  0.000004  0.000700  0.010500
           5       sma     0.005894  0.001907  0.000004  0.002200  0.009400
                   ewma    0.005819  0.001968  0.000004  0.001500  0.009500
           10      sma     0.005796  0.001860  0.000003  0.002200  0.009100
                   ewma    0.006016  0.001782  0.000003  0.002700  0.009200
θ          -       -       0.035807  0.034841  0.001214  0.000500  0.234800
           3       sma     0.011703  0.010182  0.000104  0.000000  0.064800
                   ewma    0.013115  0.011207  0.000126  0.000000  0.061300
           5       sma     0.007475  0.007330  0.000054  0.000000  0.056800
                   ewma    0.006677  0.005165  0.000027  0.000100  0.027200
           10      sma     0.003800  0.003179  0.000010  0.000100  0.018400
                   ewma    0.003672  0.002829  0.000008  0.000000  0.015800
time       -       -       0.033349  0.001006  0.000001  0.030300  0.036800
           3       sma     0.033367  0.000093  0.000000  0.033200  0.033600
                   ewma    0.033373  0.000362  0.000000  0.032300  0.034400
           5       sma     0.033365  0.000438  0.000000  0.032400  0.034600
                   ewma    0.033367  0.000366  0.000000  0.032300  0.034400
           10      sma     0.033367  0.000109  0.000000  0.032900  0.033700
                   ewma    0.033365  0.000346  0.000000  0.032200  0.034400

CHAPTER 5

USABILITY TOOLS

Because of its dynamic nature, a mobile wireless testbed must present researchers with more control options and data interfaces. Users must be able to control and script motion easily. The addition of sensor network motes to experiments necessitates new kinds of monitoring interfaces to motes, since few such tools already exist for mote applications. Whereas users of Emulab's classic wired network testbed can employ a whole host of tools, such as netcat [13], iperf [27], nmap [10], tcpdump [11], and the like, to control and monitor their experiments, as well as many custom Emulab tools, motes simply do not support the same experiment paradigms. Emulab's wired network testbed also makes guarantees to users with respect to link bandwidth, loss rate, and many other characteristics, but a live wireless testbed can make no such guarantees and must also cope with the problem of external interference. Additional tools can significantly ease the pain of real-world experimentation with sensor network devices.

5.1 Wireless Characteristics

5.1.1 Connectivity

We provide wireless connectivity information for different radio power levels so that experimenters can quickly choose which motes in our testbed to use for experimentation, and also for later use in results analysis. For instance, one could imagine evaluating the effectiveness of a link estimation routing protocol by comparing generated routes with testbed maps of link quality measurements taken at an earlier time.
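Link-quality maps of this kind reduce to per-link packet reception ratios, which can then be thresholded and ranked. The sketch below assumes the survey format used in this testbed (each mote broadcasts 100 packets per power level, so reception ratio is simply packets heard over 100); the Link type, method names, and in-memory representation are illustrative assumptions, not the applet's actual data format.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of k-best-neighbor and threshold filtering over connectivity
// survey data. Assumes each mote broadcast 100 packets, so reception
// ratio = packetsHeard / 100. Types and names are illustrative.
public class Neighbors {
    public record Link(String neighbor, int packetsHeard) {}

    // Keep the k neighbors with the highest reception ratio, dropping
    // links below a minimum ratio (a threshold filter).
    public static List<String> kBest(List<Link> links, int k, double minRatio) {
        List<String> best = new ArrayList<>();
        links.stream()
             .filter(l -> l.packetsHeard() / 100.0 >= minRatio)
             .sorted(Comparator.comparingInt(Link::packetsHeard).reversed())
             .limit(k)
             .forEach(l -> best.add(l.neighbor()));
        return best;
    }

    public static void main(String[] args) {
        List<Link> links = List.of(
            new Link("mote114", 98), new Link("mote115", 95),
            new Link("mote102", 40), new Link("mote108", 12));
        System.out.println(kBest(links, 3, 0.25)); // three best above 25%
    }
}
```

Note that this ranking is per-receiver, which is why an asymmetric link can appear in one node's k-best set but not its neighbor's.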
We developed a Java applet that displays wireless connectivity between wireless nodes in a geographic manner. We run a simple TinyOS program on selected groups of motes (generally, all the fixed motes in the testbed) to collect this data. For each power level, and for each mote in turn, the mote broadcasts 100 packets at a rate of 8 pps. All other motes listen and record the number of packets heard and their RSSI values. This information is analyzed and stored for later use, and recent data can be sent to the web interface for display in an interactive Java applet. The applet uses a loosely-defined data file, allowing easy display of any link characteristic required in the future. The data format can also associate a set of statistics with each link characteristic value so that experimenters can better ascertain how links change over time.

In the applet itself, using our default data set, experimenters can select different power levels, view statistics by selecting source or destination mote sets, and filter out links below a certain threshold or by a best-neighbor limit. Figures 5.1 and 5.2 show this application with packet reception percentage and RSSI values, respectively. (These figures show only the three best-connected neighbors for each node, at power level 0x8.)

Figure 5.1. Screenshot of wireless connectivity applet showing received packet statistics.

Figure 5.1 shows that at power level 0x8, most motes in the testbed can communicate with at least three other motes with relatively low packet loss rates. However, many of the "links" shown do not have a bidirectional component. This is because many of these links are asymmetric; when combined with the k-best neighbor filter, both directions may not be part of both nodes' k-best neighbor sets. Figure 5.2 provides insight into the degree to which the best three neighbors (in terms of RSSI) correlate with the best three packet reception neighbors for each node. Some correlation is evident; for instance, mote114 and mote115, at the bottom of the topology shown, exhibit almost 100% packet delivery rates and rather high RSSI values. However, comparative inspection of the figures reveals that many packet reception links do not have corresponding RSSI links. Much additional information may be quickly obtained by inspecting the testbed's sensor network topology using this application.

Figure 5.2. Screenshot of wireless connectivity applet showing RSSI statistics.

5.1.2 Active Frequency Detection

Although robot resources cannot currently be space-shared, the static motes may be used by multiple experimenters at once. Consequently, we need to do our best to prevent frequency overlap between experiments. We developed a TinyOS-based frequency-hopping radio sniffer for motes. This program listens for packets on each channel for a few seconds and sends information to the serial port, including the frequency, received packet count, and number of valid packet CRCs. The program receives not only transmissions generated from our system, but from external sources as well, making it a valuable tool with which to periodically scan for many types of interference.

Unfortunately, since we expect experimenters to reduce radio power to create interesting multihop topologies, this application must run from several different observation points to ensure that all transmissions are monitored. Although this application is not currently integrated into Emulab, users can install the application binaries on their motes before beginning experimentation, as well as during runtime, to find an appropriate frequency for their experiments. By installing it on their motes, they can determine which frequencies their motes can overhear, and adapt their applications accordingly.

5.2 Sensor Network Application Management

A debilitating problem in sensor network research is the lack of readily-available tools for interfacing with mote devices. From a quick survey of the TinyOS contributors development tree, one can conclude that researchers generally construct their own interface programs for sensor network applications. Developers may often wish to manually interact with a running application by sending command messages to alter program state or to dump a set of debugging data. Beyond manual interaction, real-time display of data about the state of the network at large may be useful. One may imagine wanting access to the current routing and sensing states on each node, for instance.
When nodes can move, current information about their location and intended destination will also be useful. The following sections describe a set of useful applications and tools that enable sensor network developers (and Mobile Emulab users) to more easily communicate with their applications, explore data, and automate command tasks.

5.2.1 Mote Management and Interaction

EmulabMoteManager is an application that provides users with a basic set of functionality for communicating and otherwise interacting with sets of motes. This functionality is provided by the main application, as well as by a set of stock modules that use the APIs provided by EmulabMoteManager. Figure 5.3 shows a screenshot of EmulabMoteManager and several plugins, including one application-specific plugin for the MobiQ application, which we discuss in Section 6.2. EmulabMoteManager also allows users to load new modules conforming to its API, and to create multiple instances of them. Modules can save state to a session file, which is read when the module is next loaded. Finally, EmulabMoteManager provides two different APIs for additional modules via the ManagerModule and AppModule interfaces; we discuss these interfaces in Section 5.2.2.

Figure 5.3. Screenshot of the EmulabMoteManager application.

5.2.2 Key Interfaces

The ManagerModule interface should be implemented by modules that are capable of managing connections to motes and anything deemed related to that function. It requires implementation of several functions, such as commands to retrieve available motes and to connect or disconnect from motes. Several subscription methods enable interface users to easily listen to a subset of motes and be notified when they send messages. Subscriptions are parameterized by message type, source mote, or a combination.
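The subscription surface just described might look like the following sketch; the method and type names are assumptions for illustration, not the actual ManagerModule API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of ManagerModule-style message subscriptions, parameterized by
// message type, source mote, or both (null acts as a wildcard). All names
// here are illustrative assumptions, not the actual API.
public class Subscriptions {
    public interface MessageListener {
        void onMessage(String sourceMote, int messageType, byte[] payload);
    }

    public record Subscription(Integer type, String source, MessageListener listener) {}

    private final List<Subscription> subs = new ArrayList<>();

    public void subscribe(Integer type, String source, MessageListener l) {
        subs.add(new Subscription(type, source, l));
    }

    // Called by a mote connection when a message arrives: notify every
    // subscription whose type and source constraints both match.
    public void dispatch(String sourceMote, int messageType, byte[] payload) {
        for (Subscription s : subs) {
            boolean typeOk = s.type() == null || s.type() == messageType;
            boolean srcOk = s.source() == null || s.source().equals(sourceMote);
            if (typeOk && srcOk) s.listener().onMessage(sourceMote, messageType, payload);
        }
    }

    public static void main(String[] args) {
        Subscriptions mgr = new Subscriptions();
        mgr.subscribe(7, null, (m, t, p) -> System.out.println("type 7 from " + m));
        mgr.subscribe(null, "mote102", (m, t, p) -> System.out.println("any from mote102"));
        mgr.dispatch("mote102", 7, new byte[0]); // matches both subscriptions
    }
}
```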
In order to support users who may require more than just the message bytes, one can use similar subscription methods to receive message metadata (timestamp, source mote name, and a boolean representing the result of the CRC check on the received bytes) in addition to the message itself.

The AppModule interface should be implemented by modules that provide generally-useful functionality that can interoperate with any valid ManagerModule and its motes, or, more simply, extra application-specific functionality not already supplied by existing modules. Through this interface, users may export sets of GUI menu and icon items that are added to the top of the GUI to streamline ease of use. More importantly, AppModules must implement an initModule method that notifies them of the parent module container, any MoteManager interfaces already created, and any previous session state saved by this module. The parent module container interface, MoteAppContainer, contains a simple method to save session state at any time. However, state is only written to the filesystem if the user has enabled the auto-save option, or specifically requests that the session be saved. Finally, any loaded module may be multiply instantiated in the GUI.

5.2.3 Message Handling

EmulabMoteManager makes use of lower-level abstractions provided by the TinyOS distribution. When writing TinyOS applications, users are encouraged to communicate using "Active Messages," which use a standard format and are supported by modules in the TinyOS kernel. Generally, each message is defined by a packed C structure and associated with an "Active Message ID." The TinyOS distribution supplies a tool named mig that converts these Active Message C structures into Java classes. Each message wrapper class has accessor methods for each variable in the original structure, as well as private methods for packing the data to the mote platform specified to mig.

EmulabMoteManager and MoteLogger both derive much of their usefulness from these Java wrapper classes. Using Java's reflection mechanism, a library included in both applications parses message classes and extracts the original variable names and types. With this information, the library can print the full details contained in a message for which the class is available, and can also construct messages from user-supplied field values, which it parses as appropriate for the original message type.

Unfortunately, mig does not maintain the original structure of complex data types. For instance, members of a member that is itself a structure appear as members with underscores in the accessor method name in the resulting Java class. Since EmulabMoteManager makes use of the way in which field accessor methods are named in the Java message class wrappers, it cannot reconstruct the exact original message structure. This has severe implications for proper display of data when logging or displaying messages to users.
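The reflection pass can be sketched as follows. The sample message class and the `get_`-prefixed accessor convention are assumptions that only approximate mig's generated output; they serve to show both the field extraction and the nesting information that is lost.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Sketch of reflection-based field extraction from a mig-style message
// wrapper. The SensorMsg class and its "get_" accessor naming are
// illustrative assumptions about mig's output, not an exact reproduction.
public class MessageIntrospector {
    // Stand-in for a generated wrapper: a nested struct member surfaces
    // only as flattened, underscore-joined accessors (get_pos_x, get_pos_y).
    public static class SensorMsg {
        public int get_reading() { return 0; }
        public int get_pos_x() { return 0; }
        public int get_pos_y() { return 0; }
    }

    // Recover "name:type" pairs by scanning accessor methods.
    public static List<String> fieldNames(Class<?> msgClass) {
        List<String> names = new ArrayList<>();
        for (Method m : msgClass.getDeclaredMethods())
            if (m.getName().startsWith("get_"))
                names.add(m.getName().substring(4) + ":" + m.getReturnType().getSimpleName());
        names.sort(null); // getDeclaredMethods order is unspecified
        return names;
    }

    public static void main(String[] args) {
        System.out.println(fieldNames(SensorMsg.class));
        // pos_x and pos_y give no hint that they came from one nested
        // struct: the information loss discussed above.
    }
}
```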
To work around this loss of information, an EmulabMoteManager user must supply a spec file in addition to the Java message wrapper class. mig produces a specification of the structure that contains the missing nesting information, although most users typically ignore it. With the nesting information, the message handling libraries can display arbitrary messages with complex types.

5.2.4 Stock Plugins

This section describes several of the stock plugins available for use with this application and discusses their general usefulness apart from Emulab.

5.2.4.1 EmulabMoteControl Plugin

This plugin implements the ManagerModule interface for motes in the Emulab testbed and also provides a GUI interface for managing them. Figure 5.3 shows an instance of the EmulabMoteControl plugin near the upper left corner. Users can add motes to this manager either by directly connecting to the network-exported serial device, or by connecting to an instance of the TinyOS SerialForwarder utility, which in turn attaches to the Emulab network-exported serial device. Other TinyOS Java libraries allow users to write messages (in TinyOS's Java message class wrapper abstraction) to SerialForwarder instances. By using the second interface, we provide experimenters with the same abstraction that they may already use in their existing data collection or monitoring programs. This can ease the pain of transforming all or part of their applications into plugins for the EmulabMoteManager.

The plugin creates a simple GUI interface for adding motes, connecting to and disconnecting from motes, and displaying detailed statistics on packet transfer. In addition to connection management facilities, the plugin integrates the arbitrary TinyOS Java message-parsing libraries described in the previous section. Users may load Java classes representing TinyOS messages and they will be recognized by the mote connections. Packet creation and display methods are also then made available to any AppModule plugin instances that are utilizing this manager, as soon as a new message class is loaded.

By loading multiple instances of the EmulabMoteControl plugin, users can communicate with multiple mote subsets at once, if their plugins support multiple ManagerModule interfaces, or selection of the appropriate interface to use.

5.2.4.2 PacketHistory Plugin

The PacketHistory plugin implements the AppModule interface, and creates a GUI for displaying packets in a series of tables. The plugin subscribes to a ManagerModule for all message types and mote sources. In the first table, it displays received packet metadata, including source mote name, message type (numeric ID if a matching Java message class is not loaded; otherwise the message name), reception timestamp, a boolean representing success of the CRC calculation, and message length. When received messages have a matching, loaded Java message class, it creates a new table for all subsequent messages of that type. Users can double-click on packets in any of the tables to see the packet metadata and interpreted message fields. Figure 5.3 shows an instance of the PacketHistory plugin with several different message types (the message table shown is displaying messages from the MobiQ application discussed in Section 6.2) in the lower right corner.
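The PacketHistory bookkeeping just described can be sketched as follows; the Row type and field names are assumptions drawn from the metadata columns listed above, not the plugin's actual code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of PacketHistory-style bucketing: one metadata table for all
// packets, plus a lazily created per-type table. Type and field names
// are illustrative assumptions based on the columns described above.
public class PacketHistorySketch {
    record Row(String sourceMote, int typeId, long timestamp, boolean crcOk, int length) {}

    private final List<Row> allPackets = new ArrayList<>();
    private final Map<Integer, List<Row>> byType = new HashMap<>();

    public void onPacket(String source, int typeId, long ts, boolean crcOk, byte[] data) {
        Row row = new Row(source, typeId, ts, crcOk, data.length);
        allPackets.add(row); // the first, all-packets metadata table
        // Per-type table, created lazily the first time a type is seen.
        byType.computeIfAbsent(typeId, t -> new ArrayList<>()).add(row);
    }

    public int total() { return allPackets.size(); }
    public int countOfType(int typeId) { return byType.getOrDefault(typeId, List.of()).size(); }

    public static void main(String[] args) {
        PacketHistorySketch h = new PacketHistorySketch();
        h.onPacket("mote102", 7, 1000L, true, new byte[12]);
        h.onPacket("mote104", 7, 1033L, true, new byte[12]);
        h.onPacket("mote102", 9, 1066L, false, new byte[4]);
        System.out.println(h.total() + " packets, " + h.countOfType(7) + " of type 7");
    }
}
```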
5.2.4.3 PacketDispatch Plugin

The PacketDispatch plugin enables users to send messages to subsets of motes, and also to script actions involving sending. Using a GUI interface, users can create packets for Java message classes that they have loaded by filling in desired field values in a form. Once a packet is created, the user can associate it with one or more motes known to this plugin through the manager, and a simple click will dispatch it to the selected motes. These target associations with motes are dynamic and can be easily modified.

Since the goal of this software is to aid sensor network application designers by reducing the amount of application-dependent interface programs they must write, this plugin also allows users to script packet dispatching. Scripts can take the place of hand-tooled packet-sending code, and can be easily reused in other applications without recompiling a new interface. Using a GUI interface, users can create scripts and add action items. The most important type of action item is a packet send, which users can add to scripts in different ways. First, users can append already-created packets and their target mote associations to a preexisting script. Alternatively, they may manually add a new packet and new mote targets by editing the script. In addition to packet sending, another action type that pauses script execution is supported. The ScriptEditor allows users to add, remove, and change positions of action items in the sequence. Individual script action items may also be edited in a similar GUI manner. The data format used to store scripts is intuitive, allowing scripts to be hand-edited easily. This plugin is completely independent from the Emulab sensor network testbed, and could just as easily be used inside the EmulabMoteManager to interface to another testbed. Figure 5.3 shows an instance of the PacketDispatch plugin with a created packet and several scripts in the upper middle portion of the application.

5.2.4.4 Location Plugin

The Location plugin provides methods for users to monitor mobile mote motion. By connecting to a LocationSource, the Location plugin monitors the current location, intended destination, and state of motion execution. Currently, the GUI implementation of this plugin allows users to connect to a network-based LocationSource. Once connected to the source, mote locations (x, y coordinates in centimeters, and orientation in degrees)
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6ks760v |



