| Title | Scheduling virtual wifi interfaces for high bandwidth video upstreaming using multipath TCP |
| Publication Type | thesis |
| School or College | College of Engineering |
| Department | Computing |
| Author | Maheshwari, Shobhi |
| Date | 2018 |
| Description | Live video upstreaming refers to the flow of live data in the upstream direction from mobile devices to other entities across the Internet and has found use in many modern applications such as remote driving, the recent social media trend of live video broadcasting, along with the traditional applications of video calling/conferencing. Combined with the high-definition video capturing capabilities of modern mobile devices, live video upstreaming is creating more upstream data traffic than present-day cellular networks are equipped to support, often resulting in a sub-optimal video experience, especially in remote or crowded areas with low cellular connectivity and no WiFi. We propose that instead of using its single cellular connection, a mobile device connects to multiple nearby mobile devices and splits the live video data over the cellular bandwidth of these devices using the Multipath TCP protocol. MPTCP provides a promising solution for aggregating bandwidth, but its use has largely remained unexplored for upstreaming live video data, especially for scenarios where WiFi connectivity is not available. We use wireless interface virtualization, offered by Linux, to enable Multipath TCP to scale and connect to a large number of cellular devices. We design and build a system that can assess the instantaneous bandwidth of all the connected cellular devices/hotspots and uses the set of the most capable cellular devices for splitting and forwarding the live video data. We test our system in various settings; our experiments show that it increases the bandwidth and reliability of TCP connections in most cases, and where there is a significant difference in throughput across cellular hotspots, it can recognize and isolate the better-performing cellular hotspots to provide a stable throughput. |
| Type | Text |
| Publisher | University of Utah |
| Dissertation Name | Master of Science |
| Language | eng |
| Rights Management | © Shobhi Maheshwari |
| Format | application/pdf |
| Format Medium | application/pdf |
| ARK | ark:/87278/s65f4rc3 |
| Setname | ir_etd |
| ID | 1698227 |
| OCR Text | SCHEDULING VIRTUAL WIFI INTERFACES FOR HIGH BANDWIDTH VIDEO UPSTREAMING USING MULTIPATH TCP

by Shobhi Maheshwari

A thesis submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Master of Science in Computer Science

School of Computing
The University of Utah
December 2018

Copyright © Shobhi Maheshwari 2018
All Rights Reserved

The University of Utah Graduate School

STATEMENT OF DISSERTATION APPROVAL

The dissertation of Shobhi Maheshwari has been approved by the following supervisory committee members:

Sneha Kumar Kasera, Chair, 11 October 2018 (Date Approved)
Neal Patwari, Member, 13 October 2018 (Date Approved)
Jeffrey Phillips, Member, 12 October 2018 (Date Approved)

and by Ross Whitaker, Chair/Dean of the Department/College/School of Computing, and by David B. Kieda, Dean of The Graduate School.

ABSTRACT

Live video upstreaming refers to the flow of live data in the upstream direction from mobile devices to other entities across the Internet and has found use in many modern applications such as remote driving, the recent social media trend of live video broadcasting, along with the traditional applications of video calling/conferencing. Combined with the high-definition video capturing capabilities of modern mobile devices, live video upstreaming is creating more upstream data traffic than present-day cellular networks are equipped to support, often resulting in a sub-optimal video experience, especially in remote or crowded areas with low cellular connectivity and no WiFi. We propose that instead of using its single cellular connection, a mobile device connects to multiple nearby mobile devices and splits the live video data over the cellular bandwidth of these devices using the Multipath TCP protocol.
MPTCP provides a promising solution for aggregating bandwidth, but its use has largely remained unexplored for upstreaming live video data, especially for scenarios where WiFi connectivity is not available. We use wireless interface virtualization, offered by Linux, to enable Multipath TCP to scale and connect to a large number of cellular devices. We design and build a system that can assess the instantaneous bandwidth of all the connected cellular devices/hotspots and uses the set of the most capable cellular devices for splitting and forwarding the live video data. We test our system in various settings; our experiments show that it increases the bandwidth and reliability of TCP connections in most cases, and where there is a significant difference in throughput across cellular hotspots, it can recognize and isolate the better-performing cellular hotspots to provide a stable throughput.

To all the ones whom I love.

CONTENTS

ABSTRACT
LIST OF FIGURES
NOTATION AND SYMBOLS
ACKNOWLEDGEMENTS

CHAPTERS

1. INTRODUCTION
   1.1 Research Contribution
2. RELATED WORK AND BACKGROUND
   2.1 Multipath TCP
   2.2 Wireless Network Interface Virtualization
3. METHODOLOGY
   3.1 Base Algorithm: Client-Side Throughput Calculation
       3.1.1 Rationale Behind Probe Lists
   3.2 Algorithm 2: Globally Aware Dynamically Adjusted
       3.2.1 Approach 1: Globally Aware Subflow Selection
       3.2.2 Approach 2: Subflow Selection Based on Adaptive Throughput Threshold
   3.3 Initial Subflow Selection
4. EVALUATION
   4.1 Calculating Throughput Threshold
   4.2 Base Algorithm vs. Base Implementation
   4.3 Feedback Loop vs. Base Algorithm
5. CONCLUSION AND FUTURE WORK
APPENDIX: MPTCP MEASUREMENTS
REFERENCES

LIST OF FIGURES

1.1 An MPTCP connection going through 3 different access points.
3.1 MPTCP connection when two virtual interfaces are present on the client-side.
3.2 Performance when a client is connected to two CDs. Scenario 1 (left): the client connected to CD1 and CD2 would push more data towards CD2 because of its better channel condition. Scenario 2 (right): CD1 moves away from the client and the client moves all the traffic to CD2 while keeping CD1 as a backup.
3.3 The throughput threshold decreases step by step every time an AP contributes to the global throughput, but it increases based on the mean of α_curr and α_max.
4.1 Our experimental setup involved two main scenarios. On the left, the broadcasting client is connected to a cellular hotspot on one interface and to the campus WiFi on the other interface. On the right, the broadcasting client is connected to two cellular hotspots, both going through different cellular network providers.
4.2 The effect of out-of-order packets for the combined subflows is not so significant when the file size is small, as shown by the MPTCP bar.
4.3 The effect of out-of-order packets is much more noticeable for bigger file sizes when the two subflows are combined, as shown by the MPTCP bar.
4.4 MPTCP combines throughput when both subflows are performing well. The expected and the actual values of the two subflows are quite similar.
4.5 MPTCP throughput goes down when one of the subflows is not performing well. The figure shows the actual throughput achieved versus the expected throughput that should be obtained when combining the two subflows.
4.6 The expected throughput shows what the combined throughput of the two subflows should be, but out-of-order packets result in a throughput that is much lower than expected. Therefore, our system pushes all the data towards CD2 and puts CD1 on backup.
4.7 The throughput obtained, based on the cellular traces, for individual CDs as well as the combined system throughput.
4.8 Throughput as calculated from the cellular traces collected on a commuter train.
4.9 The number of retransmissions based on the cellular traces obtained from the train experiments.
A.1 Measuring the throughput threshold for cellular traces. The right side shows the effect of out-of-order packets on the transmission.
A.2 Performance with low capacity subflow.
A.3 Combined throughput achieved.

NOTATION AND SYMBOLS

α        Throughput Threshold
β        Smoothing Constant for EWMA over RTT
AP       Access Point
CD       Cellular Device
RTT      Round Trip Time
MSS      Maximum Segment Size
B_i^exp  Expected Throughput on the Client-side
B_APi    Throughput over AP_i
B_gbl    Global Throughput Achieved over all the Subflows

ACKNOWLEDGEMENTS

Foremost, I am grateful to my advisor Professor Sneha Kumar Kasera for his consistent support and guidance at every step of my master's thesis research. I am thankful for his patience and motivation throughout my degree. I appreciate his time and ideas to make this thesis productive and innovative.
His passion and enthusiasm for research have always inspired me and encouraged me to strive for more. I am also grateful to my friends here at the University of Utah for all the stimulating discussions and sleepless nights we had before deadlines. I am grateful to my brother for always being there for me and supporting my decisions whether or not he agreed with them. I want to thank everyone in my family for their selfless love and faith in me during this period, especially my parents, for inculcating in me the respect for education and encouraging me to dream big. Thank You All!

CHAPTER 1
INTRODUCTION

Cellular networks have traditionally been optimized to deliver content to mobile devices, as much of the data flows in the downstream direction. However, the introduction of smartphones with high-resolution cameras has increased the demand for high-quality video content not only in the downstream direction but in the upstream direction as well. Recently, live video upstreaming has found an application in autonomous vehicles, where a person can control an autonomous vehicle remotely to guide it through situations that the autonomous vehicle is not yet equipped to handle [2]. This idea, often termed "remote driving," is much like a car racing game minus the racing but with human subjects. It uses wireless cellular connections to transmit the data to the remote operator and has been employed by several companies to make the transition from human-driven to autonomous vehicles. SAIC Motors, a China-based company, along with Huawei Technologies, has demonstrated remote driving with the video being transmitted over a 5G wireless network [1]. Another application of live video upstreaming is the recent social media trend of broadcasting user-generated video content from mobile devices. Live video has been touted as the future of social media as it allows users to bring their experiences, in a raw and undiluted form, to everyone on a single platform.
Viewers can connect and watch the video in real-time, all the while interacting with the broadcaster in the form of comments or likes. Facebook Live is one of the biggest live video broadcasting platforms in the world, along with Periscope by Twitter and YouTube Live. Yet another example of a live video upstreaming platform is Twitch, used by video gaming enthusiasts who enjoy live-broadcasting their games and watching other users play, compete, teach, and take part in all gamers' activities.

Even though high-throughput WiFi links are capable of effectively upstreaming live video data at a high quality, they fail in a more mobile/outdoor environment. Even with various deployments by several independent parties covering central areas of several cities [10], WiFi is often limited to urban areas and is not so widely available. Cellular networks provide wide-range coverage and the opportunity for user mobility. Cellular base stations have a wide coverage area, and within the range of a given cellular base station, a mobile client may roam freely and with complete transparency to the network medium. The wide-range coverage of cellular networks can make the live video upstreaming experience truly mobile. However, cellular links are not always adequate to support video transmission, for varied reasons. First, the asymmetric nature of cellular networks places more focus on downloading content rather than uploading. However, the rising popularity of live video upstreaming has seen a steady increase in the demand for upload throughput, with the cellular companies struggling hard to catch up [17]. Second, the data rate of cellular connections often varies dramatically [8] over time because of spatio-temporal variations in cellular channel conditions. Furthermore, cellular blackspots/notspots, defined as geographic areas that experience reduced cellular network coverage due to physical obstructions like hills, trees, buildings, etc.
[6], can affect a particular cellular connection's data rate solely because of cell tower placement. Third, every year, mobile devices are equipped with increasingly advanced cameras capable of recording high-definition videos that require a higher cellular data rate. WiFi, due to its high throughput, is capable of upstreaming high-definition videos but cannot provide true mobility. Cellular networks provide true mobility but fall short of upstreaming high-quality videos. To improve the mobility experience and still achieve the high throughput attained by WiFi, several solutions have been proposed that aggregate throughput by offloading data to nearby mobile nodes [4, 15], but these solutions do not address the challenges of upstreaming live video data, especially using the Multipath TCP protocol. In this work, we address the problem of throughput aggregation for high-quality video upstreaming using the Multipath TCP protocol. We assume that a client (a user broadcasting a live video) is in a social environment, i.e., surrounded by people that he/she is acquainted with and can request to share their cellular throughput. A client could also use socially unknown nearby devices as long as appropriate incentive mechanisms can be incorporated [21].

Multipath TCP (MPTCP) [8, 12, 24–26] is a major modification to TCP that allows multiple paths, known as subflows, to be used "simultaneously" by a single transport connection. Figure 1.1 shows an MPTCP connection between an MPTCP-enabled client and server with 3 different subflows. MPTCP works by creating multiple subflows and distributing the application data over them. An MPTCP subflow is like a standard TCP flow, with the exception that MPTCP can have multiple such flows going between the client and server.
An MPTCP subflow is identified by a combination of client-server IP addresses and port numbers, similar to standard TCP, and is associated with a network interface connected to an access point, a cellular hotspot (CD) in our case. Essentially, MPTCP is limited by the number of physical interfaces a device can support. For example, a standard laptop has two network interfaces: one WiFi and one Ethernet interface; smartphones often have a WiFi interface and a cellular interface. Most studies associated with MPTCP have explored its utility with just two interfaces. One possible solution is interface emulation: in a recent work, Lim et al. [20] use MPTCP in a simulation framework for stored video and file transfer applications. However, to the best of our knowledge, the use of MPTCP for live video upstreaming has remained unexplored. In our work, we overcome the restriction that the number of wireless interfaces available on a mobile device places on the Multipath TCP protocol by virtualizing the wireless interface and scheduling these virtual interfaces in a time-shared weighted round-robin fashion, giving more transmission time to interfaces with higher throughput than to interfaces with lower throughput.

Another problem with the use of Multipath TCP for live video transmission is the variable throughput between the devices. The cellular throughput of individual neighbouring devices can be different because of reasons such as distance from the base station or different carriers. Although the creation of virtual interfaces provides a solution to the limited number of wireless interfaces on a device, it cannot address the problem of variable throughput. Variable throughput arises mainly for two reasons: first, not all interfaces are the same, i.e., different interfaces would be connected to different cellular hotspots and would therefore have different ranges of throughput.
(For the rest of this thesis, we will use subflow and interface interchangeably.)

Figure 1.1: An MPTCP connection going through 3 different access points.

Second, even for a particular interface, upload and download throughputs are usually not the same. An interface can have a high download throughput but a low upload throughput. Essentially, when the difference between the throughputs of two subflows is large, a large number of packets arrive out-of-order at the receiver [22]. MPTCP holds these out-of-order packets in a reordering queue at the receiving end until all the data sequence numbers are in order; a delayed packet can therefore block all the packets with higher data sequence numbers that arrived before it. This problem is even more prevalent in the upstream direction because of the lower upload speeds provided by cellular networks and the other shortcomings of cellular networks stated above. Therefore, in such scenarios, a better solution would be to drop the poorly performing subflow. However, the complexity of selecting the subflows to drop increases as the number of subflows associated with a particular transmission increases. To complement the scalability provided by wireless interface virtualization, we design an intelligent system that adds or drops subflows based on their ability to contribute to the aggregate throughput. Our system maintains a global view (aggregated throughput) of all the subflows to get a holistic view of how a subflow would perform with other subflows and drops the subflows that could prove to be a bottleneck. It transmits data over every single subflow in a weighted round-robin fashion; at the end of a subflow's transmission slot, the throughput it achieved is measured, and based on a comparison of this throughput with the global throughput, the subflow is either kept or dropped. We evaluate our system in three ways.
First, we test our implementation under stable conditions in a lab with stationary cellular hotspots. Second, we set up our system in an outdoor environment where the cellular hotspots are allowed to roam freely, moving in and out of range of the broadcasting client. Third, we deploy our system on a high-speed commuter train. Our experiments show that our system can aggregate the video throughput, depending on the number of cellular hotspots and their individual throughputs. Our system can also account for delay variability between individual subflows and how that affects their respective throughputs. Our system can isolate and drop low-performing subflows and probes them periodically to check for any changes in connectivity.

1.1 Research Contribution

In this thesis, we make the following contributions:

• Design an intelligent system that works by dropping/adding subflows to increase the aggregate upstream throughput for live video data.
• Incorporate a feedback loop that provides a server-side view of the throughput to the client, to help the client make an informed decision about which subflows to transmit on.
• Implement our globally aware, dynamically adjusted algorithm in the Linux kernel. Also, implement a selection algorithm that dynamically selects an efficient subflow to initiate the MPTCP connection.
• And last, evaluate our design/implementation under different network conditions to test it for robustness and scalability.

CHAPTER 2
RELATED WORK AND BACKGROUND

The recent trend of live video broadcasting has resulted in an exponential increase in upstream data traffic, and supporting this traffic has become the most engaging challenge for cellular network operators today. A promising and low-cost solution to this sudden increase in cellular data volumes is to utilize nearby mobile devices or WiFi access points to offload cellular data [16] and video downloads.
MicroCast [19] is a system that uses a group of mobile users, in proximity to each other, to cooperatively download a video. MicroCast performs under the basic assumption that all the individuals in a group are interested in the same video and therefore focuses on a dissemination algorithm that allows different parts of the video to be indefinitely buffered by individual phones. These different parts can then be exchanged between the group members to obtain the complete video. However, MicroCast does not deal with live video data. Also, as noted above, MicroCast works with downstream data, while our system deals with the less-researched problem of data upstreaming. Quality Aware Traffic Offloading (QATO) [16] enables a user with a poor cellular connection to offload its data, with the help of a base station, to another nearby user with a better cellular connection, which then helps transfer the data to the Internet. QATO suggests the use of WiFi Direct for transmission of data from the source node to the neighbouring node. Although QATO deals with uploading data, it only uses a single mobile node from its neighbours to offload data. Also, QATO works with stored data, such as pictures and text files, and does not consider live video data. Other work [11, 27] also deals with cellular offloading using nearby devices but focuses primarily on video streaming/downloading. Similar to our system, mobiLivUp [21] utilizes nearby smartphones and their cellular bandwidth to effectively increase the live video upstream bandwidth. mobiLivUp works by creating a small wireless network, using WiFi Direct, that nearby devices can then connect to. The video stream is split into multiple different streams and then sent to these connected devices to be uploaded to the server through the devices' network connections.
However, mobiLivUp requires an application-layer splitter and gatherer for handling the multiple data streams, which can limit its capability to work with unmodified video broadcasting systems. Instead, our system takes all the "splitting" and "gathering" complexity out of the application layer and into the operating system, which makes it compatible with whatever application the user might prefer. The Stream Control Transmission Protocol (SCTP) [23] can be used for transmitting multiple streams of data in parallel between two endpoints that have an established network connection. However, SCTP does not have the capability to scale to more than one wireless interface and is not necessarily optimized for live video data. In contrast to SCTP, Multipath TCP [12] is a protocol that works with multiple streams of data and has a built-in ability to scale to a large number of wireless interfaces.

2.1 Multipath TCP

Multipath TCP, as proposed by the IETF working group mptcp [12], is an effort towards extending the functionality of standard TCP by spreading a single data stream across different interfaces (e.g., WiFi and LTE on a smartphone). Multipath TCP can utilize the throughput achievable over every available interface, thereby increasing the aggregate throughput for the application. Multipath TCP appears as a regular stream-socket interface to the application; however, below the application layer, TCP subflows are created for each interface. The multiple subflows going through the different interfaces in combination form a Multipath TCP connection and use TCP options for signalling the necessary control information between the end hosts. Since every subflow is similar to a standard TCP connection between two endpoints, Multipath TCP appears to be a regular TCP connection to the firewalls/middleboxes along the subflows' paths. Thus, Multipath TCP works with unmodified applications and is deployable in today's Internet [26].
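This transparency can be made concrete with a short sketch. Note that this uses the upstream Linux kernel's MPTCP interface (kernel 5.6+, protocol number `IPPROTO_MPTCP = 262`) rather than the out-of-tree MPTCP kernel used in the experiments described here, where ordinary TCP sockets are upgraded transparently; the helper name and fallback logic are our own:

```python
import socket

# Linux protocol number for MPTCP on upstream kernels >= 5.6. The
# out-of-tree multipath-tcp.org kernel instead applies MPTCP to
# ordinary TCP sockets transparently, with no application change at all.
IPPROTO_MPTCP = 262

def make_stream_socket() -> socket.socket:
    """Create an MPTCP stream socket, falling back to plain TCP when the
    kernel lacks MPTCP support. From the application's point of view the
    result is a regular stream socket: connect()/send()/recv() are
    unchanged, which is the 'unmodified applications' property."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = make_stream_socket()
s.close()
```

Whether the connection actually uses multiple subflows is then decided by the kernel's path manager, not by the application.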
The benefits of using multiple paths to transmit data include better resource utilization, better throughput, increased redundancy, and smoother reaction to failures, as the connection can still persist through other paths when a particular path fails. MPTCP also has benefits for load balancing for multihomed servers and data centers, and for mobility [31]. Many industry leaders such as Apple and Samsung have already adopted this protocol in their latest iOS and Android systems. The iOS operating system now uses MPTCP to optimize the delay-sensitive traffic generated by Siri, Apple's personal assistant [3]. MPTCP implementations are now available on Amazon's EC2 instances, different Android-based handsets, the largest multihoming experimental platform NorNet [14], and Citrix's NetScaler. Even though Multipath TCP has the potential to scale to a large number of wireless interfaces, it is often restricted by the number of interfaces available on a mobile device. Niculescu et al. [10] designed a system named MultiWiFi that overcomes this limitation by creating virtual wireless interfaces on top of the physical interface, which then use the physical interface in a time-shared manner. MultiWiFi leverages Multipath TCP to achieve seamless mobility in WiFi by letting a client connect to multiple APs on the same channel and splitting the traffic between them using Multipath TCP, thereby enabling a WiFi client to achieve close to the maximum achievable throughput in a wide range of scenarios. However, MultiWiFi has been designed mainly for downstreaming content and is not optimized for live video data.

2.2 Wireless Network Interface Virtualization

Wireless network interface virtualization is a concept that involves establishing and maintaining concurrent Access Point (AP) connections for better robustness and throughput.
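Before surveying specific systems, the core time-sharing idea common to them can be sketched. The following is a toy scheduler of our own construction, not any of the cited implementations: one physical radio's scheduling cycle is divided among virtual interfaces in proportion to a weight, such as the throughput last measured through each AP.

```python
def weighted_slots(weights, cycle=1.0):
    """Divide one scheduling cycle of the physical radio among virtual
    interfaces in proportion to their weights (e.g., measured AP
    throughputs). Returns seconds of airtime per virtual interface."""
    total = sum(weights.values())
    return {vif: cycle * w / total for vif, w in weights.items()}

# Three virtual interfaces whose APs were last measured at 8, 4, and 2 Mbps
# (illustrative numbers): the fastest AP receives the largest airtime share.
slots = weighted_slots({"vif0": 8.0, "vif1": 4.0, "vif2": 2.0})
print(slots)
```

Real systems such as WiSwitcher additionally have to handle per-AP buffering (via Power Saving Mode) around each switch, which this sketch omits.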
Spider [28, 29] is a system that uses multi-AP selection, channel-based scheduling, and opportunistic scanning to maximize throughput while mitigating the overhead of association and DHCP. While Spider has the capability of managing multiple channels, the best results, including maximum throughput, are achieved when using multiple APs on the same channel. WiSwitcher [13] virtualizes the wireless driver sitting on top of the single radio card such that it appears as independent virtual stations associated with their respective APs. Bandwidth aggregation is done through Time Division Multiple Access (TDMA), i.e., the wireless client sequentially connects to all selected APs within range in a round-robin fashion over variable time cycles. Before switching to the next AP in the cycle, a client signals to the APs that it is going to enter Power Saving Mode (PSM), effectively telling the APs to buffer all packets addressed to the client. The percentage of time devoted to each AP is determined by the capacity of the wireless channels and their backhauls to maximize a utility function. The ViFi project [5] exploits the use of multiple APs to improve link-layer performance for common applications, such as Web browsing and VoIP, for moving vehicles. MultiNet [7], FatVAP [18], and PERM [30] are some more examples of systems that enable clients to associate with more than one nearby Base Station (BS) to increase throughput when the wireless capacity is greater than the capacity of the wired links behind the BSes. Thus, the idea of wireless network interface virtualization and associating with multiple APs for bandwidth aggregation is not new and has been studied quite extensively, but what is missing is the ability of unmodified applications to benefit from this bandwidth aggregation, as standard TCP is designed to work with a single interface by default. Multipath TCP enables unmodified applications to use these interfaces. Niculescu et al.
[10] make some efforts towards making unmodified applications Multipath-capable. However, MultiWiFi has been designed mainly for downstreaming content and is not optimized for live video data. For this reason, MultiWiFi is ill-equipped to handle the challenges of moving video data traffic from the client to the server. We design a system that lets applications use Multipath TCP without any modification to the application, with emphasis on live video transmission, which is not tolerant of delays.

CHAPTER 3
METHODOLOGY

In this chapter, we present our approach to efficiently aggregating the upstream bandwidth of multiple nearby devices for live videos. We design our system incrementally with the help of two algorithms: our first algorithm utilizes client-side information, including round-trip time and buffer sizes, to calculate subflow throughput, and the second algorithm builds on top of the first and introduces a feedback loop that conveys the number of bytes received at the server-side back to the client. This helps the client account for delays caused in the network and calculate server-side throughput more accurately. MPTCP works by establishing an initial connection over a single interface and starting data transmission over this initial subflow. It then adds subflows using the MP_JOIN option. MPTCP adds subflows one at a time and sends as many packets with the MP_JOIN option as there are interfaces available on an end device. In Figure 3.1, the live broadcasting device has two virtual interfaces, so the MPTCP connection would be established over one of them and the next subflow would be added later. In our system, we use the MPTCP fullmesh path manager, which means that MPTCP creates subflows equal to the product of the number of interfaces present on both sides. In Figure 3.1, the client/live broadcasting device has 2 virtual interfaces and the video server has a single interface.
Therefore, the fullmesh path manager would create 2 × 1 = 2 subflows. A physical wireless card is capable of supporting data rates of hundreds of Mbps, but the actual rate of any data transmission is determined by what the network can support. More specifically, the transmission rate is decided by the slowest (bottleneck) link inside the network. Therefore, virtualizing a physical wireless card into multiple interfaces does not necessarily affect its capability to transmit data, but enables us to transmit more data by splitting it over multiple interfaces. Multipath TCP can thus be used in conjunction with virtual interfaces to establish multiple subflows between the client and the server.

Figure 3.1: MPTCP connection when two virtual interfaces are present on the client-side.

However, when the virtual interfaces are connected to different cellular hotspots, they get different throughput. Consider an MPTCP connection with two subflows, where Subflow 1 has a much higher throughput than Subflow 2, i.e., packets sent over Subflow 1 arrive earlier than packets sent over Subflow 2. The MPTCP scheduler is designed to push more data towards Subflow 1, so only a small amount of data is sent over Subflow 2. Since Subflow 2 has poor throughput, the packets sent over it take much longer than packets sent over Subflow 1. MPTCP, like TCP, ensures that packets are passed to the upper application layer in the correct order. So, while the packets sent over Subflow 2 are in flight, i.e., travelling in the network, the packets sent over Subflow 1 with higher sequence numbers are held in the reordering buffer. Also, while the server is waiting for the packets sent over Subflow 2, it keeps asking the client for those packets through duplicate acknowledgements, sending the client into the fast retransmission state.
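The head-of-line blocking effect just described can be reproduced with a toy in-order-delivery model. This is our illustration, not from the thesis: the rates, packet counts, and the 1-in-10 scheduling split are arbitrary assumptions.

```python
# Toy model: in-order delivery over one fast and one slow subflow.
# The receiver can only release packet i after packets 0..i-1, so one
# slow packet stalls every fast packet queued behind it.

def delivery_times(assignments, rates):
    """assignments[i] -> subflow of packet i; each subflow sends its own
    packets back to back at its rate (packets/second)."""
    next_free = {s: 0.0 for s in rates}      # when each subflow can send again
    arrive = []
    for s in assignments:
        next_free[s] += 1.0 / rates[s]       # transmission time of one packet
        arrive.append(next_free[s])
    # in-order release: packet i waits for all earlier packets
    release, latest = [], 0.0
    for t in arrive:
        latest = max(latest, t)
        release.append(latest)
    return release

rates = {"fast": 100.0, "slow": 5.0}         # packets per second
n = 200
# the scheduler still puts every 10th packet on the slow subflow
assignments = ["slow" if i % 10 == 0 else "fast" for i in range(n)]
goodput = n / delivery_times(assignments, rates)[-1]
fast_only = n / delivery_times(["fast"] * n, rates)[-1]
print(f"aggregate: {goodput:.1f} pkt/s, fast subflow alone: {fast_only:.1f} pkt/s")
```

With these assumed numbers, the slow subflow nominally adds capacity, yet in-order release caps the aggregate at 50 pkt/s versus 100 pkt/s for the fast subflow alone.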
The high mismatch in throughput across the two subflows thus results in an aggregate throughput that can be much lower than what we could achieve by using only Subflow 1. Therefore, in our system, we aim to increase the aggregate bandwidth by dropping subflows that would otherwise increase the number of out-of-order packets. Furthermore, in the network, every MPTCP subflow is treated as an individual TCP flow. Therefore, to maintain the backward compatibility of MPTCP with TCP (if one of the end hosts does not support MPTCP, the connection should fall back to regular TCP), every subflow has an associated sending/receiving buffer, and the end hosts maintain data structures containing metadata about the subflow, similar to the metadata maintained for a standard TCP connection. We leverage this information to obtain a server-side view of all the MPTCP subflows for more accurate throughput calculations at the client-side. In this section, we first propose a base algorithm that uses the round-trip time, rtt, and the buffer length maintained at the client-side (i.e., the user broadcasting live video) to estimate the throughput at the server-side (the video server in the cloud where the live video is being uploaded). However, the throughput estimated at the client can differ completely from what the receiver sees because of the delays introduced inside the network and out-of-order packets. Therefore, in our final algorithm, we introduce a feedback loop to bridge the gap between the client's and the server's views.

3.1 Base Algorithm: Client-Side Throughput Calculation

An MPTCP connection maintains information about the packet round-trip times and the transmission buffer length at the sender-side (client-side, in our case) for every subflow. Therefore, throughput calculations are often done on the sender-side. We use these calculations in our first approach. The selection of subflows can be done in two ways.
First, we can measure the throughput of the interfaces in advance, before starting the transmission, and then add subflows based on the throughput using MP_JOIN. Second, we can start the transmission with as many subflows as we have active interfaces, and then drop the subflows that later prove to be a bottleneck. The first approach works well when we have only two interfaces, as in that case we can simply select one interface if the difference between the throughputs is large. However, this approach does not scale well: the complexity of selecting the interfaces to start the connection with grows as the number of active interfaces increases. Also, in our scenario, the cellular hotspots are mobile and free to move in and out of range of the broadcasting client. In such a dynamic environment, selecting a set of 'good' interfaces for transmission can prove to be a challenging task, and a wrong selection can result in less than optimal throughput. Lastly, because of the mobile nature of the cellular hotspots, a poorly performing subflow can recover later with time. Therefore, we decide to make a bottleneck subflow a backup flow instead of completely dropping it, as explained in Section 3.1.1. We begin with the assumption that every subflow is a good flow, i.e., that it can increase the aggregate throughput of the entire connection. By making this assumption, we give each subflow a fair chance for consideration: each subflow is initially scheduled to transmit some data, which in turn helps to better evaluate its performance. Consider the two scenarios in Figure 3.2. Even though CD2 is clearly a better choice than CD1 because of its better channel conditions, the client still chooses to transmit some data over CD1; but as CD1 moves away from the client, the client decides to drop the CD1 subflow.
For every subflow, we measure its throughput based on the buffer length or maximum segment size, mss, and the round-trip time, rtt. At the same time, we also keep track of the global throughput, i.e., the throughput achieved over all the subflows. If the throughput of a subflow is less than a certain fraction of the global average, we drop that subflow so that it is not scheduled the next time its turn comes up. Initially, the client mobile device starts transmitting the live stream using MPTCP over the first subflow, and MPTCP adds subflows without any interference from the system. The system schedules interfaces from a queue, i.e., brings them on-channel, and a particular interface stays on-channel as long as its subflow buffer has data to transmit. Once the buffer is emptied, the interface is taken off-channel and a null frame is sent to the CD to inform it that the interface has entered power save mode (PSM) and that it should buffer any packets for the client. When this same interface comes back on-channel, it informs the CD, and the CD delivers all the buffered packets to the interface. Before an interface goes off-channel, we sample its throughput and compare it to the global throughput of the MPTCP connection. If the throughput associated with CDi is less than a certain fraction of the global throughput,

    B_CDi < α · B_gbl,    (3.1)

where α is the throughput threshold, then the interface is marked inactive, moved to the probe list, and the probe timer is set.

Figure 3.2: Performance when a client is connected to two CDs. Scenario 1 (left): The client connected to CD1 and CD2 pushes more data towards CD2 because of its better channel condition. Scenario 2 (right): CD1 moves away from the client and the client moves all the traffic to CD2 while keeping CD1 as a backup.
Now, when the probe timer expires, all the interfaces from the probe list are moved to the active list and their performance is evaluated once again. Algorithm 1 summarizes our base approach. In Chapter 4, we evaluate this algorithm under different scenarios to determine the optimal value of α, the throughput threshold used in the procedure CDSwitch. The use of an exponentially weighted moving average (EWMA) for the calculation of the global throughput ensures that, as the algorithm cycles through the list of interfaces, the influence of old samples decays exponentially as new samples are added to the average. Note also that even when a particular interface performs poorly during its turn, we still add its throughput to the global average. This is useful in scenarios where, for instance, the client moves away from the set of CDs it is connected to. In this case, the global average throughput goes down as the next interfaces are scheduled, and the algorithm can adapt to the new, albeit worse, network conditions.

Algorithm 1 BaseApproach
 1: srtt ← 0                                    ▷ Smoothed RTT calculated over time
 2:
 3: procedure CalculateThroughput(rtt, mss)
 4:     B ← 0
 5:     β ← 0.125
 6:     if srtt ≠ 0 then
 7:         srtt ← (1 − β) · srtt + β · rtt     ▷ rtt: instantaneous RTT
 8:     else
 9:         srtt ← rtt · 8
10:     B ← mss / srtt
11:     return B
12:
13: procedure CDSwitch( )
14:     Bgbl ← 0
15:     β ← 0.125
16:     for CDi in CD list do
17:         Bi ← CalculateThroughput(rtt, mss)
18:         if Bi < α · Bgbl then
19:             // Mark CDi as backup
20:             if probe timer is not set then
21:                 // Set probe timer
22:         if probe timer == 0 then            ▷ probe timer expired
23:             for CDi in CD list do
24:                 if CDi is backup then
25:                     // Mark CDi as active
26:         Bgbl ← (1 − β) · Bgbl + β · Bi      ▷ EWMA calculation
27:     return
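As a user-space illustration of the base algorithm, the following sketch combines the smoothed-RTT throughput estimate with the threshold test of Equation (3.1). This is our own approximation; the actual implementation lives in the kernel, and names such as `Subflow` and `cd_switch` are ours.

```python
# Illustrative user-space re-implementation of Algorithm 1; the thesis
# implements this inside the Linux kernel's 802.11 module.

BETA = 0.125          # EWMA weight, matching TCP's srtt smoothing
ALPHA = 0.20          # throughput threshold found experimentally in Chapter 4

class Subflow:
    def __init__(self, name):
        self.name = name
        self.srtt = 0.0
        self.backup = False

    def throughput(self, rtt, mss):
        # Smoothed RTT, seeded at 8x the first sample as in the listing
        if self.srtt:
            self.srtt = (1 - BETA) * self.srtt + BETA * rtt
        else:
            self.srtt = rtt * 8
        return mss / self.srtt

def cd_switch(subflows, samples, b_gbl=0.0):
    """samples: {subflow name: (rtt, mss)}; returns the updated global EWMA."""
    for sf in subflows:
        rtt, mss = samples[sf.name]
        b_i = sf.throughput(rtt, mss)
        # A subflow below a fraction ALPHA of the global average is
        # demoted to backup instead of being dropped outright.
        sf.backup = b_i < ALPHA * b_gbl
        # Poor samples still feed the global EWMA, so the estimate can
        # adapt when every CD degrades at once.
        b_gbl = (1 - BETA) * b_gbl + BETA * b_i
    return b_gbl
```

For example, running `cd_switch` on a fast subflow (10 ms RTT) followed by a slow one (1 s RTT) marks only the slow one as backup, since its estimate falls below 20% of the global average built up by the fast subflow.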
3.1.1 Rationale Behind Probe Lists

Since every device involved is mobile (the client requesting the connection and the devices acting as access points are all cellular devices), a particular CD may move out of range of the client device (such that its throughput falls below the average throughput) but move back into range sometime in the future. Therefore, while trying to increase our aggregate throughput, if we disconnected whenever a device moves out of range and tried to reconnect when it comes back into range, we might end up wasting a significant amount of time connecting to a CD, up to 15 seconds [9], because of the DHCP and WEP/WPA key exchanges involved. So far, we have talked about dropping the interfaces that are performing poorly, but since re-associating with a particular CD carries such a high performance penalty, in practice we move the interface associated with that CD to the "probe list" and periodically check for any changes in the bandwidth of the members of the list. If the bandwidth increases, we can avoid the hassle of a new association and save the time we would otherwise have spent trying to reconnect to the CD. Also, in a truly mobile scenario, there is no accurate way of predicting the throughput the client will obtain from a CD in the very near future, and establishing a new connection every time locks the device to one CD for a certain period, resulting in suboptimal performance; it therefore makes sense to stick with a choice for some amount of time, in case the CD is only facing temporary short-term fluctuations that affect its capacity.

3.2 Algorithm 2: Globally Aware Dynamically Adjusted Approach

The Internet is a best-effort network, meaning that packets will be delivered if possible, but may also be dropped.
Therefore, our first approach with sender-side throughput calculations may work well in theory, but in practice, these dropped packets have to be retransmitted by MPTCP, and this retransmission reduces the throughput on the receiver-side for two reasons:

• The lost data need to be sent again, which consumes time. The delay introduced by this retransmission is inversely proportional to the rate of the bottleneck link, i.e., the slowest link in the network between the sender and the receiver.

• The MPTCP protocol uses acknowledgements as a means of feedback about which packets were delivered, and detection of undelivered packets relies heavily on these acknowledgement packets. Due to propagation delays, acknowledgements reach the sender only with some latency, which further delays retransmission. In most practical scenarios, this is the most significant contribution to the extra delay caused by retransmission.

So, for a precise calculation of the throughput, these factors need to be taken into consideration. The MPTCP rate is regulated by its congestion window size, slow-start duration, and sender (and receiver) window size. A suboptimal configuration of these variables makes the measured subflow throughput lower than its true value. More importantly, especially on wireless channels, changes in channel conditions also trigger adaptation of MPTCP's congestion and flow controls. Such adaptation can likewise cause the calculated throughput to differ from the actual value. All these conditions can leave the sender and the receiver with two completely different views of throughput and delay. The throughput calculated at the sender-side, while giving a good estimate of the receiver-side throughput, does not reflect the exact throughput that the receiver perceives. The receiver has a more accurate global understanding of all the subflows, whereas the sender is responsible for managing the interfaces.
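As a rough numeric illustration of the two effects above (our own toy model, not a measurement from the thesis): suppose a subflow nominally carries 5 Mbps with a 50 ms RTT, every lost packet is resent exactly once, and each loss also costs one RTT of acknowledgement feedback delay before the resend starts.

```python
# Toy estimate of receiver-side goodput under retransmissions.
# Assumptions (illustrative only): each lost packet is detected one RTT
# after the loss and resent once at the bottleneck rate.

def goodput(rate_mbps, loss, rtt_s, mss_bytes=1448, packets=10_000):
    pkt_s = mss_bytes * 8 / (rate_mbps * 1e6)   # serialization time of one packet
    retx = int(packets * loss)
    # original sends + resends, plus one RTT of feedback delay per loss
    total_time = (packets + retx) * pkt_s + retx * rtt_s
    return packets * mss_bytes * 8 / total_time / 1e6

print(goodput(5.0, 0.00, 0.050))   # the sender's loss-free view: 5.0 Mbps
print(goodput(5.0, 0.02, 0.050))   # 2% loss, 50 ms RTT: ~3.4 Mbps
```

With these assumed numbers, the RTT feedback term (10 s over the whole transfer) dwarfs the resend time (about 0.46 s), consistent with the observation above that acknowledgement latency is the dominant contribution.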
To bridge this gap between the sender and the receiver, we introduce a feedback loop from the receiver to the sender. The feedback can be used in two ways to improve our base algorithm, which we describe next.

3.2.1 Approach 1: Globally Aware Subflow Selection

As discussed above, the factors that affect the throughput of a particular subflow are mss, rtt, the congestion window, the retransmission (loss) rate, and other network delays. The client-side calculates the rtt, but the server has a better understanding of how many bytes are received on which subflow. Therefore, the server sends this information to the client as feedback. The information is embedded in the options field of the TCP acknowledgement headers, which the client reads when the acknowledgement is received. MPTCP packets are similar to TCP packets in the sense that an MPTCP packet has the same TCP header as a regular TCP packet, with the MPTCP control information carried in the TCP options field. The objective of this algorithm is also to optimize the value of α, the throughput threshold, in the procedure CDSwitch in Algorithm 1.

3.2.2 Approach 2: Subflow Selection Based on an Adaptive Throughput Threshold

The disadvantage of a fixed threshold is that it treats every connection the same, whereas a threshold that is perfectly reasonable for one transmission may be too high for another, leading to lost throughput, or too low for yet another, slowing down the whole connection. Therefore, the challenge of determining which subflows are poor performers, and how many of them exist, remains. To mitigate this problem, we dynamically adjust α. We start with a high value of α and decrease it every time Bgbl increases or remains the same.
However, when we see a decrease in Bgbl, we set the value of α to the mid-point of the current value of α and the initial value of α. Algorithm 2 gives a pseudocode implementation of our dynamically adjusted approach. Setting the value of α to the mean of its current value and its maximum value ensures that α never exceeds αmax. Figure 3.3 illustrates the variation of α corresponding to one of the experiments explained in Chapter 4.

Figure 3.3: The throughput threshold decreases step by step every time an AP contributes to the global throughput, but it increases to the mean of αcurr and αmax.

3.3 Initial Subflow Selection

When multiple interfaces are available, the Linux operating system selects a particular interface as the primary interface. The problem with a static primary interface arises when the primary interface is connected to a bad CD or a CD that is too far away. If the path through the primary interface is congested or its rtt is very high, then MPTCP makes a few attempts to establish the connection but ultimately gives up and returns a failure. In this case, the MPTCP connection can never be established, as the first MPTCP subflow is created over the primary interface. To solve this problem, we have implemented a simple algorithm that dynamically scans all the interfaces and establishes the MPTCP connection over the first working interface. To find a working interface, the algorithm cycles through the list of interfaces, starting with the primary interface, and sends a few echo request messages to the receiver over every interface. The first interface on which it gets an echo response is used for connection establishment.
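A minimal sketch of this echo-probe scan, with the prober injected as a callable (our user-space approximation; a real prober might wrap something like `ping -I <iface> -c 1 <server>` on Linux, and the interface names below are illustrative):

```python
# Sketch of the initial-interface scan performed before MPTCP connection
# establishment. `probe` sends one echo request over the named interface
# and returns True if a response arrives.

def first_working_interface(interfaces, probe, attempts=3):
    """Return the first interface that answers an echo probe, scanning
    from the primary interface onward; None if nothing responds."""
    for iface in interfaces:
        if any(probe(iface) for _ in range(attempts)):
            return iface
    return None

# Example with a fake prober: only the second virtual interface has a path.
up = {"wlan0.1"}
pick = first_working_interface(["wlan0.0", "wlan0.1"], lambda iface: iface in up)
```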
While scanning through the list of interfaces, the algorithm looks for the first working interface rather than the best interface, because MPTCP will ultimately create subflows through all the interfaces, so there is no need to spend effort finding the best one.

Algorithm 2 Globally Aware Dynamically Adjusted Approach
 1: procedure CDSwitch( )
 2:     Bgbl ← 0
 3:     β ← 0.125
 4:     αmax ← 0.20
 5:     α ← αmax
 6:
 7:     for CDi in CD list do
 8:         Bi ← CalculateThroughput(rtt, bytes received)
 9:         Biexp ← CalculateThroughput(rtt, mss)
10:         if Biexp < Bi and Bi ≥ α · Bgbl then
11:             α ← α − 0.01
12:         else if Biexp > Bi and Bi ≥ α · Bgbl then
13:             // Do nothing
14:         else
15:             α ← (α + αmax) / 2
16:             // Mark CDi as backup
17:             if probe timer is not set then
18:                 // Set probe timer
19:
20:         Bgbl ← (1 − β) · Bgbl + β · Bi      ▷ EWMA calculation
21:         // Switch to the next active interface
22:
23:     if probe timer == 0 then                ▷ probe timer expired
24:         for CDi in CD list do
25:             if CDi is backup then
26:                 // Mark CDi as active
27:     return

CHAPTER 4

EVALUATION

We have implemented our globally aware dynamically adjusted algorithm in the Linux 3.14.0 kernel, patched with MPTCP v0.89. Most of the changes are made to the 802.11 module of the Linux kernel, with an Atheros 9K NIC as the wireless card, although the changes are not tied to any particular NIC. In this chapter, we evaluate each of our contributions experimentally, in indoor as well as outdoor environments, to account for different network conditions and to test our system for scalability and robustness. First, we evaluate our system with different throughputs and delays to find an optimal value for the throughput threshold. Next, we run tests with throughputs above and below the throughput threshold to analyze how our system handles a poorly performing subflow. Then, we run experiments to test our system in comparatively stable indoor and outdoor environments.
Finally, we run our system with two different cellular links in a real-world live video transmission on a commuter train to test its performance in a real-world scenario.

4.1 Calculating the Throughput Threshold

In this section, we run tests with different throughputs and delays to figure out an optimal value of the throughput threshold, so that our system is able to differentiate between a subflow that contributes to the aggregate throughput and a bottleneck subflow. To create unbalanced network conditions, we use the Linux traffic control (tc) utility to control the egress (upload) throughput over a single subflow while letting the other subflow run uncontrolled. During these experiments, both subflows are connected to the on-campus WiFi, which can support upload speeds of up to 14 Mbps. We also performed the experiments while connected to two different cellular devices (CDs), to account for the real use case of our problem statement. Our implementation compares the throughput of a particular interface with the global average just before switching to the next CD. Switching from one CD to another requires transmitting a null IEEE 802.11 frame to the CD with the power-save mode (PSM) bit set, indicating that the client is entering PSM. This tells the CD to buffer any packets destined for the client. The algorithm then updates the SSID and MAC address to those of the next CD, along with the encryption parameters if they have changed since the last switch. If the CDs are on different channels, the device driver is also set to the channel frequency of the next CD. After this, the device is ready to transmit through the second CD. Our experimental setup is shown in Figure 4.1. We evaluated our algorithm by creating two virtual interfaces and connecting to two different access points through them.
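The CD-switch sequence just described can be sketched as follows. This is a hypothetical user-space model of the steps, not the kernel code: `Driver` and all its methods are illustrative stand-ins for the driver operations the thesis performs inside the 802.11 module.

```python
# Hypothetical model of the CD switch; each step is recorded so the
# sequence can be inspected.

class Driver:
    def __init__(self):
        self.log = []
    # Illustrative stand-ins for real driver operations.
    def send_null_frame(self, cd, psm):  self.log.append(("null_frame", cd, psm))
    def set_bss(self, cd):               self.log.append(("set_ssid_mac", cd))
    def set_channel(self, freq):         self.log.append(("set_channel", freq))

def switch_cd(driver, current_cd, next_cd):
    """Move the virtual interface from current_cd to next_cd."""
    # 1. Tell the current CD we are entering power-save mode, so it
    #    buffers packets destined for the client.
    driver.send_null_frame(current_cd["name"], psm=True)
    # 2. Update the SSID/MAC (and, if changed, encryption parameters).
    driver.set_bss(next_cd["name"])
    # 3. Retune only if the CDs sit on different channels.
    if next_cd["freq"] != current_cd["freq"]:
        driver.set_channel(next_cd["freq"])
    # 4. Come back on-channel at the new CD, which flushes its buffer.
    driver.send_null_frame(next_cd["name"], psm=False)

d = Driver()
switch_cd(d, {"name": "CD1", "freq": 2412}, {"name": "CD2", "freq": 2437})
```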
For interface 1, we were connected to our campus WiFi and received a fairly stable upload throughput of almost 14 Mbps; for interface 2, we connected to a cellular device and limited the upload bandwidth to 75%, 50%, 30%, 20%, 15%, and 10% of the maximum throughput of interface 1. By emulating such unbalanced rate conditions, we create situations where one CD is near the wireless client while the other is further away. Our analysis shows that the number of out-of-order packets increases as the difference between the throughputs of the two subflows increases. For this analysis, we used data files with sizes ranging from 100 KB to 100 MB.

Figure 4.1: Our experimental setup involved two main scenarios. On the left, our broadcasting client is connected to a cellular hotspot on one interface and to the campus WiFi on the other interface. On the right, our broadcasting client is connected to two cellular hotspots going through different cellular network providers.

Figure 4.2 shows the effect of the low capacity subflow on the overall performance of MPTCP. Since the file size is small, the effect of the low performing subflow on MPTCP's performance cannot be observed distinctly. As the file size increases, the capacity of subflow 2 significantly impacts the performance of the overall MPTCP connection, as can be seen in Figure 4.3. MPTCP comes with a smart built-in scheduler that pushes data to the subflow with the shortest rtt until its send buffer is full, but MPTCP does not take into account the effect of out-of-order packets when the difference between the throughputs of the subflows is large, and therefore still ends up sending some data over the bottleneck subflow.
The delay introduced by out-of-order packets may not be significant when the video duration is small, say a couple of seconds, but as the video duration increases, this delay increasingly affects the overall throughput achieved at the receiver end. Next, we ran the same experiments while connected to two different CDs through our virtual interfaces. In this case, the highest upload throughput we were able to achieve on a single interface was around 5 Mbps, and we limited the upload bandwidth of interface 2 to 75%, 50%, 30%, 20%, 15%, and 10% of that of interface 1. We used the same data files for these experiments as well. The results of our cellular measurements can be found in the Appendix. From the wide range of experiments that we performed, we found that a throughput threshold of 0.2 (20%) works best to differentiate good interfaces from bad ones.

4.2 Base Algorithm vs. Base Implementation

After finding the optimal value for α, we plugged it into Algorithm 1 and evaluated system performance. To demonstrate the function of our base algorithm, we ran experiments under the following conditions: we created two virtual interfaces, one connected to an on-campus WiFi access point with an upload throughput of around 14 Mbps, and the second connected to a CD with a variable upload throughput ranging from 4 Mbps to 1.6 Mbps. We ran the experiments for extended periods of time to account for the large amount of data uploaded during a live video transmission that may last tens of minutes or more. Figure 4.4 shows that when both subflows are performing well, MPTCP is able to combine the throughput of the two subflows.
Figure 4.2: The effect of out-of-order packets on the combined subflows is not so significant when the file size is small (100 KB), as shown by the MPTCP bar. (a) Throughput of subflow 2 > 20% of subflow 1; (b) throughput of subflow 2 < 20% of subflow 1.

Figure 4.3: The effect of out-of-order packets is much more noticeable for bigger file sizes when the two subflows are combined, as shown by the MPTCP bar. (a) 5 MB, subflow 2 > 20% of subflow 1; (b) 5 MB, subflow 2 < 20% of subflow 1; (c) 100 MB, subflow 2 > 20% of subflow 1; (d) 100 MB, subflow 2 < 20% of subflow 1.

Figure 4.4: MPTCP combines throughput when both subflows are performing well.

As can be seen in Figure 4.4, the expected and the actual values of the two subflows are quite similar. Next, we test the baseline system under unbalanced network conditions where on one interface we are connected to a high-speed WiFi, but on the other we are connected to a congested cellular network. Figure 4.5 shows how the MPTCP performance went down as the performance of the better interface went up. This reflects the impact of out-of-order packets when the difference between the capacities of the two subflows is large.
We next ran our system under similar conditions and, as shown in Figure 4.6, our system successfully recognized the bad interface/subflow and stopped transmitting on it. Since we periodically probe the backup interfaces for improvements, in Figure 4.6 the second interface comes back online at about 17 seconds, but is again shut down for its low performance.

Figure 4.5: MPTCP throughput goes down when one of the subflows is not performing well. The figure shows the actual throughput achieved versus the expected throughput that should be obtained when combining the two subflows.

We performed the same experiments while connected to two different cellular hotspots and obtained results similar to those shown in Figure 4.5. Our base algorithm was able to recognize and isolate low performing subflows to maintain the throughput of the better performing subflows. A summary of all the experiments that we conducted with our base algorithm can be found in the Appendix.

4.3 Feedback Loop vs. Base Algorithm

In this section, we test our implementation that introduces the feedback loop into our algorithm to see how it performs under different network conditions. Along with incorporating the feedback into our throughput calculations, we adjust our throughput threshold based on the feedback we receive from the server.

Figure 4.6: The expected throughput shows what the combined throughput of the two subflows should be, but out-of-order packets result in a throughput that is much lower than expected. Therefore, our system pushes all the data towards CD2 and puts CD1 on backup.
We decrement the throughput threshold every time the received feedback indicates that the server-side throughput has increased or at least remained the same; but if the feedback indicates that the server-side throughput has decreased because of a particular subflow, we block that interface and increment the throughput threshold to account for the feedback. We first test our system in an indoor setting with two virtual interfaces connected to two cellular CDs. The testing conditions in the indoor environment were stable, but we still saw an improvement over our base algorithm. We then performed the experiments in an outdoor environment, with stationary as well as mobile cellular CDs. These experiments performed similarly to the indoor ones, and the system was again able to select subflows so as to maximize the achieved throughput. As can be seen in Figure 4.7, initially, when only CD1 is present, the MPTCP connection closely tracks the throughput of CD1, and when CD2 comes online, the system is able to combine the throughput of both. Lastly, we boarded a commuter train in our city and evaluated our system under the unstable network conditions experienced in a fast-moving vehicle. Performance in a fast-moving vehicle is a challenge because a cell phone moves through the coverage ranges of many base stations, experiencing frequent handoffs that can affect its performance and achievable data rates. We traveled north on the commuter train, out of the city, and the further north we went, the more unpredictable the data rates of the cellular CDs became; but our system was still able to closely track the best throughput that the two interfaces could provide.

Figure 4.7: Throughput obtained, based on the cellular traces, for individual CDs, as well as the combined system throughput.
Figure 4.8 shows the cellular traces for the tests we ran on the commuter train. Both interfaces suffer from bad connectivity, but one is worse than the other. Because of the bad network conditions, the working subflow often goes into slow start, as can be seen at around 100 seconds, but the system is still able to track the throughput of the better performing interface. The bad network conditions also increased the number of retransmissions required by the protocol, as shown in Figure 4.9.

Figure 4.8: Throughput as calculated from the cellular traces collected on a commuter train.

Figure 4.9: The number of retransmissions, based on the cellular traces obtained from the train experiments.

CHAPTER 5

CONCLUSION AND FUTURE WORK

In this thesis, we address the challenge of increasing upstreaming throughput by aggregating the throughput of nearby cellular devices. We propose a novel algorithm that schedules the virtual wireless interfaces associated with a particular MPTCP connection to maximize the throughput perceived at the server-side. Our system operates on the feedback it receives from the server and uses that feedback to add or drop subflows by means of a dynamically adjusted threshold. We have implemented our algorithm in the Linux kernel and evaluated it under different network conditions. Our evaluations show that we achieve close to optimal throughput in situations where the throughputs of the subflows were close to each other. In situations where a huge difference exists between the throughputs of the subflows, our algorithm was able to recognize and isolate the low performing subflow and maintain a higher throughput with the help of the better performing subflows.
Our research can progress along the following directions: we plan to examine network coding methods to deal with out-of-order packets, although these methods are unlikely to entirely eliminate the need for dropping poorly performing subflows. We also need to evaluate our system when multiple WiFi channels are used.

APPENDIX

MPTCP MEASUREMENTS

Figure A.1: Measuring the throughput threshold for cellular traces. The right side shows the effect of out-of-order packets on the transmission.

Figure A.2: Performance with the low capacity subflow.

Figure A.3: Combined throughput achieved.

REFERENCES

[1] Huawei demonstrates 5G-based remote driving with China Mobile and SAIC Motor. http://www.huawei.com/en/press-events/news/2017/6/5G-based-RemoteDriving.

[2] I rode in a car in Las Vegas that was controlled by a guy in Silicon Valley. https://www.technologyreview.com/s/609937/i-rode-in-a-car-in-las-vegas-itsdriver-was-in-silicon-valley/.

[3] Use Multipath TCP to create backup connections for iOS. https://support.apple.com/enus/HT201373.

[4] A. Aijaz, H. Aghvami, and M.
Amani, A survey on mobile data offloading: technical and business perspectives, IEEE Wireless Communications, 20 (2013), pp. 104–112. [5] A. Balasubramanian, R. Mahajan, A. Venkataramani, B. N. Levine, and J. Zahorjan, Interactive wifi connectivity for moving vehicles, ACM SIGCOMM Computer Communication Review, 38 (2008), pp. 427–438. [6] M. M. Bharadwaj and J. Karjee, Improved cell coverage in hilly areas using cellular antennas, International Journal of Advanced Networking and Applications, 7 (2016), p. 2953. [7] R. Chandra and P. Bahl, Multinet: Connecting to multiple ieee 802.11 networks using a single wireless card, in INFOCOM 2004. Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, IEEE, 2004, pp. 882–893. [8] Y.-C. Chen, Y.-s. Lim, R. J. Gibbens, E. M. Nahum, R. Khalili, and D. Towsley, A measurement-based study of multipath tcp performance over wireless networks, in Proceedings of the 2013 Conference on Internet Measurement Conference, ACM, 2013, pp. 455–468. [9] Y.-C. Chen, E. M. Nahum, R. J. Gibbens, D. Towsley, and Y.-s. Lim, Characterizing 4g and 3g networks: Supporting mobility with multi-path tcp, University of Massachusetts Amherst, Tech. Rep, (2012). [10] A. Croitoru, D. Niculescu, and C. Raiciu, Towards wifi mobility without fast handover., in NSDI, 2015, pp. 219–234. [11] S. Dimatteo, P. Hui, B. Han, and V. O. Li, Cellular traffic offloading through wifi networks, in Mobile Adhoc and Sensor Systems (MASS), 2011 IEEE 8th International Conference on, IEEE, 2011, pp. 192–201. [12] A. Ford, C. Raiciu, M. Handley, and O. Bonaventure, Tcp extensions for multipath operation with multiple addresses, tech. rep., IETF, 2013. 36 [13] D. Giustiniano, E. Goma, A. Lopez, and P. Rodriguez, Wiswitcher: an efficient client for managing multiple aps, in Proceedings of the 2nd ACM SIGCOMM Workshop on Programmable Routers for Extensible Services of Tomorrow, ACM, 2009, pp. 43–48. [14] E. G. Gran, T. Dreibholz, and A. 
Kvalbein, Nornet core–a multi-homed research testbed, Computer Networks, 61 (2014), pp. 75–87. [15] B. Han, P. Hui, V. A. Kumar, M. V. Marathe, J. Shao, and A. Srinivasan, Mobile data offloading through opportunistic communications and social participation, IEEE Transactions on Mobile Computing, 11 (2012), pp. 821–834. [16] W. Hu and G. Cao, Quality-aware traffic offloading in wireless networks, IEEE Transactions on Mobile Computing, 16 (2017), pp. 3182–3195. [17] C. V. N. Index, Cisco visual networking index: global mobile data traffic forecast update, 2014–2019, Tech. Rep, (2015). [18] S. Kandula, K. C.-J. Lin, T. Badirkhanli, and D. Katabi, Fatvap: Aggregating ap backhaul capacity to maximize throughput., in NSDI, vol. 8, 2008, pp. 89–104. [19] A. Le, L. Keller, H. Seferoglu, B. Cici, C. Fragouli, and A. Markopoulou, Microcast: Cooperative video streaming using cellular and d2d connections, arXiv preprint arXiv:1405.3622, (2014). [20] Y.-s. Lim, Y.-C. Chen, E. M. Nahum, D. Towsley, and R. J. Gibbens, Improving energy efficiency of mptcp for mobile devices, arXiv preprint arXiv:1406.4463, (2014). [21] P. Lundrigan, M. Khaledi, M. Kano, N. D. Subramanyam, and S. Kasera, Mobile live video upstreaming, in Teletraffic Congress (ITC 28), 2016 28th International, vol. 1, IEEE, 2016, pp. 121–129. [22] H. Nam, D. Calin, and H. Schulzrinne, Towards dynamic mptcp path control using sdn, in NetSoft Conference and Workshops (NetSoft), 2016 IEEE, IEEE, 2016, pp. 286–294. [23] L. Ong, An introduction to the stream control transmission protocol (sctp), (2002). [24] C. Paasch, G. Detal, F. Duchene, C. Raiciu, and O. Bonaventure, Exploring mobile/wifi handover with multipath tcp, in Proceedings of the 2012 ACM SIGCOMM Workshop on Cellular Networks: Operations, Challenges, and Future Design, ACM, 2012, pp. 31–36. [25] C. Paasch, R. Khalili, and O. 
Bonaventure, On the benefits of applying experimental design to improve multipath tcp, in Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies, ACM, 2013, pp. 393–398. [26] C. Raiciu, C. Paasch, S. Barre, A. Ford, M. Honda, F. Duchene, O. Bonaventure, and M. Handley, How hard can it be? designing and implementing a deployable multipath tcp, in Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, USENIX Association, 2012, pp. 29–29. [27] F. Rebecchi, M. D. De Amorim, V. Conan, A. Passarella, R. Bruno, and M. Conti, Data offloading techniques in cellular networks: A survey, IEEE Communications Surveys & Tutorials, 17 (2015), pp. 580–603. 37 [28] H. Soroush, P. Gilbert, N. Banerjee, M. D. Corner, B. N. Levine, and L. P. Cox, Spider: improving mobile networking with concurrent wi-fi connections, in SIGCOMM, 2011. [29] H. Soroush, P. Gilbert, N. Banerjee, B. N. Levine, M. D. Corner, and L. P. Cox, Concurrent wi-fi for mobile users: analysis and measurements, in CoNEXT, 2011. [30] N. Thompson, G. He, and H. Luo, Flow scheduling for end-host multihoming., in INFOCOM, Citeseer, 2006. [31] D. Wischik, C. Raiciu, A. Greenhalgh, and M. Handley, Design, implementation and evaluation of congestion control for multipath tcp., in NSDI, vol. 11, 2011, pp. 8–8. |
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s65f4rc3 |



