| Title | eZTrust: Network-independent perimeterization for microservices |
| Publication Type | thesis |
| School or College | College of Engineering |
| Department | Computing |
| Author | Zaheer, Zirak |
| Date | 2019 |
| Description | Emerging microservices-based workloads introduce new security risks in today's data centers as attacks can propagate laterally within the data center relatively easily by exploiting cross-service dependencies. As countermeasures for such attacks, traditional perimeterization approaches, such as network-endpoint-based access control, do not fare well in highly dynamic microservices environments (especially considering the management complexity, scalability and policy granularity of these earlier approaches). In this work, we propose eZTrust, a network-independent perimeterization approach for microservices. eZTrust allows data center tenants to express access control policies based on fine-grained workload identities, and enables data center operators to enforce such policies reliably and efficiently in a purely network-independent fashion. To this end, we leverage eBPF, the extended Berkeley Packet Filter framework, to trace authentic workload identities and apply per-packet tagging and verification. We demonstrate the feasibility of our approach through extensive evaluation of our proof-of-concept prototype implementation. We find that, when comparable policies are enforced, eZTrust incurs 3-6 times lower packet latency and 1.5-2.5 times lower CPU overhead than traditional perimeterization schemes. |
| Type | Text |
| Publisher | University of Utah |
| Dissertation Name | Master of Science |
| Language | eng |
| Rights Management | © Zirak Zaheer |
| Format | application/pdf |
| Format Medium | application/pdf |
| ARK | ark:/87278/s6451mmc |
| Setname | ir_etd |
| ID | 1706526 |
| OCR Text | EZTRUST: NETWORK INDEPENDENT PERIMETERIZATION FOR MICROSERVICES

by Zirak Zaheer

A thesis submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Master of Science in Computer Science

School of Computing
The University of Utah
May 2019

Copyright © Zirak Zaheer 2019. All Rights Reserved.

The University of Utah Graduate School
STATEMENT OF THESIS APPROVAL

The thesis of Zirak Zaheer has been approved by the following supervisory committee members: Jacobus Van Der Merwe, Chair (date approved 11/28/2018); Robert Ricci, Member (date approved 11/28/2018); Ryan Stutsman, Member (date approved 11/28/2018); and by Ross Whitaker, Chair/Dean of the Department/College/School of Computing, and by David B. Kieda, Dean of The Graduate School.

ABSTRACT

Emerging microservices-based workloads introduce new security risks in today's data centers as attacks can propagate laterally within the data center relatively easily by exploiting cross-service dependencies. As countermeasures for such attacks, traditional perimeterization approaches, such as network-endpoint-based access control, do not fare well in highly dynamic microservices environments (especially considering the management complexity, scalability and policy granularity of these earlier approaches). In this work, we propose eZTrust, a network-independent perimeterization approach for microservices. eZTrust allows data center tenants to express access control policies based on fine-grained workload identities, and enables data center operators to enforce such policies reliably and efficiently in a purely network-independent fashion. To this end, we leverage eBPF, the extended Berkeley Packet Filter framework, to trace authentic workload identities and apply per-packet tagging and verification. We demonstrate the feasibility of our approach through extensive evaluation of our proof-of-concept prototype implementation.
We find that, when comparable policies are enforced, eZTrust incurs 3–6 times lower packet latency and 1.5–2.5 times lower CPU overhead than traditional perimeterization schemes.

To my parents, my loving fiancé, friends, and fellow students.

CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES

CHAPTERS
1. INTRODUCTION
   1.1 Thesis Statement
   1.2 Contributions
2. BACKGROUND
   2.1 Zero Trust Architecture
   2.2 eBPF
3. ARCHITECTURE
   3.1 Threat Model
   3.2 Key Idea and Design Requirements
   3.3 Egress Packet Processing
       3.3.1 Context Discovery
       3.3.2 Per-Packet Tagging
       3.3.3 Ingress Packet Processing
       3.3.4 Slow Path Processing
       3.3.5 Per-Packet Verification
       3.3.6 Dynamic Context Handling
4. PROTOTYPE IMPLEMENTATION
5. EVALUATION
   5.1 Slow Path vs. Fast Path
   5.2 Microbenchmarks
   5.3 Cilium vs. eZTrust
   5.4 DPI-Based vs. eZTrust
   5.5 CPU Resource Usage
   5.6 Dynamic Policies and Contexts
   5.7 Real-World Application: Sock Shop
6. MOTIVATIONAL USE CASES
7. RELATED WORKS
   7.1 Network Flow Rule-Based Perimeterization
   7.2 Transport-Level Perimeterization
   7.3 Label-Based Perimeterization
   7.4 DPI-Based Perimeterization
   7.5 API Gateway-Based Perimeterization
   7.6 Other Network-Independent Packet Processing
8. DISCUSSION
   8.1 eZTrust in Realistic Deployments
   8.2 Tag Anonymization
   8.3 Smart NIC Offload
   8.4 Platform Compatibility
9. CONCLUSION AND FUTURE WORK
REFERENCES

LIST OF FIGURES
3.1 The eZTrust architecture.
3.2 eVerifier's packet verification procedure.
4.1 eBPF maps for policy enforcement.
5.1 Latency: slow path vs. fast path.
5.2 Cilium vs. eZTrust.
5.3 Protection against Heartbleed vulnerability: DPI-based vs. eZTrust.
5.4 Per-packet CPU resource overhead.
5.5 eZTrust in action in dynamic environments.
5.6 Microservice control flow in Sock Shop.
5.7 End-to-end latencies of Sock Shop.

LIST OF TABLES
4.1 Microservice context collection.
5.1 Microbenchmarks: The average latency is measured with netperf in TCP_RR mode. The CPU core usage is system-wide CPU usage reported by /proc, but excluding that of iperf.
5.2 Experimental scenarios. In scenario #1, two wget clients and one curl client on host1 download files from three nginx/https servers on host2 with a 1 MByte/sec rate limit, respectively. In scenario #2, three wget clients on host1 download files from three nginx/https servers on host2 with a 1 MByte/sec rate limit, respectively.
7.1 Comparison of existing perimeterization approaches.

CHAPTER 1
INTRODUCTION

As common network security measures, data centers are traditionally protected at their borders, under the assumption that attacks originate externally via north-south traffic. This assumption is proving incorrect as data centers start to house more interdependent microservices [46], which in turn leads to increasingly dominant intra-data center east-west traffic (85% of total data center traffic [12]). This emerging application deployment trend poses new security risks as the infrastructure is not properly protected against its internal misbehavior, which allows threats from east-west traffic to propagate laterally across any number of microservices within data centers.
In order to address these newly emerging security risks, the zero-trust security model [28] has been postulated with a guiding principle of "never trust, always verify" instead of the current operating model of "trust but verify." Under this model, every deployed tenant microservice must be secured with fine-grained perimeterization policies that scrutinize the traffic in and out of the microservice, as dictated by tenants.

In modern SDN-centric data centers [47], where a centralized controller interconnects tenant microservices by pushing appropriate forwarding rules at the programmable software switches, a traditional way of realizing perimeterization is to define network-endpoint-based policy rules at these switches [26, 19, 36]. In this network-based perimeterization, tenant policy intents, typically defined based on workload identities (e.g., only workload "X" can talk to workload "Y"), need to be translated into corresponding network-endpoint policies (e.g., only <IP1:port1> can reach <IP2:port2>) to be enforced by data center operators at the network level. However, this semantic gap between the tenant's policy intents and the operator's policy enforcement infrastructure makes the resulting perimeterization potentially unreliable and error-prone. Network endpoint properties such as IP addresses and port numbers are not the binding properties of tenant workloads, but rather ephemeral attributes attached to them, which may be dynamically changed either by tenants through microservice reconfigurations or by middleboxes as part of network operations (e.g., address/port translation and load balancing), and can even be spoofed by malicious attackers.

Correctness aside, the network-based perimeterization also introduces a scalability challenge in policy rule management. Firstly, the size of policy rule-sets increases multiplicatively with the number of communicating microservices or their security zones, as well as the number of endpoint properties relied upon by policies.
In addition, every time communication patterns change due to microservice creation, termination and migration events, policy rules provisioned for existing microservices need to be inspected and adjusted in a timely fashion to fulfill tenant policy intents. Considering the large-scale, highly dynamic microservice deployment nature of modern clouds [41], the task of maintaining and updating policy rule sets in such environments incurs significant resource overhead on the data center infrastructure [45].

Finally, the policy granularity of the network-based access control is restricted to the network endpoint level. On the other hand, emerging security risks increasingly necessitate more fine-grained perimeterization, where access is regulated based on detailed contexts associated with microservice workloads (e.g., application/user identity, protocol version, status of security patches). Such granular policies are useful to contain potential damage from newly discovered software vulnerabilities (e.g., the POODLE attack against SSL, OpenSSL Heartbleed, Shellshock). Enriching network-endpoint policies with granular contexts, however, typically requires resource-heavy layer-7 deep packet inspection (DPI) and intrusive guest introspection [53, 34].

In order to address these limitations of the existing network-endpoint-based perimeterization, we propose in this thesis an alternative solution called eZTrust, where we shift perimeterization targets from network endpoints to workload identities. In this approach, we exploit the fact that microservices are typically packaged in lightweight containers. We are also motivated by the ongoing efforts to monitor detailed lifecycles of containerized microservice workloads [22, 20, 17]. Our approach is to repurpose the growing wealth of such monitoring data gleaned from deployed workloads for perimeterization. The key idea of eZTrust is as follows.
Every packet generated by a microservice is stamped with a tag which encodes a fine-grained identity of the microservice. The fine-grained identity is defined as a set of authentic contexts tied to the microservice workload. Example contexts include application-level identity (e.g., application name/version), run-time environment-related signatures (e.g., kernel version, dynamically loaded library version, user identity) or deployment-specific metadata (e.g., geographic location, filesystem image tag). Some of these contexts are detected from the workloads themselves, while others are fetched from the centralized microservice orchestrator. Once the tagged packet is received, the receiver end extracts the tag, decodes it back to the sender-side contexts, and applies perimeterization policies based on the sender-side contexts as well as the recipient's contexts, as instructed by a receiver-side tenant. In this manner, the whole perimeterization process is completely decoupled from underlying networks.

To realize eZTrust, we leverage eBPF [6], the extended Berkeley Packet Filter, which enables us to trace various contexts associated with microservice workloads as well as perform per-packet tagging and verification. Inspired by the flow cache design of Open vSwitch (OVS) [48], we adopt dual-path per-packet verification, where a slow path via userspace is triggered to handle packets with unknown contexts, while a fast path conducts eBPF-based in-kernel packet verification. To ensure correct packet verification in the presence of context changes, we leverage the notion of an epoch, which is used to detect context changes and invalidate caching on the fast path. We have prototyped eZTrust and conducted detailed evaluations to show its efficacy. We find that, when comparable policies are enforced, eZTrust incurs a factor of 3–6 lower packet processing latency and a factor of 1.5–2.5 lower per-packet CPU overhead than other state-of-the-art perimeterization schemes.
Using realistic perimeterization scenarios such as OpenSSL Heartbleed vulnerability containment and control flow protection for a real-world e-commerce application, we demonstrate that eZTrust can support context-rich perimeterization policies efficiently.

1.1 Thesis Statement

It is possible to provide fine-grained, context-aware policy enforcement in a modern dynamic multitenant microservices ecosystem.

1.2 Contributions

We make the following specific contributions in this work:

• We propose a solution that redefines perimeterization in terms of context that is dynamically derived from the workloads. We highlight how legacy perimeterization approaches fall short in protecting modern data center workloads.

• We design the eZTrust architecture, which enables fine-grained context-based perimeterization without relying on complex network-endpoint-based policies or requiring compute-intensive DPI for detailed context discovery.

• We implement a proof-of-concept prototype of the architecture using eBPF, and demonstrate motivational scenarios enabled by the prototype.

• We quantify the performance and resource overhead of the prototype, and compare it against alternative approaches.

• We demonstrate that eZTrust is a viable option for policy enforcement in realistic dynamic environments.

• Finally, to show that eZTrust can be used for policy enforcement to secure existing real-world systems, we deployed eZTrust with a real-world microservices setup (Sock Shop). We showed that eZTrust was able to enforce tenant policies in this realistic setting, incurring only slight overhead, while maintaining correctness for all of its functionality.

CHAPTER 2
BACKGROUND

To understand the core idea behind eZTrust, we need to understand two key concepts.

2.1 Zero Trust Architecture

The major idea underlying eZTrust is the Zero Trust Networking model. Zero Trust is based on the key idea of "never trust, always verify."
It is designed specifically to address the lateral movement of bad actors in the network by leveraging fine-grained perimeterization. It rejects the idea that the inside of the data center can be trusted, whether internal traffic or tenant workloads. Instead, there must be no concept of trusted traffic or trusted tenants within the data center, and all traffic, external and internal, must be verified. Moreover, Zero Trust discourages reliance on networking constructs and elements for micro-segmentation and perimeterization purposes. This concept has been championed by Palo Alto Networks [8], and has gained traction in both industry and academia. Works such as Google BeyondCorp [7], Aporeto [24] and VMware NSX [26] are partially or fully motivated by the idea of Zero Trust networking. eZTrust follows the Zero Trust networking model and builds a per-packet policy-enforcement mechanism.

2.2 eBPF

The extended Berkeley Packet Filter (eBPF) is a highly flexible, generic and efficient code execution engine that allows new code to be injected into the Linux kernel at run time [6]. This injected code executes in an event-driven fashion, e.g., when a packet is received or a new process is created. eBPF allows bytecode to run at various hook points inside the vanilla Linux kernel. eBPF has a flexible instruction set and supports a wide range of use cases in networking, traffic engineering, security, monitoring, etc. In the networking domain, it allows us to create high-performance in-kernel network functions, with the ability to chain modular network functions that perform user-defined actions on packets as needed. eBPF also acts as an in-kernel framework supporting data structures called eBPF maps. These in-kernel maps can be shared among multiple eBPF programs as well as between the kernel and userspace. In eZTrust, eBPF is used for both monitoring and policy enforcement purposes.
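The kernel/userspace map-sharing pattern just described can be modeled in plain Python. This is a toy userspace sketch, not kernel code; all names (ToyBPFMap, on_process_exec, the PIDs) are illustrative and not part of eZTrust or the eBPF API.

```python
# A userspace toy model of the eBPF-map pattern described above:
# an event-driven "tracer" writes into a shared map that a userspace
# reader later polls. Plain Python; all names are illustrative.

class ToyBPFMap:
    """Stand-in for an eBPF hash map shared by programs and userspace."""
    def __init__(self):
        self._entries = {}

    def update(self, key, value):   # analogous in spirit to a map update helper
        self._entries[key] = value

    def lookup(self, key):          # analogous in spirit to a map lookup helper
        return self._entries.get(key)

exec_map = ToyBPFMap()

def on_process_exec(pid, comm):
    """Models a tracing program attached to a process-creation hook."""
    exec_map.update(pid, comm)

on_process_exec(101, "haproxy")    # simulated kernel-side events
on_process_exec(202, "nginx")
print(exec_map.lookup(202))        # nginx  (userspace side reads the map)
```

In the real system, the writer would be eBPF bytecode running at a kernel hook and the reader a userspace daemon; the shared-map shape of the interaction is the same.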
We will talk more about how we make use of eBPF in our system in Chapter 3 and Chapter 4.

CHAPTER 3
ARCHITECTURE

Before presenting the eZTrust architecture, we first describe the assumed threat model that drove our design.

3.1 Threat Model

eZTrust is a policy-driven, endpoint-based perimeterization access control system for a microservices environment. We make the following assumptions about an eZTrust deployment. We assume the microservice provider is trustworthy and that the provider's infrastructure is free of vulnerabilities. We assume that the eZTrust implementation is securely integrated with the provider's cloud control framework to ensure policies, contexts and secrets are securely stored and distributed. We also assume that the physical server operating system/kernel is not compromised. Given this context, we assume that a potential attacker can take control of a running container/microservice and try to launch attacks against microservices of other tenants. In particular, we assume an attacker might attempt to gain unauthorized access to other tenant applications in violation of tenant policies. eZTrust protects against this threat by preventing unauthorized communication from occurring, thereby protecting tenants' microservices from any illegitimate traffic.

3.2 Key Idea and Design Requirements

Next, we present the key idea of eZTrust and discuss associated design requirements. For a better understanding of eZTrust, let's consider the following illustrative example scenario enabled by eZTrust, where two hypothetical microservices S1 (an HAproxy load balancer) and S2 (an nginx web server) are operated. S1 carries three contexts: app=HAproxy, appVersion=1.8 and loc=US-West. The value "tag1" is mapped to these contexts. Similarly, S2 contains three contexts: app=nginx, appVersion=1.2 and loc=US-East, and the value "tag2" (≠ tag1) is resolved to these contexts.
S2's policy is defined as "accept traffic only if it originates from HAproxy with version 1.8, and is destined to an nginx server in the east coast US." Under the eZTrust architecture, every packet generated by HAproxy on S1 is stamped with tag1. When the packet is received by S2, tag1 is converted to the sender-side contexts (app=HAproxy, appVersion=1.8 and loc=US-West). S2 then applies its defined policy based on the combination of the sender-side and receiver-side contexts, and accepts the packet. Tag2 is attached to the packets sent by S2 in the reverse direction.

As is clear from the above example, there are several important requirements to meet in order for the proposed architecture to become a reality.

• (R1) For each microservice, its associated contexts must be correctly determined without significant overhead. To make context discovery verifiable and lightweight, the contexts must be directly derived from the microservices, rather than arbitrarily assigned by a tenant as in the static label-based approach, or separately mined with heavy-duty packet processing as in the DPI-based approach.

• (R2) Some microservices (e.g., a LAMP stack service or a multicontainer pod in Kubernetes) may run more than one application, in which case there will be multiple sets of contexts defined in the microservice (one per application). Thus, when network packets are generated within a microservice (by any one of the applications running inside), the packets must be tagged with the correct set of application contexts.

• (R3) The mapping between a tag and a set of contexts must be globally unique and available for any arbitrary microservice to retrieve contexts from received packets with tags.

• (R4) Whenever any context is changed in any microservice, the change must be reflected in the mapping and subsequent policy enforcement in a timely fashion.

• (R5) The per-packet access control process must be lightweight enough to handle line rate traffic.
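The S1/S2 scenario above can be condensed into a small Python model. This is a userspace sketch for illustration only, not the eZTrust data path; the tag values and context keys follow the example in the text.

```python
# Userspace model of the S1/S2 example: tags resolve to context sets,
# and S2's policy is evaluated over sender- and receiver-side contexts.

CONTEXT_MAP = {
    "tag1": {"app": "HAproxy", "appVersion": "1.8", "loc": "US-West"},
    "tag2": {"app": "nginx",   "appVersion": "1.2", "loc": "US-East"},
}

def s2_policy(sender, receiver):
    """Accept only traffic from HAproxy 1.8 destined to nginx in US-East."""
    return (sender["app"] == "HAproxy" and sender["appVersion"] == "1.8"
            and receiver["app"] == "nginx" and receiver["loc"] == "US-East")

def verify(packet_tag, receiver_tag):
    sender   = CONTEXT_MAP[packet_tag]     # decode tag -> sender contexts
    receiver = CONTEXT_MAP[receiver_tag]   # receiver's own contexts
    return "accept" if s2_policy(sender, receiver) else "drop"

print(verify("tag1", "tag2"))  # accept: HAproxy 1.8 -> nginx in US-East
print(verify("tag2", "tag1"))  # drop:   reverse direction fails S2's policy
```

Note that the network endpoints (IPs, ports) never appear in the decision, which is the point of the design.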
The requirements (R1) and (R2) are relevant to egress packet processing on the sender side, where packets are tagged based on contexts, while the requirements (R3), (R4) and (R5) are needed for ingress packet processing on the receiver side, where received tags are resolved to sender-side contexts, and packets are verified based on them. In the rest of this section, we describe how we address these requirements in the eZTrust architecture. The overall architecture diagram of eZTrust, along with trust boundaries, is shown in Fig. 3.1.

[Figure 3.1: The eZTrust architecture. The central microservice coordinator maintains the global context and policy maps; on each end server, the Context Manager, Policy Agent, harvester and per-context eTracers populate local context and policy maps used by eTagger and eVerifier in kernel space, with trusted and untrusted components delineated.]

3.3 Egress Packet Processing

3.3.1 Context Discovery

To meet requirements (R1) and (R2), we are motivated by the recent advance in the universal in-kernel virtual machine technology called eBPF [6]. As a mainline Linux kernel feature, eBPF allows user-defined bytecode programs to be dynamically attached to kernel hooks in order to trace and process various kernel events without any expensive instrumentation or kernel customization. Deployed in-kernel bytecode programs can process and report captured kernel events to userspace, and access allowed kernel memory regions (e.g., packet data or key-value maps for stateful processing) via available eBPF helper function APIs. We leverage the eBPF-based tracing mechanism to monitor various microservice-related events, which can reveal verifiable application contexts associated with deployed microservices (R1).
For example, a currently running application’s identity (e.g., name and version) can be reliably determined by tracing process creation events and mapping the created PIDs to corresponding application identity (e.g., appID and version).1 In case of multi-user applications like remote desktop services, the identity of a logged-in user can be found from the user ID of the detected login shell PIDs. The SSL version enabled in an application can be identified by tracing an SSL handshake library call and its arguments (e.g., SSL_do_handshake() in OpenSSL). We collectively call these eBPF programs deployed for tracing eTracers. On top of the eTracer-driven event tracing, we rely on active probing via harvester, which is a privileged monitoring service tasked by the infrastructure with collecting additional run-time environment-related contexts of microservices, either by querying the centralized microservice orchestrator, or by attaching itself to the namespaces of the target microservice. Example contexts so collected include geographic location, filesystem image tag, signer’s identity for a digitally signed image, container capabilities, kernel version, etc. A dedicated userspace daemon called Context Manager collects events and contexts from eTracers and the harvester, and stores the discovered contexts in the context map in the form of <tag, a set of contexts> tuples. A set of contexts stored in each tuple is essentially a dictionary, containing a list of key-value pairs (e.g., {context1:value1, context2:value2, context3:value3,...}). A tag, which is the key to the context map, is uniquely mapped to a set of contexts associated with a particular application instance running in a microservice. To ensure its global uniqueness, the tag is formed by concatenating a microservice ID (which is unique data center wide) and an application PID. 
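The tag construction just described can be sketched in a few lines. The exact encoding is not specified beyond "microservice ID concatenated with PID", so the string format below is an illustrative assumption; the point is that data-center-wide uniqueness of the microservice ID makes the combined tag globally unique (R3).

```python
# Sketch of the tag scheme described above: a tag concatenates a
# data-center-unique microservice ID with an application PID. The
# "<msID>:<PID>" string format here is an illustrative assumption.

def make_tag(microservice_id: int, pid: int) -> str:
    return f"{microservice_id}:{pid}"

# Context map entries are <tag, set of contexts> tuples; a context set
# is a dictionary of key-value pairs, as in the text.
local_context_map = {
    make_tag(7, 4242): {"app": "nginx", "appVersion": "1.2", "kernel": "4.15"},
}

# The same PID under different microservices still yields distinct tags:
assert make_tag(7, 4242) != make_tag(8, 4242)
print(local_context_map[make_tag(7, 4242)]["app"])  # nginx
```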
The centralized policy orchestrator maintains the global context map for all existing microservices deployed data center wide, and each end server operates a local context map, which is a subset of the global context map covering all locally running microservices, as well as some non-local microservices as part of slow path processing (see Section 3.3.4).

[Footnote 1: We assume there is an infrastructure-managed trusted database that maps the cryptographic hash of binary executables or interpreted application bytecode to the corresponding appID and version. More thorough application integrity verification is possible [32], but is out of the scope of this work.]

In order to identify a correct set of application contexts for each egress packet (R2), we keep track of which network sockets are created by which PID in what network namespace. The port number information in network sockets can provide a link between packets and corresponding applications, while network namespace information can be used to disambiguate different microservices that happen to create sockets with the same port number (e.g., HTTP port 80). We trace the in-kernel socket binding events using eBPF, and store the tracing result in the socket map in the form of <port number, network namespace, PID> tuples.

3.3.2 Per-Packet Tagging

In order to tag every egress packet generated by a microservice, we attach a separate eBPF program called eTagger to the microservice's virtual network interface (VIF). eTagger intercepts every egress packet in the form of the in-kernel packet data structure (e.g., sk_buff), which carries raw packet data as well as per-packet metadata such as network namespace information. From the captured packet data structure, the source port number (from the TCP header) and namespace (from packet metadata) are extracted and, using the socket map above, are mapped to a corresponding PID. This PID can be used to construct a tag that represents the correct set of application contexts for the packet.
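The socket-map lookup performed by eTagger can be modeled as follows. This is a userspace sketch of the logic only (the real lookup happens in eBPF against an in-kernel map); the PIDs, ports, namespace labels and the "msID:PID" tag format are illustrative.

```python
# Model of egress-side tagging: (source port, network namespace) from an
# outgoing packet is mapped to the owning PID via the socket map, which
# the bind-event tracer populates; the PID then yields the tag.

socket_map = {}   # (port, netns) -> PID, filled on socket bind events

def on_socket_bind(port, netns, pid):
    """Models the eBPF trace of an in-kernel socket binding event."""
    socket_map[(port, netns)] = pid

def tag_egress_packet(src_port, netns, microservice_id):
    pid = socket_map[(src_port, netns)]       # packet -> owning application
    return f"{microservice_id}:{pid}"          # illustrative tag format

# Two microservices bind the same port 80 in different namespaces (R2);
# the namespace disambiguates them, as described in the text.
on_socket_bind(80, "netns-A", 111)
on_socket_bind(80, "netns-B", 222)

print(tag_egress_packet(80, "netns-A", 7))  # 7:111
print(tag_egress_packet(80, "netns-B", 9))  # 9:222
```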
Once a tag is ready, it can be added to the existing packet header as part of an IPv4 option or a TCP option, appended as a trailer to an IPv4 packet to prevent the tag from being modified accidentally by intermediate switches or middleboxes, or added as part of encapsulation protocol headers (e.g., VxLAN, VLAN). In the case of IPv6, the tag can be carried in the 20-bit flow label field.

3.3.3 Ingress Packet Processing

In order to verify each ingress packet with respect to the intended receiver's policies, we attach a separate eBPF program called eVerifier to the physical NIC interface. eVerifier performs per-packet verification in four steps: (i) extract a tag from an incoming packet, (ii) resolve the tag to the sender's contexts, (iii) look up the intended receiver's contexts and (iv) finally perform verification based on those obtained contexts. As we will see, the design of eVerifier is inspired by the multilevel flow caching in OVS, where packets are processed using OpenFlow tables on the slow path and megaflow/microflow caches on the fast path. The major difference in eZTrust is that packets are classified into flows not based on packet header fields, but based on microservice contexts. This makes eZTrust intrinsically more scalable than network-endpoint-based perimeterization, as traffic from distinct microservice instances carrying the same contexts (e.g., due to auto-scaling) can be processed as a single flow.

3.3.4 Slow Path Processing

The step (ii) to resolve a tag to a sender's contexts requires that the local context map on the receiver side be already populated with the extracted tag (R3). However, the local context map is not expected to contain tags for all existing microservices running in the data center, due to scalability concerns and resource constraints. Inspired by the flow cache design of OVS, we instead populate the local context map on demand from the global context map via the slow path.
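The on-demand population can be illustrated with a minimal cache model. This is a userspace sketch under simplifying assumptions (the global map is a local dictionary standing in for the query to the central coordinator, and the first miss fetches synchronously); it is not the in-kernel implementation.

```python
# Model of on-demand local context map population: a tag missing from
# the local map triggers a "slow path" fetch from the global map; later
# packets with the same tag hit the local map directly ("fast path").

GLOBAL_CONTEXT_MAP = {"9:555": {"app": "HAproxy", "appVersion": "1.8"}}

local_context_map = {}
path_taken = []            # records which path each packet took

def resolve_tag(tag):
    if tag in local_context_map:
        path_taken.append("fast")                     # in-kernel cache hit
        return local_context_map[tag]
    path_taken.append("slow")                         # punt to userspace
    local_context_map[tag] = GLOBAL_CONTEXT_MAP[tag]  # fetch + populate
    return local_context_map[tag]

resolve_tag("9:555")   # first packet: slow path populates the local map
resolve_tag("9:555")   # second packet: fast path
print(path_taken)      # ['slow', 'fast']
```

Because microservice communication patterns are well-defined, most packets would take the fast branch in practice, matching the argument in the text.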
During ingress packet processing, the slow path is triggered when a tag extracted from an incoming packet is missing in the local context map. On the slow path, eVerifier punts the packet to the Policy Agent in userspace, which then fetches the contexts for the tag from the central microservice coordinator. Once the obtained contexts are populated into the local context map, the Policy Agent re-inserts the packet into eVerifier for subsequent verification. To prevent a burst of packets carrying the same missing tag from entering the slow path during this time, we keep a simple in-kernel status table that maintains a list of tags for which slow path processing is underway. Any packet that carries a tag stored in the status table is simply dropped without entering the slow path. In practice, communication patterns among deployed microservices are well-defined, and thus contexts are highly likely to be used repeatedly by the same microservices during their lifetime, making slow path processing an uncommon event.

Step (iii), after tag resolution, is to look up the intended receiver's contexts. For this, eVerifier looks up the local socket map using a key constructed from the destination port number of the packet and the network namespace associated with the VIF, and finds the recipient PID for the packet. This PID is used to construct the receiver's tag, which in turn is mapped to the receiver's contexts via the local context map. Unlike the sender's tag, the receiver's tag is guaranteed to exist in the local context map, and thus no slow path processing is necessary.

3.3.5 Per-Packet Verification

The final step (iv) is to perform packet verification based on the sender's contexts as well as the receiver's contexts. For verification, we maintain a policy map which holds the perimeterization policies for individual microservices in the form of <microservice id, sender's contexts, receiver's contexts, policy decision> tuples.
Any context field can be wildcarded in the policies, and the possible policy decisions are "accept" or "drop". Finding a match for a packet based on a set of sender/receiver contexts in the policy map is the classic packet classification problem. The difference is that packets are classified not based on packet header fields, but based on a set of contexts. There are many efficient algorithms for packet classification, and we adopt a scheme similar to the tuple space search classifier [51], commonly employed by popular software switches (e.g., the megaflow cache in OVS). It is simpler than the original tuple space search as it does not need to handle longest prefix matching, only exact matches. In this scheme, we define a policy template for each microservice, which is an array of <subset of sender's context keys, subset of receiver's context keys> tuples. The policy template of a microservice indicates which subsets of sender-side/receiver-side contexts are used to define its policies. If a tenant installs multiple policies for her microservice, each based on distinct subsets of contexts, the policy template of the microservice contains more than one tuple. For example, if two policies are defined for a microservice: "accept traffic only if it originates from HAproxy and is destined to nginx", and "drop traffic if it is generated by an application located in US-West", its policy template would look like: [<appID_src, appID_dst>, <Location_src>]. When eVerifier looks up the policy table for an incoming packet, it iterates over the policy template of the intended receiver, forms all possible keys to the policy table, and performs policy table lookups. Upon the first successful lookup, it stops the iteration and returns the decision. In case policy prioritization needs to be supported, eVerifier can instead complete the full iteration and choose the policy decision with the highest priority among multiple matches.
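This template-driven lookup can be sketched in Python as follows. It is a simplified userspace model: tuples of (key, value) pairs stand in for the hashed exact-match keys of the in-kernel policy map, the contexts and decisions are illustrative, and an unmatched packet falls through to a default decision.

```python
# Tuple-space-style policy lookup over context subsets.
ACCEPT, DROP = "accept", "drop"

# Each template names which sender/receiver context keys its policies use.
templates = [
    {"src": ("appID",),    "dst": ("appID",)},
    {"src": ("location",), "dst": ()},
]

# Policy map keyed by the exact-match tuple built from a template.
policy_map = {
    (("appID", "haproxy"), ("appID", "nginx")): ACCEPT,
    (("location", "US-West"),): DROP,
}

def make_key(template, src_ctx, dst_ctx):
    key = [(k, src_ctx[k]) for k in template["src"]]
    key += [(k, dst_ctx[k]) for k in template["dst"]]
    return tuple(key)

def policy_decision(src_ctx, dst_ctx, default=DROP):
    """Iterate over the receiver's templates; the first hit wins."""
    for t in templates:
        decision = policy_map.get(make_key(t, src_ctx, dst_ctx))
        if decision is not None:
            return decision
    return default
```

Because every lookup is an exact match on a fixed-length key, the per-template cost is a constant-time hash lookup; the total cost grows only with the number of templates, not with the number of policies per template.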
If no match is found, a packet is processed based on a default action. The full procedure for step (iv) is described in Algorithm 1.

Algorithm 1 Procedure for generating a policy decision.

    /* array of all available contexts */
    struct context_t {
        uint32 context[MAX_CONTEXT]
    }
    /* each boolean field tells if a given context is considered in the policies */
    struct template_t {
        bool srcContext[MAX_CONTEXT]
        bool dstContext[MAX_CONTEXT]
    }

    procedure GENERATE_POLICY_DECISION(src, dst, T)
        input:  context_t src        // sender's contexts
                context_t dst        // receiver's contexts
                template_t[] T       // array of receiver's policy templates
        output: ACCEPT or DROP
        for each t in T do
            uint32 key ← 0
            /* generate a policy key from template t */
            for i ← 0 to MAX_CONTEXT-1 do
                if t.srcContext[i] then
                    key ← compute_hash(key, src.context[i])
                end if
                if t.dstContext[i] then
                    key ← compute_hash(key, dst.context[i])
                end if
            end for
            /* look up policy map with the key */
            result ← lookup_policy_map(key)
            if result != null then
                return result        /* ACCEPT or DROP */
            end if
        end for
        return DROP                  /* no match in any template: default action */
    end procedure

3.3.6 Dynamic Context Handling

So far in these packet verification steps, it has been assumed that contexts remain unchanged both at the sender side and at the receiver side. However, the contexts associated with a microservice can change dynamically for various reasons. For example, a microservice can be migrated geographically. Multi-user services such as remote desktop services or HPC applications can be accessed by different users at different times. In addition, mission-critical production environments often benefit from dynamic software updates [35, 37], where critical security patches or software upgrades are applied live without incurring downtime. This can affect the contexts (e.g., software version) associated with any active microservices.
In order to detect and handle such potential context changes during the packet verification steps, we introduce the notion of an "epoch" in the contexts (R4), which indicates how up-to-date the detected contexts are. An epoch is a simple counter that is incremented and wraps around when it reaches its maximum. The entries in the context map are now expanded to include an epoch: <tag, a set of contexts, epoch>. Whenever any context changes in any microservice, the corresponding entries in the context map have their epoch incremented. In addition, each egress packet carries not only a tag, but also its corresponding epoch. When a tag and its associated epoch are received at the other end, the receiver can detect whether the entry stored in the local context map for the tag is outdated by comparing the epoch in the entry against the received epoch. If the entry is detected as outdated, it is evicted from the context map, and the receiver goes through the slow path to re-populate the context map for the tag with the latest epoch.

Note that the entire per-packet verification procedure described so far requires multiple independent map lookups (i.e., context map, socket map and policy maps), even without considering the one-time slow path through userspace. As an optimization to speed up this multistep verification operation (R5), we cache the final policy decision obtained from step (iv) in a separate table called the decision map, which stores the mapping <sender's tag, receiver's tag, policy decision>. Subsequent packets with the same tags can be verified with a single lookup of the decision map. This design is similar to microflow caching in OVS. Whenever a sender-side context change is detected from the epoch of the received tag, the corresponding entries in the decision map are invalidated, and verification for the packet falls back to the original multistep procedure.
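The epoch comparison and decision-map caching can be modeled together as follows. This is a userspace sketch in which Python dictionaries take the place of the eBPF maps; the tags, epoch values and the cached decision are illustrative.

```python
# Epoch-based staleness detection combined with the decision-map cache.
# context_map:  sender tag -> (contexts, epoch)
# decision_map: (sender tag, receiver tag) -> cached policy decision

context_map = {0x2A1: ({"appID": "nginx"}, 3)}
decision_map = {(0x2A1, 0x0F0): "accept"}

def verify(sender_tag, packet_epoch, receiver_tag):
    entry = context_map.get(sender_tag)
    if entry is not None and entry[1] != packet_epoch:
        # Sender's contexts are stale: evict both maps, go via slow path.
        del context_map[sender_tag]
        decision_map.pop((sender_tag, receiver_tag), None)
        return "slow-path"
    cached = decision_map.get((sender_tag, receiver_tag))
    if cached is not None:
        return cached                # single-lookup fast path
    return "multistep-verification"  # fall back to steps (ii)-(iv)
```

A packet carrying the cached epoch is decided with one map lookup; a packet carrying a newer epoch evicts both entries and forces slow-path re-population, exactly as the text above describes.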
Any receiver-side context change also invalidates the associated entries in the decision map. The overall packet verification steps are summarized in the flowchart in Fig. 3.2.

[Figure 3.2: eVerifier's packet verification procedure. A flowchart covering tag extraction, context map lookup, the epoch up-to-dateness check, decision map caching, slow path entry, and the final accept/drop decision.]

CHAPTER 4
PROTOTYPE IMPLEMENTATION

We implement the eZTrust prototype in Python/C and integrate it with the Docker runtime environment. In this section, we highlight key implementation details of the prototype.

• Context management. The userspace Context Manager is implemented in Python (600 LoC), using the bcc library [2] to interact with eBPF-based eTracers, and the Docker SDK [15] to listen for container events. The Context Manager collects contexts of deployed containers either by attaching eTracers to kprobes and uprobes, or by dispatching a harvesting routine in a target container's namespace. For example, by attaching eTracers to kprobes, it derives the identity of an application running inside a container from the md5sum of its binary executable (for compiled applications) or its application bytecode (for interpreted applications such as Java/Python apps) at the time an application process is instantiated.¹ It then finds the association between network sockets and identified application processes by tracing sys_bind() and sys_connect(). See Table 4.1 for a list of collected contexts. The obtained information is written as container identities to local eBPF maps (socket map, context map), as well as distributed to the global context map realized with a Redis key-value store.

• Policy enforcement.
The implementation of policy enforcement is split between (1) the userspace Policy Agent written in Python (350 LoC) and (2) two in-kernel eBPF programs written in C: eTagger (60 LoC) and eVerifier (250 LoC). eTagger is attached to the ingress traffic control classifier [31] of each container's VIF, tagging every egress packet generated by the container, while eVerifier is attached to the egress traffic control classifier of the physical NIC interface, inspecting every ingress packet.

¹ The name of application bytecode can be obtained by tracing the argv argument of sys_execve(filename, argv[], envp).

Table 4.1: Microservice context collection.

  Collector: eTracers
    - AppID, App version: trace sys_fork(), sys_execve(), sys_bind() and sys_connect() using kprobes.
    - Operating system userID: use bpf_get_current_uid_gid() in the process context captured above.
    - Network socket: trace sys_bind() and sys_connect() using kprobes.
    - SSL version: trace SSL_do_handshake() and its arguments using uprobes.
    - MySQL user: trace connection_start() and its arguments using uprobes.
  Collector: Harvester
    - Microservice ID, geographic location, filesystem image tag and signer's identity (for a digitally signed image), capabilities: query the microservice orchestrator.
    - Kernel version: probe the host operating system with sys_uname().

For this prototype, we make use of the 12 bits in the VLAN header to carry the per-packet tag (10 bits) and the epoch number (2 bits). eTagger and eVerifier call the bpf_skb_vlan_push() and bpf_skb_vlan_pop() eBPF helper APIs to add or remove a VLAN header on the egress/ingress path. Support for other types of encapsulation protocols (e.g., VxLAN, Geneve) is available in eBPF [30] in case a bigger tag space is needed. The Policy Agent is responsible for slow path handling, receiving raw packets from eVerifier via the perf_event interface and re-inserting the packets via a tap interface, to which eVerifier is also attached.

• Slow path processing.
During the slow path, the Policy Agent interacts with the global context map run by Redis to retrieve up-to-date contexts of a remote container. One issue with this northbound interaction, however, is the potential race conditions that can occur. Suppose a remote container C_remote is newly launched and immediately opens a TCP connection to a local container C_local. In this case, by the time C_local initiates slow path processing due to the first SYN packet from C_remote, the global context map may or may not have been populated with the contexts of C_remote. In the latter case, the first SYN packet would be rejected, which would lead to a TCP timeout, significantly delaying TCP connection establishment. A similar race condition can occur when C_remote changes its context: by the time C_local sees an incremented epoch from C_remote, the global map may or may not have been updated with the new contexts of C_remote. If the global map is not up-to-date by then, C_local would fail to detect the context change in C_remote. To avoid the first race condition (due to traffic coming from a new container), the Policy Agent performs the global map lookup up to N times in case of lookup failure. We observe that the global map lookup mostly succeeds with N = 2 for new containers. To avoid the second race condition (due to traffic with an incremented epoch), we perform a staged epoch update on the sender side as follows. Whenever a context change is detected on the sender, we update the global context map with the new context, but do not increment the epoch number at this point. Only after the new context is successfully committed to the global map do we increment the epoch number. That way, a receiver will always obtain the updated context from the global map upon an epoch change.

• Policy management interface. Tenants interface with the orchestrator to provide policies for their containers. In Fig.
4.1, we illustrate how container identities and tenant-defined policies are transformed into in-kernel eBPF maps for fast, in-line policy enforcement. Once policies (b) are defined by tenants based on available contexts (a), the corresponding policy templates (c) are constructed from the policies. The in-kernel policy map (d) is updated based on the policies, and similarly, the in-kernel policy template map (e) is populated based on (a) and (c). The key to these maps is constructed by combining numeric representations of the sender and receiver contexts defined in (b). Lastly, once a packet hits an entry in (d), the policy decision is cached in an in-kernel decision map (f).

[Figure 4.1: eBPF maps for policy enforcement. Shows (A) container identity fields (app ID, app version, OpenSSL version, service status, kernel version, image version, user ID, location, capabilities), (B) tenant policies, (C) tenant policy templates, (D) the in-kernel policy map, (E) the in-kernel policy template map, and (F) the in-kernel decision map keyed by sender tag + receiver tag.]

CHAPTER 5
EVALUATION

In this section, we evaluate the eZTrust prototype implementation and compare it against alternative perimeterization schemes.

5.1 Slow Path vs. Fast Path

As described in Section 3.3.4, eZTrust adopts dual-path (slow/fast path) ingress packet processing to speed up packet verification in a resource-efficient fashion.
In the first experiment, we evaluate the implications of this design as the complexity of policies varies. We deploy two microservices across two hosts connected back-to-back via a 10G network interface, install perimeterization policies for each service, and measure network latency between them in two separate experiments. In one experiment, we measure round-trip delay using a dummy microservice, which simply opens a TCP connection to the other microservice and reports the connection establishment delay incurred by the TCP SYN/ACK exchange. The initial TCP SYN/ACK packets in this case go through the slow path, as the local context map on either node is not yet populated for the other, remote microservice. In the other experiment, we deploy netperf on both microservices, which measures average round-trip delay by generating multiple request/response transactions over a single long-lived TCP connection. The netperf traffic in this case is handled via the fast path. We then compare the network latencies measured in these experiments as we adjust the number of installed policy templates, which represents the complexity of the policies; a higher number of templates implies that more diverse policies (based on different contexts) are installed.

Fig. 5.1 shows the packet latencies of the slow path and the fast path. In the case of the slow path, packet latency increases with the number of policy templates because more iterations are required for the policy map lookup (see Algorithm 1). Note that multiple policies based on the same template still do not increase slow path delay, due to the constant-time policy map lookup.

[Figure 5.1: Latency: slow path vs. fast path.]
In the case of the fast path, packet latency is not affected by policy templates, as the policy decision for an initial packet is cached in the decision map and subsequent packets carrying the same contexts can be verified with a single decision map lookup.

5.2 Microbenchmarks

Next, we evaluate the performance and resource overhead of eBPF-based ingress packet processing on the fast path. Using the same eZTrust deployment as in the previous experiment, we measure average latency, throughput and CPU core usage, with and without a perimeterization policy. As suggested by the previous experiment, the overhead of ingress packet processing on the fast path depends on neither the number of policy templates nor the number of policies installed for each template. For perimeterization, we set up a simple policy rule based on a pair of contexts <appID_src, appID_dst>. In the no-policy case, eZTrust simply forwards traffic via eBPF's packet redirection capability.

Table 5.1 shows that eZTrust's packet forwarding capability exceeds that of the Linux bridge (the first and the third row in the table). For example, eZTrust reduces round-trip latency by 10 µsec, and CPU core usage by 2.3%. On the other hand, enabling policy enforcement in eZTrust does not add significant performance or resource overhead over baseline eZTrust with no policy (a 0.6 µsec latency increase and a 0.9% core usage increase). Interestingly, even with this additional overhead, eZTrust remains superior to the Linux bridge deployment with zero policies.

Table 5.1: Microbenchmarks. The average latency is measured with netperf in TCP_RR mode. The CPU core usage is the system-wide CPU usage reported by /proc, excluding that of iperf.

  Setup                              | Latency (netperf) | Throughput (iperf) | CPU core usage
  eZTrust without policy enforcement | 24.6 µsec         | 9.36 Gbit/s        | 30.4%
  eZTrust with policy enforcement    | 25.2 µsec         | 9.32 Gbit/s        | 31.3%
  Linux bridge                       | 35.1 µsec         | 9.26 Gbit/s        | 32.7%

5.3 Cilium vs.
eZTrust

Next, we compare eZTrust against an alternative perimeterization approach called Cilium [11]. While both eZTrust and Cilium offer network-independent, identity-based perimeterization, they differ in how microservice identities are obtained: eZTrust traces identities directly from the microservice workloads, while Cilium relies on tenant-assigned key-value pair labels to identify microservices. In this experiment, we deploy iperf on two microservices connected across two hosts, configure perimeterization policies using either Cilium or eZTrust, and inject 60-byte UDP packets from iperf at varying traffic rates. In the Cilium deployment, we configure a policy based on two labels (i.e., accept traffic if label_src = "joe" and label_dst = "alice"), while in the eZTrust deployment, we set up an application-aware policy (i.e., accept traffic if appID_src = ID_IPERF and appID_dst = ID_IPERF).

Fig. 5.2 plots the CPU resource overhead of packet processing in these scenarios. In the figure, we also consider two additional baseline cases: eZTrust without policy enforcement ("eZTrust without Policy"), and forwarding via the Linux bridge ("Simple Forwarding"), which represent baseline packet forwarding without any form of perimeterization. Consistent with Table 5.1, eZTrust's eBPF-based packet forwarding scales much better in terms of CPU than Linux bridge forwarding as the packet rate increases. Even when policies are enabled, eZTrust still maintains lower CPU usage than simple forwarding. When compared against Cilium, eZTrust performs identity-based perimeterization with significantly lower CPU overhead (e.g., a factor of 2 at 1 Mpps). This result confirms the high performance of eZTrust's fast path design.

[Figure 5.2: Cilium vs. eZTrust. CPU core utilization (%) as a function of packet rate (millions of packets per second) for Cilium, Linux bridge, OVS, eZTrust with policy, and eZTrust without policy.]

5.4 DPI-Based vs.
eZTrust

Next, we compare eZTrust against DPI-based context-aware perimeterization in terms of CPU efficiency. In this experiment, we consider a perimeterization policy that protects against the Heartbleed vulnerability [10], which targets particular versions of the OpenSSL cryptographic library. We run two containers, one an nginx/https server and the other a curl client, deployed across two hosts. For DPI-based perimeterization, we set up Snort [21] in inline mode between them, with the officially vetted Heartbleed Snort signature [1] as the only rule loaded. For eZTrust, we configure on both ends policies based on four contexts, <appID_src, openSSLVersion_src, appID_dst, openSSLVersion_dst>, which are detected as described in Table 4.1. We install multiple rules allowing only traffic between nginx and curl with Heartbleed-safe OpenSSL versions (e.g., 1.0.1g through 1.1.2).

Fig. 5.3 plots the CPU utilization of the two approaches as a function of the injected traffic rate. We measure CPU utilization on the host where the nginx server is running; in the DPI case, Snort is deployed on this host as well. The reported CPU utilization on the y-axis is the host-wide CPU usage minus the CPU load from the nginx server, thus accounting for perimeterization overhead only. The figure shows that, compared to the DPI-based approach, eZTrust can achieve OpenSSL-based policy control with a factor of 8 to 10 lower CPU overhead. This experiment illustrates the high CPU efficiency of eZTrust in supporting context-aware policies.

5.5 CPU Resource Usage

Next, we shift focus to the CPU overhead of eZTrust. Here we are interested in the per-packet CPU overhead of perimeterization, which captures the CPU usage incurred by policy enforcement only, discounting packet forwarding overhead. For this, we measure the difference in CPU usage of eZTrust with and without policies.
In this experiment, we deploy two containers on one server, pin them to fixed CPU cores, and generate 60-byte UDP packets between them bidirectionally using iperf, at a fixed packets-per-second rate (R) for a fixed time period (T). During T, we count the total number of CPU cycles (C) incurred on the server using perf-stat. For each scheme (eZTrust, Linux bridge, OVS, Cilium), we repeat this experiment with and without perimeterization policies, and obtain two counters (C1 and C2), respectively. The per-packet CPU overhead of perimeterization (C_P) is then obtained as C_P = (C1 - C2) / (R · T). We use the same perimeterization policies as in the previous experiment, except that netperf is replaced with iperf.

Fig. 5.4 compares C_P for the different schemes. It shows that the Linux bridge with iptables rules exhibits the lowest CPU overhead. However, iptables-based perimeterization is well known for its inability to support a large number of rules, due to sequential processing. Excluding the Linux bridge, eZTrust is the most CPU-efficient among them (by a factor of 1.5–2.5).

[Figure 5.3: Protection against Heartbleed vulnerability: DPI-based vs. eZTrust. CPU core utilization (%) vs. traffic rate (Gbit/sec) for inline DPI and eZTrust.]

[Figure 5.4: Per-packet CPU resource overhead (CPU cycles per packet) for Linux bridge, OVS, Cilium and eZTrust.]

5.6 Dynamic Policies and Contexts

The evaluations thus far focus on static environments where policies and contexts remain fixed. In the next experiment, we consider dynamic deployment environments where policies or contexts change over time, and demonstrate that eZTrust handles such environments correctly. In scenario #1, contexts remain fixed while policies are updated by tenants. In scenario #2, policies remain unchanged while contexts are altered at run-time. For scenario #2, we define a hypothetical context called status to indicate the health of a microservice.
Table 5.2 describes the detailed deployment setup and the events we introduce in these two scenarios. Fig. 5.5 shows timing diagrams indicating how the total transfer rate of the nginx servers changes as a result of these events. The T2/T3 events in scenario #1 introduce policy changes on the nginx servers. This in turn instructs the policy agent to update the local policy map and invalidate the decision map, so that subsequent packets are re-inspected according to the updated policy map. This has the effect of blocking further traffic to the curl clients, due to the bidirectional nature of TCP. The T2/T3/T4 events in scenario #2 indicate changes of the status context in the wget clients, one by one, which cause the monitoring agent on the wget clients to increment the epoch for wget's contexts accordingly. These epoch updates are then detected by eVerifier on the nginx servers, which in turn invalidates the local context map as well as the decision map for the wget clients, and triggers the slow path. As the slow path completes for each wget client, the context/decision maps on the nginx server side are fully repopulated, eventually blocking further traffic to the corresponding wget client.

Table 5.2: Experimental scenarios. In scenario #1, two wget clients and one curl client on host1 download files from three nginx/https servers on host2, each with a 1 MByte/sec rate limit. In scenario #2, three wget clients on host1 download files from three nginx/https servers on host2, each with a 1 MByte/sec rate limit.

  Scenario #1: dynamic policies for the nginx container
    T1: Install policies allowing traffic from wget/curl with OpenSSL version X.
    T2: Remove the curl policy.
    T3: Change the wget policy to allow traffic from wget with OpenSSL version Y.
    T4: Remove the wget policy.

  Scenario #2: dynamic contexts for the wget container
    T1: Install policies allowing traffic from wget with status HEALTHY.
    T2: Change the status of wget container 1 to COMPROMISED.
    T3: Change the status of wget container 2 to COMPROMISED.
    T4: Change the status of wget container 3 to COMPROMISED.

[Figure 5.5: eZTrust in action in dynamic environments. Traffic rate (Mbit/s) over time: (a) scenario #1, dynamic policies for nginx; (b) scenario #2, dynamic contexts for wget.]

5.7 Real-World Application: Sock Shop

In the final experiment, we deploy a real-world microservices-based application on eZTrust. We choose Sock Shop [9], a distributed e-commerce demo application composed of 14 different microservices. The control flow among these microservices is visualized in Fig. 5.6. In eZTrust, we set up microservice-aware policies based on this control flow. As a comparison, we also deploy the Sock Shop application in OVS-based and Cilium-based perimeterization environments, with equivalent flow-rule-based and label-based policies, respectively. Using a Locust-based load generator, we inject identical workloads (e.g., retrieving product pages, posting orders, accessing the shopping cart, etc.) into the three deployments, and compare user-perceived end-to-end latencies. Fig. 5.7 plots the end-to-end latencies of several Sock Shop APIs. Compared to OVS and Cilium, eZTrust reduces the latencies by 3–6% and 5–15%, respectively. These reduced end-to-end latencies in eZTrust are attributed to its lower packet processing latency, previously shown in Table 5.1.

[Figure 5.6: Microservice control flow in Sock Shop: edge-router, front-end, carts (carts-db), catalogue (catalogue-db), orders (orders-db), payment, shipping, user (user-db), rabbitmq and queue-master.]

[Figure 5.7: End-to-end latencies of Sock Shop.]

CHAPTER 6
MOTIVATIONAL USE CASES

In this section, we describe a few practical use case scenarios that can be enabled by the eZTrust prototype.

• Vulnerability-driven perimeterization.
Although unpatched software vulnerabilities are a common source of security breaches, software patches are often neglected due to other pressing tasks, or postponed for integrity testing [49, 29]. Official software patches may not even be available at the time of zero-day attacks. To minimize potential damage while critical software patches are phased in, a data center operator can quickly deploy data-center-wide contingency policies with eZTrust, where traffic to vulnerable application binaries is either blocked or alerted on, depending on tenant requirements. Note that alternative container image scanning approaches (e.g., Clair [13]) are not only time-consuming but also insufficient in the face of live container updates [52].

• Control flow integrity. In a distributed microservice architecture, the interdependencies of microservices can be highly complex (see the Sock Shop example in Fig. 5.6). On top of that, each microservice can scale in and out independently. This makes network-endpoint-based regulation of cross-service interaction extremely challenging. On the other hand, eZTrust makes it easy to express policies for acceptable control flows based solely on microservice identities. As the identities are derived from the fingerprint of application executables or bytecode (e.g., JAR), such policies remain unchanged under microservice auto-scaling.

• User-identity-based firewall. Consider a remote desktop service deployment, where multiple users log in to the same remote desktop frontend service, and from there access different backend services (e.g., read documents hosted in a remote file storage service, or open a remote SSH terminal). In such an environment, eZTrust can enable user-identity-based perimeterization, where remote desktop traffic generated by different users' login shells is tagged differently, so that the traffic is selectively allowed or blocked at different backend services.
This network-independent approach does not require complex user-to-IP-address mapping, unlike other commercial firewall solutions [53].

• Software stack hardening. Microservices often run on top of existing software stacks [23]. Since the individual software components in a stack are tightly coupled (e.g., MySQL and Apache/PHP servers in the LAMP stack), their communication within the stack needs to be properly hardened. As a possible hardening strategy, access to individual software components can be granted based on their fine-grained identities. For example, in the case of the LAMP stack, the MySQL server can accept only traffic generated by Apache server v2.4.37 with MySQL user identity bob. Any software component whose version has reached end-of-life can be blocked.

CHAPTER 7
RELATED WORKS

In this chapter, we discuss several alternative perimeterization approaches and their limitations.

7.1 Network Flow Rule-Based Perimeterization

Modern SDN-centric data centers are architecturally "edge-based," where all tenant microservices housed at the end-servers are interconnected via software switches running at these end-servers. The software switches are then programmed by the centralized SDN controller to steer traffic among the microservices as specified by tenants. Architecturally, these software switches are well positioned as distributed vantage points to inspect and control tenant traffic. This motivates data center operators to realize zero-trust perimeterization by leveraging the SDN programmability of the software switch [26, 36]. In this approach, tenant-defined perimeterization policies are translated into network flow rules installed in the switch, which allow or block tenant network traffic based on packet header fields (e.g., tenant source/destination IP addresses, port numbers) or flow state (e.g., new or established). The data center operator is responsible for installing and maintaining these network flow rules at the switches according to tenant policies.
Such a network-flow-rule-based approach has the following issues. For one, the packet header fields on which the perimeterization policies are based, such as source/destination IP addresses or port numbers, are not binding properties of microservices, but rather ephemeral attributes attached to them: they can be dynamically changed by tenants due to microservice reconfigurations, or by middleboxes as part of network operations (e.g., address/port translation by NATs/PATs and load balancers), or even spoofed by malicious attackers. Implementing perimeterization policies based on such ephemeral packet properties is intrinsically insecure and highly error-prone [27]. In addition, the number of flow rules to manage increases multiplicatively with the number of communicating microservices, as well as with the number of packet header fields relied upon by policies. Every time communication patterns change because a new microservice instance is created or an old one destroyed, the flow rules installed for the existing microservices need to be adjusted to ensure that the same policy intents are preserved. Considering the highly dynamic deployment nature of microservices, this creates a significant scalability challenge in flow rule management [45].

7.2 Transport-Level Perimeterization

First Packet Authentication [33] and Trireme [24] enforce perimeterization policies at the TCP layer. In these proposals, a cryptographically signed identity token is carried in the TCP SYN packet, and the rest of the TCP handshake proceeds only if access is granted based on that identity. Compared to the network-endpoint-based approach, transport-level perimeterization is more reliable, as the identity of a microservice is not tied to the underlying network but cryptographically verified. These approaches, however, have several drawbacks. First, these TCP-specific schemes cannot be generalized to non-TCP, connection-less traffic (e.g., QUIC [39] over UDP).
They also require heavy-duty cryptographic operations during the TCP handshake, resulting in high per-connection computation overhead. This is problematic with a large number of short-lived TCP flows, which are common for microservices, or under denial-of-service SYN attacks. Finally, since access control is applied only on a per-flow basis during the initial TCP handshake, these schemes can be vulnerable to session hijacking attacks [54].

7.3 Label-Based Perimeterization

A label-based perimeterization approach, Cilium [11], is similar to the previous transport-level approach in that its policy enforcement is based on a "network-independent" identity of microservices. In Cilium, the microservice identity carried in network packets is defined by a set of key-value-pair labels (e.g., role=frontend, user=joe) that are specified by its tenant. Unlike the transport-level approach, its policy enforcement is protocol-agnostic. Besides label-based policies, Cilium also supports layer-7 API-aware policies. The problem with this approach is that tenant-defined labels for microservices are not binding properties of the microservices, and thus do not provide protection across tenants. When a label is assigned to a microservice by its tenant, other tenants blindly trust that label, which makes it vulnerable to malicious tenants who attempt to impersonate other microservices using their labels. Also, the labels are defined statically at microservice launch time. Once associated, the labels remain with the microservice for the rest of its lifetime. Such static labels are problematic in dynamic policy environments.

7.4 DPI-Based Perimeterization

The aforementioned problems of the static label-based approach can be addressed by more dynamic, context-aware schemes [53, 25, 43]. In this approach, individual end servers, where microservices are hosted, operate a DPI engine to actively extract layer-7 contextual information (e.g., application/protocol types, versions, etc.)
from packet payload, so that traffic can be filtered based on contextual attributes. While this approach allows finer-grained and genuinely context-based policies, the DPI processing takes a heavy toll on CPU resources and sacrifices end-to-end packet delay. Besides, increasingly common encrypted traffic (e.g., HTTP/2) is not properly identified by DPI, and thus can bypass contextual policy filters.

7.5 API Gateway-Based Perimeterization

In a distributed microservice architecture, API gateways are often responsible for many critical management services for deployed microservices [50]. In particular, as a single entry point to the system, API gateways can provide API-level perimeterization using standard authentication and authorization techniques (e.g., OpenID, OAuth). However, this approach is only applicable to cross-service communication that is designed to go through the API gateways; it cannot regulate any other non-API traffic. Besides, it can be leveraged only for custom-designed microservices with built-in support for OpenID/OAuth-capable APIs, and is not a general solution for all types of microservices. Many microservices are realized with existing open-source software, which was not originally developed to be integrated with API gateways. For these, application-integrated API security is not a viable option. Table 7.1 summarizes the pros and cons of the different perimeterization approaches.

Table 7.1: Comparison of existing perimeterization approaches.

Property                          Network-endpoint  Transport-level  Label-based  DPI-based  API gateway  eZTrust
Policy management complexity      Bad               Good             Good         Good       Good         Good
Reliability of policy attributes  Bad               Good             Bad          Good       Good         Good
End-server resource overhead      Good              Bad              Good         Bad        Bad          Good
Protocol/application dependency   Good              Bad              Good         Bad        Bad          Good
Policy granularity and dynamism   Bad               Bad              Bad          Good       Bad          Good

eZTrust aims to address the limitations of these approaches.
7.6 Other Network-Independent Packet Processing

The authors of [44] propose a new type of policy routing based on process-level identifiers. While similar in spirit to eZTrust, this preliminary work does not provide a detailed description of the required data/control-plane processing, and the performance of their prototype implementation is very limited.

CHAPTER 8

DISCUSSION

This chapter discusses how eZTrust would perform in realistic settings and outlines other ideas that can be applied to eZTrust.

8.1 eZTrust in Realistic Deployments

Although we have not deployed eZTrust in a realistic distributed microservice environment, the eZTrust design is well suited for such deployments. As examples of realistic microservice deployments, consider the Netflix and Uber architectures, each reportedly composed of around 1,000 microservices [3, 4]. In a production environment, these 1,000 microservices can scale up to half a million containers on about 1,000 hosts, as reported by [5]. This roughly breaks down to 500 containers per host (the exact number will vary with a host's resources). To see whether eZTrust can support realistic production deployments, we need to understand (1) how well eZTrust operates when each host is running 500 containers, and (2) how well the centralized orchestrator handles a production environment. For (1), as the number of containers per host grows, the number of policy templates and policies remains unaffected, as these depend only on the number of distinct microservices running on a host. So, if a host runs 10 distinct microservices, which scale up to 500 containers on that host, the number of policy templates does not grow. The number of context events that eZTrust needs to handle, however, will certainly scale up with the number of containers. These events occur when a new container is launched or a new application is started inside a container.
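The claim that policy state tracks distinct microservices rather than container counts follows from how identities are derived. The sketch below, a simplification of the thesis's executable/bytecode fingerprinting with stand-in byte strings in place of real binaries, shows why 500 replicas of one image still yield a single identity:

```python
# Sketch of fingerprint-based workload identity (simplified; the byte
# strings below stand in for real ELF binaries).
import hashlib

def workload_identity(executable_bytes: bytes) -> str:
    # The identity is a fingerprint of the executable itself, so every
    # container running the same binary maps to the same identity.
    return hashlib.sha256(executable_bytes).hexdigest()[:16]

frontend = b"\x7fELF...frontend-v1.0"    # hypothetical binary contents
catalogue = b"\x7fELF...catalogue-v2.3"

# 500 auto-scaled replicas of the frontend still yield one identity,
# hence one policy template; distinct binaries yield distinct identities.
identities = {workload_identity(frontend) for _ in range(500)}
print(len(identities))                                              # 1
print(workload_identity(frontend) != workload_identity(catalogue))  # True
```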
Although the CPU overhead grows as the number of context events increases, we believe that event-based context tracing leveraging eBPF is lightweight enough to support 500 containers per host without overwhelming the host resources. For (2), since eZTrust leverages a multipurpose orchestrator such as Kubernetes, we need to measure the impact that the eZTrust framework would have on the orchestrator in realistic production environments. As the number of containers grows on a given host, the number of messages exchanged between the Context Manager (running on the local node) and the orchestrator grows linearly. As mentioned earlier in Chapter 3, the orchestrator does not broadcast contexts to all hosts; rather, context is fetched from the orchestrator by the hosts on an as-needed basis. This helps keep the load on the orchestrator manageable even in dense production environments. For the distribution of policies and templates, the orchestrator can choose not to replicate the same set of policies and templates on every host, but rather distribute to each host only those templates and policies needed to protect the microservices on that host. Moreover, eZTrust scales well to support such realistic deployments in terms of a manageable policy rule table, flexibility in defining policies, and low-overhead policy enforcement. This is because, in eZTrust, packets are classified into flows not based on packet header fields, but based on microservice contexts. This makes eZTrust intrinsically more scalable than network-endpoint-based perimeterization, as traffic from distinct microservice instances carrying the same contexts (e.g., due to auto-scaling) can be processed as a single flow.

8.2 Tag Anonymization

We assume that a generated tag is placed in a well-known packet header field (e.g., an IP/TCP option or tunnel header) as plaintext.
This can be justified if tenant microservices communicate with one another only through the end servers' network stack, which is controlled by the infrastructure and is not compromised under our threat model. In other words, tenant microservices cannot artificially inject or modify the tags in their traffic (e.g., by using raw sockets to bypass the network stack). One way to prevent such tag forgery or impersonation is to deny tenant microservices access to raw sockets as an infrastructure-wide policy. This is in fact one of the standard microservice security practices recommended to prevent packet spoofing [16]. If such a restriction is not an option for any reason, one can anonymize the tags using traditional approaches such as shared-secret-based symmetric encryption [42]. Note that in this case, not the entire packet payload, but only the small tag, needs to be encrypted. Such tag encryption/decryption can be done efficiently with modern SIMD instruction sets (e.g., SSE/SSE2, AVX/AVX2). Secrets can be shared through the existing orchestrators' secret management and distribution interfaces [40, 14].

8.3 Smart NIC Offload

To minimize the performance overhead introduced by per-packet operations for tagging/verification and possible encryption/decryption, one can leverage smart NICs. As eBPF is embraced as a mainline kernel feature, next-generation smart NICs (e.g., Netronome Agilio [18]) have already started to support eBPF offload, allowing unmodified eBPF programs, along with their maps, to be transparently offloaded to the NICs [38]. As of this writing, however, eBPF offload is still experimental, supporting only ingress packet processing and a limited set of eBPF helper APIs. For example, slow-path handling via the perf-event interface cannot be offloaded. We plan to explore the full potential of eBPF offload as the support improves.
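The shared-secret tag anonymization of Section 8.2 can be sketched in userspace. This is an illustration only, not production cryptography: a real deployment would use a vetted symmetric cipher (e.g., AES), with keys distributed through the orchestrator's secret interfaces; the HMAC-derived keystream and 4-byte tag layout here are assumptions for the sketch.

```python
# Toy sketch of shared-secret tag anonymization (Section 8.2).
# NOT production cryptography; shown only to illustrate that just the
# small per-packet tag, not the payload, needs to be transformed.
import hashlib, hmac, os

def transform_tag(tag: bytes, secret: bytes, nonce: bytes) -> bytes:
    # Derive a per-packet keystream from the shared secret and a nonce,
    # then XOR it over the tag (a toy stream cipher; self-inverse).
    keystream = hmac.new(secret, nonce, hashlib.sha256).digest()
    return bytes(t ^ k for t, k in zip(tag, keystream))

secret = os.urandom(32)            # shared via the orchestrator's secret store
nonce = os.urandom(8)              # per-packet value carried alongside the tag
tag = (0x2A).to_bytes(4, "big")    # hypothetical 4-byte context tag

wire_tag = transform_tag(tag, secret, nonce)
assert transform_tag(wire_tag, secret, nonce) == tag  # receiver recovers the tag
```

Because only four bytes are transformed per packet, the per-packet cost stays small, which is consistent with the SIMD-acceleration argument above.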
8.4 Platform Compatibility

The network-independent approach taken by eZTrust does not make it incompatible with existing network-endpoint-based perimeterization. Extending eZTrust to support network-endpoint-based policies is straightforward: packet header fields are treated as additional application contexts, and prefix match is enabled in the policy map lookup. Alternatively, if context-based and network-endpoint-based policies are not intertwined with complex priorities, one can separate the two policy sets and deploy a hybrid solution, where eZTrust co-exists with network-endpoint-based perimeterization and is responsible only for context-aware policies at the first ingress point, in a bump-in-the-wire fashion.

CHAPTER 9

CONCLUSION AND FUTURE WORK

Traditional network-based perimeterization approaches are not capable of securing highly dynamic, microservices-based data center environments, and fall short of protecting these modern workloads from emerging, sophisticated attacks. In this thesis, we present eZTrust, a network-independent perimeterization solution for microservices, which shifts perimeterization targets from network endpoints to fine-grained, context-rich microservice identities. To this end, we tap into the growing wealth of microservice tracing data made available by eBPF, and repurpose it for perimeterization. In doing so, we adopt OVS-like flow-based packet verification, where packets are classified into flows not based on packet header fields, but based on microservice contexts. We show that eZTrust's overhead is minimal and that its design supports realistic microservice deployment scenarios. While eZTrust provides a promising approach to perimeterization for microservices, we believe it will remain an ongoing effort.
Introducing additional sources of context for tenant workloads, optimizing policy translation, and tuning tag granularity are a few areas where improvements can be made. For example, in the current design, per-packet tags are instantiated at process granularity (i.e., distinct tags per process). In the future, tags could be defined at coarser (microservice/app-level) or finer (transport-connection-level) granularities. The implication of varying tag granularity is twofold. On one hand, coarser-grained tags would not support fine-grained policies based on detailed contexts. For example, per-microservice/per-app tags would not support policies that regulate remote desktop traffic generated by different login shells. On the other hand, finer-grained tags would incur more frequent slow-path processing. For example, with connection-level tags, which can carry session-level contexts (e.g., a user context per database session), every single connection opened by a process would trigger slow-path processing on the receiving end. In a sense, the epoch adopted by eZTrust can be considered a way to encode finer-grained contexts than process-level attributes at the cost of additional slow-path processing. Given this inherent trade-off in tag granularity, an alternative way to improve policy granularity while minimizing slow-path handling is to introduce prefix matching in the context map lookup. With prefix matching, the tag space is no longer flat but hierarchically defined (e.g., delineated into microservice ID, process ID, and port number fields), and the context map will contain wildcarded tags as keys. Then, depending on the granularity of contexts needed for policy enforcement, the Context Manager can push appropriately wildcarded tags into the context map during slow-path processing, so that any subsequent traffic from the same microservice can avoid the slow path. Prefix match for eBPF maps is already supported (BPF_MAP_TYPE_LPM_TRIE).
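The hierarchical-tag lookup sketched above can be prototyped in userspace. The following mimics BPF_MAP_TYPE_LPM_TRIE semantics with a linear scan over prefix lengths; the 16-bit field widths, the service/process IDs, and the map contents are hypothetical:

```python
def make_tag(svc: int, pid: int, port: int) -> int:
    # Hierarchical 48-bit tag: 16-bit microservice ID | 16-bit process ID
    # | 16-bit port (widths are illustrative, not eZTrust's actual layout).
    return (svc << 32) | (pid << 16) | port

# Context map keyed by (prefix_bits, prefix_value); a wildcarded key lets
# one entry cover all processes/ports of a microservice, avoiding the
# slow path for subsequent traffic from the same microservice.
context_map = {
    (16, 7): "svc=frontend",             # match on microservice ID only
    (32, (7 << 16) | 42): "svc=frontend,pid=42",  # microservice + process
}

def lookup(tag: int):
    # Longest-prefix match: the most specific prefix wins, as with
    # BPF_MAP_TYPE_LPM_TRIE in the kernel.
    for bits in (48, 32, 16):
        key = (bits, tag >> (48 - bits))
        if key in context_map:
            return context_map[key]
    return None

print(lookup(make_tag(7, 42, 8080)))  # svc=frontend,pid=42
print(lookup(make_tag(7, 99, 8080)))  # svc=frontend (wildcard fallback)
print(lookup(make_tag(9, 1, 80)))     # None (no matching prefix)
```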
All in all, we believe that eZTrust can further benefit from advances in microservice tracing technologies, as well as ongoing improvements in the world of eBPF and other software switch technologies.

REFERENCES

[1] FBI Snort Signatures (Heartbleed). https://ics-cert.us-cert.gov/UPDATE-FBI-Snort-Signatures-Heartbleed-April-2014, 2014.
[2] IO Visor bcc. https://github.com/iovisor/bcc, 2015.
[3] Netflix Microservices. https://smartbear.com/blog/develop/why-you-cant-talk-about-microservices-without-ment/, 2016.
[4] Uber Microservices. http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html, 2016.
[5] Netflix Titus. https://medium.com/netflix-techblog/titus-the-netflix-container-management-platform-is-now-open-source-f868c9fb5436, 2016.
[6] A Thorough Introduction to eBPF. https://lwn.net/Articles/740157/, 2017.
[7] Google BeyondCorp. https://cloud.google.com/beyondcorp/, 2017.
[8] Palo Alto: Zero Trust. https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture, 2017.
[9] Sock Shop – A Microservices Demo Application. https://microservices-demo.github.io, 2017.
[10] The Heartbleed Bug. http://heartbleed.com, 2017.
[11] Cilium.
https://cilium.io, 2018.
[12] Cisco Global Cloud Index: Forecast and Methodology, 2016–2021. White Paper, Cisco Systems, Inc., 2018.
[13] Clair: Vulnerability Static Analysis for Containers. https://github.com/coreos/clair/, 2018.
[14] Distribute Credentials Securely Using Secrets. https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/, 2018.
[15] Docker-SDK. https://docker-py.readthedocs.io/en/stable/, 2018.
[16] Docker Security. https://docs.docker.com/engine/security/security/, 2018.
[17] Lumogon. https://github.com/puppetlabs/lumogon, 2018.
[18] Netronome Agilio CX. https://www.netronome.com/products/agilio-cx/, 2018.
[19] OVSDB:Security Groups - OpenDaylight Project. https://wiki.opendaylight.org/view/OVSDB:Security_Groups, 2018.
[20] Prometheus. https://prometheus.io, 2018.
[21] Snort. https://snort.org, 2018.
[22] Sysdig. https://sysdig.com, 2018.
[23] Tech Stacks. https://stackshare.io/stacks, 2018.
[24] Trireme. https://github.com/aporeto-inc/trireme-lib, 2018.
[25] vArmour DSS Distributed Security System. https://www.varmour.com/pdf/data-sheet/vArmour-DSS-Data-Sheet.pdf, 2018.
[26] VMware NSX. http://www.vmware.com/products/nsx.html, 2018.
[27] Andersen, D. G., Balakrishnan, H., Feamster, N., Koponen, T., Moon, D., and Shenker, S. Accountable Internet Protocol (AIP). In Proc. ACM SIGCOMM (2008).
[28] Barth, D., and Gilman, E. Zero Trust Networks. O'Reilly Media, Inc., 2017.
[29] Beattie, S., Arnold, S., Cowan, C., Wagle, P., Wright, C., and Shostack, A. Timing the Application of Security Patches for Optimal Uptime. In Proc. USENIX LISA (2002).
[30] Borkmann, D. Advanced Programmability and Recent Updates with tc's cls_bpf. In Proc. NetDev 1.2 (2016).
[31] Borkmann, D. On Getting tc Classifier Fully Programmable with cls_bpf. In Proc. NetDev 1.1 (2016).
[32] Catuogno, L., and Galdi, C. Ensuring Application Integrity: A Survey on Techniques and Tools. In Proc.
International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (2015).
[33] DeCusatis, C., Liengtiraphan, P., Sager, A., and Pinelli, M. Implementing Zero Trust Cloud Networks with Transport Access Control and First Packet Authentication. In Proc. IEEE International Conference on Smart Cloud (2016).
[34] Haim, R. B. NSX Identity Firewall - Deep Dive. http://www.routetocloud.com/2016/11/nsx-identity-firewall-deep-dive/, 2016.
[35] Hayden, C. M., Saur, K., Smith, E. K., Hicks, M., and Foster, J. S. Kitsune: Efficient, General-Purpose Dynamic Software Updating for C. ACM Transactions on Programming Languages and Systems (TOPLAS) 36, 4 (2014).
[36] Jin, C., Srivastava, A., and Zhang, Z.-L. Understanding Security Group Usage in a Public IaaS Cloud. In Proc. IEEE INFOCOM (2016).
[37] Kashyap, S., Min, C., Lee, B., and Kim, T. Instant OS Updates via Userspace Checkpoint-and-Restart. In Proc. USENIX ATC (2016).
[38] Kicinski, J., and Viljoen, N. eBPF Hardware Offload to SmartNICs: cls_bpf and XDP. In Proc. NetDev 1.2 (2016).
[39] Langley, A., et al. The QUIC Transport Protocol: Design and Internet-Scale Deployment. In Proc. ACM SIGCOMM (2017).
[40] Li, Y. Introducing Docker Secrets Management. https://blog.docker.com/2017/02/docker-secrets-management/, 2017.
[41] Liu, B., Lin, Y., and Chen, Y. Quantitative Workload Analysis and Prediction using Google Cluster Traces. In Proc. IEEE INFOCOM Workshop on Big Data Sciences, Technologies and Applications (2016).
[42] McKay, K. A., Bassham, L., Turan, M. S., and Mouha, N. Report on Lightweight Cryptography. NIST, https://doi.org/10.6028/NIST.IR.8114, 2017.
[43] Mekky, H., Hao, F., Mukherjee, S., Zhang, Z.-L., and Lakshman, T. Application-aware Data Plane Processing in SDN. In Proc. ACM HotSDN (2014).
[44] Michel, O., and Keller, E. Policy Routing using Process-Level Identifiers. In Proc. IEEE International Symposium on Software Defined Systems (2016).
[45] Moshref, M., Yu, M., Sharma, A., and Govindan, R. Scalable Rule Management for Data Centers. In Proc. USENIX NSDI (2013).
[46] Newman, S. Building Microservices: Designing Fine-Grained Systems. O'Reilly Media, Inc., 2015.
[47] Pettit, J., Gross, J., Pfaff, B., Casado, M., and Crosby, S. Virtual Switching in an Era of Advanced Edges. In Proc. DC CAVES Workshop (2010).
[48] Pfaff, B., et al. The Design and Implementation of Open vSwitch. In Proc. USENIX NSDI (2015).
[49] Rescorla, E. Security Holes... Who Cares? In Proc. USENIX Security Symposium (2003).
[50] Richardson, C., and Smith, F. Microservices: From Design to Deployment. Nginx, Inc., 2016.
[51] Srinivasan, V., Suri, S., and Varghese, G. Packet Classification using Tuple Space Search. In Proc. ACM SIGCOMM (1999).
[52] Tak, B., Isci, C., Duri, S., Bila, N., Nadgowda, S., and Doran, J. Understanding Security Implications of Using Containers in the Cloud. In Proc. USENIX ATC (2017).
[53] Vanveerdeghem, S. VMware NSX Context-Aware Microsegmentation. https://blogs.vmware.com/networkvirtualization/2018/02/context-aware-micro-segmentation-innovative-approach-application-user.html, 2018.
[54] Zheng, O., Poon, J., and Beznosov, K. Application-based TCP Hijacking. In Proc. ACM European Workshop on System Security (2009).



