Jianer Zhou, Qinghua Wu, Zhenyu Li, S. Uhlig, P. Steenkiste, Jian Chen, Gaogang Xie
TCP is an important factor affecting user-perceived performance of Internet applications. Diagnosing the causes behind TCP performance issues in the wild is essential for better understanding the current shortcomings in TCP. This paper presents a TCP flow performance analysis framework that classifies causes of TCP stalls. The framework forms the basis of a tool that is publicly available to the research community. We use our tool to analyze packet-level traces of three services (cloud storage, software download and web search) deployed by a popular Chinese service provider. We find that as many as 20% of the flows are stalled for half of their lifetime. Network-related causes, especially timeout retransmission, dominate the stalls. A breakdown of the causes for timeout retransmission stalls reveals that double retransmission and tail retransmission are among the top contributors. The importance of these causes, however, depends on the specific service. We also propose S-RTO, a mechanism that mitigates timeout retransmission stalls. S-RTO has been deployed on production front-end servers and results show that it is effective at improving TCP performance, especially for short flows.
Demystifying and mitigating TCP stalls at the server side. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836094.
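To illustrate the kind of analysis such a framework performs, a minimal stall detector can flag gaps between consecutive packet timestamps that exceed a threshold and report the fraction of a flow's lifetime spent stalled (the statistic behind the "20% of flows stalled for half of their lifetime" finding). This is a hypothetical sketch, not the paper's classifier, which uses RTT- and RTO-aware criteria and attributes each stall to a specific cause.

```python
def stall_fraction(send_times, min_stall=0.2):
    """Fraction of a flow's lifetime spent in stalls.

    A "stall" here is any gap between consecutive packet timestamps
    longer than `min_stall` seconds -- an illustrative threshold, not
    the paper's RTT-based definition.
    """
    if len(send_times) < 2:
        return 0.0
    times = sorted(send_times)
    lifetime = times[-1] - times[0]
    if lifetime <= 0:
        return 0.0
    stalled = 0.0
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap > min_stall:
            stalled += gap
    return stalled / lifetime
```

For example, a flow with packet timestamps [0, 0.05, 0.1, 1.1, 1.15] spends its one-second gap stalled, roughly 87% of its 1.15 s lifetime.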
B. Chandrasekaran, Georgios Smaragdakis, A. Berger, M. Luckie, Keung-Chi Ng
While the performance characteristics of access networks and end-user-to-server paths are well-studied, measuring the performance of the Internet's core remains, largely, an uncharted territory. With more content being moved closer to the end-user, server-to-server paths have increased in length and have a significant role in dictating the quality of services offered by content and service providers. In this paper, we present a large-scale study of the effects of routing changes and congestion on the end-to-end latencies of server-to-server paths in the core of the Internet. We exploit the distributed platform of a large content delivery network, composed of thousands of servers around the globe, to assess the performance characteristics of the Internet's core. We conduct measurement campaigns between thousands of server pairs, in both forward and reverse directions, and analyze the performance characteristics of server-to-server paths over both long durations (months) and short durations (hours). Our analyses show that there is a large variation in the frequency of routing changes. While routing changes typically have marginal or no impact on the end-to-end round-trip times (RTTs), 20% of them impact IPv4 (IPv6) paths by at least 26 ms (31 ms). We highlight how dual-stack servers can be utilized to reduce server-to-server latencies by up to 50 ms. Our results indicate that significant daily oscillations in the end-to-end RTTs of server-to-server paths are not the norm, but do occur, and, in most cases, contribute about a 20 ms increase in server-to-server path latencies.
A server-to-server view of the internet. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836125.
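The latency impact of a routing change can be quantified, in the simplest case, by comparing median RTTs in windows before and after the change. A hedged sketch of that comparison (not the paper's measurement pipeline, which also has to detect the path changes and separate them from congestion effects):

```python
import statistics

def rtt_shift_ms(rtts_before, rtts_after):
    """Median RTT change (ms) across a detected routing change.

    Positive values mean the new path is slower. Inputs are assumed
    to be lists of RTT samples, in milliseconds, taken in windows
    before and after the change.
    """
    return statistics.median(rtts_after) - statistics.median(rtts_before)
```

Using medians rather than means keeps a single congestion-induced spike in either window from masquerading as a routing-change effect.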
Software-defined networks can enable a variety of concurrent, dynamically instantiated measurement tasks that provide fine-grained visibility into network traffic. Recently, there have been many proposals for using sketches for network measurement. However, sketches in hardware switches use constrained resources such as SRAM memory, and the accuracy of measurement tasks is a function of the resources devoted to them on each switch. This paper presents SCREAM, a system for allocating resources to sketch-based measurement tasks that ensures a user-specified minimum accuracy. SCREAM estimates the instantaneous accuracy of tasks so as to dynamically adapt the allocated resources for each task. Thus, by finding the right amount of resources for each task on each switch and correctly merging sketches at the controller, SCREAM can multiplex resources among network-wide measurement tasks. Simulations with three measurement tasks (heavy hitter, hierarchical heavy hitter, and super source/destination detection) show that SCREAM can support more measurement tasks with higher accuracy than existing approaches.
M. Moshref, Minlan Yu, R. Govindan, Amin Vahdat. SCREAM: sketch resource allocation for software-defined measurement. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836099.
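For context on the memory/accuracy trade-off SCREAM manages, the count-min sketch is the canonical structure: its overestimation error shrinks as width grows, so the memory a task is granted directly sets its accuracy. A minimal version follows (illustrative only; the width, depth, and hash choice are assumptions, and SCREAM's contribution is estimating accuracy online and reallocating memory across tasks and switches, not the sketch itself):

```python
import hashlib

class CountMinSketch:
    """A minimal count-min sketch: approximate per-key counts in fixed memory."""

    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # One bucket per row, chosen by a row-salted hash of the key.
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{key}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, key, count=1):
        for row, col in self._buckets(key):
            self.rows[row][col] += count

    def estimate(self, key):
        # Hash collisions can only inflate counts, so the minimum over
        # rows is an upper bound that is tightest with more memory.
        return min(self.rows[row][col] for row, col in self._buckets(key))
```

Shrinking `width` to fit a smaller SRAM budget increases collision probability and thus estimation error, which is exactly the knob a resource allocator like SCREAM has to reason about.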
The advent of virtualization, containerization and the Internet of Things (IoT) is leading to an explosive growth in the number of endpoints. Ideally, with Software Defined Networking (SDN), one would like to customize packet handling for each of these endpoints or applications. However, this typically leads to a large growth in forwarding state. This growth is avoided in current networks by using aggregation, which trades off fine-grained control of micro-flows for reduced forwarding state. It is worthwhile to ask whether the benefits of micro-flow control can be retained without a large growth in forwarding state and without using aggregation. In this paper we describe an incrementally deployable, SDN-friendly packet forwarding mechanism called Path Switching that achieves this by compactly encoding a packet's path through the network in the packet's existing address fields. Path Switching provides the same reduction in forwarding state as source routing while retaining the benefits and use of fixed-size packet headers and existing protocols. We have extended Open vSwitch (OVS) to transparently support Path Switching as well as an inline service component for folding middlebox services into OVS. The extensions include advanced failover mechanisms like fast reroute. These extensions require no protocol changes as Path Switching leaves header formats unchanged.
A. Hari, T. V. Lakshman, G. Wilfong. Path switching: reduced-state flow handling in SDN using path information. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836121.
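The core idea, compactly encoding a path in a fixed-width field, can be sketched by packing per-hop output ports into a single integer: each switch pops its own port and forwards the remainder, so no per-flow state is needed at intermediate hops. The 4-bits-per-hop layout below is an illustrative assumption, not the paper's actual encoding into address fields:

```python
def encode_path(ports, bits_per_hop=4):
    """Pack a hop-by-hop list of output ports into a single integer label."""
    label = 0
    for port in reversed(ports):
        if not 0 <= port < (1 << bits_per_hop):
            raise ValueError(f"port {port} does not fit in {bits_per_hop} bits")
        label = (label << bits_per_hop) | port
    return label

def next_hop(label, bits_per_hop=4):
    """Pop this switch's output port; return (port, label for the next hop)."""
    return label & ((1 << bits_per_hop) - 1), label >> bits_per_hop
```

A three-hop path such as [3, 7, 1] round-trips cleanly: the ports pop off in order and the label ends at zero, while the label itself never grows beyond the fixed field width as long as the path length and port count stay within budget.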
Interactive applications like web browsing are sensitive to latency. Unfortunately, TCP consumes significant time in its start-up phase and loss recovery. Existing sender-side optimizations use more aggressive start-up strategies to reduce latency, but at the same time they harm safety in the sense that they can damage co-existing flows' performance and potentially the network's overall ability to deliver data. In this paper, we experimentally compare existing solutions' latency performance and, more importantly, the trade-off between latency and safety at both the flow level and the application level. We argue that existing solutions are still operating away from the sweet spot on this trade-off plane. Based on the diagnosis of existing solutions, we introduce Halfback, a new short-flow transmission mechanism that operates on a better latency-safety trade-off point: Halfback achieves lower latency than the lowest-latency previous solution and at the same time significantly better safety. As Halfback is TCP-friendly and requires only sender-side changes, it is feasible to deploy.
Qingxi Li, M. Dong, Brighten Godfrey. Halfback: running short flows quickly and safely. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836107.
Dhruvi Sharma, Rishabh Poddar, Kshiteej S. Mahajan, Mohan Dhawan, V. Mann
With the majority of the world's data and computation handled by cloud-based systems, cloud management stacks such as Apache's CloudStack, VMware's vSphere and OpenStack have become an increasingly important component in cloud software. However, like every other complex distributed system, these cloud stacks are susceptible to faults, whose root cause is often hard to diagnose. We present HANSEL, a system that leverages non-intrusive network monitoring to expedite root cause analysis of such faults manifesting in OpenStack operations. HANSEL is fast and accurate, and remains precise even under conditions of stress.
Hansel: diagnosing faults in OpenStack. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836108.
Juan Miguel Carrascosa, Jakub Mikians, R. C. Rumín, Vijay Erramilli, Nikolaos Laoutaris
Online Behavioural targeted Advertising (OBA) has risen in prominence as a method to increase the effectiveness of online advertising. OBA operates by associating tags or labels to users based on their online activity and then using these labels to target them. This rise has been accompanied by privacy concerns from researchers, regulators and the press. In this paper, we present a novel methodology for measuring and understanding OBA in the online advertising market. We rely on training artificial online personas representing behavioural traits like 'cooking', 'movies', 'motor sports', etc. and build a measurement system that is automated, scalable and supports testing of multiple configurations. We observe that OBA is a frequent practice and notice that categories valued more by advertisers are more intensely targeted. In addition, we provide evidence showing that the advertising market targets sensitive topics (e.g., religion or health) despite the existence of regulation that bans such practices. We also compare the volume of OBA advertising for our personas in two different geographical locations (US and Spain) and see little geographic bias in terms of intensity of OBA targeting. Finally, we check for targeting with do-not-track (DNT) enabled and discover that DNT is not yet enforced on the web.
I always feel like somebody's watching me: measuring online behavioural advertising. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836098.
V. Giotsas, Georgios Smaragdakis, B. Huffaker, M. Luckie, K. Claffy
Annotating Internet interconnections with robust physical coordinates at the level of a building facilitates network management including interdomain troubleshooting, but also has practical value for helping to locate points of attacks, congestion, or instability on the Internet. But, like most other aspects of Internet interconnection, the geophysical locus of an interconnection is generally not public; the facility used for a given link must be inferred to construct a macroscopic map of peering. We develop a methodology, called constrained facility search, to infer the physical interconnection facility where an interconnection occurs among all possible candidates. We rely on publicly available data about the presence of networks at different facilities, and execute traceroute measurements from more than 8,500 available measurement servers scattered around the world to identify the technical approach used to establish an interconnection. A key insight of our method is that inference of the technical approach for an interconnection sufficiently constrains the number of candidate facilities such that it is often possible to identify the specific facility where a given interconnection occurs. Validation via private communication with operators confirms the accuracy of our method, which outperforms heuristics based on naming schemes and IP geolocation. Our study also reveals the multiple roles that routers play at interconnection facilities; in many cases the same router implements both private interconnections and public peerings, in some cases via multiple Internet exchange points. Our study also sheds light on peering engineering strategies used by different types of networks around the globe.
Mapping peering interconnections to a facility. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836122.
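The constraint step at the heart of the method can be pictured as set intersection over facility-presence data: every network that a traceroute places adjacent to an interconnection must be present at the candidate facility. A hedged sketch under assumed data shapes (the paper's algorithm adds further constraints, such as the inferred interconnection method, and iterates with targeted measurements):

```python
def candidate_facilities(presence, near_asns):
    """Intersect the facility sets of all networks observed at a link.

    `presence` maps ASN -> set of facilities where the network is
    present (e.g. from public databases such as PeeringDB), and
    `near_asns` lists the networks a traceroute places at the
    interconnection. Both shapes are hypothetical; this is not the
    paper's exact algorithm.
    """
    sets = [presence[asn] for asn in near_asns if asn in presence]
    if not sets:
        return set()
    out = set(sets[0])
    for s in sets[1:]:
        out &= s
    return out
```

When the intersection shrinks to a single facility, the interconnection is pinned down; when several candidates remain, additional constraints or measurements are needed, which is what the constrained-search iteration provides.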
An Wang, Yang Guo, F. Hao, T. V. Lakshman, Songqing Chen
We study how to provide fine-grained, flexible traffic monitoring in the Open vSwitch (OVS). We argue that the existing OVS monitoring tools are neither flexible nor sufficient for supporting many monitoring applications. We propose UMON, a mechanism that decouples monitoring from forwarding and offers flexible, fine-grained traffic statistics. We describe a prototype implementation of UMON that integrates well with the OVS architecture. Finally, we evaluate performance using the prototype and illustrate UMON's efficiency with example use cases such as port scan detection.
UMON: flexible and fine-grained traffic monitoring in Open vSwitch. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT 2015), December 2015. DOI: 10.1145/2716281.2836100.