
Latest publications: 2021 IEEE 29th International Conference on Network Protocols (ICNP)

DOVE: Diagnosis-driven SLO Violation Detection
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651986
Yiran Lei, Yu Zhou, Yunsenxiao Lin, Mingwei Xu, Yangyang Wang
Service-level objectives (SLOs), typically expressed as network performance requirements on delay and packet loss, should be guaranteed for a growing set of high-performance applications, e.g., telesurgery and cloud gaming. However, SLO violations are common and destructive in today's network operation. Detection and diagnosis, i.e., monitoring performance to discover anomalies and analyzing the causality of SLO violations respectively, are crucial for fast recovery. Unfortunately, existing diagnosis approaches require exhaustive causal information to function, while existing detection tools incur large overhead or provide only limited information for diagnosis. This paper presents DOVE, a diagnosis-driven SLO violation detection system with high accuracy and low overhead. The key idea is to identify and report the information needed for diagnosis, along with SLO violation alerts, from the data plane selectively and efficiently. Network segmentation is introduced to balance scalability and accuracy. Novel algorithms for measuring packet loss and percentile delay are implemented entirely in the data plane, without the involvement of the control plane, for fine-grained SLO detection. We implement and deploy DOVE on Tofino and the P4 software switch (BMv2) and show its effectiveness with a use case. The reported SLO violation alerts and the accompanying diagnostic information match ground truth with high accuracy (>97%). Our evaluation also shows that DOVE introduces up to two orders of magnitude less traffic overhead than NetSight. In addition, memory utilization and the required processing capacity are low enough for deployment in real network topologies.
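The percentile-delay check above can be pictured with a fixed-bucket delay histogram, the kind of structure that maps onto data-plane register arrays. This is an illustrative sketch of our own, not DOVE's actual algorithm; the bucket bounds and class name are hypothetical.

```python
# Hypothetical illustration (not DOVE's algorithm): decide whether a
# percentile-delay SLO is violated using only fixed delay buckets.
BUCKET_BOUNDS_US = [50, 100, 200, 400, 800, 1600, 3200, float("inf")]

class PercentileSLOCheck:
    def __init__(self, slo_delay_us, percentile):
        self.slo_delay_us = slo_delay_us
        self.percentile = percentile
        self.counts = [0] * len(BUCKET_BOUNDS_US)
        self.total = 0

    def record(self, delay_us):
        # Bump the first bucket whose upper bound covers this delay.
        for i, bound in enumerate(BUCKET_BOUNDS_US):
            if delay_us <= bound:
                self.counts[i] += 1
                break
        self.total += 1

    def violated(self):
        # The SLO holds iff at least `percentile` of packets fall in
        # buckets whose upper bound is within the SLO delay.
        within = sum(c for c, b in zip(self.counts, BUCKET_BOUNDS_US)
                     if b <= self.slo_delay_us)
        return within / self.total < self.percentile
```

A switch needs only per-bucket counters here, not per-packet timestamps, which is what makes the check data-plane friendly.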
Citations: 1
Welcome Message from the ICNP 2021 TPC Chairs
Pub Date : 2021-11-01 DOI: 10.1109/icnp52444.2021.9651933
Citations: 0
Poster: Accelerate Cross-Device Federated Learning With Semi-Reliable Model Multicast Over The Air
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651964
Yunzhi Lin, Shouxi Luo
To achieve efficient model multicast for cross-device Federated Learning (FL) over shared wireless channels, we propose SRMP, a transport protocol that performs semi-reliable model multicast over the air by leveraging existing PHY-aided wireless multicast techniques. Our preliminary study shows that, with these novel designs, SRMP can significantly reduce the communication time of each training round.
Citations: 1
Loss-freedom, Order-preservation and No-buffering: Pick Any Two During Flow Migration in Network Functions
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651954
Radhika Sukapuram, Ranjan Patowary, G. Barua
Network Functions (NFs) provide security and optimization services to networks by examining and modifying packets and by collecting information. When NFs need to be scaled out to manage higher load, or scaled in to conserve energy, flows need to be migrated from one instance of an NF, called the source instance, to another, called the destination instance, or from one chain of instances to another. Before flows are migrated, the state information associated with the source instance needs to be migrated to the destination instance. Meanwhile, for some stateful NFs to function correctly, packets that arrive at the destination instance must be either buffered or dropped until the state information has been migrated; for others, the destination NF may continue to function. We define the properties of Loss-freedom, where the flow migration system does not drop packets; No-buffering, where it does not buffer packets; and Order-preservation, where it processes packets in the same manner as the source NF would have, had there been no flow migration. We formalize these properties for the first time and prove that it is impossible for a flow migration algorithm in stateful NFs to guarantee all three properties of Loss-freedom (L), Order-preservation (O), and No-buffering (N) during flow migration, even if messages or packets are not lost. We demonstrate how existing algorithms behave with respect to these properties and prove that the properties are compositional.
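The trade-off can be seen in a toy form: for packets that arrive while state is still in flight, each of the three possible treatments forfeits exactly one of the properties. This is an illustration of the property definitions, not the paper's formal model.

```python
# Toy illustration (ours, not the paper's model): the three ways to treat
# packets arriving during the migration window, and the property each gives up.
def migrate_window(packets, policy):
    """Return (processed_early, buffered, dropped) for mid-migration packets."""
    if policy == "drop":      # forfeits Loss-freedom (L)
        return [], [], list(packets)
    if policy == "buffer":    # forfeits No-buffering (N)
        return [], list(packets), []
    if policy == "process":   # forfeits Order-preservation (O): packets are
        return list(packets), [], []  # handled before the migrated state arrives
    raise ValueError(policy)
```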
Citations: 0
Demo: Simple Deep Packet Inspection with P4
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651973
Sahil Gupta, D. Gosain, Garegin Grigoryan, Minseok Kwon, H. B. Acharya
The P4 language allows "protocol-independent packet parsing" in network switches and makes many operations possible in the data plane. But P4 is not built for Deep Packet Inspection: it can only "parse" well-defined packet headers, not free-form headers as seen in HTTPS, etc. Thus some very important use cases, such as application-layer firewalls, are considered impossible for P4. This demonstration shows that this limitation is not strictly true: switches that support only standard P4 are able to independently perform tasks such as blocking specific URLs (without using non-standard "extern" components, help from the SDN controller, or rerouting to a firewall). As more Internet infrastructure becomes SDN-compatible, switches may in the future perform simple application-layer firewall tasks.
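One way to picture the trick, sketched here in Python rather than P4 (and not the authors' actual code): treat fixed-width slices of the payload as if they were headers and match them against a blocklist table. The blocklist entry and window size are hypothetical.

```python
# Hypothetical sketch: sliding a fixed-width "header" window over the payload,
# the way a parser can re-parse a bounded number of fixed-size slices.
BLOCKED = {b"Host: bad.example"}  # hypothetical blocklist entry (17 bytes)

def should_block(payload: bytes, window: int = 17) -> bool:
    for off in range(max(1, len(payload) - window + 1)):
        if payload[off:off + window] in BLOCKED:
            return True
    return False
```

A real switch would bound the number of slices it inspects; the point is only that exact-match tables over fixed-size slices suffice for simple URL blocking.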
Citations: 1
StaR: Breaking the Scalability Limit for RDMA
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651935
Xizheng Wang, Guo Chen, Xijin Yin, Huichen Dai, Bojie Li, Binzhang Fu, Kun Tan
Due to its superior performance, Remote Direct Memory Access (RDMA) has been widely deployed in data center networks. It provides applications with ultra-high throughput, ultra-low latency, and far lower CPU utilization than the TCP/IP software network stack. However, the connection state that must be stored on the RDMA NIC (RNIC), combined with the small NIC memory, results in poor scalability: performance drops significantly when the RNIC needs to maintain a large number of concurrent connections. We propose StaR (Stateless RDMA), which solves the scalability problem of RDMA by transferring state to the other communication end. Leveraging the asymmetric communication pattern of data center applications, StaR lets the communication end with low concurrency save state for the end with high concurrency, making the RNIC on the bottleneck side stateless. We have implemented StaR on an FPGA board with a 10 Gbps network port and evaluated its performance on a testbed of 9 machines, all equipped with StaR NICs. The experimental results show that in high-concurrency scenarios, the throughput of StaR reaches up to 4.13x and 1.35x that of the original RNIC and the latest software-based solution, respectively.
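The state-transfer idea can be caricatured as follows (our own toy, not StaR's wire format): the side with few connections carries the per-connection state in each message, so the bottleneck side never stores it.

```python
# Toy illustration of offloading connection state to the peer: the server is
# stateless; sequence-number state travels with each request and reply.
def server_handle(request, carried_state):
    """All per-connection state arrives with the request; the updated state
    is returned for the low-concurrency peer to carry next time."""
    if request["seq"] != carried_state["next_seq"]:
        return {"ok": False}, carried_state  # unexpected seq: reject, keep state
    reply = {"ok": True, "echo": request["payload"]}
    return reply, {"next_seq": carried_state["next_seq"] + 1}
```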
Citations: 8
CELL: Counter Estimation for Per-flow Traffic in Streams and Sliding Windows
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651924
Rana Shahout, R. Friedman, Dolev Adas
Measurement capabilities are fundamental for a variety of network applications. Typically, recent data items are more relevant than old ones, a notion we can capture through a sliding-window abstraction. These capabilities require a large number of counters in order to monitor the traffic of all network flows, yet SRAM memories are too small to contain them. Previous work suggested replacing counters with small estimators, trading accuracy for reduced space. But these estimators focus only on the counters' size, whereas flow IDs often consume more space than their respective counters. In this work, we present the CELL algorithm, which combines estimators with an efficient flow representation for superior memory reduction. We also extend CELL to the sliding-window model, which prioritizes recent data, by presenting two variants named RAND-CELL and SHIFT-CELL. We formally analyze the error and memory consumption of our algorithms and compare their performance against competing approaches using real-world Internet traces. These measurements exhibit the benefits of our work and show that CELL consumes at least 30% less space than the best-known alternative. The code is available as open source.
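The "small estimator" idea the abstract alludes to goes back to Morris-style approximate counters, which represent large counts in a few bits at the cost of variance. A minimal sketch (illustrative; CELL's estimator differs in detail):

```python
# Morris-style probabilistic counter: c grows roughly as log2 of the true
# count, so a single small register can stand in for a wide counter.
import random

class MorrisCounter:
    def __init__(self, seed=0):
        self.c = 0
        self._rng = random.Random(seed)

    def increment(self):
        # Succeed with probability 2^-c, halving each time c grows.
        if self._rng.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        # Unbiased estimate of the true count from the register value.
        return 2 ** self.c - 1
```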
Citations: 3
SketchINT: Empowering INT with TowerSketch for Per-flow Per-switch Measurement
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651940
Kaicheng Yang, Yuanpeng Li, Zirui Liu, Tong Yang, Yu Zhou, Jintao He, Jing'an Xue, Tong Zhao, Zhengyi Jia, Yongqiang Yang
Network measurement is indispensable to network operations. The two most promising measurement solutions are In-band Network Telemetry (INT) solutions and sketching solutions. INT solutions provide fine-grained per-switch, per-packet information at the cost of high network overhead. Sketching solutions have low network overhead but fail to achieve both simplicity and accuracy for per-flow measurement. To keep their advantages while overcoming their shortcomings, we first design SketchINT, which combines INT and sketches, aiming to obtain all per-flow, per-switch information with low network overhead. Second, for deployment flexibility and measurement accuracy, we design a new sketch for SketchINT, namely TowerSketch, which achieves both simplicity and accuracy. The key idea of TowerSketch is to use different-sized counters for different arrays while keeping the number of bits used by each array the same. TowerSketch can automatically record larger flows in larger counters and smaller flows in smaller counters. We have fully implemented our SketchINT prototype on a testbed consisting of 10 switches. We also implement TowerSketch on P4, single-core CPU, multi-core CPU, and FPGA platforms to verify its deployment flexibility. Extensive experimental results verify that 1) TowerSketch achieves better accuracy than prior art on various tasks, outperforming the state-of-the-art ElasticSketch by up to 13.9 times in terms of error; 2) compared to INT, SketchINT reduces the number of packets in the collection process by 3 to 4 orders of magnitude with an error smaller than 5%.
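The equal-bit-budget layout described above can be sketched as follows. This is our own simplified reading of the abstract, not the paper's implementation; the bit budget, counter widths, and hash choice are assumptions.

```python
# Simplified tower layout: every array gets the same bit budget, so a w-bit
# array holds TOTAL_BITS // w counters. Narrow counters saturate on large
# flows, which the wider (but fewer) counters still capture.
TOTAL_BITS = 1024  # hypothetical per-array bit budget

class Tower:
    def __init__(self, widths=(8, 16, 32)):
        self.arrays = [([0] * (TOTAL_BITS // w), w) for w in widths]

    def add(self, key, n=1):
        # Python's hash() is deterministic for ints, used here for simplicity.
        for counters, w in self.arrays:
            i = hash((key, w)) % len(counters)
            counters[i] = min(counters[i] + n, (1 << w) - 1)  # saturate

    def query(self, key):
        # Count-min style: minimum over counters that have not saturated.
        best = None
        for counters, w in self.arrays:
            v = counters[hash((key, w)) % len(counters)]
            if v < (1 << w) - 1:
                best = v if best is None else min(best, v)
        return best if best is not None else (1 << 32) - 1
```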
Citations: 20
Loom: Switch-based Cloud Load Balancer with Compressed States
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651928
Jiao Zhang, Yuxuan Gao, Shubo Wen, Tian Pan, Tao Huang
Layer-4 load balancers play a critical role in large-scale data centers. Recently, load balancers implemented on programmable switches have attracted much attention, since they overcome the inflexibility of dedicated load balancers and the high latency of software load balancers. However, keeping per-connection state easily leads to storage exhaustion, especially under resource exhaustion attacks. Although several stateless load balancers have been proposed to address this issue, they offload the state management burden to backend servers, causing high deployment and running costs. In this paper, a load balancer called Loom, with compressed states, is proposed for large-scale data centers. First, we propose a novel classifier-based load balancer design that avoids directly maintaining per-connection state. Then, a circulating Bloom filter structure is proposed that can efficiently classify connections and be implemented on existing programmable switches. Theoretical analysis shows that Loom can maintain 11 to 30x more concurrent connections than designs that directly store the connection 5-tuple. Loom is implemented in hardware P4 switches, and experimental results indicate that 11 to 29x more concurrent connections can be maintained, close to the theoretical results. In addition, Loom is resistant to resource exhaustion attacks and reduces the percentage of broken connections by up to 57% under a SYN flood.
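The classifier idea can be illustrated with a plain Bloom filter that answers membership questions about 5-tuples without storing per-connection entries; Loom's circulating variant additionally ages entries out, which this sketch omits.

```python
# Plain Bloom filter over connection 5-tuples: k hashed bit positions per
# entry, constant space regardless of how many connections are inserted.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k = m_bits, k
        self.bits = 0  # bit array packed into one int

    def _indexes(self, item: bytes):
        # Derive k independent positions by salting the same hash.
        for i in range(self.k):
            d = hashlib.blake2b(item, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(d, "big") % self.m

    def add(self, item: bytes):
        for i in self._indexes(item):
            self.bits |= 1 << i

    def contains(self, item: bytes) -> bool:
        # No false negatives; false positives possible with small probability.
        return all(self.bits >> i & 1 for i in self._indexes(item))
```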
Citations: 1
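The classifier idea above replaces a per-connection table with a compact membership test over connection 5-tuples: a connection found in the filter is routed to the old backend pool, an unknown one to the current pool. Below is a minimal, illustrative Python sketch of that idea using a plain Bloom filter. Loom's actual circulating structure (which rotates filters over time to age out stale connections) and its P4 data-plane implementation are not reproduced here; the `classify` routine, pool names, and parameters are assumptions for illustration only:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over connection 5-tuples (illustrative only)."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity

    def _positions(self, key: str):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p] = 1

    def __contains__(self, key: str):
        return all(self.bits[p] for p in self._positions(key))


def classify(conn_5tuple: str, old_filter: BloomFilter, old_pool, new_pool):
    """Route a connection to the old backend pool if the filter indicates
    it pre-dates the last pool change, otherwise to the new pool.
    A backend within the pool is then chosen by consistent hashing."""
    pool = old_pool if conn_5tuple in old_filter else new_pool
    idx = int(hashlib.sha256(conn_5tuple.encode()).hexdigest(), 16) % len(pool)
    return pool[idx]
```

Because the filter stores only k bits per connection rather than the full 5-tuple, many more concurrent connections fit in the same switch memory, at the cost of a tunable false-positive rate.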
Generalizable and Interpretable Deep Learning for Network Congestion Prediction
Pub Date : 2021-11-01 DOI: 10.1109/ICNP52444.2021.9651937
Konstantinos Poularakis, Qiaofeng Qin, Franck Le, S. Kompella, L. Tassiulas
While recent years have witnessed a steady trend of applying Deep Learning (DL) to networking systems, most of the underlying Deep Neural Networks (DNNs) suffer two major limitations. First, they fail to generalize to topologies unseen during training. This lack of generalizability hampers the ability of the DNNs to make good decisions every time the topology of the networking system changes. Second, existing DNNs commonly operate as "black boxes" that are difficult for network operators to interpret, which hinders their deployment in practice. In this paper, we propose to rely on a recently developed family of graph-based DNNs to address the aforementioned limitations. More specifically, we focus on a network congestion prediction application and apply Graph Attention (GAT) models to make congestion predictions per link, using the graph topology and time series of link loads as inputs. Evaluations on three real backbone networks demonstrate the benefits of our proposed approach in terms of prediction accuracy, generalizability, and interpretability.
Citations: 6
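The abstract above applies Graph Attention (GAT) models to per-link congestion prediction, taking the graph topology and link-load time series as inputs. A single-head graph-attention layer in the style of Veličković et al. can be sketched in plain NumPy as follows; the feature dimensions, the use of self-loops, and the idea of feeding link-load embeddings as node features are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def gat_layer(H, A, W, a_src, a_dst, leaky_slope=0.2):
    """One graph-attention layer (single head).
    H: (N, F) node features, e.g. embeddings of link-load time series.
    A: (N, N) adjacency matrix with self-loops (A[i, i] = 1).
    W: (F, F') shared linear transform; a_src, a_dst: (F',) attention vectors.
    Computes e_ij = LeakyReLU(a_src . W h_i + a_dst . W h_j), softmaxes e_ij
    over each node's neighborhood, and returns h_i' = sum_j alpha_ij W h_j."""
    Wh = H @ W                                          # (N, F')
    e = (Wh @ a_src)[:, None] + (Wh @ a_dst)[None, :]   # (N, N) raw scores
    e = np.where(e > 0, e, leaky_slope * e)             # LeakyReLU
    e = np.where(A > 0, e, -np.inf)                     # mask non-neighbors
    e = e - e.max(axis=1, keepdims=True)                # numerical stability
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)    # attention weights
    return alpha @ Wh                                   # (N, F') outputs
```

The learned attention weights `alpha` are what make such models interpretable: they expose which neighboring links the model attends to when predicting congestion on a given link, and the same weights apply to any topology, which is the source of the generalizability the abstract highlights.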
Journal
2021 IEEE 29th International Conference on Network Protocols (ICNP)