
Proceedings of the 7th Asia-Pacific Workshop on Networking: Latest Publications

Accurate and Scalable Rate Limiter for RDMA NICs
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600078
Zilong Wang, Xinchen Wan, Chaoliang Zeng, Kai Chen
A rate limiter is required by the RDMA NIC (RNIC) to enforce the rate limits calculated by congestion control. The RNIC expects the rate limiter to be accurate and scalable: to precisely shape the traffic of numerous flows with minimal resource consumption, thereby mitigating incast and congestion and improving network performance. Previous works, however, fail to meet the performance requirements of RNICs while achieving accuracy and scalability. In this paper, we present Tassel, an accurate and scalable rate limiter for RNICs, covering both the algorithm and the architecture design. Tassel first extends the classical WF2Q+ algorithm to support rate limiting in the RNIC scenario. Tassel then designs a high-precision and resource-friendly rate limiter and integrates it into the classical RNIC architecture. Preliminary simulation results show that Tassel precisely enforces rate limits ranging from 100 Kbps to 100 Gbps across 1K concurrent flows while keeping resource consumption limited.
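The abstract only names the algorithmic starting point, so as a rough illustration of the kind of per-flow shaping it refers to, the sketch below releases packets in order of their earliest eligible transmission time, with each flow capped at its own rate. This is a simplified stand-in assumed for illustration, not Tassel's extended WF2Q+ design; the class and field names are invented.

```python
import heapq
import time

class FlowShaper:
    """Minimal per-flow rate limiter: each flow has an independent rate cap,
    and packets are released in order of their earliest eligible send time.
    Simplified stand-in for a WF2Q+-style scheduler, not Tassel's algorithm."""

    def __init__(self):
        self.next_time = {}   # flow_id -> earliest time the next packet may leave
        self.rate = {}        # flow_id -> rate limit in bytes per second
        self.heap = []        # (eligible_time, seq, flow_id, packet_bytes)
        self.seq = 0          # tie-breaker that preserves per-flow FIFO order

    def add_flow(self, flow_id, rate_bps):
        self.rate[flow_id] = rate_bps / 8.0          # store as bytes/s
        self.next_time[flow_id] = time.monotonic()

    def enqueue(self, flow_id, packet_bytes):
        # The packet becomes eligible once the previous packet of the same
        # flow has "drained" at the flow's configured rate.
        eligible = self.next_time[flow_id]
        self.next_time[flow_id] = eligible + packet_bytes / self.rate[flow_id]
        heapq.heappush(self.heap, (eligible, self.seq, flow_id, packet_bytes))
        self.seq += 1

    def dequeue(self, now=None):
        # Release the packet with the smallest eligible time that has passed.
        now = time.monotonic() if now is None else now
        if self.heap and self.heap[0][0] <= now:
            _, _, flow_id, packet_bytes = heapq.heappop(self.heap)
            return flow_id, packet_bytes
        return None
```

A hardware rate limiter would replace the heap with on-chip structures; the sketch only captures the eligibility and ordering logic.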
Citations: 0
Heuristic Binary Search: Adaptive and Fast IPv6 Route Lookup with Incremental Updates
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600077
Donghong Jiang, Yanbiao Li, Yuxuan Chen, Jing Hu, Yi Huang, Gaogang Xie
The advent of Software Defined Networking (SDN) and Network Function Virtualization (NFV) has revolutionized the deployment of software-based routing and forwarding devices in modern network architectures. However, IPv6 route lookup remains a substantial performance bottleneck in these software-based devices due to two key challenges: (1) the longer addresses and prefixes hinder high-speed IPv6 lookup, and (2) the larger IPv6 address space demands adaptability to varied length-based prefix distributions across network scenarios. Current trie-based methods such as SAIL and Poptrie have enhanced IPv4 lookup, but they struggle to deliver adaptive and fast IPv6 lookup because of their fixed short-to-long prefix search order. To overcome these challenges, we propose a novel Heuristic Binary Search (HBS) scheme for adaptive and fast IPv6 lookup. HBS refines the traditional "Binary Search on Prefix Lengths" scheme with two key techniques: (1) a heuristic binary search method that accelerates lookup and (2) a tree rotation method that dynamically adjusts the shape of the binary search tree in response to changes in the prefix distribution. Our evaluation of HBS demonstrates its superiority in lookup throughput, update speed, memory efficiency, and dynamic adaptability.
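As background for the baseline that HBS refines, the sketch below implements the classical "binary search on prefix lengths" lookup: one hash table per prefix length, with marker entries that steer the search toward longer matches. It assumes an offline build with prefixes inserted in non-decreasing length order, and it deliberately omits HBS's heuristic ordering and tree rotations; all names are illustrative.

```python
class BinarySearchOnPrefixLengths:
    """Classical 'binary search on prefix lengths' lookup (hash table per
    length plus markers). Sketch only: prefixes must be inserted in
    non-decreasing length order so marker best-matches stay correct."""

    def __init__(self, lengths):
        self.lengths = sorted(lengths)
        # length -> {bit_string: {"next_hop": ..., "bmp": ...}}
        self.tables = {l: {} for l in self.lengths}

    def _longest_match(self, bits):
        """Longest matching real prefix of `bits` among entries inserted so far."""
        best = None
        for l in self.lengths:
            if l > len(bits):
                break
            e = self.tables[l].get(bits[:l])
            if e and e["next_hop"] is not None:
                best = e["next_hop"]
        return best

    def insert(self, prefix_bits, next_hop):
        length = len(prefix_bits)
        entry = self.tables[length].setdefault(prefix_bits,
                                               {"next_hop": None, "bmp": None})
        entry["next_hop"] = next_hop
        # Leave markers along the binary-search path so lookups are steered
        # toward this longer prefix; each marker remembers the best match of
        # its own bit string so the search never needs to backtrack.
        lo, hi = 0, len(self.lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            mlen = self.lengths[mid]
            if mlen < length:
                m = self.tables[mlen].setdefault(prefix_bits[:mlen],
                                                 {"next_hop": None, "bmp": None})
                m["bmp"] = self._longest_match(prefix_bits[:mlen])
                lo = mid + 1
            elif mlen > length:
                hi = mid - 1
            else:
                break

    def lookup(self, addr_bits):
        best = None
        lo, hi = 0, len(self.lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            mlen = self.lengths[mid]
            e = self.tables[mlen].get(addr_bits[:mlen])
            if e is None:
                hi = mid - 1                 # nothing here: only shorter prefixes can match
                continue
            if e["next_hop"] is not None:
                best = e["next_hop"]         # real prefix at this length
            elif e["bmp"] is not None:
                best = e["bmp"]              # marker: remember its best match
            lo = mid + 1                     # a longer match may still exist
        return best

fib = BinarySearchOnPrefixLengths([16, 32, 48])
fib.insert("0" * 16, "hop-A")                # insert shorter prefixes first
fib.insert("0" * 32, "hop-B")
print(fib.lookup("0" * 40 + "1" * 88))       # -> "hop-B"
```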
Citations: 0
Online Detection of 1D and 2D Hierarchical Super-Spreaders in High-Speed Networks
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600080
Haorui Su, Qingjun Xiao
Traditionally, a firewall tracks the per-flow spread of each source and destination IP address to detect network scans and DDoS attacks. It is not designed with hierarchical IP addresses in mind. However, cyberattacks nowadays are becoming stealthier: to evade detection, they target a network subnet instead of a single IP as the victim of an attack campaign. Therefore, we focus on a new problem: online estimation of each hierarchical flow's cardinality (or spread), in order to detect hierarchical super-spreaders (HSSs), i.e., IP subnets that receive network connections from an extraordinarily large number of source IPs. For detecting such one-dimensional HSSs, the hierarchical virtual bitmap estimator (HVE) was recently proposed. However, it cannot handle two-dimensional HSSs, and it cannot be queried online because of its very high query overhead. In this paper, we propose the Hon-vHLL sketch to address these limitations. It is an innovative hierarchical extension of On-vHLL that supports the estimation of conditional spreads for either 1D or 2D hierarchical flows. Hon-vHLL allocates an On-vHLL sketch for each hierarchical-level bucket and queries conditional spreads by merging the virtual estimators of hierarchical flows. We evaluate its performance on CAIDA network traces. The results show that Hon-vHLL improves query throughput by 578 times over HVE and achieves 11% higher HSS detection accuracy.
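For readers unfamiliar with the estimator family behind On-vHLL, the snippet below is a plain HyperLogLog spread estimator with standard textbook parameters; one such estimator per monitored subnet gives the per-flow spread the paper builds on. None of Hon-vHLL's hierarchical bucketing or virtual-estimator merging is reproduced, and the register count and hash choice are assumptions.

```python
import hashlib
import math

class HyperLogLog:
    """Plain HyperLogLog spread (cardinality) estimator, the building block
    behind vHLL-style sketches. Standard parameters; illustration only."""

    def __init__(self, b=7):
        self.b = b                      # 2^b registers
        self.m = 1 << b
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)   # bias constant for m >= 128

    def add(self, item):
        h = int.from_bytes(hashlib.blake2b(item.encode(), digest_size=8).digest(), "big")
        idx = h & (self.m - 1)                       # low b bits pick the register
        rest = h >> self.b
        # rank = position of the leftmost 1-bit in the remaining 64-b hash bits
        rank = (64 - self.b) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        raw = self.alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:            # small-range correction
            return self.m * math.log(self.m / zeros)
        return raw

# One estimator per destination subnet; add each distinct source IP that contacts it:
# hll = HyperLogLog(); hll.add("10.0.0.1"); hll.add("10.0.0.2"); print(hll.estimate())
```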
Citations: 0
A Secure Transaction Forwarding Strategy for Blockchain Payment Channel Networks
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603130
Huaihang Lin, Xiaoyan Li, Yanhua Liu, Weibei Fan
In this poster, we propose a new transaction forwarding strategy for payment channel networks (PCNs), named PNSFF. We then perform several experiments to study the effectiveness of the proposed strategy. The experimental results show that PNSFF provides stronger incentives and higher security than previous similar works.
Citations: 0
In-Network Probabilistic Monitoring Primitives under the Influence of Adversarial Network Inputs
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600086
Harish S A, K. S. Kumar, Anibrata Majee, Amogh Bedarakota, Praveen Tammana, Pravein G. Kannan, Rinku Shah
Network management tasks heavily rely on network telemetry data. Programmable data planes provide novel ways to collect this telemetry data efficiently using probabilistic data structures such as Bloom filters and their variants. Despite the benefits of these data structures (and the associated data plane primitives), their exposure increases the attack surface; that is, they are at risk from adversarial network inputs. In this work, we examine the effects of adversarial network inputs on Bloom filters that are integral to data plane primitives. Bloom filters are probabilistic and inherently susceptible to pollution attacks, which increase their false positive rates. To quantify the impact, we demonstrate the feasibility of pollution attacks on FlowRadar, a network monitoring and debugging system that employs a data plane primitive to collect traffic statistics. We observe that an adversary can corrupt traffic statistics with a few well-crafted malicious flows (tens of flows), leading to a 99% drop in the accuracy of FlowRadar's core functionality.
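To make the pollution mechanism concrete, here is a minimal Bloom filter plus a crude probe-based estimate of its false-positive rate before and after a burst of crafted insertions; the sizes and probing are illustrative, and this is not FlowRadar's actual encoding.

```python
import hashlib
import random

class BloomFilter:
    """Minimal Bloom filter; insertions (benign or adversarial) only ever set
    bits, so extra insertions monotonically raise the false-positive rate,
    which is the effect a pollution attack exploits."""

    def __init__(self, m_bits=4096, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

def false_positive_rate(bf, trials=10000):
    """Probe with random keys that were never inserted and count spurious hits."""
    hits = sum(bf.contains(f"probe-{random.random()}") for _ in range(trials))
    return hits / trials

bf = BloomFilter()
for flow in range(100):                  # legitimate flows
    bf.add(f"benign-flow-{flow}")
print("FPR before attack:", false_positive_rate(bf))
for flow in range(1000):                 # crafted extra flows polluting the filter
    bf.add(f"attack-flow-{flow}")
print("FPR after attack :", false_positive_rate(bf))
```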
Citations: 0
Host Efficient Networking Stack Utilizing NIC DRAM
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600070
Byeongkeon Lee, Donghyeon Lee, J. Ok, Wonsup Yoon, Sue Moon
Host resources and network speeds have not grown in step, and at network speeds on the order of 100 Gbps this imbalance makes host resources the bottleneck. We categorize the existing body of work on reducing the host burden into three approaches: (1) eliminating payload copies (zero-copy), (2) using special-purpose hardware for payload copies, and (3) offloading the protocol to the NIC. Each approach, however, has drawbacks. (1) Most zero-copy methods require application modification; furthermore, the application must ensure its buffer is not modified until network I/O is complete. (2) Copy elimination through special-purpose hardware still uses host memory, consuming considerable memory bandwidth. (3) A protocol offloaded to the NIC has limited flexibility. We redesign the networking stack to place only the payload in NIC DRAM while executing protocol processing on the host, overcoming the above limitations. Our work (1) lets the application reuse its own buffer as soon as the payload data is transferred to NIC DRAM and requires no application modification, (2) saves host memory bandwidth by keeping packet payloads in the NIC and eliminating payload copies on the host, and (3) maintains flexibility by keeping protocol processing on the host. Compared to a networking stack with CPU-based copies, our work saves 38.6% of CPU usage and 54.0% of memory bandwidth.
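As a back-of-the-envelope illustration of the memory-bandwidth argument, the short calculation below estimates the extra host memory traffic caused by one CPU copy per received payload at an assumed 100 Gbps; the copy model is an assumption for illustration, not a measurement from the paper.

```python
# Rough estimate of host memory bandwidth spent on one payload copy per packet.
# Assumptions (not from the paper): 100 Gbps of received payload, and a plain
# CPU memcpy that reads the payload once and writes it once.

line_rate_gbps = 100
payload_bytes_per_s = line_rate_gbps * 1e9 / 8           # ~12.5 GB/s of payload

copy_read = payload_bytes_per_s                          # memcpy source read
copy_write = payload_bytes_per_s                         # memcpy destination write
copy_traffic = copy_read + copy_write                    # ~25 GB/s just for the copy

print(f"payload arriving: {payload_bytes_per_s / 1e9:.1f} GB/s")
print(f"memory traffic added by the host-side copy: {copy_traffic / 1e9:.1f} GB/s")
# Keeping the payload in NIC DRAM removes this copy traffic from the host
# memory bus, which is the saving the paper's design targets.
```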
Citations: 0
Quicklayer: A Layer-Stack-Oriented Accelerating Middleware for Fast Deployment in Edge Clouds
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600074
Yicheng Feng, Shihao Shen, Chen Zhang, Xiaofei Wang
Containers are gaining popularity in edge computing due to their standardization and low overhead. This trend has brought new technologies such as container engines and container orchestration platforms (COPs). However, fast and effective container deployment remains a challenge, especially at the edge. Prior work, designed for cloud datacenters, is no longer suitable for container deployment in edge clouds due to bandwidth limitations, fluctuating network performance, resource constraints, and geo-distributed organization. These edge characteristics make rapid deployment at the edge difficult; additionally, integrating with COPs is crucial for successful deployment. We present Quicklayer, a layer-stack-oriented middleware designed to accelerate container deployment in edge clouds. Quicklayer takes a holistic approach that preserves the stack-of-layers structure and is backward-compatible. It includes (1) a layer-based container refactoring solution that optimizes container images while maintaining the layer structure, (2) a customised Kubernetes scheduler that accounts for network performance, disk space, and container layer caches when placing containers, and (3) distributed shared layer-stack caches optimized for cooperative container deployment among edge clouds. Preliminary results indicate that, compared to the currently popular container deployment system, Quicklayer reduces redundant image size by up to 3.11× and speeds up the deployment process by up to 1.64×.
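To give a feel for what layer-cache-aware placement can look like, the toy scoring function below weighs cached layers, free disk, and link bandwidth when choosing an edge node for an image; the weights, fields, and node data are invented for illustration and do not reflect Quicklayer's actual scheduler.

```python
def score_node(image_layers, node, w_cache=0.5, w_net=0.3, w_disk=0.2):
    """Toy scoring function for placing a container image on an edge node.
    Illustrative only: the weights and fields are assumptions.

    image_layers: set of layer digests the image is built from
    node: dict with 'cached_layers' (set), 'free_disk_gb', 'bandwidth_mbps'
    """
    cached = image_layers & node["cached_layers"]
    cache_ratio = len(cached) / len(image_layers) if image_layers else 0.0
    net_score = min(node["bandwidth_mbps"] / 1000.0, 1.0)     # saturate at 1 Gbps
    disk_score = min(node["free_disk_gb"] / 50.0, 1.0)        # saturate at 50 GB free
    return w_cache * cache_ratio + w_net * net_score + w_disk * disk_score

nodes = {
    "edge-a": {"cached_layers": {"l1", "l2"}, "free_disk_gb": 12, "bandwidth_mbps": 200},
    "edge-b": {"cached_layers": {"l1"},       "free_disk_gb": 40, "bandwidth_mbps": 800},
}
image = {"l1", "l2", "l3"}
best = max(nodes, key=lambda n: score_node(image, nodes[n]))
print("place image on:", best)
```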
Citations: 0
Extendable MQTT Broker for Feedback-based Resource Management in Large-scale Computing Environments
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603129
Ryo Ouchi, Ryuichi Sakamoto
High-performance computing (HPC) systems demand continuous monitoring to ensure efficient resource allocation and application performance. Recent studies indicate that real-time resource utilization monitoring can significantly improve the performance of dynamic scheduling algorithms. However, latency induced by the protocol stack heavily impacts the effectiveness of dynamic scheduling. In this paper, we propose a novel monitoring system that implements the protocol stack on a Field-Programmable Gate Array (FPGA) and adopts a publish/subscribe (pub/sub) communication protocol. Specifically, by introducing an FPGA-based protocol stack, we substantially reduce the latency of protocol stack processing and enable the implementation of custom plugins at the L7 layer. Our experiments demonstrate that the proposed system effectively reduces protocol stack latency and, with the extensibility provided by user-defined plugins, offers great potential for a wide range of HPC monitoring and feedback applications.
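For context on the pub/sub pattern and the plugin hook described above, here is an in-process toy broker with topic-based fan-out and a filtering plugin; it is plain Python for illustration only and has nothing to do with the paper's FPGA implementation or the MQTT wire protocol.

```python
from collections import defaultdict

class ToyBroker:
    """In-process publish/subscribe broker with a plugin hook at message
    delivery. A plain-Python illustration of the pattern, not the paper's
    FPGA MQTT broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks
        self.plugins = []                      # callables run on every publish

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def add_plugin(self, plugin):
        """plugin(topic, payload) -> payload or None (None drops the message)."""
        self.plugins.append(plugin)

    def publish(self, topic, payload):
        for plugin in self.plugins:
            payload = plugin(topic, payload)
            if payload is None:
                return                          # filtered out at the broker
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = ToyBroker()
# A "plugin" that forwards CPU samples only when utilization crosses 90%.
broker.add_plugin(lambda t, p: p if t != "node1/cpu" or p >= 0.9 else None)
broker.subscribe("node1/cpu", lambda t, p: print(f"scheduler feedback: {t} = {p}"))
broker.publish("node1/cpu", 0.42)   # dropped by the plugin
broker.publish("node1/cpu", 0.95)   # delivered to the scheduler
```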
Citations: 0
sRDMA: A General and Low-Overhead Scheduler for RDMA
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600082
Xizheng Wang, Shuai Wang, Dan Li
Remote Direct Memory Access (RDMA) has been widely deployed in data centers to improve application performance. However, RDMA's in-order message delivery cannot meet emerging application requirements for scheduling messages within an RDMA connection, leaving RDMA underutilized. Some works try to schedule the data to be transferred in specific applications before handing it to RDMA, or distribute messages across different connections. However, these approaches tightly couple scheduling logic with application logic and may incur high scheduling overhead. In this paper, we propose sRDMA, a general and low-overhead scheduler that works in the user-space RDMA driver. sRDMA allows the application to express the expected transfer order to the RDMA hardware via work requests (WRs). Using the priority information in WRs, sRDMA slices and schedules WRs to achieve the desired order of message transfer and to reduce the blocking impact of large messages within the same RDMA connection. Our experiments show that sRDMA can improve the performance of applications, e.g., TensorFlow, by up to , and sRDMA has negligible overhead in terms of CPU and flow throughput.
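To illustrate the slice-and-schedule idea at a high level, the sketch below chops messages into fixed-size chunks and emits them in priority order, so a large low-priority message no longer blocks a small high-priority one; the chunk size, priority field, and queue structure are assumptions for illustration, not sRDMA's WR handling.

```python
import heapq
from itertools import count

CHUNK_BYTES = 64 * 1024        # illustrative slice size, not sRDMA's choice
_arrival = count()             # FIFO tie-breaker within the same priority

def slice_message(priority, msg_id, length):
    """Split one message into chunk descriptors keyed by (priority, arrival order)."""
    chunks = []
    offset = 0
    while offset < length:
        size = min(CHUNK_BYTES, length - offset)
        chunks.append((priority, next(_arrival), msg_id, offset, size))
        offset += size
    return chunks

def drain(heap):
    """Pop chunks in priority order; a large low-priority message cannot block
    small high-priority ones, because only one chunk is emitted at a time."""
    while heap:
        _, _, msg_id, offset, size = heapq.heappop(heap)
        yield msg_id, offset, size          # would be posted as one work request

heap = []
for c in slice_message(priority=1, msg_id="big-tensor", length=1 << 20):    # 1 MB, low prio
    heapq.heappush(heap, c)
for c in slice_message(priority=0, msg_id="control-msg", length=8 << 10):   # 8 KB, high prio
    heapq.heappush(heap, c)

for msg_id, offset, size in drain(heap):
    print(f"post WR: {msg_id} offset={offset} len={size}")
```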
Citations: 0
ABC: Adaptive Bitrate Algorithm Commander for Multi-Client Video Streaming
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603134
Xiaoxi Xue, Yuchao Zhang
With the advancement of live streaming technology, ensuring high QoE and fairness among clients running different ABR algorithms on the same LAN is becoming a pressing issue. Aggressive and conservative algorithms make different bitrate adjustment decisions when they share network resources, which leads to unfairness. In this poster, we propose a regulation mechanism, ABC, that adjusts sensitive parameters such as latency, delay, and buffer to improve overall system QoE by 68% and mitigate the fairness problem.
Citations: 0