
Proceedings of the 7th Asia-Pacific Workshop on Networking: Latest Publications

Toward Fair and Efficient Congestion Control: Machine Learning Aided Congestion Control (MLACC)
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603275
Ahmed Elbery, Yi Lian, Geng Li
Emerging inter-datacenter applications require massive loads of data transfer, which makes them sensitive to packet drops, high latency, and the fairness of resource sharing. However, current congestion control (CC) protocols do not guarantee optimal outcomes for these metrics. In this paper, we introduce a new CC technique, Machine Learning Aided Congestion Control (MLACC), that combines heuristics and machine learning (ML) to improve these three network metrics. The proposed technique achieves a high level of fairness, minimal latency, and a minimal drop rate. ML is used to estimate the available-bandwidth ratio of the bottleneck link, while the heuristic uses this ratio to let endpoints cooperatively keep the shared bottleneck link utilization under a predefined threshold in order to minimize latency and drop rate. The key to achieving the desired fairness is using the gradient of the link utilization to control the sending rate. We compared MLACC to BBR (which is at least on par with state-of-the-art ML-based techniques) as a baseline in different network settings. The results show that MLACC achieves lower and more stable end-to-end latency (25% to 52% latency savings). It also significantly reduces packet drop rates while attaining a higher fairness level. The only cost for these advantages is a small throughput reduction of less than 3.5%.
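Below is a minimal sketch of how an endpoint might act on the abstract's key idea, steering its sending rate with the gradient of an ML-estimated bottleneck-utilization ratio so that utilization stays under a cap; the class name, constants, and update rule are illustrative assumptions, not MLACC's actual algorithm.

```python
# Illustrative sketch (not the authors' code): an endpoint reacts to an externally
# supplied, ML-estimated bottleneck utilization ratio and its gradient.
class GradientRateController:
    def __init__(self, init_rate_mbps=100.0, util_threshold=0.95,
                 gain_up=5.0, gain_down=50.0):
        self.rate = init_rate_mbps            # current sending rate (Mbps)
        self.util_threshold = util_threshold  # target cap on bottleneck utilization
        self.gain_up = gain_up
        self.gain_down = gain_down
        self.prev_util = None

    def on_utilization_estimate(self, util, dt):
        """util: estimated bottleneck utilization ratio in [0, 1];
        dt: seconds since the previous estimate."""
        grad = 0.0 if self.prev_util is None else (util - self.prev_util) / dt
        self.prev_util = util
        if util > self.util_threshold or grad > 0:
            # Utilization above the cap or rising: back off in proportion to how
            # fast it is rising, so competing flows converge to similar rates.
            self.rate = max(1.0, self.rate - self.gain_down * max(grad, util - self.util_threshold))
        else:
            # Headroom and flat/falling utilization: probe upward gently.
            self.rate += self.gain_up * (self.util_threshold - util)
        return self.rate

if __name__ == "__main__":
    cc = GradientRateController()
    for u in (0.6, 0.8, 0.97, 0.93):
        print(round(cc.on_utilization_estimate(u, dt=1.0), 2))
```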
Citations: 0
SlimeMold: Hardware Load Balancer at Scale in Datacenter
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600067
Ziyuan Liu, Zhixiong Niu, Ran Shu, Liang Gao, Guohong Lai, Na Wang, Zongying He, Jacob Nelson, Dan R. K. Ports, Lihua Yuan, Peng Cheng, Y. Xiong
Stateful load balancers (LB) are essential services in cloud data centers, playing a crucial role in enhancing the availability and capacity of applications. Numerous studies have proposed methods to improve the throughput, connections per second, and concurrent flows of single LBs. For instance, with the advancement of programmable switches, hardware-based load balancers (HLB) have become mainstream due to their high efficiency. However, programmable switches still face the issue of limited registers and table entries, preventing them from fully meeting the performance requirements of data centers. In this paper, rather than solely focusing on enhancing individual HLBs, we introduce SlimeMold, which enables HLBs to work collaboratively at scale as an integrated LB system in data centers. First, we design a novel HLB building block capable of achieving load balancing and exchanging states with other building blocks in the data plane. Next, we decouple forwarding and state operations, organizing the states using our proposed 2-level mapping mechanism. Finally, we optimize the system with flow caching and table entry balancing. We implement a real HLB building block using the Broadcom 56788 SmartToR chip, which attains line rate for state reads and more than 1M operations per second for flow writes. Our simulation demonstrates full scalability in large-scale experiments, supporting 454 million concurrent flows with 512 state-hosting building blocks.
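As a rough illustration of the decoupling the abstract describes, the sketch below assumes a two-level indirection in which flows hash to fixed virtual state slots and slots map to the building block hosting their state; the table sizes and names are assumptions, not SlimeMold's actual data structures.

```python
# Minimal sketch of a two-level mapping: level 1 is a fixed flow-to-slot hash,
# level 2 is a rebalancable slot-to-block table, so moving state does not change
# how packets are classified.
import hashlib

NUM_SLOTS = 4096  # level-1 table size (assumed)

def flow_to_slot(five_tuple: tuple) -> int:
    """Level 1: deterministic hash of the 5-tuple to a virtual slot."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SLOTS

# Level 2: slot -> state-hosting building block; this table can be rebalanced
# without touching the level-1 hash, which decouples forwarding from state.
slot_to_block = {slot: f"block-{slot % 512}" for slot in range(NUM_SLOTS)}

def lookup(five_tuple: tuple) -> str:
    return slot_to_block[flow_to_slot(five_tuple)]

if __name__ == "__main__":
    print(lookup(("10.0.0.1", 51000, "10.0.1.9", 443, "tcp")))
```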
Citations: 0
Toward Privacy-Preserving Interdomain Configuration Verification via Multi-Party Computation
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600064
Huisan Xu, Qiuyue Qin, Xing Fang, Qiao Xiang, J. Shu
Interdomain network configuration errors can lead to disastrous financial and social consequences. Although substantial progress has been made in using formal methods to verify whether network configurations conform to certain properties, current tools focus on a single network. The fundamental challenge of configuration verification in an interdomain network is privacy, because each autonomous system (AS) treats its network configuration files as private information and is not willing to share it with others. In this paper, we take a first step toward interdomain network configuration verification and propose InCV, a privacy-preserving interdomain configuration verification system based on data-oblivious computation. Given an interdomain network, InCV allows ASes to collaboratively simulate the running of the network and verify the resulting interdomain routing information base (RIB) without revealing their network configurations to any party. Preliminary evaluation using real-world topologies and synthetic network configurations shows that InCV can verify an interdomain network of 32 ASes within ∼ 52 minutes with reasonable overhead.
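The abstract's privacy guarantee rests on multi-party computation; the toy sketch below shows only the underlying primitive (additive secret sharing, so parties learn an aggregate without revealing individual inputs), not InCV's data-oblivious simulation itself.

```python
# Toy additive secret sharing over a public prime modulus: each party splits its
# private value into random shares, parties sum the shares they receive, and only
# the total is reconstructed.
import secrets

P = 2**61 - 1  # public prime modulus

def share(value: int, n_parties: int) -> list[int]:
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def mpc_sum(private_values: list[int]) -> int:
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]  # party i sends share j to party j
    # Each party adds the shares it received; these partial sums leak nothing
    # about any single input.
    partial = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partial) % P

if __name__ == "__main__":
    print(mpc_sum([12, 7, 30]))  # 49, computed without exposing the inputs
```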
Citations: 0
Gleaning the Consensus for Linearizable and Conflict-Free Per-Replica Local Reads
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603175
Qing Li, Binjie Zhang, Yong Jiang, Dan Zhao, Yuan Yang, Zhenhui Yuan
The optimal read strategy for strongly consistent key-value applications is to enable per-replica local reads, so that each replica can serve reads locally. Unfortunately, current schemes for per-replica local reads are hampered by two issues. First, some schemes have to abandon per-replica local reads when the workload is skewed, degrading throughput. Second, most current schemes rely on leases or specialized hardware to guarantee linearizability, making deployment difficult. In this paper, we propose Glean, a linearizable read protocol that solves these issues. In Glean, replica nodes always serve reads locally, and clients are asked to validate linearizability. To achieve this validation, Glean designs a novel read algorithm that allows the client to glean a consensus hint from replicas and enables replicas to contribute to the validation in a lightweight and fast manner. We implement Glean on a widely used software stack. Our 3-replica evaluation shows that the throughput of Glean reaches up to 2.1× that of an unreplicated application under read-heavy workloads.
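To make the client-side validation concrete, the sketch below assumes replicas expose their applied log index as the "consensus hint" and the client accepts a local read only if its replica is at least as fresh as what a majority reports; this illustrates the idea, not Glean's actual read algorithm.

```python
# Hedged illustration: validate a local read against hints gleaned from replicas.
from dataclasses import dataclass

@dataclass
class ReadReply:
    value: str
    applied_index: int  # highest log position the replica has applied

def linearizable_local_read(local_reply: ReadReply, hint_indexes: list[int]) -> str:
    """hint_indexes: applied indexes gleaned from the replica group."""
    ordered = sorted(hint_indexes, reverse=True)
    majority_hint = ordered[len(ordered) // 2]  # index reached by a majority
    if local_reply.applied_index >= majority_hint:
        return local_reply.value  # local state is at least as fresh as a majority
    raise RuntimeError("stale local read; retry after catching up")

if __name__ == "__main__":
    print(linearizable_local_read(ReadReply("v7", applied_index=42), [42, 41, 42]))
```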
Citations: 0
A Hierarchical Routing Mechanism for Service in CPN
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603120
Jiacong Li, Hang Lv, Bo Lei, Yunpeng Xie
The computing power network (CPN) has been proposed to allocate and schedule computing power resources among the cloud, the network, and the edge according to the needs of computing services. CPN can improve the utilization of diverse computing resource pools. However, it raises a new challenge: how to forward data packets based on computing resource information, since a routing table carrying large amounts of computing information would become too large to store and search. To solve this problem, we first define three computing service types and then propose a hierarchical routing mechanism for computing services in CPN. Based on this mechanism, CPN can improve data forwarding efficiency and user experience. In the future, we will study standards for computing resource identification to provide more intelligent services for various applications.
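The sketch below illustrates, under assumptions, what a hierarchical lookup of this kind could look like: a first level keyed by service type and a second level keyed by a coarse resource tag, so routers avoid one entry per individual computing resource. The three service-type names and the tables are purely hypothetical.

```python
# Hypothetical two-level lookup: service type selects a table, a coarse
# resource/region tag selects the next hop.
SERVICE_TYPES = ("latency_sensitive", "throughput_heavy", "best_effort")  # assumed types

routing_tables = {
    "latency_sensitive": {"edge-east": "nh-edge-1", "edge-west": "nh-edge-2"},
    "throughput_heavy":  {"dc-core":   "nh-core-1"},
    "best_effort":       {"any":       "nh-default"},
}

def next_hop(service_type: str, resource_tag: str) -> str:
    table = routing_tables[service_type]                              # level 1
    return table.get(resource_tag, table.get("any", "nh-default"))    # level 2

if __name__ == "__main__":
    print(next_hop("latency_sensitive", "edge-west"))
```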
Citations: 0
Time Synchronization based on Time Difference of Arrival with Propagation Delay Estimation in 5G-TSN Integrated Networks
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603139
Xiaocong Wei, Yueping Cai, Xiaowen Zhang
This poster presents a time synchronization method based on the time difference of arrival (TDOA) with propagation delay estimation in 5G-TSN integrated networks. It improves time synchronization accuracy by about 8.9% compared to the conventional method based on round-trip delay estimation.
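For context, the generic one-way relation that methods of this family build on is shown below; the poster's specific TDOA construction and its delay estimator are not reproduced here.

```latex
% For a message stamped t_tx at the master and received at t_rx by the device,
% with an estimated propagation delay \hat{d}, the device's clock offset is
\hat{\theta} \;=\; (t_{rx} - t_{tx}) - \hat{d}
```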
Citations: 0
Bamboo: Boosting Training Efficiency for Real-Time Video Streaming via Online Grouped Federated Transfer Learning
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3600069
Qian-Zhen Zheng, Hao Chen, Zhanghui Ma
Most learning-based algorithms for bitrate adaptation are limited to offline learning, which inevitably suffers from the simulation-to-reality gap. Online learning can better adapt to dynamic real-time communication scenes but still faces the challenge of long training convergence times. In this paper, we propose a novel online grouped federated transfer learning framework named Bamboo to accelerate training. Preliminary experiments validate that our method improves online training efficiency by up to 302% compared to other reinforcement learning algorithms under various network conditions, while ensuring the quality of experience (QoE) of real-time video communication.
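The sketch below illustrates the grouped aggregation idea under assumptions: clients grouped (for example, by network condition) are averaged independently each online round, and a new group starts from weights transferred from an existing group rather than from scratch; the grouping keys, model shapes, and transfer rule are hypothetical.

```python
# Hedged sketch of grouped federated averaging with a simple transfer step.
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Standard federated averaging: sample-size-weighted mean of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def grouped_round(groups: dict[str, list[tuple[np.ndarray, int]]]) -> dict[str, np.ndarray]:
    """One online round: aggregate each group independently."""
    return {gid: fed_avg([w for w, _ in members], [n for _, n in members])
            for gid, members in groups.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = {"wifi":     [(rng.normal(size=4), 120), (rng.normal(size=4), 80)],
              "cellular": [(rng.normal(size=4), 200)]}
    group_models = grouped_round(groups)
    # Transfer step: a newly formed group bootstraps from an existing group's model.
    group_models["satellite"] = group_models["cellular"].copy()
    print({k: v.round(2) for k, v in group_models.items()})
```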
Citations: 0
Deadline Enables In-Order Flowlet Switching for Load Balancing
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603126
Xinglong Diao, Wenting Wei, Huaxi Gu
Fine granularity can greatly enhance load balancing opportunities, but packet reordering remains a challenge. In this paper, we propose EDFLet, a flowlet switching mechanism that uses deadlines to achieve in-order flowlet-level load balancing. We assign a deadline value to each packet of a flowlet based on its burst interval, which ensures that an earlier flowlet completes transmission before the next flowlet of the same flow. We also apply Earliest Deadline First scheduling at the switch, which guarantees that packets meet their deadlines and arrive in order at the receiver. Our experimental results show that EDFLet performs better than existing methods in both symmetric and asymmetric topologies.
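A minimal sketch of the two mechanisms named in the abstract follows: stamping every packet of a flowlet with a deadline derived from the flowlet's burst interval, and an Earliest-Deadline-First queue at the switch. The deadline formula and field names are assumptions rather than the paper's exact design.

```python
# Hedged sketch: per-flowlet deadline stamping plus an EDF priority queue.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    deadline: float
    seq: int = field(compare=True)            # tie-breaker: FIFO within a deadline
    flow_id: str = field(compare=False, default="")

class EdfQueue:
    def __init__(self):
        self._heap = []

    def push(self, pkt: Packet):
        heapq.heappush(self._heap, pkt)

    def pop(self) -> Packet:
        return heapq.heappop(self._heap)       # earliest deadline first

def stamp_flowlet(flow_id: str, arrival: float, burst_interval: float,
                  n_pkts: int, counter=itertools.count()) -> list[Packet]:
    # All packets of a flowlet share one deadline (arrival + burst interval), so an
    # earlier flowlet drains before the next flowlet of the same flow starts.
    deadline = arrival + burst_interval
    return [Packet(deadline, next(counter), flow_id) for _ in range(n_pkts)]

if __name__ == "__main__":
    q = EdfQueue()
    for p in stamp_flowlet("A", arrival=0.0, burst_interval=0.5, n_pkts=2):
        q.push(p)
    for p in stamp_flowlet("B", arrival=0.1, burst_interval=0.2, n_pkts=2):
        q.push(p)
    print([q.pop().flow_id for _ in range(4)])  # B's flowlet drains first
```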
Citations: 0
MRP: An Energy Efficient Network Protocol That Avoids Multiple Encryption in Cloud Computing Environment
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603131
Fenglai Jiang, Yongyang Cheng, Boqin Qin, T. Zhang
To ensure the security of data transmission on the Internet, users usually encrypt data when sending and receiving it. For example, the commonly used HTTPS protocol verifies the identity of servers through TLS certificates and encrypts communication between browsers and servers. However, the already-encrypted parts of unstructured data are re-encrypted by TLS during HTTPS transmission, wasting computing and energy resources. In this paper, we propose an energy-efficient network protocol, named MRP, that avoids multiple encryption. MRP can carry multiple types of application layer protocols while allowing free configuration of the location and encryption approach of the data that needs to be encrypted. Based on our proposal, users can freely segment application layer data, achieve on-demand encryption, reduce encryption costs without compromising security requirements, avoid redundant double encryption, and save energy.
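The sketch below shows, under assumptions, the selective-encryption idea: the sender marks which payload segments are already encrypted by the application, and the transport encrypts only the rest. The segment descriptor and the use of Fernet are illustrative choices, not MRP's wire format.

```python
# Hedged sketch: skip a second encryption pass for application-encrypted segments.
from dataclasses import dataclass
from cryptography.fernet import Fernet  # third-party: pip install cryptography

@dataclass
class Segment:
    data: bytes
    already_encrypted: bool  # set by the application layer

def mrp_encode(segments: list[Segment], key: bytes) -> list[bytes]:
    f = Fernet(key)
    out = []
    for seg in segments:
        # Only plaintext segments are encrypted by the transport layer.
        out.append(seg.data if seg.already_encrypted else f.encrypt(seg.data))
    return out

if __name__ == "__main__":
    key = Fernet.generate_key()
    app_cipher = Fernet(key).encrypt(b"secret fields")   # application-level encryption
    wire = mrp_encode([Segment(b"public headers", False),
                       Segment(app_cipher, True)], key)
    print(len(wire), "segments on the wire")
```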
Citations: 0
The core nodes identification method through adjustable network topology information
Pub Date : 2023-06-29 DOI: 10.1145/3600061.3603127
Xuemei Wang, Seung-Hyun Seo, Changda Wang
A social network has an inborn core-fringe structure. To improve the resolution of core node identification, this paper proposes a new method, named KSCNR (K-Shell and Salton index based core node recognition), that combines local network topology features (the Salton index with gravitational centrality) and global network topology features (K-Shell iteration) to identify core nodes. The KSCNR method uses weights to adjust the influence of the local and global topology features according to core node preferences, which makes it suitable for different social network scenarios. The experimental results show that the KSCNR method outperforms known methods such as the K-Shell, BC, DC, and CC methods in terms of both effectiveness and accuracy.
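As an illustration of combining the two feature families with an adjustable weight, the sketch below scores nodes by a normalized Salton-index-based local term and a K-shell-based global term; the normalization and weighting are assumptions and do not reproduce the paper's exact formulas (gravitational centrality is omitted here).

```python
# Illustrative weighted combination of a local (Salton-based) and a global
# (K-shell-based) score; not the paper's exact KSCNR formulas.
import math
import networkx as nx  # third-party: pip install networkx

def salton(g: nx.Graph, i, j) -> float:
    common = len(set(g[i]) & set(g[j]))
    return common / math.sqrt(g.degree(i) * g.degree(j))

def kscnr_scores(g: nx.Graph, w: float = 0.5) -> dict:
    ks = nx.core_number(g)                         # global feature: K-shell index
    max_ks = max(ks.values()) or 1
    scores = {}
    for v in g:
        local = sum(salton(g, v, u) for u in g[v]) # local feature from neighbors
        max_local = g.degree(v) or 1
        scores[v] = w * (local / max_local) + (1 - w) * (ks[v] / max_ks)
    return scores

if __name__ == "__main__":
    g = nx.karate_club_graph()
    top = sorted(kscnr_scores(g, w=0.4).items(), key=lambda kv: -kv[1])[:5]
    print(top)  # candidate core nodes
```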
Citations: 0