Yihong Tao, Bo Lei, Haoyang Shi, Jingkai Chen, Xing Zhang
With the development of satellite communication technology, satellite-terrestrial integrated networks (STIN), which integrate satellite networks and ground networks, can realize seamless global coverage of communication services. Confronted with the intricacies of network dynamics, the heterogeneity of resources, and the unpredictability of user mobility, dynamic resource allocation within such networks faces formidable challenges. Digital twin (DT), an emerging technique, mirrors a physical network in a virtual counterpart that can be used to monitor, analyze, and optimize the physical network. Nevertheless, when constructing the DT model, the deployment location and resource allocation of DTs may adversely affect their performance. Therefore, we propose a STIN model that alleviates the limited flexibility of single-layer deployment in traditional edge networks by deploying DTs at multi-layer nodes in a STIN. To address the challenge of deploying DTs in the network, we formulate a multi-layer DT deployment problem in a STIN with the objective of reducing system delay. We then adopt a multi-agent reinforcement learning (MARL) scheme to explore the optimal strategy for the DT multi-layer deployment problem. Simulation results demonstrate that the implemented scheme notably reduces system delay.
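The flavor of the MARL deployment scheme can be illustrated with a toy sketch: independent learners, one per DT, each pick a deployment layer (edge, satellite, or cloud) and update a per-layer estimate of the resulting system delay. All layer delays, the congestion cost, and the hyperparameters below are hypothetical stand-ins, not values or models from the paper.

```python
import random

# Hypothetical per-layer propagation delays (ms); not values from the paper.
LAYERS = {"edge": 2.0, "satellite": 20.0, "cloud": 50.0}

def system_delay(choices, proc_cost=5.0):
    """Propagation delay per DT plus a congestion term at shared layers."""
    load = {layer: 0 for layer in LAYERS}
    for c in choices:
        load[c] += 1
    return sum(LAYERS[c] + proc_cost * load[c] for c in choices)

def train(n_agents=4, episodes=3000, eps=0.1, alpha=0.1, seed=0):
    """Independent learners: each agent tracks the expected system delay
    of deploying its DT at each layer and acts epsilon-greedily."""
    rng = random.Random(seed)
    q = [{layer: 0.0 for layer in LAYERS} for _ in range(n_agents)]
    for _ in range(episodes):
        choices = [
            rng.choice(list(LAYERS)) if rng.random() < eps
            else min(q[i], key=q[i].get)  # layer with lowest delay estimate
            for i in range(n_agents)
        ]
        delay = system_delay(choices)
        for i, c in enumerate(choices):
            q[i][c] += alpha * (delay - q[i][c])
    return [min(qi, key=qi.get) for qi in q]  # greedy deployment per agent
```

The congestion term makes agents spread load across layers rather than all choosing the lowest-latency node, which is the coordination aspect the multi-agent formulation captures.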
"Adaptive Multi-Layer Deployment for A Digital Twin Empowered Satellite-Terrestrial Integrated Network". arXiv:2409.05480, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
Yifan Hua, Jinlong Pang, Xiaoxue Zhang, Yi Liu, Xiaofeng Shi, Bao Wang, Yang Liu, Chen Qian
Decentralized federated learning (DFL) uses peer-to-peer communication to avoid the single point of failure problem in federated learning and has been considered an attractive solution for machine learning tasks on distributed devices. We provide the first solution to a fundamental network problem of DFL: what overlay network should DFL use to achieve fast training of highly accurate models, low communication, and decentralized construction and maintenance? Overlay topologies of DFL have been investigated, but no existing DFL topology includes decentralized protocols for network construction and topology maintenance. Without these protocols, DFL cannot run in practice. This work presents an overlay network, called FedLay, which provides fast training and low communication cost for practical DFL. FedLay is the first solution for constructing near-random regular topologies in a decentralized manner and maintaining the topologies under node joins and failures. Experiments based on prototype implementation and simulations show that FedLay achieves the fastest model convergence and highest accuracy on real datasets compared to existing DFL solutions while incurring small communication costs and being resilient to node joins and failures.
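FedLay's construction and maintenance protocols are decentralized; as a simplified, centralized stand-in, the sketch below builds a near-random d-regular topology by stub matching, the same topology family the paper targets. The retry loop and parameters are illustrative only, not FedLay's actual protocol.

```python
import random

def random_regular_topology(n, d, seed=0):
    """Stub matching: give every node d stubs, shuffle, and pair them up;
    retry whenever the pairing produces a self-loop or duplicate edge."""
    assert n * d % 2 == 0 and d < n
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                ok = False
                break
            edges.add((min(u, v), max(u, v)))
        if ok:
            return edges
```

Random regular graphs have small diameter and good expansion, which is why they are attractive overlays for fast gossip-style model averaging.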
"Towards Practical Overlay Networks for Decentralized Federated Learning". arXiv:2409.05331, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
Paulo Furtado Correia, Andre Coelho, Manuel Ricardo
In wireless communications, the need to cover operation areas, such as seaports, is at the forefront of discussion, especially regarding network capacity provisioning. Radio network planning typically involves determining the number of fixed cells, considering link budgets, and deploying them geometrically centered across targeted areas. This paper proposes a solution to determine the optimal position for a mobile cell, considering 3GPP path loss models. The optimal position for the mobile cell maximises the aggregate network capacity offered to a set of User Equipments (UEs), with gains of up to 187% compared to positioning the mobile cell at the UEs' geometric center. The proposed solution can be used by network planners and integrated into network optimisation tools, with the potential to reduce costs associated with Radio Access Network (RAN) planning by enhancing flexibility for on-demand deployments.
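The optimization can be sketched as follows: score each candidate cell position by the sum of per-UE Shannon capacities under a path loss model, and take the argmax. A generic log-distance model stands in for the 3GPP models, and exhaustive grid search stands in for the paper's solution method; all radio parameters are hypothetical.

```python
import math

def path_loss_db(d_m, exponent=3.0, pl0=40.0):
    """Generic log-distance path loss; a stand-in for the 3GPP models."""
    return pl0 + 10.0 * exponent * math.log10(max(d_m, 1.0))

def aggregate_capacity(cell, ues, tx_dbm=30.0, noise_dbm=-100.0, bw_hz=20e6):
    """Sum of per-UE Shannon capacities for a candidate cell position."""
    total = 0.0
    for x, y in ues:
        d = math.hypot(cell[0] - x, cell[1] - y)
        snr_db = tx_dbm - path_loss_db(d) - noise_dbm
        total += bw_hz * math.log2(1.0 + 10.0 ** (snr_db / 10.0))
    return total

def best_position(ues, grid=50, span=1000.0):
    """Exhaustive search over a grid covering the operation area."""
    candidates = (
        (span * i / grid, span * j / grid)
        for i in range(grid + 1)
        for j in range(grid + 1)
    )
    return max(candidates, key=lambda p: aggregate_capacity(p, ues))
```

Because capacity is logarithmic in SNR, the capacity-maximizing position is pulled toward dense UE clusters rather than the plain geometric center, which is the effect behind the reported gains.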
"Positioning of a Next Generation Mobile Cell to Maximise Aggregate Network Capacity". arXiv:2409.06098, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
Traffic sampling has become an indispensable tool in network management. While there exists a plethora of sampling systems, they generally assume flow rates are stable and predictable over a sampling period. Consequently, when deployed in networks with dynamic flow rates, some flows may be missed or under-sampled, while others are over-sampled. This paper presents the design and evaluation of dSamp, a network-wide sampling system capable of handling dynamic flow rates in Software-Defined Networks (SDNs). The key idea in dSamp is to consider flow rate fluctuations when deciding on which network switches and at what rate to sample each flow. To this end, we develop a general model for sampling allocation with dynamic flow rates, and then design an efficient approximate integer linear program called APX that can be used to compute sampling allocations even in large-scale networks. To show the efficacy of dSamp for network monitoring, we have implemented APX and several existing solutions in ns-3 and conducted extensive experiments using model-driven as well as trace-driven simulations. Our results indicate that, by considering dynamic flow rates, APX outperforms the existing solutions by up to 10% in sampling more flows at a given sampling rate.
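The core allocation decision, which switch samples which flow under fluctuating rates, can be sketched with a greedy heuristic: budget each flow by its peak rather than average rate so bursts stay within switch capacity. This is a hypothetical stand-in for dSamp's APX program, not the paper's formulation.

```python
def allocate(flows, switches, capacity):
    """Place each flow's sampling task on the least-loaded switch along its
    path, budgeting for the flow's peak rate so bursts stay within the
    switch's sampling capacity. Flows are handled largest-peak first."""
    load = {s: 0.0 for s in switches}
    placement = {}
    # flows: {flow_id: (path, peak_rate)}
    for fid, (path, peak) in sorted(flows.items(), key=lambda kv: -kv[1][1]):
        candidates = [s for s in path if load[s] + peak <= capacity[s]]
        if candidates:
            best = min(candidates, key=lambda s: load[s])
            load[best] += peak
            placement[fid] = best
    return placement
```

Budgeting by peak rate is exactly what a static, average-rate allocator gets wrong: under dynamic rates the average-based plan over-subscribes switches during bursts, which is when flows get missed.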
"Coordinated Sampling in SDNs with Dynamic Flow Rates", Soroosh Esmaeilian, Mahdi Dolati, Sogand Sadrhaghighi, Majid Ghaderi. arXiv:2409.05966, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
In this paper we reexamine an assumption that underpinned the development of the Internet architecture, namely that a stateless and loosely synchronous point-to-point datagram delivery service would be sufficient to meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. Such applications are inherently asynchronous and point-to-multipoint in nature. We explain how the inability of distributed systems based on this stateless datagram service to provide adequate and affordable support for them within the public (i.e., universally shared and available) network led to the development of private overlay infrastructures, specifically Content Delivery Networks and distributed Cloud data centers. We argue that the burdens imposed by reliance on these private overlays may have been an obstacle to achieving the Open Data Networking goals of early Internet advocates. The contradiction between those initial goals and the exploitative commercial imperatives of hypergiant overlay operators is offered as a possibly important reason for the negative impact of their most profitable applications (e.g., social media) and monetization strategies (e.g., targeted advertisement). We propose that one important step in resolving this contradiction may be to reconsider the adequacy of the Internet's stateless datagram service model.
"How We Lost The Internet", Micah Beck, Terry Moore. arXiv:2409.05264, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
Geymerson S. Ramos, Razvan Stanica, Rian G. S. Pinheiro, Andre L. L. Aquino
This study aims to optimize the association of vehicular users to base stations in a mobile network. We propose an efficient heuristic solution that considers the base station average handover frequency, the channel quality indicator, and bandwidth capacity. We evaluate this solution using real-world base station locations from São Paulo, Brazil, and the SUMO mobility simulator. We compare our approach against a state-of-the-art solution based on route prediction, maintaining or surpassing the provided quality of service with the same number of handover operations. Additionally, the proposed solution reduces execution time by more than 80% compared to an exact method, while still achieving optimal solutions.
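A heuristic over those three signals can be sketched as a weighted score per candidate base station; the weights, field names, and linear form below are hypothetical illustrations, not the paper's actual scoring function.

```python
def associate(candidates, weights=(0.5, 0.3, 0.2)):
    """Score each candidate base station on channel quality, spare
    bandwidth, and (penalized) average handover frequency; pick the best.

    candidates: list of dicts with keys "id", "cqi", "spare_bw",
    "handover_freq" (hypothetical schema)."""
    w_cqi, w_bw, w_ho = weights
    def score(bs):
        return (w_cqi * bs["cqi"]
                + w_bw * bs["spare_bw"]
                - w_ho * bs["handover_freq"])
    return max(candidates, key=score)["id"]
```

Penalizing a station's historical handover frequency discourages associating with cells that vehicles tend to leave quickly, which is how a heuristic can match route-prediction quality without predicting routes.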
"Optimizing Vehicular Users Association in Urban Mobile Networks". arXiv:2409.05845, arXiv - CS - Networking and Internet Architecture, 2024-09-09.
As the use of network traces in network measurement research becomes increasingly prevalent, concerns regarding privacy leakage from network traces have garnered the public's attention. To safeguard network traces, researchers have proposed trace synthesis, which retains the essential properties of the raw data. However, previous works also show that traces synthesized with generative models are vulnerable to linkage attacks. This paper introduces NetDPSyn, the first system to synthesize high-fidelity network traces under privacy guarantees. NetDPSyn is built with the Differential Privacy (DP) framework at its core, which differs significantly from prior works that apply DP only when training the generative model. Experiments conducted on three flow and two packet datasets indicate that NetDPSyn achieves much better data utility in downstream tasks such as anomaly detection. NetDPSyn is also 2.5 times faster than the other methods on average in data synthesis.
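The DP-at-the-core idea can be illustrated with the standard building block: add Laplace noise to a marginal histogram of a trace attribute, then sample synthetic records from the noisy marginal. This is a one-dimensional textbook sketch, not NetDPSyn's actual multi-attribute pipeline.

```python
import math
import random

def dp_histogram(values, bins, epsilon=1.0, seed=0):
    """Laplace mechanism on a one-dimensional marginal histogram.

    Each record lands in exactly one bin, so the L1 sensitivity is 1 and
    Laplace noise of scale 1/epsilon gives epsilon-DP."""
    rng = random.Random(seed)
    counts = [0] * len(bins)
    for v in values:
        for i, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                counts[i] += 1
                break
    scale = 1.0 / epsilon

    def laplace():
        u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
        return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    return [max(0.0, c + laplace()) for c in counts]

def synthesize(noisy_counts, bins, n, seed=1):
    """Draw n synthetic values from the noisy marginal."""
    rng = random.Random(seed)
    total = sum(noisy_counts) or 1.0
    out = []
    for _ in range(n):
        r = rng.random() * total
        for (lo, hi), c in zip(bins, noisy_counts):
            if r < c:
                out.append(rng.uniform(lo, hi))
                break
            r -= c
        else:  # float rounding fell past the last bin
            out.append(rng.uniform(*bins[-1]))
    return out
```

Because noise is injected once into the released marginals, everything sampled from them afterwards is covered by DP's post-processing guarantee, unlike schemes that only privatize model training.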
"NetDPSyn: Synthesizing Network Traces under Differential Privacy", Danyu Sun, Joann Qiongna Chen, Chen Gong, Tianhao Wang, Zhou Li. arXiv:2409.05249, arXiv - CS - Networking and Internet Architecture, 2024-09-08.
Mohammad Asif Habibi, Bin Han, Merve Saimler, Ignacio Labrador Pavon, Hans D. Schotten
The emergence of the open radio access network (O-RAN) architecture offers a paradigm shift in cellular network management and service orchestration, leveraging data-driven, intent-based, autonomous, and intelligent solutions. Within O-RAN, the service management and orchestration (SMO) framework plays a pivotal role in managing network functions (NFs), resource allocation, service provisioning, and other operations. However, the increasing complexity and scale of O-RANs demand autonomous and intelligent models for optimizing SMO operations. To achieve this goal, it is essential to integrate intelligence and automation into the operations of SMO. In this manuscript, we propose three scenarios for integrating machine learning (ML) algorithms into SMO. We then focus on exploring one of these scenarios, in which the non-real-time RAN intelligent controller (Non-RT RIC) plays a major role in data collection, as well as model training, deployment, and refinement, by proposing a centralized ML architecture. Finally, we identify potential challenges associated with implementing a centralized ML solution within SMO.
"Towards an AI/ML-driven SMO Framework in O-RAN: Scenarios, Solutions, and Challenges". arXiv:2409.05092, arXiv - CS - Networking and Internet Architecture, 2024-09-08.
Multiple network management tasks, from resource allocation to intrusion detection, rely on some form of ML-based network-traffic classification (MNC). Despite their potential, MNCs are vulnerable to adversarial inputs, which can lead to outages, poor decision-making, and security violations, among other issues. The goal of this paper is to help network operators assess and enhance the robustness of their MNC against adversarial inputs. The most critical step is generating inputs that can fool the MNC while being realizable under various threat models. Compared to other ML models, finding adversarial inputs against MNCs is more challenging due to the existence of non-differentiable components (e.g., traffic engineering) and the need to constrain inputs to preserve semantics and ensure reliability. These factors prevent the direct use of well-established gradient-based methods developed in adversarial ML (AML). To address these challenges, we introduce PANTS, a practical white-box framework that uniquely integrates AML techniques with Satisfiability Modulo Theories (SMT) solvers to generate adversarial inputs for MNCs. We also embed PANTS into an iterative adversarial training process that enhances the robustness of MNCs against adversarial inputs. In the median, PANTS is 70% and 2x more likely to find adversarial inputs against target MNCs than two state-of-the-art baselines, Amoeba and BAP. Integrating PANTS into the adversarial training process enhances the robustness of the target MNCs by 52.7% without sacrificing their accuracy. Critically, these PANTS-robustified MNCs are more robust than their vanilla counterparts against distinct attack-generation methodologies.
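The shape of the problem, searching for a misclassified input that still satisfies semantic constraints, can be sketched without gradients or an SMT solver: random search over packet-size perturbations gated by an explicit constraint predicate. The classifier, constraints, and perturbation model below are hypothetical toys, far simpler than PANTS's AML-plus-SMT machinery.

```python
import random

def semantically_valid(orig, cand, mtu=1500):
    """Constraint predicate standing in for the SMT check: sizes stay
    positive, within the MTU, and are never shrunk below the original."""
    return all(0 < c <= mtu and c >= o for o, c in zip(orig, cand))

def find_adversarial(classify, orig, target_label, tries=2000, seed=0):
    """Random search: propose perturbed packet-size sequences and keep the
    first semantically valid one that the classifier assigns the target
    (wrong) label."""
    rng = random.Random(seed)
    for _ in range(tries):
        cand = [min(1500, s + rng.randint(0, 700)) for s in orig]
        if semantically_valid(orig, cand) and classify(cand) == target_label:
            return cand
    return None
```

The point of the predicate is realizability: an unconstrained attack could propose negative or over-MTU packet sizes that no real sender can emit, which is exactly the gap PANTS closes with a solver instead of a hand-written check.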
"PANTS: Practical Adversarial Network Traffic Samples against ML-powered Networking Classifiers", Minhao Jin, Maria Apostolaki. arXiv:2409.04691, arXiv - CS - Networking and Internet Architecture, 2024-09-07.
Efficient load balancing is crucial in cloud computing environments to ensure optimal resource utilization, minimize response times, and prevent server overload. Traditional load balancing algorithms, such as round-robin or least connections, are often static and unable to adapt to the dynamic and fluctuating nature of cloud workloads. In this paper, we propose a novel adaptive load balancing framework using Reinforcement Learning (RL) to address these challenges. The RL-based approach continuously learns and improves the distribution of tasks by observing real-time system performance and making decisions based on traffic patterns and resource availability. Our framework is designed to dynamically reallocate tasks to minimize latency and ensure balanced resource usage across servers. Experimental results show that the proposed RL-based load balancer outperforms traditional algorithms in terms of response time, resource utilization, and adaptability to changing workloads. These findings highlight the potential of AI-driven solutions for enhancing the efficiency and scalability of cloud infrastructures.
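The learn-from-observed-performance loop can be sketched as an epsilon-greedy router: maintain a running latency estimate per server and send each task to the server with the lowest estimate, exploring occasionally. The server speeds, exponential latency model, and hyperparameters are hypothetical, not the paper's setup.

```python
import random

def simulate(episodes=5000, eps=0.1, alpha=0.05, seed=0):
    """Epsilon-greedy routing: learn a per-server latency estimate from
    observed response times and exploit the lowest one."""
    rng = random.Random(seed)
    speeds = {"s1": 1.0, "s2": 2.0, "s3": 4.0}  # service rates; s3 fastest
    q = {s: 0.0 for s in speeds}      # latency estimates (running averages)
    counts = {s: 0 for s in speeds}   # tasks routed to each server
    for _ in range(episodes):
        if rng.random() < eps:
            server = rng.choice(list(speeds))   # explore
        else:
            server = min(q, key=q.get)          # exploit lowest estimate
        latency = rng.expovariate(speeds[server])  # toy response-time draw
        counts[server] += 1
        q[server] += alpha * (latency - q[server])
    return q, counts
```

Unlike round-robin, nothing here is configured in advance: if a server slows down, its latency estimate rises and traffic shifts away automatically, which is the adaptivity the RL framing buys.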
"Reinforcement Learning-Based Adaptive Load Balancing for Dynamic Cloud Environments", Kavish Chawla. arXiv:2409.04896, arXiv - CS - Networking and Internet Architecture, 2024-09-07.