Data-driven algorithms play a pivotal role in the automated orchestration and management of network slices in 5G and beyond networks; however, their efficacy hinges on the timely and accurate monitoring of the network and its components. To support 5G slicing, monitoring must be comprehensive and encompass network slices end-to-end (E2E). Yet, several challenges arise with E2E network slice monitoring. Firstly, existing solutions are piecemeal and cannot correlate network-wide data from multiple sources (e.g., different network segments). Secondly, different slices can have different requirements regarding Key Performance Indicators (KPIs) and monitoring granularity, which necessitates dynamic adjustments in both KPI monitoring and data collection rates in real time to minimize network resource overhead. To address these challenges, in this paper, we present Monarch, a scalable monitoring architecture for 5G. Monarch is designed for cloud-native 5G deployments and focuses on network slice monitoring and per-slice KPI computation. We validate the proposed architecture by implementing Monarch on a 5G network slice testbed with up to 50 network slices. We exemplify Monarch’s role in 5G network monitoring by showcasing two scenarios: monitoring KPIs at both slice and network function levels. Our evaluations demonstrate Monarch’s scalability, with the architecture adeptly handling varying numbers of slices while maintaining consistent ingestion times between 2.25 and 2.75 ms. Furthermore, we showcase the effectiveness of Monarch’s adaptive monitoring mechanism, exemplified by a simple heuristic, on a real-world 5G dataset. The adaptive monitoring mechanism significantly reduces the overhead of network slice monitoring by up to 76% while ensuring acceptable accuracy.
Title: Monarch: Monitoring Architecture for 5G and Beyond Network Slices
Authors: Niloy Saha; Nashid Shahriar; Muhammad Sulaiman; Noura Limam; Raouf Boutaba; Aladdin Saleh
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 777-790. DOI: 10.1109/TNSM.2024.3479246. Published: 2024-10-14.
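The abstract describes an adaptive monitoring mechanism driven by a simple heuristic that tunes data collection rates per slice. As an illustrative sketch only (the paper's actual heuristic and parameter values are not reproduced here), one such controller widens a slice's sampling interval while its KPI is stable and snaps back to fast sampling when the KPI moves:

```python
# Illustrative sketch (not Monarch's actual heuristic): exponentially back
# off a slice's KPI sampling interval while the KPI is stable, and reset
# to the fastest rate on a significant change. All constants are invented.

def next_interval(current_ms, change_ratio, min_ms=500, max_ms=8000, tol=0.05):
    """Return the next sampling interval in milliseconds.

    change_ratio: relative change between the last two KPI samples.
    tol: relative change below which the KPI counts as stable.
    """
    if change_ratio <= tol:
        return min(max_ms, current_ms * 2)  # stable: back off, bounded by max_ms
    return min_ms                           # volatile: sample at full rate again

# A persistently stable KPI drives the interval from 500 ms to the 8000 ms
# cap, i.e., a 16x reduction in samples collected for that slice.
interval = 500
for _ in range(6):
    interval = next_interval(interval, change_ratio=0.01)
print(interval)  # 8000
```

The overhead reduction comes from the back-off; the tolerance `tol` bounds how much KPI movement can go unobserved, which is the accuracy side of the trade-off.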
Pub Date: 2024-10-14. DOI: 10.1109/TNSM.2024.3479870
Krishna Pal Thakur;Basabdatta Palit
In this work, we propose link adaptation-based spectrum and power allocation algorithms for uplink communication in 5G Cellular Vehicle-to-Everything (C-V2X) systems. In C-V2X, vehicle-to-vehicle (V2V) users share radio resources with vehicle-to-infrastructure (V2I) users. Existing works primarily focus on the optimal pairing of V2V and V2I users, assuming that each V2I user needs a single resource block (RB) while minimizing interference through power allocation. In contrast, in this work, we consider that the number of RBs needed by the users is a function of their channel condition and Quality of Service (QoS) requirements - a method called link adaptation. It effectively compensates for the frequent channel quality fluctuations at the high frequencies of 5G communication. 5G uses a multi-numerology frame structure to support diverse QoS requirements, which has also been considered in this work. The first algorithm proposed in this article greedily allocates RBs to V2I users using link adaptation. It then uses the Hungarian algorithm to pair V2V with V2I users while minimizing interference through power allocation. The second proposed method groups RBs into resource chunks (RCs) and uses the Hungarian algorithm twice - first to allocate RCs to V2I users and then to pair V2I users with V2V users. Extensive simulations reveal that link adaptation increases the number of satisfied V2I users and their sum rate while also improving the QoS of V2I and V2V users, making it indispensable for 5G C-V2X systems.
Title: A QoS-Aware Uplink Spectrum and Power Allocation With Link Adaptation for Vehicular Communications in 5G Networks
Authors: Krishna Pal Thakur; Basabdatta Palit
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 592-604. DOI: 10.1109/TNSM.2024.3479870. Published: 2024-10-14.
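The pairing step above is a one-to-one assignment problem: match each V2V user to a V2I user's resources so that total interference cost is minimized. The paper solves it with the Hungarian algorithm; for a toy 3x3 instance (cost values invented for illustration), exhaustive search over permutations finds the same optimum and makes the objective concrete:

```python
# Toy illustration of V2V-to-V2I pairing as a minimum-cost assignment.
# The Hungarian algorithm solves this in O(n^3); for a 3x3 example,
# brute force over all permutations gives the same optimal matching.
# The interference costs below are made up for illustration.
from itertools import permutations

cost = [  # cost[v2v][v2i] = interference cost if this V2V user shares that V2I user's RBs
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
]

# best[v] = index of the V2I user paired with V2V user v
best = min(permutations(range(3)),
           key=lambda p: sum(cost[v][p[v]] for v in range(3)))
total = sum(cost[v][best[v]] for v in range(3))
print(best, total)  # (1, 0, 2) 5.0
```

Note that the greedy per-row choice (each V2V user grabbing its cheapest V2I partner) would collide on column 1; the assignment formulation is what forces a globally optimal one-to-one matching.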
Title: Guest Editors’ Introduction: Special Issue on Robust and Resilient Future Communication Networks
Authors: Massimo Tornatore; Teresa Gomes; Carmen Mas-Machuca; Eiji Oki; Chadi Assi; Dominic Schupke
IEEE Transactions on Network and Service Management, vol. 21, no. 5, pp. 4929-4935. DOI: 10.1109/TNSM.2024.3469308. Published: 2024-10-11. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10715485
Pub Date: 2024-10-11. DOI: 10.1109/TNSM.2024.3479076
Roberto G. Pacheco;Divya J. Bajpai;Mark Shifrin;Rodrigo S. Couto;Daniel Sadoc Menasché;Manjesh K. Hanawal;Miguel Elias M. Campista
Deep Neural Networks (DNNs) have demonstrated exceptional performance in diverse tasks. However, deploying DNNs on resource-constrained devices presents challenges due to energy consumption and delay overheads. To mitigate these issues, early-exit DNNs (EE-DNNs) incorporate exit branches within intermediate layers to enable early inferences. These branches estimate prediction confidence and employ a fixed threshold to determine early termination. Nonetheless, fixed thresholds yield suboptimal performance in dynamic contexts, where context refers to distortions caused by environmental conditions in image classification, or to variations in input distribution due to concept drift in NLP. In this article, we introduce Upper Confidence Bound in EE-DNNs (UCBEE), an online algorithm that dynamically adjusts early-exit thresholds based on context. UCBEE leverages confidence levels at intermediate layers and learns without the need for true labels. Through extensive experiments in image classification and NLP, we demonstrate that UCBEE achieves logarithmic regret, converging after just a few thousand observations across multiple contexts. We evaluate UCBEE for image classification and text mining. In the latter, we show that UCBEE can reduce cumulative regret and lower latency by approximately 10%–20% without compromising accuracy when compared to fixed-threshold alternatives. Our findings highlight UCBEE as an effective method for enhancing EE-DNN efficiency.
Title: UCBEE: A Multi Armed Bandit Approach for Early-Exit in Neural Networks
Authors: Roberto G. Pacheco; Divya J. Bajpai; Mark Shifrin; Rodrigo S. Couto; Daniel Sadoc Menasché; Manjesh K. Hanawal; Miguel Elias M. Campista
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 107-120. DOI: 10.1109/TNSM.2024.3479076. Published: 2024-10-11.
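UCBEE's core idea is to treat candidate exit thresholds as arms of a bandit and use an upper-confidence-bound rule to learn which threshold pays off best in the current context. The sketch below is generic UCB1 over three hypothetical thresholds, not the paper's exact algorithm: the reward model (a fixed mean per threshold, standing in for accuracy minus a latency penalty, observed with noise) is invented for illustration.

```python
# Generic UCB1 over a small set of candidate early-exit thresholds.
# The per-threshold mean rewards and the noise model are hypothetical;
# UCBEE's actual reward signal and updates differ.
import math
import random

random.seed(0)
thresholds = [0.5, 0.7, 0.9]
true_mean = {0.5: 0.3, 0.7: 0.8, 0.9: 0.4}  # invented accuracy/latency trade-offs

counts = {t: 0 for t in thresholds}
sums = {t: 0.0 for t in thresholds}

def pull(t):
    """Noisy observed reward for running with exit threshold t."""
    return true_mean[t] + random.uniform(-0.1, 0.1)

for step in range(1, 3001):
    if step <= len(thresholds):          # play each arm once to initialize
        arm = thresholds[step - 1]
    else:                                # pick arm with highest UCB index
        arm = max(thresholds, key=lambda t: sums[t] / counts[t]
                  + math.sqrt(2 * math.log(step) / counts[t]))
    counts[arm] += 1
    sums[arm] += pull(arm)

best = max(thresholds, key=lambda t: counts[t])
print(best)  # 0.7, the threshold with the best underlying reward
```

The exploration bonus shrinks as an arm accumulates pulls, so suboptimal thresholds are sampled only logarithmically often, which is the source of the logarithmic-regret behavior the abstract mentions.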
Pub Date: 2024-10-11. DOI: 10.1109/TNSM.2024.3479150
Qianwei Meng;Qingjun Yuan;Weina Niu;Yongjuan Wang;Siqi Lu;Guangsong Li;Xiangbin Wang;Wenqi He
Identifying Decentralized Applications (DApps) from encrypted network traffic plays an important role in areas such as network management and threat detection. However, DApps deployed on the same platform use the same encryption settings, so different DApps generate encrypted traffic with great similarity. In addition, existing flow-based methods consider each flow as an isolated individual and feed it sequentially into the neural network for feature extraction, ignoring the rich information that exists between flows; the relationships between different flows are therefore not effectively utilized. In this study, we propose a novel encrypted traffic classification model, IIT, which heterogeneously mines the potential features of intra- and inter-flows using two types of encoders based on the multi-head self-attention mechanism. By combining the complementary intra- and inter-flow perspectives, the entire process of information flow can be more completely understood and described. IIT provides a more complete perspective on network flows, with the intra-flow perspective focusing on information transfer between different packets within a flow, and the inter-flow perspective placing more emphasis on information interaction between different flows. We captured 44 classes of DApps in the real world and evaluated the IIT model on two datasets, covering DApp and malicious traffic classification tasks. The results demonstrate that the IIT model achieves a classification accuracy greater than 97% on the real-world dataset of 44 DApps, outperforming other state-of-the-art methods. In addition, the IIT model exhibits good generalization in the malicious traffic classification task.
Title: IIT: Accurate Decentralized Application Identification Through Mining Intra- and Inter-Flow Relationships
Authors: Qianwei Meng; Qingjun Yuan; Weina Niu; Yongjuan Wang; Siqi Lu; Guangsong Li; Xiangbin Wang; Wenqi He
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 394-408. DOI: 10.1109/TNSM.2024.3479150. Published: 2024-10-11.
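Both encoder types in the model above build on scaled dot-product self-attention, where each element (a packet within a flow, or a flow among flows) attends over all others. The toy sketch below shows only that primitive on made-up "packet" embeddings; it does not reproduce the paper's multi-head encoders or any of its dimensions.

```python
# Scaled dot-product self-attention on toy packet embeddings (numbers
# invented). Each output row is a weighted mix of all value rows, with
# weights derived from query-key similarity - the mechanism that lets
# intra-flow encoders relate packets and inter-flow encoders relate flows.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def attention(Q, K, V):
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # how much this element attends to each other element
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Self-attention: queries, keys, and values all come from the same
# three toy embeddings.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = attention(X, X, X)
print([[round(v, 3) for v in row] for row in Y])
```

Because each softmax row sums to one, every output vector is a convex combination of the inputs, so attended representations stay within the range spanned by the original embeddings.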
Pub Date: 2024-10-11. DOI: 10.1109/TNSM.2024.3468997
Yu-Zhen Janice Chen;Daniel S. Menasché;Don Towsley
Effective resource allocation in sensor networks, IoT systems, and distributed computing is essential for applications such as environmental monitoring, surveillance, and smart infrastructure. Sensors or agents must optimize their resource allocation to maximize the accuracy of parameter estimation. In this work, we consider a group of sensors or agents, each sampling from a different variable of a multivariate Gaussian distribution and having a different estimation objective. We formulate a sensor or agent’s data collection and collaboration policy design problem as a Fisher information maximization (or Cramer-Rao bound minimization) problem. This formulation captures a novel trade-off in energy use between locally collecting univariate samples and collaborating to produce multivariate samples. When knowledge of the correlation between variables is available, we analytically identify two cases: (1) where the optimal data collection policy entails investing resources to transfer information for collaborative sampling, and (2) where knowledge of the correlation between samples cannot enhance estimation efficiency. When knowledge of certain correlations is unavailable, but collaboration remains potentially beneficial, we propose novel approaches that apply multi-armed bandit algorithms to learn the optimal data collection and collaboration policy in our sequential distributed parameter estimation problem. We illustrate the effectiveness of the proposed algorithms, DOUBLE-F, DOUBLE-Z, UCB-F, UCB-Z, through simulation.
Title: On Collaboration in Distributed Parameter Estimation With Resource Constraints
Authors: Yu-Zhen Janice Chen; Daniel S. Menasché; Don Towsley
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 151-167. DOI: 10.1109/TNSM.2024.3468997. Published: 2024-10-11.
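The Fisher-information/Cramer-Rao machinery in the abstract can be grounded in its simplest instance: estimating the mean of a Gaussian with known variance from n i.i.d. samples. The Fisher information is n/sigma^2, so the Cramer-Rao bound on any unbiased estimator's variance is sigma^2/n, and the sample mean attains it. A seeded Monte Carlo run (parameters chosen arbitrarily, not taken from the paper) confirms the match:

```python
# Worked single-sensor example of the Cramer-Rao bound: for n i.i.d.
# samples of N(mu, sigma^2) with known sigma, Fisher information is
# n / sigma^2 and the CRB is sigma^2 / n. The sample mean is efficient,
# so its empirical variance over many trials should approach the CRB.
import random
import statistics

random.seed(42)
mu, sigma, n = 3.0, 2.0, 25
crb = sigma ** 2 / n  # = 0.16

estimates = []
for _ in range(20000):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(statistics.fmean(xs))  # the (efficient) sample mean

emp_var = statistics.pvariance(estimates)
print(round(crb, 3), round(emp_var, 3))  # empirical variance is close to 0.16
```

In the paper's multivariate setting, collaboration effectively enlarges the information matrix by adding correlated observations, tightening the per-agent bound; this scalar case shows the bound itself.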
Open Radio Access Network (O-RAN) has recently emerged as a new trend in mobile network architecture. It is based on four founding principles: disaggregation, intelligence, virtualization, and open interfaces. In particular, RAN disaggregation involves dividing base station virtualized network functions (VNFs) into three distinct components - the Open-Central Unit (O-CU), the Open-Distributed Unit (O-DU), and the Open-Radio Unit (O-RU) - enabling each component to be implemented independently. Such disaggregation improves system performance and allows rapid and open innovation in many components while ensuring multi-vendor operability. As the disaggregation of network architecture becomes a key enabler of O-RAN, the deployment scenarios of VNFs on O-RAN clouds become critical. In this context, we propose an optimal and dynamic placement scheme for the O-CU and O-DU functionalities at the edge or in regional O-clouds. The objective is to maximize the users’ admittance ratio by considering mid-haul delay and server capacity requirements. We develop an Integer Linear Programming (ILP) model for O-CU and O-DU placement in the O-RAN architecture. Additionally, we introduce a Recurrent Neural Network (RNN) heuristic model that can effectively emulate the behavior of the ILP model. The results are promising, improving the users’ admittance ratio by up to 10% compared to state-of-the-art baselines. Moreover, our proposed model minimizes deployment costs and increases overall throughput. Furthermore, we assess the optimal model’s performance across diverse network conditions, including variable functional split options, link capacity bottlenecks, and channel bandwidth limitations. Our analysis delves into placement decisions, evaluating admittance ratio, radio and link resource utilization, and quantifying the impact on different service types.
Title: On Flexible Placement of O-CU and O-DU Functionalities in Open-RAN Architecture
Authors: Hiba Hojeij; Mahdi Sharara; Sahar Hoteit; Véronique Vèque
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 660-674. DOI: 10.1109/TNSM.2024.3476939. Published: 2024-10-09.
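The placement decision above, edge versus regional cloud per base station under capacity and mid-haul delay constraints, is what the ILP encodes. A deliberately tiny stand-in (all numbers invented, and brute-force enumeration in place of an ILP solver, which real instances would require) makes the objective and constraints concrete:

```python
# Toy stand-in for the O-CU/O-DU placement ILP: for each of three
# hypothetical base stations, place its units either at the edge or in
# the regional cloud, subject to edge-server capacity, and count a BS's
# users as admitted only if edge-placed or within the mid-haul delay
# budget. 2^3 candidate placements are small enough to enumerate.
from itertools import product

users = [40, 25, 30]              # users behind each base station (invented)
edge_cost = [2, 1, 2]             # edge capacity units each BS would consume
EDGE_CAPACITY = 3                 # total capacity of the edge servers
regional_delay = [1.2, 0.8, 2.5]  # extra mid-haul delay (ms) if placed regionally
DELAY_BUDGET_MS = 2.0             # per-BS delay budget for regional placement

best_admitted, best_plan = -1, None
for plan in product(("edge", "regional"), repeat=3):
    used = sum(edge_cost[i] for i, p in enumerate(plan) if p == "edge")
    if used > EDGE_CAPACITY:
        continue  # infeasible: edge servers over capacity
    admitted = sum(users[i] for i, p in enumerate(plan)
                   if p == "edge" or regional_delay[i] <= DELAY_BUDGET_MS)
    if admitted > best_admitted:
        best_admitted, best_plan = admitted, plan

print(best_plan, best_admitted)  # ('regional', 'edge', 'edge') 95
```

The optimum spends scarce edge capacity on the delay-critical base station (the one whose regional delay exceeds the budget), which is exactly the kind of trade-off the ILP resolves at scale.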
Pub Date: 2024-10-09. DOI: 10.1109/TNSM.2024.3476480
Daniel Ayepah-Mensah;Guolin Sun;Gordon Owusu Boateng;Guisong Liu
Resource sharing in radio access networks (RANs) can be conceptualized as a resource trading process between infrastructure providers (InPs) and multiple mobile virtual network operators (MVNOs), where InPs lease essential network resources, such as spectrum and infrastructure, to MVNOs. Given the dynamic nature of RANs, deep reinforcement learning (DRL) is a suitable approach to decision-making and resource optimization that ensures adaptive and efficient resource allocation strategies. In RAN slicing, however, DRL struggles due to imbalanced data distribution and reliance on high-quality training data. In addition, the trade-off between the global solution and individual agent goals can lead to oscillatory behavior, preventing convergence to an optimal solution. Therefore, we propose a collaborative intelligent resource trading framework with a graph-based digital twin (DT) for multiple InPs and MVNOs based on federated DRL. First, we present a customized mutual policy distillation scheme for resource trading, where complex MVNO teacher policies are distilled into InP student models and vice versa. This mutual distillation encourages collaboration to achieve personalized resource trading decisions that reach the optimal local and global solution. Second, the DT uses a graph-based model to capture the dynamic interactions between InPs and MVNOs to improve resource trading decisions. The DT can accurately predict resource prices and demand from MVNOs to provide high-quality training data. In addition, the DT identifies underlying patterns and trends through advanced analytics, enabling proactive resource allocation and pricing strategies. The simulation results and analysis confirm the effectiveness and robustness of the proposed framework under an unbalanced data distribution.
Title: Federated Policy Distillation for Digital Twin-Enabled Intelligent Resource Trading in 5G Network Slicing
Authors: Daniel Ayepah-Mensah; Guolin Sun; Gordon Owusu Boateng; Guisong Liu
IEEE Transactions on Network and Service Management, vol. 22, no. 1, pp. 361-379. DOI: 10.1109/TNSM.2024.3476480. Published: 2024-10-09.
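Policy distillation, the mechanism at the heart of the teacher/student scheme above, typically trains the student to match the teacher's temperature-softened action distribution, e.g., by minimizing a KL divergence. The sketch below shows only that loss on invented logits; it is not the paper's customized mutual scheme, where InP and MVNO agents distill into each other.

```python
# Core of policy distillation: KL divergence between the teacher's
# temperature-softened action distribution and the student's. The
# logits are invented; in the paper's setting they would come from
# the resource-trading policies of InP and MVNO agents.
import math

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [2.0, 1.0, 0.1]  # hypothetical teacher action preferences
student_logits = [1.5, 1.4, 0.2]  # hypothetical student action preferences

# A higher temperature softens both distributions, exposing more of the
# teacher's relative preferences between non-argmax actions.
losses = {}
for T in (1.0, 2.0):
    losses[T] = kl(softmax(teacher_logits, T), softmax(student_logits, T))
    print(T, round(losses[T], 4))
```

In a training loop this loss would be differentiated with respect to the student logits; "mutual" distillation simply runs the same objective in both directions with roles swapped.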
Pub Date: 2024-10-08. DOI: 10.1109/TNSM.2024.3476138
Yonghan Wu;Jin Li;Min Zhang;Bing Ye;Xiongyan Tang
Large-scale transmission networks (LSTNs) place high quality-of-service (QoS) requirements on 6G. In an LSTN, bounded low delay, low packet loss rates, and controllable bandwidth are required to provide guaranteed QoS, involving techniques from both the network layer and the physical layer. Among these techniques, routing computation is one of the fundamental problems in ensuring high QoS, especially bounded low delay. Research on routing computation in LSTNs includes routing recovery based on searching and pruning strategies, individual-component routing and fiber connections, and multi-point relaying (MPR)-based topology and routing selection. However, these schemes reduce routing time only through simple topological pruning or linear constraints, which is unsuitable for efficient routing in LSTNs of increasing scale and dynamics. In this paper, an efficient and comprehensive routing computation algorithm, namely multi-factor assessment and compression for network topologies (MC), is proposed. Multiple parameters of nodes and links are jointly assessed, and topology compression is performed based on MC to accelerate routing computation. Simulation results show that MC adds space complexity but markedly reduces the time cost of routing computation. In larger network topologies, compared with classic and advanced routing algorithms, MC-based routing algorithms achieve better routing computation time, number of transmitted services, average single-route throughput, and packet loss rates, showing potential to meet the high QoS requirements of LSTNs.
{"title":"A Comprehensive and Efficient Topology Representation in Routing Computation for Large-Scale Transmission Networks","authors":"Yonghan Wu;Jin Li;Min Zhang;Bing Ye;Xiongyan Tang","doi":"10.1109/TNSM.2024.3476138","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3476138","url":null,"abstract":"Large-scale transmission networks (LSTNs) place stringent quality-of-service (QoS) requirements on 6G. In LSTNs, bounded low delay, low packet loss rates, and controllable bandwidth are required to guarantee QoS, involving techniques from both the network layer and the physical layer. Among these techniques, routing computation is one of the fundamental problems for ensuring high QoS, especially bounded low delay. Research on routing computation in LSTNs includes routing recovery based on searching and pruning strategies, individual-component routing and fiber connections, and multi-point relaying (MPR)-based topology and routing selection. However, these schemes reduce routing time only through simple topological pruning or linear constraints, which is unsuitable for efficient routing in LSTNs of increasing scale and dynamics. In this paper, an efficient and comprehensive routing computation algorithm, namely multi-factor assessment and compression for network topologies (MC), is proposed. Multiple parameters of nodes and links are jointly assessed, and topology compression is performed based on MC to accelerate routing computation. Simulation results show that MC incurs additional space complexity but markedly reduces the time cost of routing computation. In larger network topologies, compared with classic and state-of-the-art routing algorithms, MC-based routing algorithms achieve greater improvements in routing computation time, number of transmitted services, average per-route throughput, and packet loss rate, showing potential to meet the high QoS requirements of LSTNs.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"220-241"},"PeriodicalIF":4.7,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
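The core idea behind MC's speed-up — compressing the topology before path computation so the shortest-path search runs on a smaller graph — can be illustrated with a minimal sketch. The `compress_chains` function below is a hypothetical stand-in that only collapses degree-2 relay chains into single weighted links while preserving shortest-path distances between kept endpoints; the paper's actual multi-factor assessment jointly weighs many more node and link parameters.

```python
import heapq

def compress_chains(adj, keep=frozenset()):
    """Collapse degree-2 relay nodes into single weighted links.

    Illustrative stand-in for topology compression: any path through a
    degree-2 node v must use both of v's edges, so replacing a-v-b with a
    direct a-b link of weight w(a,v)+w(v,b) preserves shortest distances.
    Nodes in `keep` (e.g., routing endpoints) are never removed.
    """
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}  # deep-ish copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in keep or len(adj[v]) != 2:
                continue
            (a, w1), (b, w2) = adj[v].items()
            if a == b:
                continue
            w = w1 + w2
            if adj[a].get(b, float("inf")) > w:  # keep the shorter a-b link
                adj[a][b] = w
                adj[b][a] = w
            del adj[a][v], adj[b][v], adj[v]
            changed = True
            break
    return adj

def dijkstra(adj, src, dst):
    """Standard Dijkstra shortest-path distance on a weighted adjacency dict."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Toy topology (hypothetical): an A-B-C-D relay chain plus a costly direct A-D link.
adj = {
    "A": {"B": 1, "D": 10},
    "B": {"A": 1, "C": 2},
    "C": {"B": 2, "D": 3},
    "D": {"C": 3, "A": 10},
}
small = compress_chains(adj, keep={"A", "D"})
dist = dijkstra(small, "A", "D")  # same distance as on the full graph, fewer nodes searched
```

The trade-off matches the abstract's observation: the compressed graph (here two nodes instead of four) is an extra structure to store, but every subsequent routing query touches fewer nodes and edges.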
Pub Date: 2024-10-07 DOI: 10.1109/TNSM.2024.3474717
Kai Zhao;Xiaowei Chuo;Fangchao Yu;Bo Zeng;Zhi Pang;Lina Wang
Split learning has emerged as a practical and efficient privacy-preserving distributed machine learning paradigm. Understanding the privacy risks of split learning is critical for its application in privacy-sensitive scenarios. However, previous attacks against split learning generally depended on unduly strong assumptions or non-standard settings advantageous to the attacker. This paper proposes a novel auxiliary model-based label inference attack framework against split learning, named SplitAUM. SplitAUM first builds an auxiliary model on the client side using intermediate representations of the cut layer and a small number of dummy labels. Then, a learning regularization objective is carefully designed to train the auxiliary model and transfer the knowledge of the server model to the client. Finally, SplitAUM uses the auxiliary model's output on local data to infer the server's private labels. In addition, to further improve attack effectiveness, we use semi-supervised clustering to initialize the dummy labels of the auxiliary model. Since SplitAUM relies only on auxiliary models, it is highly scalable. We conduct extensive experiments on three different categories of datasets, comparing against four typical attacks. Experimental results demonstrate that SplitAUM can effectively infer private labels and outperform existing attack frameworks in challenging yet practical scenarios. We hope our work paves the way for future analyses of the security of split learning.
{"title":"SplitAUM: Auxiliary Model-Based Label Inference Attack Against Split Learning","authors":"Kai Zhao;Xiaowei Chuo;Fangchao Yu;Bo Zeng;Zhi Pang;Lina Wang","doi":"10.1109/TNSM.2024.3474717","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3474717","url":null,"abstract":"Split learning has emerged as a practical and efficient privacy-preserving distributed machine learning paradigm. Understanding the privacy risks of split learning is critical for its application in privacy-sensitive scenarios. However, previous attacks against split learning generally depended on unduly strong assumptions or non-standard settings advantageous to the attacker. This paper proposes a novel auxiliary model-based label inference attack framework against split learning, named <monospace>SplitAUM</monospace>. <monospace>SplitAUM</monospace> first builds an auxiliary model on the client side using intermediate representations of the cut layer and a small number of dummy labels. Then, a learning regularization objective is carefully designed to train the auxiliary model and transfer the knowledge of the server model to the client. Finally, <monospace>SplitAUM</monospace> uses the auxiliary model's output on local data to infer the server's private labels. In addition, to further improve attack effectiveness, we use semi-supervised clustering to initialize the dummy labels of the auxiliary model. Since <monospace>SplitAUM</monospace> relies only on auxiliary models, it is highly scalable. We conduct extensive experiments on three different categories of datasets, comparing against four typical attacks. Experimental results demonstrate that <monospace>SplitAUM</monospace> can effectively infer private labels and outperform existing attack frameworks in challenging yet practical scenarios. We hope our work paves the way for future analyses of the security of split learning.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"930-940"},"PeriodicalIF":4.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
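The attack's basic premise — that a small auxiliary head trained on a handful of dummy-labeled cut-layer activations can recover labels held only by the server — can be sketched in a toy form. Everything below is a hypothetical illustration: synthetic Gaussian blobs stand in for cut-layer representations, and a plain logistic-regression head stands in for the auxiliary model; it omits SplitAUM's learning regularization objective and semi-supervised clustering initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split-learning setup: the client only ever sees cut-layer activations
# z = f_client(x); the true labels live on the server and are never shared.
# Here two Gaussian blobs stand in for the activations of two classes.
n, d = 200, 8
z0 = rng.normal(-1.0, 1.0, (n, d))   # class-0 activations
z1 = rng.normal(+1.0, 1.0, (n, d))   # class-1 activations
Z = np.vstack([z0, z1])
true_y = np.array([0] * n + [1] * n)  # server-side ground truth, unseen by the attacker

# The attacker knows labels for only a handful of "dummy" samples.
dummy_idx = np.array([0, 1, 2 * n - 2, 2 * n - 1])
dummy_y = true_y[dummy_idx].astype(float)

# Auxiliary model: logistic regression fit on the dummy-labeled activations
# by gradient descent, then used to infer every remaining label.
w, b = np.zeros(d), 0.0
for _ in range(500):
    logits = Z[dummy_idx] @ w + b
    p = 1.0 / (1.0 + np.exp(-logits))      # sigmoid
    g = p - dummy_y                        # gradient of the logistic loss
    w -= 0.1 * (Z[dummy_idx].T @ g) / len(dummy_idx)
    b -= 0.1 * g.mean()

inferred = (Z @ w + b > 0).astype(int)     # inferred private labels
acc = (inferred == true_y).mean()
```

Even this crude head recovers most labels once the cut-layer representations are class-separable, which is the leakage the paper exploits; SplitAUM's knowledge-transfer objective and clustering-based label initialization push the inference accuracy well beyond what a few raw dummy labels alone achieve.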