Pub Date : 2025-12-05 DOI: 10.1109/TNSM.2025.3640713
Dongyi Han;Qiang Zhi
In federated learning (FL), effective client and privacy management are crucial for maintaining system efficiency and model performance. However, existing FL frameworks face challenges such as imbalanced client contributions, inefficient resource allocation, and static privacy mechanisms, making scalable client management and adaptive privacy control essential. To address these issues, this paper proposes DGDPFL, a novel FL framework that enhances client selection, resource management, and privacy control through dynamic client grouping and adaptive privacy budgeting. The framework optimizes client management by clustering participants based on device capabilities, bandwidth, and data quality, enabling efficient resource allocation. A contribution-aware selection mechanism ensures fair participation, while a privacy-aware control strategy dynamically adjusts privacy budgets based on model similarity, improving both privacy guarantees and learning performance. We evaluate DGDPFL in real-world and simulated environments. On CIFAR-10 and Fashion-MNIST, DGDPFL achieves 77.83% and 88.35% test accuracy respectively with only 10–20 clients and 40 training rounds, outperforming state-of-the-art baselines by up to 12.36%. On audio datasets FSDD and SAD, the accuracy reaches up to 97%, validating the method’s robustness across modalities. Experimental results demonstrate that DGDPFL outperforms existing approaches by achieving higher model accuracy, improved system efficiency, and better privacy-utility balance. These findings highlight DGDPFL’s effectiveness in managing clients and privacy in FL environments.
{"title":"DGDPFL: Dynamic Grouping and Privacy Budget Adjustment for Federated Learning in Networked Service Management","authors":"Dongyi Han;Qiang Zhi","doi":"10.1109/TNSM.2025.3640713","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640713","url":null,"abstract":"In federated learning (FL), effective client and privacy management are crucial for maintaining system efficiency and model performance. However, existing FL frameworks face challenges such as imbalanced client contributions, inefficient resource allocation, and static privacy mechanisms, making scalable client management and adaptive privacy control essential. To address these issues, this paper proposes DGDPFL, a novel FL framework that enhances client selection, resource management, and privacy control through dynamic client grouping and adaptive privacy budgeting. The framework optimizes client management by clustering participants based on device capabilities, bandwidth, and data quality, enabling efficient resource allocation. A contribution-aware selection mechanism ensures fair participation, while a privacy-aware control strategy dynamically adjusts privacy budgets based on model similarity, improving both privacy guarantees and learning performance. We evaluate DGDPFL in real-world and simulated environments. On CIFAR-10 and Fashion-MNIST, DGDPFL achieves 77.83% and 88.35% test accuracy respectively with only 10–20 clients and 40 training rounds, outperforming state-of-the-art baselines by up to 12.36%. On audio datasets FSDD and SAD, the accuracy reaches up to 97%, validating the method’s robustness across modalities. Experimental results demonstrate that DGDPFL outperforms existing approaches by achieving higher model accuracy, improved system efficiency, and better privacy-utility balance. These findings highlight DGDPFL’s effectiveness in managing clients and privacy in FL environments.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1826-1841"},"PeriodicalIF":5.4,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-04 DOI: 10.1109/TNSM.2025.3640070
Long Chen;Yukang Jiang;Zishang Qiu;Donglin Zhu;Zhiquan Liu;Zhenzhou Tang
Efficient deployment of thousands of energy-constrained sensor nodes (SNs) in large-scale wireless sensor networks (WSNs) is critical for reliable data transmission and target sensing. This study addresses the Minimum Energy Q-Coverage and C-Connectivity (MinEQC) problem for heterogeneous SNs in three-dimensional environments. MnPF (Metaheuristic–Neural Network Parallel Framework), a two-phase method that can embed most metaheuristic algorithms (MAs) and neural networks (NNs), is proposed to address this problem. Phase-I partitions the monitoring region via divide-and-conquer and applies NN-based dimensionality reduction to accelerate parallel optimization of local Q-coverage and C-connectivity. Phase-II employs an MA-based adaptive restoration strategy to restore connectivity among subregions and systematically assesses how different partitioning strategies affect the number of restoration steps. Experiments with four NNs and twelve MAs demonstrate the efficiency, scalability, and adaptability of MnPF, while ablation studies confirm the necessity of both phases. MnPF bridges scalability and energy efficiency, providing a generalizable approach to SN deployment in large-scale WSNs.
{"title":"Toward Energy-Saving Deployment in Large-Scale Heterogeneous Wireless Sensor Networks for Q-Coverage and C-Connectivity: An Efficient Parallel Framework","authors":"Long Chen;Yukang Jiang;Zishang Qiu;Donglin Zhu;Zhiquan Liu;Zhenzhou Tang","doi":"10.1109/TNSM.2025.3640070","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640070","url":null,"abstract":"Efficient deployment of thousands of energy-constrained sensor nodes (SNs) in large-scale wireless sensor networks (WSNs) is critical for reliable data transmission and target sensing. This study addresses the Minimum Energy Q-Coverage and C-Connectivity (MinEQC) problem for heterogeneous SNs in three-dimensional environments. MnPF (Metaheuristic–Neural Network Parallel Framework), a two-phase method that can embed most metaheuristic algorithms (MAs) and neural networks (NNs), is proposed to address the above problem. Phase-I partitions the monitoring region via divide-and-conquer and applies NN-based dimensionality reduction to accelerate parallel optimization of local Q-coverage and C-connectivity. Phase-II employs an MA-based adaptive restoration strategy to restore connectivity among subregions and systematically assess how different partitioning strategies affect the number of restoration steps. Experiments with four NNs and twelve MAs demonstrate efficiency, scalability, and adaptability of MnPF, while ablation studies confirm the necessity of both phases. MnPF bridges scalability and energy efficiency, providing a generalizable approach to SN deployment in large-scale WSNs.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1242-1259"},"PeriodicalIF":5.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-03 DOI: 10.1109/TNSM.2025.3639958
Jiahe Xu;Chao Guo;Moshe Zukerman
Virtual Network Embedding (VNE) is an important problem in network virtualization, involving the optimal allocation of resources from substrate networks to service requests in the form of Virtual Networks (VNs). This paper addresses a specific VNE problem in the context of Composable/Disaggregated Data Center (DDC) networks, characterized by the decoupling and reassembly of different resources into resource pools. Existing research on the VNE problem within Data Center (DC) networks primarily focuses on the Server-based DC (SDC) architecture. In the VNE problem within SDCs, a virtual node is typically mapped to a single server to fulfill its requirements for various resources. However, in the case of DDCs, a virtual node needs to be mapped to different resource nodes for different resources. We aim to design an optimization method to achieve the most efficient VNE within DDCs. To this end, we provide an embedding scheme that acts on each arriving VN request to embed the VN with minimized power consumption. Through this scheme, we demonstrate that we also achieve a high long-term acceptance ratio. We provide Mixed Integer Linear Programming (MILP) and scalable greedy algorithms to implement this scheme. We validate the efficiency of our greedy algorithms by comparing their performance against the MILP for small problems and demonstrate their superiority over baseline algorithms through comprehensive evaluations using both synthetic simulations and real-world Google cluster traces.
{"title":"Virtual Network Embedding for Data Centers With Composable or Disaggregated Architectures","authors":"Jiahe Xu;Chao Guo;Moshe Zukerman","doi":"10.1109/TNSM.2025.3639958","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3639958","url":null,"abstract":"Virtual Network Embedding (VNE) is an important problem in network virtualization, involving the optimal allocation of resources from substrate networks to service requests in the form of Virtual Networks (VNs). This paper addresses a specific VNE problem in the context of Composable/Disaggregated Data Center (DDC) networks, characterized by the decoupling and reassembly of different resources into resource pools. Existing research on the VNE problem within Data Center (DC) networks primarily focuses on the Server-based DC (SDC) architecture. In the VNE problem within SDCs, a virtual node is typically mapped to a single server to fulfill its requirements for various resources. However, in the case of DDCs, a virtual node needs to be mapped to different resource nodes for different resources. We aim to design an optimization method to achieve the most efficient VNE within DDCs. To this end, we provide an embedding scheme that acts on each arriving VN request to embed the VN with minimized power consumption. Through this scheme, we demonstrate that we also achieve a high long-term acceptance ratio. We provide Mixed Integer Linear Programming (MILP) and scalable greedy algorithms to implement this scheme. We validate the efficiency of our greedy algorithms by comparing their performance against the MILP for small problems and demonstrate their superiority over baseline algorithms through comprehensive evaluations using both synthetic simulations and real-world Google cluster traces.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1276-1296"},"PeriodicalIF":5.4,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-03 DOI: 10.1109/TNSM.2025.3640098
Sajjad Alizadeh;Majid Khabbazian
Payment channel networks have great potential to scale cryptocurrency payment systems. However, their scalability power is limited as payments occasionally fail in these networks due to various factors. In this work, we study these factors and analyze the limitations they impose. To this end, we propose a model where a payment channel network is viewed as a compression method. In this model, the compression rate is defined as the ratio of the total number of payments entering the network to the total number of transactions that are placed on the blockchain to handle failed payments or (re)open channels. We analyze the compression rate and its upper limit, referred to as compression capacity, for various payment models, channel-reopening strategies, and network topologies. For networks with a tree topology, we show that the compression rate is inversely proportional to the average path length traversed by payments. For general networks, we show that if payment rates are even slightly asymmetric and channels are not reopened regularly, a constant fraction of payments will always fail regardless of the number of channels, the topology of the network, the routing algorithm used and the amount of allocated funds in the network. We also examine the impact of routing and channel rebalancing on the network’s compression rate. We show that rebalancing and strategic routing can enhance the compression rate in payment channel networks where channels may be reopened, differing from the established literature on credit networks, which suggests these factors do not have an effect.
{"title":"On Scalability Power of Payment Channel Networks","authors":"Sajjad Alizadeh;Majid Khabbazian","doi":"10.1109/TNSM.2025.3640098","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640098","url":null,"abstract":"Payment channel networks have great potential to scale cryptocurrency payment systems. However, their scalability power is limited as payments occasionally fail in these networks due to various factors. In this work, we study these factors and analyze their imposing limitations. To this end, we propose a model where a payment channel network is viewed as a compression method. In this model, the compression rate is defined as the ratio of the total number of payments entering the network to the total number of transactions that are placed on the blockchain to handle failed payments or (re)open channels. We analyze the compression rate and its upper limit, referred to as compression capacity, for various payment models, channel-reopening strategies, and network topologies. For networks with a tree topology, we show that the compression rate is inversely proportional to the average path length traversed by payments. For general networks, we show that if payment rates are even slightly asymmetric and channels are not reopened regularly, a constant fraction of payments will always fail regardless of the number of channels, the topology of the network, the routing algorithm used and the amount of allocated funds in the network. We also examine the impact of routing and channel rebalancing on the network’s compression rate. We show that rebalancing and strategic routing can enhance the compression rate in payment channel networks where channels may be reopened, differing from the established literature on credit networks, which suggests these factors do not have an effect.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1342-1355"},"PeriodicalIF":5.4,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-25 DOI: 10.1109/TNSM.2025.3636785
Remi Hendriks;Mattijs Jonker;Roland van Rijswijk-Deij;Raffaele Sommese
Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which can be used by operators to detect networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe that LB-induced site flipping directs distinct flows to different anycast sites with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms, with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients.
{"title":"Load-Balancing Versus Anycast: A First Look at Operational Challenges","authors":"Remi Hendriks;Mattijs Jonker;Roland van Rijswijk-Deij;Raffaele Sommese","doi":"10.1109/TNSM.2025.3636785","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3636785","url":null,"abstract":"Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which can be used by operators to detect networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB-behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe LB-induced site flipping directs distinct flows to different anycast sites with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"814-823"},"PeriodicalIF":5.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-25 DOI: 10.1109/TNSM.2025.3636562
Yaqing Zhu;Liquan Chen;Suhui Liu;Bo Yang;Shang Gao
Uncrewed Aerial Vehicle (UAV) swarms are a cornerstone technology in the rapidly growing low-altitude economy, with significant applications in logistics, smart cities, and emergency response. However, their deployment is constrained by challenges in secure communication, dynamic group coordination, and resource constraints. Although there are various cryptographic techniques, efficient and scalable group key management plays a critical role in secure task allocation in UAV swarms. Existing group key agreement schemes, both symmetric and asymmetric, often fail to adequately address these challenges due to their reliance on centralized control, high computational overhead, sender restrictions, and insufficient protection against physical attacks. To address these issues, we propose PCDCB (Pairing-free Certificateless Dynamic Contributory Broadcast encryption), a blockchain-assisted lightweight key management scheme designed for UAV swarm task allocation. PCDCB is particularly suitable for swarm operations as it supports efficient one-to-many broadcast of task commands, enables dynamic node join/leave, and eliminates key escrow by combining certificateless cryptography with Physical Unclonable Functions (PUFs) for hardware-bound key regeneration. Blockchain is used to maintain tamper-resistant update tables and ensure auditability, while a privacy-preserving mechanism with pseudonyms and a round mapping table provides task anonymity and unlinkability. Comprehensive security analysis confirms that PCDCB is secure and resistant to multiple attacks. Performance evaluation shows that, in large-scale swarm scenarios (n = 100), PCDCB reduces the cost of group key computation by 54.4% (up to 96.9%) and reduces the time to generate the decryption keys by at least 29.7%. In addition, PCDCB achieves the lowest communication cost among all compared schemes and demonstrates strong scalability with increasing group size.
{"title":"Blockchain-Based Lightweight Key Management Scheme for Secure UAV Swarm Task Allocation","authors":"Yaqing Zhu;Liquan Chen;Suhui Liu;Bo Yang;Shang Gao","doi":"10.1109/TNSM.2025.3636562","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3636562","url":null,"abstract":"Uncrewed Aerial Vehicle (UAV) swarms are a cornerstone technology in the rapidly growing low-altitude economy, with significant applications in logistics, smart cities, and emergency response. However, their deployment is constrained by challenges in secure communication, dynamic group coordination, and resource constraints. Although there are various cryptographic techniques, efficient and scalable group key management plays a critical role in secure task allocation in UAV swarms. Existing group key agreement schemes, both symmetric and asymmetric, often fail to adequately address these challenges due to their reliance on centralized control, high computational overhead, sender restrictions, and insufficient protection against physical attacks. To address these issues, we propose PCDCB (Pairing-free Certificateless Dynamic Contributory Broadcast encryption), a blockchain-assisted lightweight key management scheme designed for UAV swarm task allocation. PCDCB is particularly suitable for swarm operations as it supports efficient one-to-many broadcast of task commands, enables dynamic node join/leave, and eliminates key escrow by combining certificateless cryptography with Physical Unclonable Functions (PUFs) for hardware-bound key regeneration. Blockchain is used to maintain tamper-resistant update tables and ensure auditability, while a privacy-preserving mechanism with pseudonyms and a round mapping table provides task anonymity and unlinkability. Comprehensive security analysis confirms that PCDCB is secure and resistant to multiple attacks. Performance evaluation shows that, in large-scale swarm scenarios (n = 100), PCDCB reduces the cost of group key computation by 54.4% (up to 96.9%) and reduces the time to generate the decryption keys by at least 29.7%. In addition, PCDCB achieves the lowest communication cost among all compared schemes and demonstrates strong scalability with increasing group size.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1201-1216"},"PeriodicalIF":5.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-25 DOI: 10.1109/TNSM.2025.3636557
Josef Koumar;Timotej Smoleň;Kamil Jeřábek;Tomáš Čejka
Accurate network traffic forecasting is crucial for Internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research.
{"title":"Comparative Analysis of Deep Learning Models for Real-World ISP Network Traffic Forecasting","authors":"Josef Koumar;Timotej Smoleň;Kamil Jeřábek;Tomáš Čejka","doi":"10.1109/TNSM.2025.3636557","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3636557","url":null,"abstract":"Accurate network traffic forecasting is crucial for Internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"715-728"},"PeriodicalIF":5.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-21 DOI: 10.1109/TNSM.2025.3635876
Awaneesh Kumar Yadav;An Braeken
The 5G Authentication and Key Management for Applications (AKMA) protocol is a 5G standard proposed by 3GPP in order to standardize the authentication procedure of mobile users towards applications based on the authentication of the user to the mobile network. As pointed out by several authors, the 5G-AKMA protocol inherently poses severe security issues, including privacy and unlinkability violations, ephemeral secret leakage, and stolen device attacks. Also, the protocol does not offer perfect forward secrecy. In addition, the network operator is able to record all applications to which the user is subscribed, and any outsider eavesdropping on the communication channel is able to link requests to different applications coming from the same user. While the state of the art shows that various protocols have been proposed to solve the 5G-AKMA security issues, they are either vulnerable to severe attacks or computationally expensive. In this paper, we provide a new version of the protocol able to solve these privacy issues in an effective manner. In addition, we also extend the protocol such that it can be used for communications in multi-access edge computing (MEC) applications, taking into account handover procedures from one MEC server to another. The proposed protocol has been thoroughly compared to existing ones, revealing its efficiency in terms of communication, computation, storage, and energy costs. The comparative analysis shows that the proposed 5G-AKMA reduces computational cost by 92%, communication cost by 74%, storage cost by 38%, and energy consumption cost by 58%. The security verification has been conducted using informal and formal methods (Real-Or-Random (ROR) and Scyther validation tools) to ensure the protocol’s security. Additionally, we conduct a comparative analysis under an unknown attack scenario. Furthermore, the simulation is carried out using NS3.
{"title":"Efficient Privacy-Preserving 5G Authentication and Key Agreement for Applications (5G-AKMA) in Multi-Access Edge Computing","authors":"Awaneesh Kumar Yadav;An Braeken","doi":"10.1109/TNSM.2025.3635876","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3635876","url":null,"abstract":"The 5G Authentication and Key Management for Applications (AKMA) protocol is a 5G standard proposed by 3GPP in order to standardize the authentication procedure of mobile users towards applications based on the authentication of the user to the mobile network. As pointed out by several authors, the 5G-AKMA protocol inherently poses severe security issues, including privacy, unlinkability, ephemeral secret leakage and stolen device attacks. Also, the protocol does not offer perfect forward secrecy. In addition, the network operator is able to record all applications to which the user is subscribed and any outsider eavesdropping the communication channel is able to link requests to different applications coming from the same user. While the state of the shows that various protocols are proposed to solve the 5G-AKMA security issues, they are either vulnerable to severe attacks or are computationally extensive. In this paper, we provide a new version of the protocol able to solve these privacy issues in an effective manner. In addition, we also extend the protocol such that it can be used for communications in multi-access edge computing (MEC) applications, taking into account handover procedures from one MEC server to another. The proposed protocol has been thoroughly compared to existing ones, revealing its efficiency in terms of communication, computation, storage, and energy costs. The comparative analysis shows that the proposed 5G-AKMA reduces computational cost by 92%, communication cost by 74%, storage cost by 38%, and energy consumption cost by 58%. The security verification has been conducted using informal and formal methods (Real-Or-Random (ROR) and Scyther Validation tools) to ensure the protocol’s security. Additionally, we conduct a comparative analysis under an unknown attack scenario. Furthermore, the simulation is carried out using NS3.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1874-1890"},"PeriodicalIF":5.4,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-21 DOI: 10.1109/TNSM.2025.3635692
Jiachen Liang;Yang Du;He Huang;Yu-E Sun;Guoju Gao;Yonglong Luo
Identifying the hierarchical heavy hitters (HHHs), i.e., the frequent aggregated flows based on common IP prefixes, is a vital task in network traffic measurement and security. Existing methods typically employ dynamic trie structures to track numerous prefixes or utilize multiple separate sketch instances, one for each hierarchical level, to capture HHHs across different levels, while both approaches suffer from low memory efficiency and limited compatibility with programmable switches. In this paper, we introduce two novel HHH detection solutions, respectively, Hierarchical Heavy Detector (HHD) and the Compressed Hierarchical Heavy Detector (CHHD), to achieve high memory efficiency and enhanced hardware compatibility. The key idea of HHD is to design a shared bucket array structure to identify and record HHHs from all hierarchical levels, which avoids the memory wastage of maintaining separate sketches to achieve high memory efficiency and allows feasible deployment of both byte-hierarchy and bit-hierarchy HHH detection on programmable switches using minimal processing stage resources. Additionally, HHD utilizes a sampling-based update strategy to effectively balance packet processing speed and detection accuracy. Furthermore, we present the CHHD, which enhances HHH detection in bit hierarchies through a more compact cell structure, which allows for compressing several ancestor and descendant prefixes within a single cell, further boosting memory efficiency and accuracy. We have implemented HHD and CHHD on a P4-based programmable switch with limited switch resources. Experimental results based on real-world Internet traces demonstrate that HHD and CHHD outperform the state-of-the-art by achieving up to 56 percentage points higher detection precision and $2.6times $ higher throughput.
{"title":"Memory-Efficient and Hardware-Friendly Sketches for Hierarchical Heavy Hitter Detection","authors":"Jiachen Liang;Yang Du;He Huang;Yu-E Sun;Guoju Gao;Yonglong Luo","doi":"10.1109/TNSM.2025.3635692","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3635692","url":null,"abstract":"Identifying the hierarchical heavy hitters (HHHs), i.e., the frequent aggregated flows based on common IP prefixes, is a vital task in network traffic measurement and security. Existing methods typically employ dynamic trie structures to track numerous prefixes or utilize multiple separate sketch instances, one for each hierarchical level, to capture HHHs across different levels, while both approaches suffer from low memory efficiency and limited compatibility with programmable switches. In this paper, we introduce two novel HHH detection solutions, respectively, Hierarchical Heavy Detector (HHD) and the Compressed Hierarchical Heavy Detector (CHHD), to achieve high memory efficiency and enhanced hardware compatibility. The key idea of HHD is to design a shared bucket array structure to identify and record HHHs from all hierarchical levels, which avoids the memory wastage of maintaining separate sketches to achieve high memory efficiency and allows feasible deployment of both byte-hierarchy and bit-hierarchy HHH detection on programmable switches using minimal processing stage resources. Additionally, HHD utilizes a sampling-based update strategy to effectively balance packet processing speed and detection accuracy. Furthermore, we present the CHHD, which enhances HHH detection in bit hierarchies through a more compact cell structure, which allows for compressing several ancestor and descendant prefixes within a single cell, further boosting memory efficiency and accuracy. We have implemented HHD and CHHD on a P4-based programmable switch with limited switch resources. Experimental results based on real-world Internet traces demonstrate that HHD and CHHD outperform the state-of-the-art by achieving up to 56 percentage points higher detection precision and <inline-formula> <tex-math>$2.6times $ </tex-math></inline-formula> higher throughput.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"582-593"},"PeriodicalIF":5.4,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-21 DOI: 10.1109/TNSM.2025.3636202
Samayveer Singh;Vikas Tyagi;Aruna Malik;Rajeev Kumar;Ankur;Neeraj Kumar
Energy efficiency and minimization of redundant transmissions are critical challenges in Wireless Sensor Networks (WSNs), especially in heterogeneous IoT environments where sensor nodes (SNs) are resource-constrained and deployed in remote or inaccessible areas. This paper aims to address the dual problem of uneven energy distribution and limited network lifespan by proposing a novel Artificial Protozoa Optimizer-based Cluster Head Selection (APO-CHS) algorithm. The proposed APO-CHS is inspired by the adaptive behavior of Euglena, integrating foraging, dormancy, and reproduction mechanisms to optimize cluster head and relay node selection through a multi-objective fitness function. The function incorporates residual energy, node density, neighbor distance, and energy consumption rate to guide the selection process effectively. Additionally, to tackle communication inefficiency, a lightweight data aggregation scheme is employed. This scheme reduces redundant transmissions by introducing a multi-level aggregation model that eliminates full, partial, and duplicate data in both intra- and inter-cluster communication. The simulation results demonstrate that the proposed framework improves network stability by 29.24%, extends network lifetime by 283.96%, and increases throughput by over 60% compared to baseline methods, thus making it a highly efficient and scalable solution for energy-aware IoT-enabled WSN applications.
{"title":"Intelligent Energy-Aware Routing via Protozoa Behavior in IoT-Enabled WSNs","authors":"Samayveer Singh;Vikas Tyagi;Aruna Malik;Rajeev Kumar;Ankur;Neeraj Kumar","doi":"10.1109/TNSM.2025.3636202","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3636202","url":null,"abstract":"Energy efficiency and minimization of redundant transmissions are critical challenges in Wireless Sensor Networks (WSNs), especially in heterogeneous IoT environments where sensor nodes (SNs) are resource-constrained and deployed in remote or inaccessible areas. This paper aims to address the dual problem of uneven energy distribution and limited network lifespan by proposing a novel Artificial Protozoa Optimizer-based Cluster Head Selection (APO-CHS) algorithm. The proposed APO-CHS is inspired by the adaptive behavior of Euglena, integrating foraging, dormancy, and reproduction mechanisms to optimize cluster head and relay node selection through a multi-objective fitness function. The function incorporates residual energy, node density, neighbor distance, and energy consumption rate to guide the selection process effectively. Additionally, to tackle communication inefficiency, a lightweight data aggregation scheme is employed. This scheme reduces redundant transmissions by introducing a multi-level aggregation model that eliminates full, partial, and duplicate data in both intra- and inter-cluster communication. The simulation results demonstrate that the proposed framework improves network stability by 29.24%, extends network lifetime by 283.96%, and increases throughput by over 60% compared to baseline methods, thus making it a highly efficient and scalable solution for energy-aware IoT-enabled WSN applications.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1960-1969"},"PeriodicalIF":5.4,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}