Pub Date: 2025-12-17 | DOI: 10.1109/TNSM.2025.3645079
Yali Yuan;Ruolin Ma;Jian Ge;Guang Cheng
This paper introduces IFW, a blind flow watermarking framework based on an Invertible Neural Network (INN), which addresses the suboptimal encoder-decoder coupling found in existing end-to-end watermarking architectures. The framework tightly couples the encoder and decoder so that both share the same parameters and produce a highly consistent feature mapping, thereby avoiding redundant feature embedding. Because the INN supports forward encoding and backward decoding, watermark extraction depends entirely on the embedding algorithm and does not require the original network flow, enabling watermark embedding and blind extraction within a single model. Extensive experiments demonstrate that IFW achieves a watermark extraction accuracy exceeding 96.6% and maintains a stable K-S test p-value above 0.85 in both simulated and real-world Tor traffic environments. These results indicate a clear advantage over mainstream baselines, highlighting the method’s ability to jointly ensure robustness and invisibility, as well as its strong potential for real-world deployment.
{"title":"Robust and Invisible Flow Watermarking With Invertible Neural Network for Traffic Tracking","authors":"Yali Yuan;Ruolin Ma;Jian Ge;Guang Cheng","doi":"10.1109/TNSM.2025.3645079","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3645079","url":null,"abstract":"This paper introduces an innovative blind flow watermarking framework on the basis of Invertible Neural Network (INN) called IFW, which aims to solve the problem of suboptimal encoder-decoder coupling in existing end-to-end watermarking architectures. The framework tightly couples the encoder and decoder to achieve highly consistent feature mapping using the same parameters, thus effectively avoiding redundant feature embedding. In addition, this paper adopts the INN to implement watermarking, which supports forward encoding and backward decoding, and the watermark extraction is completely dependent on the embedding algorithm without the need for the original network flow. This feature enables both the embedding and the blind extraction of watermarks simultaneously. Extensive experiments demonstrate that the proposed IFW method achieves a watermark extraction accuracy exceeding 96.6% and maintains a stable K-S test p-value above 0.85 in both simulated and real-world Tor traffic environments. These results indicate a clear advantage over mainstream baselines, highlighting the method’s ability to jointly ensure robustness and invisibility, as well as its strong potential for real-world deployment.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1381-1394"},"PeriodicalIF":5.4,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-17 | DOI: 10.1109/TNSM.2025.3645449
Ioannis Dimolitsas;Maria Diamanti;Stefanos Voikos;Symeon Papavassiliou
The evolution toward sixth-generation (6G) networks necessitates integrated resource management solutions to address the interdependencies between network segments, such as Radio Access Network (RAN) and Edge Cloud (EC) infrastructures. Unified management of network and compute fabrics is crucial for achieving seamless service delivery, end-to-end power efficiency, and delay guarantees, while resiliency becomes a key enabler for adapting to various application demands and diverse network segment conditions. In this context, this paper proposes a unified framework for dependable wireless EC networks that jointly addresses the problems of RAN selection and Service Function Chain (SFC) embedding to minimize the total power consumption across network segments under end-to-end delay constraints on SFC deployment. The framework iteratively solves these problems, considering the interdependencies between RAN ingress points and the EC network resource constraints. To deal with the high dimensionality of the considered parameters and achieve timely and scalable decision-making, a coalition formation game optimizes RAN selection, while a delay-aware heuristic approach undertakes the power-efficient embedding of multiple SFCs within the EC network. Simulation results demonstrate the framework’s efficiency in reducing power consumption compared to segment-specific approaches, highlighting the importance of cross-segment dependencies. The adaptability of the proposed unified modeling and the framework’s scalability are also demonstrated, ensuring resilient performance under varying network parameter settings.
{"title":"Resilient RAN Selection and SFC Deployment in Dependable Wireless Edge Cloud Networks","authors":"Ioannis Dimolitsas;Maria Diamanti;Stefanos Voikos;Symeon Papavassiliou","doi":"10.1109/TNSM.2025.3645449","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3645449","url":null,"abstract":"The evolution toward sixth-generation (6G) networks necessitates integrated resource management solutions to address the interdependencies between network segments, such as Radio Access Network (RAN) and Edge Cloud (EC) infrastructures. Unified management of network and compute fabrics is crucial for achieving seamless service delivery, end-to-end power efficiency, and delay guarantees, while resiliency becomes a key enabler for adapting to various application demands and diverse network segment conditions. In this context, this paper proposes a unified framework for dependable wireless EC networks that jointly addresses the problems of RAN selection and Service Function Chain (SFC) embedding to minimize the total power consumption across network segments under end-to-end delay SFC deployment constraints. The framework iteratively solves these problems, considering the interdependencies between RAN ingress points and the EC network resource constraints. To deal with the high dimensionality of the considered parameters and achieve timely and scalable decision-making, a coalition formation game optimizes RAN selection, while a delay-aware heuristic approach undertakes the power-efficient embedding of multiple SFCs within the EC network. Simulation results demonstrate the framework’s efficiency in reducing power consumption compared to segment-specific approaches, highlighting the importance of cross-segment dependencies. Also, the adaptability of the proposed unified modeling and the framework’s scalability are demonstrated, ensuring resilient performance under varying network parameter settings.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1312-1328"},"PeriodicalIF":5.4,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-10 | DOI: 10.1109/TNSM.2025.3642315
Chang Chen;Guoyu Yang;Dawei Zhang;Wei Wang;Qi Chen;Jin Li
The widespread deployment of Internet of Things (IoT) devices has driven their segmentation into distinct trust domains for the purpose of governance, creating a critical need for secure cross-domain authentication (CDA). CDA must preserve both anonymity and traceability of device identities to enable trustworthy data exchange. However, existing approaches, while exploring this trade-off, remain vulnerable to single points of failure and Sybil attacks—threats that are especially severe for unattended and resource-constrained devices. In this paper, we propose a Self-Sovereign and Supervised Cross-domain authentication scheme (S3Cross) to tackle these issues. The main building block we designed is a pseudonym management scheme (PMS) that allows devices to generate and use pseudonyms without relying on a trusted party. Although devices have full control of their identities, PMS still ensures traceability, Sybil resistance, and revocability. We define the formal security models of PMS, instantiate it under two different approaches, namely group signature (S3Cross-GS) and zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs, S3Cross-ZK), and present security proofs for our proposal. We implemented and evaluated S3Cross. The results show that our scheme achieves an effective trade-off between security and efficiency.
{"title":"S3Cross: Blockchain-Based Cross-Domain Authentication With Self-Sovereign and Supervised Identity Management","authors":"Chang Chen;Guoyu Yang;Dawei Zhang;Wei Wang;Qi Chen;Jin Li","doi":"10.1109/TNSM.2025.3642315","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3642315","url":null,"abstract":"The widespread deployment of Internet of Things (IoT) devices has driven their segmentation into distinct trust domains for the purpose of governance, creating a critical need for secure cross-domain authentication (CDA). CDA must preserve both anonymity and traceability of device identities to enable trustworthy data exchange. However, existing approaches, while exploring this trade-off, remain vulnerable to single points of failure and Sybil attacks—threats that are especially severe for unattended and resource-constrained devices. In this paper, we propose a Self-Sovereign and Supervised Cross-domain authentication scheme (S3Cross) to tackle these issues. The main building block we designed is a pseudonym management scheme (PMS) that allows devices to generate and use pseudonyms without relying on a trusted party. Although devices has full control of their identities, PMS still ensures traceability, Sybil resistance, and revocability. We define the formal security models of PMS, instantiate it under two different approaches, namely group signature (S3Cross-GS) and zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs, S3Cross-ZK), and present security proofs for our proposal. We implemented and evaluated S3Cross. The result shows that our scheme achieves an effective trade-off between security and efficiency.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1217-1231"},"PeriodicalIF":5.4,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-10 | DOI: 10.1109/TNSM.2025.3642395
Xiaohuan Li;Shaowen Qin;Xin Tang;Jiawen Kang;Jin Ye;Zhonghua Zhao;Yusi Zheng;Dusit Niyato
The Industrial Internet of Things (IIoT) leverages Federated Learning (FL) for distributed model training while preserving data privacy, and meta-computing enhances FL by optimizing and integrating distributed computing resources, improving efficiency and scalability. Efficient IIoT operations require a trade-off between model quality and training latency. Consequently, a primary challenge of FL in IIoT is to optimize overall system performance by balancing model quality and training latency. This paper designs a satisfaction function that accounts for data size, Age of Information (AoI), and training latency for meta-computing. Additionally, the satisfaction function is incorporated into the utility function to incentivize IIoT nodes to participate in model training. We model the utility functions of servers and nodes as a two-stage Stackelberg game and employ a deep reinforcement learning approach to learn the Stackelberg equilibrium. This approach ensures balanced rewards and enhances the applicability of the incentive scheme for IIoT. Simulation results demonstrate that, under the same budget constraints, the proposed incentive scheme improves utility by at least 23.7% compared to existing FL schemes without compromising model accuracy.
{"title":"Meta-Computing Enhanced Federated Learning in IIoT: Satisfaction-Aware Incentive Scheme via DRL-Based Stackelberg Game","authors":"Xiaohuan Li;Shaowen Qin;Xin Tang;Jiawen Kang;Jin Ye;Zhonghua Zhao;Yusi Zheng;Dusit Niyato","doi":"10.1109/TNSM.2025.3642395","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3642395","url":null,"abstract":"The Industrial Internet of Things (IIoT) leverages Federated Learning (FL) for distributed model training while preserving data privacy, and meta-computing enhances FL by optimizing and integrating distributed computing resources, improving efficiency and scalability. Efficient IIoT operations require a trade-off between model quality and training latency. Consequently, a primary challenge of FL in IIoT is to optimize overall system performance by balancing model quality and training latency. This paper designs a satisfaction function that accounts for data size, Age of Information (AoI), and training latency for meta-computing. Additionally, the satisfaction function is incorporated into the utility function to incentivize IIoT nodes to participate in model training. We model the utility functions of servers and nodes as a two-stage Stackelberg game and employ a deep reinforcement learning approach to learn the Stackelberg equilibrium. This approach ensures balanced rewards and enhances the applicability of the incentive scheme for IIoT. Simulation results demonstrate that, under the same budget constraints, the proposed incentive scheme improves utility by at least 23.7% compared to existing FL schemes without compromising model accuracy.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1356-1368"},"PeriodicalIF":5.4,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1109/TNSM.2025.3640095
Narendra K. Dewangan;Preeti Chandrakar
Blockchain is increasingly used in industrial, financial, and IoT settings for secure and auditable transaction processing; however, existing leader election and consensus methods, such as PBFT, Raft, and reputation-based schemes, suffer from static leadership, unfair vote distribution, and limited scalability. To address these gaps, we propose VLSA (Vote-based Leader Selection Algorithm), a decentralized rotation-based mechanism that ensures fairness in leader election, and MPoAh (Modified Proof-of-Authentication), a lightweight consensus protocol tailored for multi-party signatures. Our implementation, built with Python, CouchDB, and Ed25519 cryptography, achieves a 35% reduction in signature and verification latency and a 30% decrease in on-chain storage compared to state-of-the-art approaches. Simulations further show a 95% packet delivery ratio, an average authentication latency of 12 ms, and a ledger throughput of 250 tx/s. These results demonstrate that the proposed system enables democratic participation in consensus, supports deployment on resource-constrained devices, and strengthens resistance against insider and Sybil attacks, thereby advancing secure and scalable blockchain-based authentication.
{"title":"VLSA: Voting-Based Leader Selection Algorithm for Multi-Party Signature Blockchain Transactions","authors":"Narendra K. Dewangan;Preeti Chandrakar","doi":"10.1109/TNSM.2025.3640095","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640095","url":null,"abstract":"Blockchain is increasingly used in industrial, financial, and IoT settings for secure and auditable transaction processing; however, existing leader election and consensus methods, such as PBFT, Raft, and reputation-based schemes, suffer from static leadership, unfair vote distribution, and limited scalability. To address these gaps, we propose VLSA (Vote-based Leader Selection Algorithm), a decentralized rotation-based mechanism that ensures fairness in leader election, and MPoAh (Modified Proof-of-Authentication), a lightweight consensus protocol tailored for multi-party signatures. Our implementation, built with Python, CouchDB, and Ed25519 cryptography, achieves a 35% reduction in signature and verification latency and a 30% decrease in on-chain storage compared to state-of-the-art approaches. Simulation further shows 95% packet delivery, average authentication latency of 12 ms, and ledger throughput of 250 tx/s. These results demonstrate that the proposed system enables democratic participation in consensus, supports deployment on resource-constrained devices, and strengthens resistance against insider and Sybil attacks, thereby advancing secure and scalable blockchain-based authentication.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1395-1405"},"PeriodicalIF":5.4,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-05 | DOI: 10.1109/TNSM.2025.3640713
Dongyi Han;Qiang Zhi
In federated learning (FL), effective client and privacy management are crucial for maintaining system efficiency and model performance. However, existing FL frameworks face challenges such as imbalanced client contributions, inefficient resource allocation, and static privacy mechanisms, making scalable client management and adaptive privacy control essential. To address these issues, this paper proposes DGDPFL, a novel FL framework that enhances client selection, resource management, and privacy control through dynamic client grouping and adaptive privacy budgeting. The framework optimizes client management by clustering participants based on device capabilities, bandwidth, and data quality, enabling efficient resource allocation. A contribution-aware selection mechanism ensures fair participation, while a privacy-aware control strategy dynamically adjusts privacy budgets based on model similarity, improving both privacy guarantees and learning performance. We evaluate DGDPFL in real-world and simulated environments. On CIFAR-10 and Fashion-MNIST, DGDPFL achieves 77.83% and 88.35% test accuracy, respectively, with only 10–20 clients and 40 training rounds, outperforming state-of-the-art baselines by up to 12.36%. On the audio datasets FSDD and SAD, the accuracy reaches up to 97%, validating the method’s robustness across modalities. Experimental results demonstrate that DGDPFL outperforms existing approaches by achieving higher model accuracy, improved system efficiency, and better privacy-utility balance. These findings highlight DGDPFL’s effectiveness in managing clients and privacy in FL environments.
{"title":"DGDPFL: Dynamic Grouping and Privacy Budget Adjustment for Federated Learning in Networked Service Management","authors":"Dongyi Han;Qiang Zhi","doi":"10.1109/TNSM.2025.3640713","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640713","url":null,"abstract":"In federated learning (FL), effective client and privacy management are crucial for maintaining system efficiency and model performance. However, existing FL frameworks face challenges such as imbalanced client contributions, inefficient resource allocation, and static privacy mechanisms, making scalable client management and adaptive privacy control essential. To address these issues, this paper proposes DGDPFL, a novel FL framework that enhances client selection, resource management, and privacy control through dynamic client grouping and adaptive privacy budgeting. The framework optimizes client management by clustering participants based on device capabilities, bandwidth, and data quality, enabling efficient resource allocation. A contribution-aware selection mechanism ensures fair participation, while a privacy-aware control strategy dynamically adjusts privacy budgets based on model similarity, improving both privacy guarantees and learning performance. We evaluate DGDPFL in real-world and simulated environments. On CIFAR-10 and Fashion-MNIST, DGDPFL achieves 77.83% and 88.35% test accuracy respectively with only 10–20 clients and 40 training rounds, outperforming state-of-the-art baselines by up to 12.36%. On audio datasets FSDD and SAD, the accuracy reaches up to 97%, validating the method’s robustness across modalities. Experimental results demonstrate that DGDPFL outperforms existing approaches by achieving higher model accuracy, improved system efficiency, and better privacy-utility balance. These findings highlight DGDPFL’s effectiveness in managing clients and privacy in FL environments.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1826-1841"},"PeriodicalIF":5.4,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-04 | DOI: 10.1109/TNSM.2025.3640070
Long Chen;Yukang Jiang;Zishang Qiu;Donglin Zhu;Zhiquan Liu;Zhenzhou Tang
Efficient deployment of thousands of energy-constrained sensor nodes (SNs) in large-scale wireless sensor networks (WSNs) is critical for reliable data transmission and target sensing. This study addresses the Minimum Energy Q-Coverage and C-Connectivity (MinEQC) problem for heterogeneous SNs in three-dimensional environments. MnPF (Metaheuristic–Neural Network Parallel Framework), a two-phase method that can embed most metaheuristic algorithms (MAs) and neural networks (NNs), is proposed to address the above problem. Phase-I partitions the monitoring region via divide-and-conquer and applies NN-based dimensionality reduction to accelerate parallel optimization of local Q-coverage and C-connectivity. Phase-II employs an MA-based adaptive restoration strategy to restore connectivity among subregions and systematically assess how different partitioning strategies affect the number of restoration steps. Experiments with four NNs and twelve MAs demonstrate efficiency, scalability, and adaptability of MnPF, while ablation studies confirm the necessity of both phases. MnPF bridges scalability and energy efficiency, providing a generalizable approach to SN deployment in large-scale WSNs.
{"title":"Toward Energy-Saving Deployment in Large-Scale Heterogeneous Wireless Sensor Networks for Q-Coverage and C-Connectivity: An Efficient Parallel Framework","authors":"Long Chen;Yukang Jiang;Zishang Qiu;Donglin Zhu;Zhiquan Liu;Zhenzhou Tang","doi":"10.1109/TNSM.2025.3640070","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640070","url":null,"abstract":"Efficient deployment of thousands of energy-constrained sensor nodes (SNs) in large-scale wireless sensor networks (WSNs) is critical for reliable data transmission and target sensing. This study addresses the Minimum Energy Q-Coverage and C-Connectivity (MinEQC) problem for heterogeneous SNs in three-dimensional environments. MnPF (Metaheuristic–Neural Network Parallel Framework), a two-phase method that can embed most metaheuristic algorithms (MAs) and neural networks (NNs), is proposed to address the above problem. Phase-I partitions the monitoring region via divide-and-conquer and applies NN-based dimensionality reduction to accelerate parallel optimization of local Q-coverage and C-connectivity. Phase-II employs an MA-based adaptive restoration strategy to restore connectivity among subregions and systematically assess how different partitioning strategies affect the number of restoration steps. Experiments with four NNs and twelve MAs demonstrate efficiency, scalability, and adaptability of MnPF, while ablation studies confirm the necessity of both phases. MnPF bridges scalability and energy efficiency, providing a generalizable approach to SN deployment in large-scale WSNs.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1242-1259"},"PeriodicalIF":5.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/TNSM.2025.3639958
Jiahe Xu;Chao Guo;Moshe Zukerman
Virtual Network Embedding (VNE) is an important problem in network virtualization, involving the optimal allocation of resources from substrate networks to service requests in the form of Virtual Networks (VNs). This paper addresses a specific VNE problem in the context of Composable/Disaggregated Data Center (DDC) networks, characterized by the decoupling and reassembly of different resources into resource pools. Existing research on the VNE problem within Data Center (DC) networks primarily focuses on the Server-based DC (SDC) architecture. In the VNE problem within SDCs, a virtual node is typically mapped to a single server to fulfill its requirements for various resources. However, in the case of DDCs, a virtual node needs to be mapped to different resource nodes for different resources. We aim to design an optimization method to achieve the most efficient VNE within DDCs. To this end, we provide an embedding scheme that acts on each arriving VN request to embed the VN with minimized power consumption. Through this scheme, we demonstrate that we also achieve a high long-term acceptance ratio. We provide Mixed Integer Linear Programming (MILP) and scalable greedy algorithms to implement this scheme. We validate the efficiency of our greedy algorithms by comparing their performance against the MILP for small problems and demonstrate their superiority over baseline algorithms through comprehensive evaluations using both synthetic simulations and real-world Google cluster traces.
{"title":"Virtual Network Embedding for Data Centers With Composable or Disaggregated Architectures","authors":"Jiahe Xu;Chao Guo;Moshe Zukerman","doi":"10.1109/TNSM.2025.3639958","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3639958","url":null,"abstract":"Virtual Network Embedding (VNE) is an important problem in network virtualization, involving the optimal allocation of resources from substrate networks to service requests in the form of Virtual Networks (VNs). This paper addresses a specific VNE problem in the context of Composable/Disaggregated Data Center (DDC) networks, characterized by the decoupling and reassembly of different resources into resource pools. Existing research on the VNE problem within Data Center (DC) networks primarily focuses on the Server-based DC (SDC) architecture. In the VNE problem within SDCs, a virtual node is typically mapped to a single server to fulfill its requirements for various resources. However, in the case of DDCs, a virtual node needs to be mapped to different resource nodes for different resources. We aim to design an optimization method to achieve the most efficient VNE within DDCs. To this end, we provide an embedding scheme that acts on each arriving VN request to embed the VN with minimized power consumption. Through this scheme, we demonstrate that we also achieve a high long-term acceptance ratio. We provide Mixed Integer Linear Programming (MILP) and scalable greedy algorithms to implement this scheme. We validate the efficiency of our greedy algorithms by comparing their performance against the MILP for small problems and demonstrate their superiority over baseline algorithms through comprehensive evaluations using both synthetic simulations and real-world Google cluster traces.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1276-1296"},"PeriodicalIF":5.4,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-03 | DOI: 10.1109/TNSM.2025.3640098
Sajjad Alizadeh;Majid Khabbazian
Payment channel networks have great potential to scale cryptocurrency payment systems. However, their scalability power is limited as payments occasionally fail in these networks due to various factors. In this work, we study these factors and analyze the limitations they impose. To this end, we propose a model where a payment channel network is viewed as a compression method. In this model, the compression rate is defined as the ratio of the total number of payments entering the network to the total number of transactions that are placed on the blockchain to handle failed payments or (re)open channels. We analyze the compression rate and its upper limit, referred to as compression capacity, for various payment models, channel-reopening strategies, and network topologies. For networks with a tree topology, we show that the compression rate is inversely proportional to the average path length traversed by payments. For general networks, we show that if payment rates are even slightly asymmetric and channels are not reopened regularly, a constant fraction of payments will always fail regardless of the number of channels, the topology of the network, the routing algorithm used, and the amount of funds allocated in the network. We also examine the impact of routing and channel rebalancing on the network’s compression rate. We show that rebalancing and strategic routing can enhance the compression rate in payment channel networks where channels may be reopened, differing from the established literature on credit networks, which suggests these factors do not have an effect.
{"title":"On Scalability Power of Payment Channel Networks","authors":"Sajjad Alizadeh;Majid Khabbazian","doi":"10.1109/TNSM.2025.3640098","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3640098","url":null,"abstract":"Payment channel networks have great potential to scale cryptocurrency payment systems. However, their scalability power is limited as payments occasionally fail in these networks due to various factors. In this work, we study these factors and analyze their imposing limitations. To this end, we propose a model where a payment channel network is viewed as a compression method. In this model, the compression rate is defined as the ratio of the total number of payments entering the network to the total number of transactions that are placed on the blockchain to handle failed payments or (re)open channels. We analyze the compression rate and its upper limit, referred to as compression capacity, for various payment models, channel-reopening strategies, and network topologies. For networks with a tree topology, we show that the compression rate is inversely proportional to the average path length traversed by payments. For general networks, we show that if payment rates are even slightly asymmetric and channels are not reopened regularly, a constant fraction of payments will always fail regardless of the number of channels, the topology of the network, the routing algorithm used and the amount of allocated funds in the network. We also examine the impact of routing and channel rebalancing on the network’s compression rate. We show that rebalancing and strategic routing can enhance the compression rate in payment channel networks where channels may be reopened, differing from the established literature on credit networks, which suggests these factors do not have an effect.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1342-1355"},"PeriodicalIF":5.4,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25 | DOI: 10.1109/TNSM.2025.3636785
Remi Hendriks;Mattijs Jonker;Roland van Rijswijk-Deij;Raffaele Sommese
Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which can be used by operators to detect networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB-behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe that LB-induced site flipping directs distinct flows to different anycast sites, with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms, with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients.
{"title":"Load-Balancing Versus Anycast: A First Look at Operational Challenges","authors":"Remi Hendriks;Mattijs Jonker;Roland van Rijswijk-Deij;Raffaele Sommese","doi":"10.1109/TNSM.2025.3636785","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3636785","url":null,"abstract":"Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which can be used by operators to detect networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB-behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe LB-induced site flipping directs distinct flows to different anycast sites with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"814-823"},"PeriodicalIF":5.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}