Pub Date: 2025-08-18 | DOI: 10.1109/TNSM.2025.3599393
Mohammad Saleh Mahdizadeh;Behnam Bahrak;Mohammad Sayad Haghighi
The Bitcoin Lightning Network, a second-layer solution for enhancing the scalability of Bitcoin transactions, facilitates payments through payment channels between nodes. However, the rapid growth of the network and rising transaction volumes have exacerbated the challenge of managing payment channel imbalances. Payment channel imbalance, characterized by the concentration of liquidity in one direction, reduces payment success rates, shortens the effective lifespan of payment channels, and degrades the network’s overall efficiency and throughput. This study introduces a graph neural network-based recommendation strategy designed to enhance the Lightning Network’s autopilot system. The proposed approach proactively mitigates channel imbalances by optimizing channel recommendations, enabling dynamic and scalable liquidity management for network users. Simulations conducted with the CLoTH tool demonstrate a 45% increase in payment success rates, a 46% reduction in imbalanced channels, and a 14% increase in the lifespan of payment channels across the network compared to existing autopilot recommendation strategies. Compared with the commonly adopted circular rebalancing method, the proposed strategy achieves a 27% improvement in payment success rates. Additionally, we offer a comparative topological analysis of two snapshots of the LN, taken in November 2021 and August 2023, to facilitate unsupervised learning tasks. The results highlight an increase in network centralization alongside a decrease in network size, emphasizing the growing need for decentralization strategies in the LN, such as the one proposed in this study.
{"title":"A GNN-Based Autopilot Recommendation Strategy to Mitigate Payment Channel Imbalance Problem in Bitcoin Lightning Network","authors":"Mohammad Saleh Mahdizadeh;Behnam Bahrak;Mohammad Sayad Haghighi","doi":"10.1109/TNSM.2025.3599393","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3599393","url":null,"abstract":"The Bitcoin Lightning Network, as a second-layer solution for enhancing the scalability of Bitcoin transactions, facilitates transactions through payment channels between nodes. However, the rapid growth of the network and rising transaction volumes have exacerbated the challenge of managing payment channel imbalances. Payment channel imbalance, characterized by the concentration of liquidity in one direction, leads to a decrease in payment success rates, a reduction in the effective lifespan of payment channels, and a decline in the network’s overall efficiency and throughput. This study introduces a graph neural network-based recommendation strategy designed to enhance the Lightning Network’s autopilot system. The proposed approach proactively mitigates channel imbalances by optimizing channel recommendations, enabling dynamic and scalable liquidity management for network users. Simulations conducted using the CLoTH tool demonstrate a 45% increase in payment success rates, a 46% reduction in imbalanced channels, and a 14% increase in the lifespan of payment channels across the network compared to the existing autopilot recommendation strategies, and when compared with the commonly adopted circular rebalancing method, the proposed strategy achieves a 27% improvement in payment success rates. Additionally, we offer a comparative topological analysis between two snapshots of the LN, taken in November 2021 and August 2023, to facilitate unsupervised learning tasks. The results highlight an increase in network centralization alongside a decrease in the network size, emphasizing the growing need for decentralization strategies in the LN, such as the one proposed in this study.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"23 ","pages":"1863-1873"},"PeriodicalIF":5.4,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-15 | DOI: 10.1109/TNSM.2025.3599203
Anna Prado;Fidan Mehmeti;Wolfgang Kellerer
Signal quality fluctuates significantly due to Line-of-Sight blockages, shadowing, and user mobility. This makes mobility management in 5G quite challenging. To improve it, 3GPP introduced Conditional Handover (CHO), which reduces handover failures by preparing target Base Stations (BSs) in advance. CHO adapts to varying channel conditions and constantly prepares/releases cells, which increases the exchange of control messages between the user and BSs. Connecting to the BS with the strongest signal is not always beneficial, because the available resources and other users’ channels must also be considered for successful network operation. Hence, there is a need to carefully decide when to hand over and, when that happens, to select the best target BS. In this paper, we first formulate an optimization problem that minimizes network signaling by reducing the number of unprepared handovers and wasted cell preparations while providing a minimum rate to every user. As the problem is NP-hard, we relax it and obtain a lower bound. Then, we propose a Cost-Efficient CHO (CECHO) algorithm with performance guarantees. Using 5G datasets, we compare CECHO with two baselines and show that it outperforms them by at least 45% while being near-optimal. However, reducing the signaling decreases the total throughput, an important metric for the network operator. Thus, we expand our initial problem into a Multi-Objective (MO) optimization, in which we additionally maximize the network sum throughput. Results show that CECHO-MO increases the sum throughput by more than $3\times$ with only a 4% increase in signaling.
{"title":"Reducing Mobility-Related Signaling With Network Sum Throughput Maximization in 5G","authors":"Anna Prado;Fidan Mehmeti;Wolfgang Kellerer","doi":"10.1109/TNSM.2025.3599203","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3599203","url":null,"abstract":"Signal quality fluctuates significantly due to blockages of Line of Sight, shadowing, and user mobility. This renders mobility management in 5G quite challenging. To improve it, 3GPP introduced Conditional Handover (CHO), which reduces handover failures by preparing target Base Stations (BSs) in advance. CHO adapts to the varying channel conditions and constantly prepares/releases cells, which leads to an increased exchange of control messages between the user and BSs. Connecting to the BS with the strongest signal is not always beneficial because the available resources and other users’ channels should be considered for a successful network operation. Hence, the need to carefully decide when to hand over, and when that happens, to select the best target BS. In this paper, we first formulate an optimization problem that minimizes network signaling by reducing the number of unprepared handovers and wasted cell preparations while providing a minimum rate to everyone. As the problem is NP-hard, we relax it and obtain a lower bound. Then, we propose a Cost-Efficient CHO (CECHO) algorithm with performance guarantees. Using 5G datasets, we compare CECHO with two baselines and show that it outperforms them by at least 45% while being near-optimal. However, reducing the signaling decreases the total throughput, which is an important metric for the network operator. Thus, we expand our initial problem into a Multi-Objective (MO) optimization, where we additionally maximize the network sum throughput. Results show that CECHO-MO increases the sum throughput more than <inline-formula> <tex-math>$3times $ </tex-math></inline-formula> with only a 4% increase in signaling.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6048-6065"},"PeriodicalIF":5.4,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11126168","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate and comprehensive data acquisition is critical for modern data-driven environmental applications. Mobile Crowdsensing (MCS) offers an effective approach by leveraging user participation to collect environmental data through task assignment. To minimize costs, MCS platforms often partition the environment into subareas and utilize inference algorithms to extrapolate data for entire subareas based on partial sensing in a limited subset. However, determining the optimal set of users for sensing tasks remains challenging due to constraints such as user availability and the complexity of data inference models. This paper introduces Sensify, a task assignment strategy that optimizes data acquisition by accounting for data correlations and budget constraints. Sensify efficiently selects subareas and recruits cost-effective users for sensing tasks, incorporating user-specific contexts such as location and device power availability during task assignment. To adaptively manage the platform budget, the strategy considers a dynamic set of users with varying costs over time. A deep recurrent reinforcement learning-based network is employed to select optimal subareas for sensing, while user recruitment is dynamically optimized using a reinforcement learning approach. Specifically, a modified Contextual Combinatorial Multi-Armed Bandit (CC-MAB) framework is utilized to handle the volatility and variability in user costs. Experiments conducted on two real-world datasets demonstrate that Sensify can improve data acquisition by up to 7% compared to existing approaches.
{"title":"Sensify: A Learning-Based Budget-Aware Task Assignment in Mobile Crowdsensing","authors":"Shabnam Seradji;Ahmad Khonsari;Vahid Shah-Mansouri;Mahdi Dolati;Masoumeh Moradian","doi":"10.1109/TNSM.2025.3597953","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3597953","url":null,"abstract":"Accurate and comprehensive data acquisition is critical for modern data-driven environmental applications. Mobile Crowdsensing (MCS) offers an effective approach by leveraging user participation to collect environmental data through task assignment. To minimize costs, MCS platforms often partition the environment into subareas and utilize inference algorithms to extrapolate data for entire subareas based on partial sensing in a limited subset. However, determining the optimal set of users for sensing tasks remains challenging due to constraints such as user availability and the complexity of data inference models. This paper introduces Sensify, a task assignment strategy that optimizes data acquisition by accounting for data correlations and budget constraints. Sensify efficiently selects subareas and recruits cost-effective users for sensing tasks, incorporating user-specific contexts such as location and device power availability during task assignment. To adaptively manage the platform budget, the strategy considers a dynamic set of users with varying costs over time. A deep recurrent reinforcement learning-based network is employed to select optimal subareas for sensing, while user recruitment is dynamically optimized using a reinforcement learning approach. Specifically, a modified Contextual Combinatorial Multi-Armed Bandit (CC-MAB) framework is utilized to handle the volatility and variability in user costs. Experiments conducted on two real-world datasets demonstrate that Sensify can improve data acquisition by up to 7% compared to existing approaches.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6128-6142"},"PeriodicalIF":5.4,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In mobile crowdsensing, task interruptions can cause failures and reduce system stability. Despite the significance of this issue, few studies have addressed task allocation under interruptions. To bridge this gap, we propose IT-STA, an interruption-based stable task allocation algorithm that reallocates interrupted tasks to improve completion rates and maintain system stability. First, an efficient detection mechanism is designed to promptly identify interrupted tasks, ensuring timely intervention. Second, a distributed reallocation strategy is developed to assign interrupted tasks to suitable participants, leveraging a novel individual migration strategy that enables parallel coordination among nodes, ensuring efficient global matching and avoiding suboptimal solutions. Experimental results demonstrate IT-STA’s superiority over baselines in task allocation stability and performance.
{"title":"Stable Task Allocation in Mobile Crowdsensing: An Interruption-Driven Approach","authors":"Kaimin Wei;Guozi Qi;Lin Cui;Jinpeng Chen;Xiaohui Chen;Ke Xu","doi":"10.1109/TNSM.2025.3598025","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3598025","url":null,"abstract":"In mobile crowdsensing, task interruptions can cause failures and reduce system stability. Despite the significance of this issue, few studies have addressed task allocation under interruptions. To bridge this gap, we propose IT-STA, an interruption-based stable task allocation algorithm that reallocates interrupted tasks to improve completion rates and maintain system stability. First, an efficient detection mechanism is designed to promptly identify interrupted tasks, ensuring timely intervention. Second, a distributed reallocation strategy is developed to assign interrupted tasks to suitable participants, leveraging a novel individual migration strategy that enables parallel coordination among nodes, ensuring efficient global matching and avoiding suboptimal solutions. Experimental results demonstrate IT-STA’s superiority over baselines in task allocation stability and performance.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6190-6199"},"PeriodicalIF":5.4,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-11 | DOI: 10.1109/TNSM.2025.3597550
Shiwen Zhang;Wen Zhang;Wei Liang;Wenqiang Jin;Keqin Li
Internet of Things (IoT) networks have penetrated our daily lives and industries. However, IoT devices are typically small-sized with constrained storage. Distributed storage systems are emerging as promising solutions to tackle this challenge. The InterPlanetary File System (IPFS) is a desirable framework that enables IoT devices to upload their data to a distributed cloud while returning a hash-ID for downloading and file-sharing purposes. Nevertheless, IPFS lacks a robust security design and is vulnerable to security threats such as data tampering and data leakage. In particular, whenever device A’s file hash-ID is shared with an arbitrary device B, device A fully loses control over the file. In other words, device B could further share it with anyone without device A’s agreement. To address this challenge, we propose a comprehensive design for securing distributed IoT storage systems, named StorSec. Specifically, we design a new heterogeneous framework using an improved attribute encryption algorithm to eliminate the single-point performance bottleneck, which not only realizes fine-grained access control and ensures the security of data during transmission, but also improves the performance of key generation. Second, we design an anomaly detection algorithm based on hashchain technology that combines the user privacy metadata stored on the blockchain to complete the verification process, effectively protecting the file hash identifier and ensuring access control to the file, thereby protecting the security and integrity of data storage. Furthermore, we design an auditing algorithm that helps the system track malicious entities. Finally, the security and efficiency of the proposed scheme are evaluated through both security analysis and experimental results.
{"title":"StorSec: A Comprehensive Design for Securing the Distributed IoT Storage Systems","authors":"Shiwen Zhang;Wen Zhang;Wei Liang;Wenqiang Jin;Keqin Li","doi":"10.1109/TNSM.2025.3597550","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3597550","url":null,"abstract":"Internet of Things (IoT) networks have penetrated our daily life and industries. However, IoT devices are typically small-sized with constrained storage. Distributed storage systems are emerging as promising solutions to tackle such challenges. InterPlanetary File System (IPFS) is a desired framework enabling IoT devices to upload its data to a distributed cloud while returning a hash-ID for downloading and file-sharing purposes. Nevertheless, IPFS lacks of robust security design and is vulnerable to security threats such as data tampering, and data leakage. In particular, whenever device A’s file hash-ID is shared to an arbitrary device B, device A will fully lose the control over file. In other words, device B could further share it to anyone without device A’s agreements. To conquer the challenge, we propose a comprehensive design for securing the distributed IoT storage systems, named StorSec. Specifically, we design a new heterogeneous framework using an improved attribute encryption algorithm to eliminate the single-point performance bottleneck problem, which not only realizes fine-grained access control and ensures the security of data during transmission, but also improves the performance of key generation. Secondly, we design an anomaly detection algorithm, which is based on hashchain technology and combines the user privacy metadata stored on the blockchain to complete the verification process, effectively protecting the file hash identifier, ensuring access control to the file, and thus providing protection for the security and integrity of data storage. Furthermore, we design an auditing algorithm that helps the system in tracking malicious entities. Ultimately, the security and efficiency of the proposed scheme are evaluated by both security analysis and experimental results.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6215-6228"},"PeriodicalIF":5.4,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-11 | DOI: 10.1109/TNSM.2025.3597417
B. D. Deebak;Seong Oun Hwang
Forensic-Aware Cyber-Physical Systems (FA-CPS) are an evolving core of digital forensic systems that verify the integrity of biometric service platforms. Most forensic agencies use emerging technologies such as IoT and Cloud to integrate a few core elements (networking, communication, and distributed computing) to achieve sustainable memory forensics. This systematic process brings additional capabilities to the physical systems that capture device memories to discover evidence of malicious tools. Therefore, this paper leverages the Internet of Things (IoT) to form an effective and economical interaction with evolving technologies, including B5G/6G, edge, and cloud computing, to uncover the context of security implications. More precisely, to sense, collect, share, and analyze numerical data from information systems, application domains such as healthcare utilize computing methods and communication technologies to collect and analyze physiological data from patients in a haphazard way. Since an insecure network suffers from security issues such as information leakage, secret key loss, and fraudulent authentication in telehealth and remote monitoring, this work applies elliptic curve cryptography (ECC) and a physical unclonable function (PUF) to construct an AI-driven privacy-preserving key authentication framework (AID-PPKAF). In the proposed AID-PPKAF, the PUF generates key information, and ECC encrypts the parameters generated by the system to establish session key agreement and proper mutual authentication. The security analyses (both formal and informal) show that AID-PPKAF achieves greater security efficiency than other state-of-the-art approaches. Lastly, a performance analysis using NS3 and a pragmatic study using SVM demonstrate the significance of identity protection in designing a more reliable authentication model.
{"title":"Privacy-Preserving Authentication With Service Analytics for Forensic-Aware Cyber-Physical Systems","authors":"B. D. Deebak;Seong Oun Hwang","doi":"10.1109/TNSM.2025.3597417","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3597417","url":null,"abstract":"Forensic Aware Cyber-Physical System (FA-CPS) is an evolving core of digital forensic systems that discovers the integrity of biometric service platforms. Most forensic agencies use emerging technologies such as IoT, Cloud, etc., to integrate a few core elements (networking, communication, and distributed computing) to achieve sustainable memory forensics. This systematic process brings additional capabilities to the physical systems that capture device memories to discover the evidence of malicious tools. Therefore, this paper deals with the Internet of Things (IoT) to form an effective and economical interaction with evolving technologies, including B5G/6G, edge, and cloud computing, to uncover the context of security implications. Most precisely, to sense, collect, share, and analyze numerical data from information systems, the application domain, like healthcare, utilizes computing methods and communications technologies to collect and analyze physiological data from patients in a haphazard way. Since an insecure network has security issues such as information leakage, secret key loss, and fraudulent authentication in Telehealth and remote monitoring, this work applies elliptic curve cryptography (ECC) and a physical unclonable function (PUF) to construct an AI-driven privacy-preserving key authentication framework (AID-PPKAF). In the proposed AID-PPKAF, the PUF generates key information, and ECC encrypts the parameters generated by the system to establish session key agreement and proper mutual authentication. The security analyses (both formal and informal) prove that AID-PPKAF has greater security efficiency than other state-of-the-art approaches. Lastly, a performance analysis using NS3 and a pragmatic study using SVM demonstrate the significance of identity protection in designing a more reliable authentication model.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6001-6020"},"PeriodicalIF":5.4,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web services, as the most ubiquitous form of online services, have consistently attracted research attention due to privacy concerns. Although VPNs and anonymous communication methods can partially protect users’ online privacy, advancements in website fingerprinting (WF) attacks still exploit the spatio-temporal characteristics of Web resource transmission to identify Web services. The challenge lies in defending against WF attacks efficiently, with limited bandwidth costs. Server-side WF defenses, deployed on Web servers, can achieve end-to-end obfuscation across both clients and servers. However, existing defenses often consume significant bandwidth and require additional removal operations on the client side. Given the growing use of QUIC with HTTP/3 and the need for robust privacy protections, this paper introduces an asymmetric server-side WF defense scheme using State-Transition Adversarial Perturbations (STAP). STAP introduces the concept of latent resource-state transitions, which represent hidden patterns in resource transmission. Utilizing perturbation models containing these transitions, STAP subtly alters traffic through packet padding and insertion, with inherent transport layer encryption enhancing the concealment. STAP can operate independently, removing the necessity for user involvement. Experimental results demonstrate that STAP outperforms other schemes, achieving reductions in True Positive Rate (TPR) by up to 22% and reductions in bandwidth overhead by up to 30%.
{"title":"STAP: Leveraging State-Transition Adversarial Perturbations for Asymmetric Website Fingerprinting Defenses","authors":"Jia-Nan Huang;Weiwei Liu;Guangjie Liu;Bo Gao;Fengyuan Nie;Marco Mellia","doi":"10.1109/TNSM.2025.3597075","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3597075","url":null,"abstract":"Web services, as the most ubiquitous form of online services, have consistently attracted research attention due to privacy concerns. Although VPNs and anonymous communication methods can partially protect users’ online privacy, advancements in website fingerprinting (WF) attacks still exploit the spatio-temporal characteristics of Web resource transmission to identify Web services. The challenge lies in defending against WF attacks efficiently, with limited bandwidth costs. Server-side WF defenses, deployed on Web servers, can achieve end-to-end obfuscation across both clients and servers. However, existing defenses often consume significant bandwidth and require additional removal operations on the client side. Given the growing use of QUIC with HTTP/3 and the need for robust privacy protections, this paper introduces an asymmetric server-side WF defense scheme using State-Transition Adversarial Perturbations (STAP). STAP introduces the concept of latent resource-state transitions, which represent hidden patterns in resource transmission. Utilizing perturbation models containing these transitions, STAP subtly alters traffic through packet padding and insertion, with inherent transport layer encryption enhancing the concealment. STAP can operate independently, removing the necessity for user involvement. Experimental results demonstrate that STAP outperforms other schemes, achieving reductions in True Positive Rate (TPR) by up to 22% and reductions in bandwidth overhead by up to 30%.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"6200-6214"},"PeriodicalIF":5.4,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-05 | DOI: 10.1109/TNSM.2025.3595414
Wei Lin;Chunyan Ma;Jinong Li;Zhe Zhang;Hongping Gan
Efficient real-time communication in Time-Sensitive Networking (TSN) relies on precise flow scheduling to meet stringent latency and reliability requirements. However, under the Cyclic Queuing and Forwarding (CQF) model, existing scheduling algorithms face challenges in resource allocation efficiency and in the scheduling of unstable flows, leading to inconsistent performance across complex network environments. To address these challenges, this article first proposes a Formal Scheduling Architecture for CQF (CQF-FSA), which rigorously defines key scheduling elements and constraints, providing a basic, consistent, and reusable architecture for scheduling algorithms across diverse network environments. Second, based on CQF-FSA, we propose an optimized scheduling algorithm, NTOS (Neuro-Tabu Optimized Scheduler), which combines the global exploration capabilities of NEAT (NeuroEvolution of Augmenting Topologies) with the local optimization efficiency of Tabu search. NTOS effectively overcomes the limitations of existing methods by optimizing resource utilization and reducing scheduling conflicts. Finally, experimental results demonstrate that NTOS improves the scheduling success rate by an average of 34.5% over the NV algorithm and 3.23% over the state-of-the-art MSS algorithm across various network topologies. This article provides a highly optimized solution for CQF scheduling in TSN, significantly enhancing scheduling efficiency and scalability.
{"title":"Optimized CQF Scheduling in TSN: A Formal Architecture-Based Neuro-Tabu Optimized Scheduling Algorithm","authors":"Wei Lin;Chunyan Ma;Jinong Li;Zhe Zhang;Hongping Gan","doi":"10.1109/TNSM.2025.3595414","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3595414","url":null,"abstract":"Efficient real-time communication in Time-Sensitive Networking (TSN) relies on precise flow scheduling to meet stringent latency and reliability requirements. However, under the Cyclic Queuing and Forwarding (CQF) model, existing scheduling algorithms face challenges in resource allocation efficiency and the scheduling of unstable flows, leading to inconsistent performance across complex network environments. To address these challenges, firstly, this article proposes a Formal Scheduling Architecture for CQF (CQF-FSA), which rigorously defines key scheduling elements and constraints, providing a basic, consistent, and reusable architecture for scheduling algorithms across diverse network environments; Secondly, based on CQF-FSA, we propose an optimized scheduling algorithm, NTOS (Neuro-Tabu Optimized Scheduler), which combines the global exploration capabilities of NEAT (NeuroEvolution of Augmenting Topologies) with the local optimization efficiency of Tabu search. NTOS effectively overcomes the limitations of existing methods by optimizing resource utilization and reducing scheduling conflicts; Finally experimental results demonstrate that NTOS improves the scheduling success rate by an average of 34.5% over the NV algorithm and 3.23% over the state-of-the-art MSS algorithm across various network topologies. This article provides a highly optimized solution for CQF scheduling in TSN, significantly enhancing scheduling efficiency and scalability.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"5987-6000"},"PeriodicalIF":5.4,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-05 | DOI: 10.1109/TNSM.2025.3596212
Beibei Li;Wei Hu;Yiwei Li;Lemei Da
Large infrastructure networks play a crucial role in modern society, supporting various aspects of our daily lives. The reliability of such networks is a pivotal research problem that has attracted intensive research interest in recent years. However, most existing studies focus on protecting critical nodes or optimizing the network topology through linear models to measure reliability, while nonlinear models for improving network reliability are rarely investigated. The major challenges are the significant computational complexity and the damage to the original network structure caused by nonlinear methods. Inspired by the similarity in dynamics between heat conduction systems and infrastructure networks, we propose a nonlinear model that maps an infrastructure network to a nonlinear heat conduction system in order to measure and enhance network reliability. We introduce a new indicator of network reliability based on community irrelevance. Additionally, we propose a new Edge Addition (EA) method called Modularity Addition (MA) that maximizes network reliability by adding multiple edges during each iteration and substantially reduces computational overhead. Experimental results demonstrate that our MA method outperforms existing algorithms. Specifically, in comparison to the widely used EA and Posteriorly Adding (PA) algorithms, the proposed MA method improves network reliability by up to 13.2% and reduces the number of edges added to the network by 72%. Moreover, the MA method offers a 6.8-fold reduction in time complexity compared to existing methods, highlighting its efficiency and scalability. Our approach is validated on both synthetic and real-world networks, showcasing its significant value in enhancing the robustness of complex infrastructure systems.
{"title":"Modeling and Maximizing Network Reliability in Large Scale Infrastructure Networks: A Heat Conduction Model Perspective","authors":"Beibei Li;Wei Hu;Yiwei Li;Lemei Da","doi":"10.1109/TNSM.2025.3596212","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3596212","url":null,"abstract":"Large infrastructure networks play a crucial role in modern society, supporting various aspects of our daily lives. Reliability of such networks is a pivotal research conundrum, which has attracted intensive research interests in recent years. However, most of them focus on protecting critical nodes or optimizing the network topology through linear models to measure reliability, while nonlinear models for improving network reliability are rarely investigated. The major challenges are the significant computational complexity and damage to the original network structure caused by nonlinear methods. Inspired by the similarity in dynamics between heat conduction systems and infrastructure networks, we propose a nonlinear model that maps an infrastructure network to a nonlinear heat conduction system for the purpose of measuring and enhancing network reliability. We introduce a new evaluating indicator of network reliability based on community irrelevance. Additionally, we propose a new Edge Addition (EA) method called Modularity Addition (MA) that maximizes network reliability by adding multiple edges during each iteration and substantially reduces computational overhead. Experimental results have demonstrated that our MA method outperforms existing algorithms. Specifically, in comparison to the widely used EA and Posteriorly Adding (PA) algorithms, the proposed MA method improves network reliability by up to 13.2%. It reduces the number of edges added to the network by 72%. Moreover, the MA method offers a 6.8-fold reduction in time complexity compared to existing methods, highlighting its efficiency and scalability. Our approach is validated on both synthetic and real-world networks, showcasing its significant value on enhancing the robustness of complex infrastructure systems.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"5945-5958"},"PeriodicalIF":5.4,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-04 | DOI: 10.1109/TNSM.2025.3595142
Wen-Hsing Kuo;Ming-Chin Hsu;Hsiao-Chun Wu
Mobile video streaming is an intriguing application for next-generation networks. Wearing goggles that render two-eye videos, users can enjoy an interactive multimedia experience. Providing high-quality video streams to multiple mobile devices in specific areas will become popular in future cinemas, theme parks, and museums. To ensure quality wireless coverage for a good streaming experience, next-generation wireless technology (e.g., 5G/6G) employing massive MIMO schemes is a promising solution. Massive MIMO transmissions can improve bandwidth utilization while maintaining acceptable system complexity through numerous transceiving antennae. To incorporate massive MIMO transmissions into mobile video streaming, an innovative cross-layer scheme is needed to flexibly and efficiently manage the antenna array for serving multiple user devices. This allocation mechanism must have low computational complexity and operate stably to prevent demand fluctuations from affecting the quality of service experienced by other users. In this work, we introduce a new problem of provisioning layer-encoded streams to mobile devices (e.g., VR goggles) by allocating antennae in the base stations’ massive MIMO arrays. Given each user’s bitrate demand, the available antennae of each femtocell, and the channel characteristics, the system allocates transmitting antennae to maximize the total system utility. Our theoretical analysis shows that this allocation problem is NP-hard, but our proposed scheme provides bounded performance with polynomial-time complexity. We also discuss and justify the stability of our proposed allocation mechanism. Simulations demonstrate that our scheme outperforms simple heuristic methods. To the best of our knowledge, this is the first attempt to tackle antenna allocation for mobile user devices in immersive video streaming using massive MIMO schemes.
{"title":"Novel Downlink Multiuser Resource-Allocation Scheme for Providing Layer-Encoded Multimedia Streams Using Massive MIMO Transmissions","authors":"Wen-Hsing Kuo;Ming-Chin Hsu;Hsiao-Chun Wu","doi":"10.1109/TNSM.2025.3595142","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3595142","url":null,"abstract":"Mobile video streaming is an intriguing application for next-generation networks. Wearing the goggles that render two-eye videos, users can enjoy the interactive multimedia experience. Providing high-quality video streams to multiple mobile devices in specific areas will become popular in future cinemas, theme parks, and museums. To ensure quality wireless coverage for a good streaming experience, the next-generation wireless technology (e.g., 5G/6G) employing massive MIMO schemes is a promising solution. Massive MIMO transmissions can improve bandwidth utilization while maintaining acceptable system complexity through numerous transceiving antennae. To incorporate massive MIMO transmissions with mobile video streaming, an innovative cross-layer scheme is needed to flexibly and efficiently manage the antenna array for serving multiple user devices. This allocation mechanism must have low computational complexity and operate stably to prevent demand fluctuations from affecting the quality of service experienced by other users. In this work, we introduce a new problem of provisioning layer-encoded streams to mobile devices (e.g., VR goggles) by allocating antennae in the base stations’ massive MIMO arrays. Given each user’s bitrate demand, the available antennae of each femtocell, and the channel characteristics, the system allocates transmitting antennae to maximize the total system utility. Our theoretical analysis shows that this allocation problem is NP-hard but our proposed scheme provides bounded performance with polynomial-time complexity. We also discuss and justify the stability of our proposed new allocation mechanism. Simulations demonstrate that our scheme outperforms simple heuristic methods. To the best of our knowledge, this is the first attempt to tackle antenna allocation for mobile user devices in immersive video streaming using massive MIMO schemes.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 6","pages":"5959-5971"},"PeriodicalIF":5.4,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}