Pub Date: 2025-01-28 | DOI: 10.1109/TNSM.2025.3535708
Xiangshuo Zheng;Wenting Shen;Ye Su;Yuan Gao
Data integrity auditing with data deduplication allows the cloud to store only one copy of identical files while ensuring the integrity of outsourced data. To facilitate flexible updates of outsourced data, data integrity auditing schemes supporting data dynamics and deduplication have been proposed. However, existing schemes either impose a significant computation and communication burden to achieve data dynamics while ensuring data integrity and deduplication, or incur substantial computation overhead during the authenticator generation and auditing phases. To address these problems, in this paper we construct a secure deduplication and efficient data integrity auditing scheme with data dynamics for cloud storage (DIADD). We design a lightweight authenticator structure to produce data authenticators for data integrity auditing, which achieves authenticator deduplication and greatly reduces the computation overhead in the authenticator generation phase. Additionally, time-consuming operations are eliminated from the auditing phase. To enhance the efficiency of data dynamics, we employ multi-set hash functions to produce the file tags. This allows data owners to compute a new file tag without recovering the entire original file when performing dynamic operations. Security analysis and experimental results demonstrate that DIADD is both secure and efficient.
{"title":"DIADD: Secure Deduplication and Efficient Data Integrity Auditing With Data Dynamics for Cloud Storage","authors":"Xiangshuo Zheng;Wenting Shen;Ye Su;Yuan Gao","doi":"10.1109/TNSM.2025.3535708","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3535708","url":null,"abstract":"Data integrity auditing with data deduplication allows the cloud to store only one copy of the identical file while ensuring the integrity of outsourced data. To facilitate flexible updates of outsourced data, data integrity auditing schemes supporting data dynamics and deduplication have been proposed. However, existing schemes either impose significant computation and communication burden to achieve data dynamics while ensuring data integrity and deduplication, or incur substantial computation overhead during the phases of authenticator generation and auditing. To address the above problems, in this paper, we construct a secure deduplication and efficient data integrity auditing scheme with data dynamics for cloud storage (DIADD). We design a lightweight authenticator structure to produce data authenticators for data integrity auditing, which can achieve authenticator deduplication and greatly reduce the computation overhead in the authenticator generation phase. Additionally, the time-consuming operations can be eliminated in the auditing phase. To enhance the efficiency of data dynamics, we employ the multi-set hash function technology to produce the file tags. This allows data owners to compute a new file tag without needing to recover the entire original file when performing dynamic operations. Furthermore, security analysis and experimental results demonstrate that DIADD is both secure and efficient.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"299-316"},"PeriodicalIF":4.7,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Encrypted network traffic classification has become a critical task with the widespread adoption of protocols such as HTTPS and QUIC. Deep learning-based methods have proven effective at identifying traffic patterns, even within encrypted data streams. However, these methods face significant challenges when confronted with new applications that were not part of the original training set. To address this issue, knowledge transfer from existing models is often employed to accommodate novel applications. As the complexity of network traffic increases, particularly at higher protocol layers, the transferability of learned features diminishes due to domain discrepancies. Recent studies have explored Deep Adaptation Networks (DAN) as a solution, which extend deep convolutional neural networks to better adapt to target domains by mitigating these discrepancies. Despite their potential, the computational complexity of discrepancy metrics such as Maximum Mean Discrepancy limits DAN's scalability, especially on large datasets. In this paper, we propose a novel DAN architecture that incorporates Smooth Characteristic Functions (SCFs), specifically SCF-unNorm (Unnormalized SCF) and SCF-pInverse (Pseudo-inverse SCF). These functions are designed to enhance feature transferability in task-specific layers, effectively addressing the limitations posed by domain discrepancies and computational complexity. The proposed mechanism efficiently handles situations with limited labeled data or entirely unlabeled data for new applications. The aim is to bound the target error by combining the source error with a measure of the domain discrepancy between the source and target distributions. The two classes of statistics, SCF-unNorm and SCF-pInverse, are used to minimize this domain discrepancy in traffic classification. Experimental results demonstrate that the proposed mechanism outperforms existing benchmarks in terms of accuracy, enabling real-time traffic classification in network systems. Specifically, we achieve up to 99% accuracy with an execution time of only three milliseconds in the considered scenarios.
{"title":"Encrypted Traffic Classification Through Deep Domain Adaptation Network With Smooth Characteristic Function","authors":"Van Tong;Cuong Dao;Hai-Anh Tran;Duc Tran;Huynh Thi Thanh Binh;Thang Hoang-Nam;Truong X. Tran","doi":"10.1109/TNSM.2025.3534791","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3534791","url":null,"abstract":"Encrypted network traffic classification has become a critical task with the widespread adoption of protocols such as HTTPS and QUIC. Deep learning-based methods have proven to be effective in identifying traffic patterns, even within encrypted data streams. However, these methods face significant challenges when confronted with new applications that were not part of the original training set. To address this issue, knowledge transfer from existing models is often employed to accommodate novel applications. As the complexity of network traffic increases, particularly at higher protocol layers, the transferability of learned features diminishes due to domain discrepancies. Recent studies have explored Deep Adaptation Networks (DAN) as a solution, which extends deep convolutional neural networks to better adapt to target domains by mitigating these discrepancies. Despite its potential, the computational complexity of discrepancy metrics, such as Maximum Mean Discrepancy, limits DAN’s scalability, especially when applied to large datasets. In this paper, we propose a novel DAN architecture that incorporates Smooth Characteristic Functions (SCFs), specifically SCF-unNorm (Unnormalized SCF) and SCF-pInverse (Pseudo-inverse SCF). These functions are designed to enhance feature transferability in task-specific layers, effectively addressing the limitations posed by domain discrepancies and computational complexity. The proposed mechanism provides a means to efficiently handle situations with limited labeled data or entirely unlabeled data for new applications. The aim is to limit the target error by incorporating a domain discrepancy between the source and target distributions along with the source error. Two statistics classes, SCF-unNorm and SCF-pInverse, are used to minimize this domain discrepancy in traffic classification. The experimental results demonstrate that our proposed mechanism outperforms existing benchmarks in terms of accuracy, enabling real-time traffic classification in network systems. Specifically, we achieve up to 99% accuracy with an execution time of only three milliseconds in the considered scenarios.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"331-343"},"PeriodicalIF":4.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143619081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Network slicing has been proposed as a paradigm for 5G+ networks. The operators slice physical resources from the edge all the way to the datacenter, and are responsible for micro-managing the allocation of these resources among tenants bound by predefined Service Level Agreements (SLAs). A key task, for which recent works have advocated the use of Deep Neural Networks (DNNs), is tracking the tenant demand and scaling its resources. Nevertheless, for the edge resources (e.g., RAN), a question arises on whether operators can: (a) scale them fast enough (often in the order of ms) and (b) afford to transmit huge amounts of data towards a remote cloud where such a DNN model might operate. We propose a Distributed DNN (DDNN) architecture for a class of such problems: a small subset of the DNN layers at the edge acts as a fast, standalone resource allocator; this is complemented by a mechanism to intelligently offload a percentage of (harder) decisions to additional DNN layers running at a remote cloud. To implement the offloading, we propose: (i) a Bayes-inspired method, using dropout during inference, to estimate the confidence in the local prediction; (ii) a learnable function which automatically classifies samples as “remote” (to be offloaded) or “local”. Using the public Milano dataset, we investigate how such a DDNN should be trained and operated to address (a) and (b). In some cases, our offloading methods are near-optimal, resolving up to 50% of decisions locally with little or no penalty on the allocation cost.
{"title":"Fast Edge Resource Scaling With Distributed DNN","authors":"Theodoros Giannakas;Dimitrios Tsilimantos;Apostolos Destounis;Thrasyvoulos Spyropoulos","doi":"10.1109/TNSM.2025.3532365","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3532365","url":null,"abstract":"Network slicing has been proposed as a paradigm for 5G+ networks. The operators slice physical resources from the edge all the way to the datacenter, and are responsible to micro-manage the allocation of these resources among tenants bound by predefined Service Level Agreements (SLAs). A key task, for which recent works have advocated the use of Deep Neural Networks (DNNs), is tracking the tenant demand and scaling its resources. Nevertheless, for the edge resources (e.g., RAN), a question arises on whether operators can: (a) scale them fast enough (often in the order of ms) and (b) afford to transmit huge amounts of data towards a remote cloud where such a DNN model might operate. We propose a Distributed DNN (DDNN) architecture for a class of such problems: a small subset of the DNN layers at the edge attempt to act as fast, standalone resource allocator; this is complemented by a mechanism to intelligently offload a percentage of (harder) decisions to additional DNN layers running at a remote cloud. To implement the offloading, we propose: (i) a Bayes-inspired method, using dropout during inference, to estimate the confidence in the local prediction; (ii) a learnable function which automatically classifies samples as “remote” (to be offloaded) or “local”. Using the public Milano dataset, we investigate how such a DDNN should be trained and operated to address (a) and (b). In some cases, our offloading methods are near-optimal, resolving up to 50% of decisions locally with little or no penalty on the allocation cost.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"557-571"},"PeriodicalIF":4.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-27 | DOI: 10.1109/TNSM.2025.3535094
Subhrajit Barick;Chetna Singhal
Mobile edge computing (MEC) is a promising technology to meet the increasing demands and computing limitations of complex Internet of Things (IoT) devices. However, implementing MEC in urban environments can be challenging due to factors like high device density, complex infrastructure, and limited network coverage. Network congestion and connectivity issues can adversely affect user satisfaction. Hence, in this article, we use an uncrewed aerial vehicle (UAV)-assisted collaborative MEC architecture to facilitate task offloading of IoT devices in urban environments. We utilize the combined capabilities of UAVs and ground edge servers (ESs) to maximize user satisfaction and thereby also maximize the service provider’s (SP) profit. We formulate IoT task offloading as a joint IoT-UAV-ES association and UAV-network topology optimization problem. Due to its NP-hard nature, we break the problem into two subproblems: offload strategy optimization and UAV topology optimization. We develop a Three-sided Matching with Size and Cyclic preference (TMSC) based task offloading algorithm to find a stable association between IoT devices, UAVs, and ESs that achieves the system objective. We also propose a K-means based iterative algorithm to decide the minimum number of UAVs and their positions needed to provide offloading services to the maximum number of IoT devices in the system. Finally, we demonstrate the efficacy of the proposed task offloading scheme over benchmark schemes through simulation-based evaluation. The proposed scheme outperforms the benchmarks by 19%, 12%, and 25% on average in terms of percentage of served IoT devices, average user satisfaction, and SP profit, respectively, with 25% fewer UAVs, making it an effective solution to support IoT task requirements in urban environments using a UAV-assisted MEC architecture.
{"title":"UAV-Assisted MEC Architecture for Collaborative Task Offloading in Urban IoT Environment","authors":"Subhrajit Barick;Chetna Singhal","doi":"10.1109/TNSM.2025.3535094","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3535094","url":null,"abstract":"Mobile edge computing (MEC) is a promising technology to meet the increasing demands and computing limitations of complex Internet of Things (IoT) devices. However, implementing MEC in urban environments can be challenging due to factors like high device density, complex infrastructure, and limited network coverage. Network congestion and connectivity issues can adversely affect user satisfaction. Hence, in this article, we use uncrewed aerial vehicle (UAV)-assisted collaborative MEC architecture to facilitate task offloading of IoT devices in urban environments. We utilize the combined capabilities of UAVs and ground edge servers (ESs) to maximize user satisfaction and thereby also maximize the service provider’s (SP) profit. We design IoT task-offloading as joint IoT-UAV-ES association and UAV-network topology optimization problem. Due to NP-hard nature, we break the problem into two subproblems: offload strategy optimization and UAV topology optimization. We develop a Three-sided Matching with Size and Cyclic preference (TMSC) based task offloading algorithm to find stable association between IoTs, UAVs, and ESs to achieve system objective. We also propose a K-means based iterative algorithm to decide the minimum number of UAVs and their positions to provide offloading services to maximum IoTs in the system. Finally, we demonstrate the efficacy of the proposed task offloading scheme over benchmark schemes through simulation-based evaluation. The proposed scheme outperforms by 19%, 12%, and 25% on average in terms of percentage of served IoTs, average user satisfaction, and SP profit, respectively, with 25% lesser UAVs, making it an effective solution to support IoT task requirements in urban environments using UAV-assisted MEC architecture.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"732-743"},"PeriodicalIF":4.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-27 | DOI: 10.1109/TNSM.2025.3535082
Yang Gao;Jun Tao;Zuyan Wang;Yifan Xu
Selfishness detection offers an effective way to mitigate the routing performance degradation caused by selfish behaviors in Opportunistic Networks, but it introduces extra network traffic and computational burden. Most existing efforts focus on designing the selfishness detection scheme by exploiting the behavioral records of nodes. In this paper, we investigate the scheduling strategy of selfishness detection during the message lifespan using game theory. Specifically, the Long-term Selfishness Detection Game (LSDG) is proposed based on the differential game and a payoff in integral form. LSDG formulates the selfishness detection and the node’s selfishness with Ordinary Differential Equations (ODEs). Then, we prove the existence of the Nash equilibrium in LSDG and deduce the necessary conditions of the equilibrium strategy based on Pontryagin’s maximum principle. A recursion-based algorithm is designed to compute the numerical solution of the equilibrium strategy via Euler’s method. Both the soundness of our modeling approach and the properties of the solution are verified by extensive experiments. The simulations also show that the obtained solution achieves a Nash equilibrium, where neither the source node nor relay nodes can benefit more by unilaterally changing their own strategies.
{"title":"Analytical Scheduling for Selfishness Detection in OppNets Based on Differential Game","authors":"Yang Gao;Jun Tao;Zuyan Wang;Yifan Xu","doi":"10.1109/TNSM.2025.3535082","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3535082","url":null,"abstract":"Selfishness detection offers an effective way to mitigate the routing performance degradation caused by selfish behaviors in Opportunistic Networks but leads to extra network traffic and computational burden. Most existing efforts focus on designing the selfishness detection scheme by exploiting the behavioral records of nodes. In this paper, we investigate the scheduling strategy of selfishness detection during the message lifespan with the game theory. Specifically, the Long-term Selfishness Detection Game (LSDG) is proposed based on the differential game and the payoff in the integral form. LSDG formulates the selfishness detection and the node’s selfishness with the Ordinary Differential Equations (ODEs). Then, we prove the existence of the Nash equilibrium in LSDG and deduce the necessary conditions of the equilibrium strategy based on Pontryagin’s maximum principle. The recursion-based algorithm is designed in this paper to compute the numerical solution of the equilibrium strategy via Euler’s method. Both the soundness of our modeling approach and solution properties are verified by extensive experiments. The simulations also show that the obtained solution can achieve the Nash equilibrium, where neither the source node nor relay nodes can benefit more by solely changing their own strategies.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"270-283"},"PeriodicalIF":4.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-20 | DOI: 10.1109/TNSM.2025.3531989
Tamás Lévai;Balázs Vass;Gábor Rétvári
Novel telecommunication systems build on a cloudified architecture running softwarized network services as disaggregated virtual network functions (VNFs) on commercial off-the-shelf (COTS) hardware to reduce costs and improve flexibility. Given the stringent processing deadlines of modern applications, these systems are critically dependent on a closed-loop control algorithm to orchestrate the execution of the disaggregated components. At the moment, however, a formal model for implementing such real-time control loops is largely missing. In this paper, we introduce a new real-time VNF execution environment that runs entirely on COTS hardware. First, we define a comprehensive formal model that enables us to reason analytically about packet processing delays across disaggregated VNF processing chains. Then we integrate the model into a gradient-optimization control algorithm to provide optimal scheduling for real-time infocommunication services in a programmable way. We present experimental evidence that our model yields accurate delay estimates on a real software switch. We evaluate our control algorithm on multiple representative use cases using a software switch simulator. Our results show the algorithm drives the system to a real-time-capable state in just a few control periods, even for complex services.
{"title":"Programmable Real-Time Scheduling of Disaggregated Network Functions: A Theoretical Model","authors":"Tamás Lévai;Balázs Vass;Gábor Rétvári","doi":"10.1109/TNSM.2025.3531989","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3531989","url":null,"abstract":"Novel telecommunication systems build on a cloudified architecture running softwarized network services as disaggregated virtual network functions (VNFs) on commercial off-the-shelf (COTS) hardware to improve costs and flexibility. Given the stringent processing deadlines of modern applications, these systems are critically dependent on a closed-loop control algorithm to orchestrate the execution of the disaggregated components. At the moment, however, the formal model for implementing such real-time control loops is mostly missing. In this paper, we introduce a new real-time VNF execution environment that runs entirely on COTS hardware. First, we define a comprehensive formal model that enables us to reason about packet processing delays across disaggregated VNF processing chains analytically. Then we integrate the model into a gradient-optimization control algorithm to provide optimal scheduling for real-time infocommunication services in a programmable way. We present experimental evidence that our model gives a proper delay estimation on a real software switch. We evaluate our control algorithm on multiple representative use cases using a software switch simulator. Our results show the algorithm drives the system to a real-time capable state in just a few control periods even in case of complex services.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"485-498"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-Sensitive Networking (TSN) employs shaping mechanisms such as Time-Aware Shaping (TAS) and Cyclic Queuing and Forwarding (CQF), which depend heavily on precise time synchronization and complex Gate Control List (GCL) configurations, limiting their effectiveness in large-scale mixed-traffic networks such as those in vehicular systems. In response, the IEEE 802.1Qcr protocol introduces the Asynchronous Traffic Shaping (ATS) mechanism, based on Urgency-Based Schedulers (UBS), to asynchronously address diverse traffic needs and ensure low and predictable latency. Nonetheless, no traffic scheduling algorithm exists that can be directly applied to ATS shapers in generic large-scale traffic scenarios to solve for fixed end-to-end (E2E) delay constraints and the number of priority queues. In this paper, we propose an urgency-based fast flow scheduling algorithm (UBFS) to address this issue. UBFS leverages domain-specific optimization strategies focused on traffic delay urgency, inspired by greedy algorithms, for priority allocation across hops and flows, complemented by preprocessing for scenario solvability and dynamic verification to ensure scheduling feasibility. We benchmark UBFS against a baseline method in terms of both scalability and solution quality on typical network topologies and demonstrate that UBFS achieves more rapid scheduling, within seconds, across linear, ring, and star topologies. Notably, UBFS significantly outperforms the baseline algorithm in scheduling efficiency in mixed and large-scale traffic environments, scheduling a larger number of flows. UBFS also reduces time costs by 2-10 times in delay-sensitive environments and by more than 10 times in large-scale scenarios, effectively balancing time efficiency, performance, and scalability, thereby enhancing its applicability in real-world industrial settings.
{"title":"Priority-Dominated Traffic Scheduling Enabled ATS in Time-Sensitive Networking","authors":"Lihui Zhang;Gang Sun;Rulin Liu;Wei Quan;Hongfang Yu;Dusit Niyato","doi":"10.1109/TNSM.2025.3532080","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3532080","url":null,"abstract":"Time-Sensitive Networking (TSN) employs shaping mechanisms such as Time-Aware Shaping (TAS) and Cyclic Queuing and Forwarding (CQF), which depend heavily on precise time synchronization and complex Gate Control Lists (GCL) configurations, limiting their effectiveness in large-scale mixed traffic networks like those in vehicular systems. In response, IEEE 802.1Qcr protocol introduces the Asynchronous Traffic Shaping (ATS) mechanism, based on Urgency-Based Schedulers (UBS), to asynchronously address diverse traffic needs and ensure low and predictable latency. Nonetheless, no traffic scheduling algorithm exists that can be directly applied to ATS shapers in generic large-scale traffic scenarios to solve for fixed end-to-end (E2E) delay constraints and the number of priority queues.In this paper, we propose an urgency-based fast flow scheduling algorithm (UBFS) to address the issue. UBFS leverages domain-specific optimizing strategies with a focus on traffic delay urgency inspired by greedy algorithm for priority allocation across hops and flows, complemented by preprocessing for scenario solvability and dynamic verification to ensure scheduling feasibility. We benchmark UBFS against the method with both scalability and solution quality in typical network topology and demonstrate that UBFS achieves more rapid scheduling within seconds across linear, ring, and star topologies. Notably, UBFS significantly outperforms the baseline algorithm in scheduling efficiency in mixed and large-scale traffic environments, scheduling a larger number of flows. UBFS also reduces time costs by 2-10 times in delay-sensitive environments and by more than 10 times in large-scale scenarios, effectively balancing time efficiency, performance and scalability, thereby enhancing its applicability in real-world industrial settings.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"470-484"},"PeriodicalIF":4.7,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-14 | DOI: 10.1109/TNSM.2025.3529471
Zhengran Tian;Hao Wang;Zhi Li;Ziyu Niu;Xiaochao Wei;Ye Su
As data continues to grow at an unprecedented rate and informationization accelerates, concerns over data privacy have become more prominent. In image classification tasks, the challenge of insufficient labeled data is common. Transfer learning, an effective and important machine learning method, can address this issue by leveraging knowledge from the source domain to enhance performance in the target domain. However, existing privacy-preserving transfer learning schemes continue to face challenges related to low security and multiple rounds of communication. In this work, we design a three-party privacy-preserving transfer learning protocol based on the Joint Distribution Adaptation (JDA) algorithm, which ensures malicious security under an honest-majority model. To realize this protocol, we design a series of constant-round sub-protocols, including distributed computation of eigenvalues and eigenvectors based on replicated secret sharing techniques. Compared to existing work, our protocol requires fewer rounds and satisfies malicious security. We provide formal security proofs for the designed protocol and assess its performance on real datasets. Our protocol for computing the eigenvalues of matrices of a given dimension is approximately 2.5 times faster than existing methods. The experimental results demonstrate both the security and effectiveness of the proposed approach.
{"title":"MDTL: Maliciously Secure Distributed Transfer Learning Based on Replicated Secret Sharing","authors":"Zhengran Tian;Hao Wang;Zhi Li;Ziyu Niu;Xiaochao Wei;Ye Su","doi":"10.1109/TNSM.2025.3529471","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3529471","url":null,"abstract":"As data continues to grow at an unprecedented rate and informationization accelerates, concerns over data privacy have become more prominent. In image classification tasks, the challenge of insufficient labeled data is common. Transfer learning, an effective and important machine learning method, can address this issue by leveraging knowledge from the source domain to enhance performance in the target domain. However, existing privacy-preserving transfer learning schemes continue to face challenges related to low security and multiple rounds of communication. In the following works, we design a three-party privacy-preserving transfer learning protocol based on the Joint Distributed Adaptation (JDA) algorithm, which ensures malicious security under an honest majority model. To realize this protocol, we designed a series of sub-protocols for constant-round communication, including distributed solving of eigenvalues and eigenvectors based on replicated secret sharing techniques. Compared to existing work, our protocol requires fewer rounds and satisfies malicious security. We provide formal security proofs for the designed protocol and assess its performance using real datasets. Our protocol for computing the eigenvalues of matrices in a given dimension is approximately 2.5 times faster than existing methods. The results of the experiments demonstrate both the security and effectiveness of the proposed approach.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"877-891"},"PeriodicalIF":4.7,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated Learning is an approach that enables multiple devices to collectively train a shared model without sharing raw data, thereby preserving data privacy. However, federated learning systems are vulnerable to data-poisoning attacks during the training and updating stages. Three data-poisoning attacks—label flipping, feature poisoning, and VagueGAN—are tested on FL models across one out of ten clients using the CIC and UNSW datasets. For label flipping, we randomly modify labels of benign data; for feature poisoning, we alter highly influential features identified by the Random Forest technique; and for VagueGAN, we generate adversarial examples using Generative Adversarial Networks. Adversarial samples constitute a small portion of each dataset. In this study, we vary the percentages by which adversaries can modify datasets to observe their impact on the Client and Server sides. Experimental findings indicate that label flipping and VagueGAN attacks do not significantly affect server accuracy, as they are easily detectable by the Server. In contrast, feature poisoning attacks subtly undermine model performance while maintaining high accuracy and attack success rates, highlighting their subtlety and effectiveness. Therefore, feature poisoning attacks manipulate the server without causing a significant decrease in model accuracy, underscoring the vulnerability of federated learning systems to such sophisticated attacks. To mitigate these vulnerabilities, we explore a recent defensive approach known as Random Deep Feature Selection, which randomizes server features with varying sizes (e.g., 50 and 400) during training. This strategy has proven highly effective in minimizing the impact of such attacks, particularly on feature poisoning.
{"title":"Federated Learning Under Attack: Exposing Vulnerabilities Through Data Poisoning Attacks in Computer Networks","authors":"Ehsan Nowroozi;Imran Haider;Rahim Taheri;Mauro Conti","doi":"10.1109/TNSM.2025.3525554","DOIUrl":"https://doi.org/10.1109/TNSM.2025.3525554","url":null,"abstract":"Federated Learning is an approach that enables multiple devices to collectively train a shared model without sharing raw data, thereby preserving data privacy. However, federated learning systems are vulnerable to data-poisoning attacks during the training and updating stages. Three data-poisoning attacks—label flipping, feature poisoning, and VagueGAN—are tested on FL models across one out of ten clients using the CIC and UNSW datasets. For label flipping, we randomly modify labels of benign data; for feature poisoning, we alter highly influential features identified by the Random Forest technique; and for VagueGAN, we generate adversarial examples using Generative Adversarial Networks. Adversarial samples constitute a small portion of each dataset. In this study, we vary the percentages by which adversaries can modify datasets to observe their impact on the Client and Server sides. Experimental findings indicate that label flipping and VagueGAN attacks do not significantly affect server accuracy, as they are easily detectable by the Server. In contrast, feature poisoning attacks subtly undermine model performance while maintaining high accuracy and attack success rates, highlighting their subtlety and effectiveness. Therefore, feature poisoning attacks manipulate the server without causing a significant decrease in model accuracy, underscoring the vulnerability of federated learning systems to such sophisticated attacks. To mitigate these vulnerabilities, we explore a recent defensive approach known as Random Deep Feature Selection, which randomizes server features with varying sizes (e.g., 50 and 400) during training. This strategy has proven highly effective in minimizing the impact of such attacks, particularly on feature poisoning.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"822-831"},"PeriodicalIF":4.7,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, Computing Force Networks (CFNs) have emerged to deeply integrate and flexibly schedule multi-layer, multi-domain, distributed, and heterogeneous computing force resources. CFNs build a resources trading platform between consumers and providers, facilitating efficient resource sharing. Resources trading is therefore an important issue, but it faces several challenges. First, because large- and small-scale resource providers are distributed over a wide area and the number of consumers is larger than in edge/cloud computing scenarios, the credibility of consumers and providers is hard to guarantee. Second, due to market monopolies by large resource providers, fixed pricing strategies, and information asymmetry, both consumers and providers exhibit a low willingness to engage in resources trading. To address these challenges, this paper proposes an incentive mechanism for trust-driven resources trading to guarantee trusted and efficient resources trading. We first design a trust guarantee scheme based on reputation evaluation, blockchain, and trust threshold setting. Then, the proposed incentive scheme can dynamically adjust prices and enable the platform to provide appropriate rewards based on providers’ classified types and contributions. We formulate an optimization problem aiming at maximizing the trading platform’s utility and obtain an optimal contract under individual rationality and incentive compatibility constraints. Simulation results verify the feasibility and effectiveness of our scheme, highlighting its potential to reshape the future of computing resource management, increase overall economic efficiency, and foster innovation and competitiveness in the digital economy.
{"title":"Incentive Mechanism Design for Trust-Driven Resources Trading in Computing Force Networks: Contract Theory Approach","authors":"Renchao Xie;Wen Wen;Wenzheng Wang;Qinqin Tang;Xiaodong Duan;Lu Lu;Tao Sun;Tao Huang;Fei Richard Yu","doi":"10.1109/TNSM.2024.3490734","DOIUrl":"https://doi.org/10.1109/TNSM.2024.3490734","url":null,"abstract":"Recently, Computing Force Networks (CFNs) have emerged to deeply integrate and flexibly schedule multi-layer, multi-domain, distributed, and heterogeneous computing force resources. CFNs build a resources trading platform between consumers and providers, facilitating efficient resource sharing. Therefore, resources trading is an important issue but it faces some challenges. Firstly, because all kinds of large-scale and small-scale resource providers are distributed in a wide area and the number of consumers is larger compared with edge/cloud computing scenarios, the credibility of consumers and providers is hard to guarantee. Secondly, due to market monopolies by large resource providers, fixed pricing strategies, and information asymmetry, both consumers and providers exhibit a low willingness to engage in resources trading. To solve these challenges, the paper proposes an incentive mechanism for trust-driven resources trading to guarantee trusted and efficient resources trading. We first design a trust guarantee scheme based on reputation evaluation, blockchain, and trust threshold setting. Then, the proposed incentive scheme can dynamically adjust prices and enable the platform to provide appropriate rewards based on providers’ classified types and contributions. We formulate an optimization problem aiming at maximizing the trading platform’s utility and obtaining an optimal contract based on individual rationality and incentive compatible constraints. Simulation results verify the feasibility and effectiveness of our scheme, highlighting its potential to reshape the future of computing resource management, increase overall economic efficiency, and foster innovation and competitiveness in the digital economy.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"22 1","pages":"618-634"},"PeriodicalIF":4.7,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143621593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}