Weighted Sum-Rate Maximization in Rate-Splitting MISO Downlink Systems
Anh-Tien Tran; Thanh Phung Truong; Dongwook Won; Nhu-Ngoc Dao; Sungrae Cho
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5522-5538
Pub Date: 2025-12-18. DOI: 10.1109/TNSE.2025.3645935

Rate-splitting multiple access (RSMA) and successive interference cancellation (SIC) are essential approaches in next-generation communication systems, boosting spectral efficiency by effectively managing and mitigating interference among multiple signals. However, without a separate SIC constraint per user, it is difficult to guarantee that users can distinguish the common message from the remaining non-decoded private messages. This imperfect cancellation leaves residual interference from the common stream in the received signal. This work investigates weighted sum-rate (WSR) maximization in a single-layer RSMA multiple-input single-output (MISO) downlink network with explicit SIC constraints. In particular, we propose an approach that first allocates power and precoding vectors for the streams using deep reinforcement learning (DRL), and then determines the user-specific shares of the common rate that satisfy each user's minimum-rate requirement by solving a linear programming problem. Simulation results demonstrate the superiority of the proposed DRL framework over SDMA and other DRL approaches in terms of spectral efficiency, yielding an improvement of approximately 30% in WSR in several scenarios.
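The second stage described above, splitting the common rate among users under minimum-rate requirements, is a small linear program. A minimal sketch with SciPy, where the names `w`, `r_private`, `r_common`, and `r_min` are illustrative rather than the paper's notation:

```python
import numpy as np
from scipy.optimize import linprog

def split_common_rate(w, r_private, r_common, r_min):
    """Allocate shares c_k of the common rate to users so that each
    user's total rate r_private[k] + c_k meets its minimum rate,
    maximizing the weighted sum  sum_k w[k] * c_k  (a linear program).
    Names and shapes here are illustrative, not the paper's notation."""
    K = len(w)
    # linprog minimizes, so negate the weights to maximize.
    c_obj = -np.asarray(w, dtype=float)
    # Single inequality: sum_k c_k <= r_common.
    A_ub = np.ones((1, K))
    b_ub = [r_common]
    # Lower bounds: c_k >= max(0, r_min[k] - r_private[k]).
    lb = np.maximum(0.0, np.asarray(r_min, float) - np.asarray(r_private, float))
    bounds = [(float(l), None) for l in lb]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        return None  # infeasible: common rate too small to cover the rate gaps
    return res.x
```

With weights `[2, 1]`, private rates `[1.0, 0.5]`, a common rate of 2.0, and minimum rates `[1.0, 1.0]`, user 1 receives exactly the 0.5 it needs and the remaining 1.5 goes to the higher-weighted user 0.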
Routing in Hierarchical Hybrid Satellite Networks: A Survey
Zeyu Liu; Shuai Wang; Rui Zhang; Zhe Song; Gaofeng Pan
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 4883-4911
Pub Date: 2025-12-18. DOI: 10.1109/TNSE.2025.3645802

With the continued evolution of satellite communication technologies and hierarchical hybrid satellite networks (HHSNs), modern communication satellites have transformed from single-function relay nodes into core hubs enabling global interconnectivity. The dynamic topology, open-channel environment, and resource limitations inherent to HHSNs expose satellite routing protocols to the challenges of the Reliability-Security-Efficiency (RSE) trilemma. In this paper, we provide a systematic review of advances in HHSN routing research, analyzing core technical challenges through the lens of typical application scenarios while highlighting the divergent performance of various solutions under the RSE trilemma. To the best of our knowledge, we are the first to analyze the performance of HHSN routing protocols within the framework of RSE theory; existing reviews either treat routing merely as a component of broader surveys or lack analysis based on the RSE trilemma. Building on our review of HHSN routing protocols, we discuss topology description and security aspects of HHSNs and propose potential directions for future HHSN routing research.
Subversion-Resistant Autonomous Path Proxy Re-Encryption With Secure Deduplication for IoMT
Jiasheng Chen; Zhenfu Cao; Lulu Wang; Jiachen Shen; Zehui Xiong; Xiaolei Dong
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5551-5567
Pub Date: 2025-12-18. DOI: 10.1109/TNSE.2025.3645991

The Internet of Medical Things (IoMT) consists of many resource-constrained medical devices that provide patients with medical services anytime and anywhere. In such an environment, the collection and sharing of medical records raise serious security concerns. Although various cryptographic schemes have been proposed, most fail to address two critical threats simultaneously: (i) sensitive-data exposure caused by external cloud servers and/or open network environments; and (ii) algorithm substitution attacks (ASAs) by internal adversaries. Furthermore, when it is inconvenient for data owners (i.e., delegators) to process their data themselves, a more fine-grained way to delegate encryption rights is desirable. To tackle these issues, we propose a subversion-resistant autonomous path proxy re-encryption scheme with an equality test function (SRAP-PRET). Specifically, our scheme allows the delegator to create a multi-hop delegation path in advance according to the delegator's preferences, effectively preventing unauthorized access. By deploying a cryptographic reverse firewall zone, SRAP-PRET addresses the information leakage caused by adversaries mounting ASAs. Additionally, SRAP-PRET supports secure deduplication and efficient data decryption. Security analysis shows that SRAP-PRET resists ASAs and is secure against chosen-plaintext attacks. Performance evaluations demonstrate that SRAP-PRET offers enhanced security properties without sacrificing efficiency.
UAV-Assisted Task Offloading and Resource Allocation in Internet of Vehicles: An Integration of Digital Twin and Generative AI
Xing Wang; Chao He; Wenhui Jiang; Wanting Wang; Leida Li; Xin Xie
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5038-5055
Pub Date: 2025-12-18. DOI: 10.1109/TNSE.2025.3645844

With the increasing deployment of environment-aware services in the Internet of Vehicles (IoV), vehicles are required to execute multiple computational tasks in real time. However, resource allocation and task offloading in unmanned aerial vehicle (UAV)-assisted IoV systems remain challenging due to the growing number of vehicle terminals (VTs), potential privacy leakage, and resource-constrained edge devices. This paper proposes a digital twin (DT) and generative artificial intelligence (GAI)-powered hierarchical aerial-ground cooperative architecture (DTG-HACA) that achieves dynamic resource optimization through a three-layer framework. The DT layer enables real-time synchronization of vehicle/UAV states and simulated trajectory planning. The high-altitude platform (HAP) layer provides low-latency offloading channels through stratospheric wide-area coverage and solar-powered endurance, while the physical entity layer performs energy-efficient edge computing via UAV-vehicle-roadside unit (RSU) collaboration. For UAV trajectory optimization, we introduce a multi-agent deep deterministic policy gradient algorithm with improved prioritized experience replay (MADDPG-IPER) that minimizes communication overhead and energy consumption while integrating DT-simulated trajectory planning. For the joint challenge of edge caching and task offloading under privacy-preservation constraints, we develop a federated deep reinforcement learning (FDRL)-based generative adversarial network (FDRL-GAN) algorithm. This solution leverages GAI to predict task demands for cache-hit-rate optimization, while employing FDRL for distributed privacy-preserving decision-making without raw-data sharing, thereby approaching globally optimal resource allocation. Extensive simulation experiments confirm that our proposed scheme demonstrates significant advantages over existing benchmark algorithms across five critical performance metrics: training stability, computational capacity, task offloading efficiency, cache hit rate, and energy consumption.
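The FDRL component's privacy property comes from exchanging model parameters instead of raw data. A minimal sketch of the aggregation step, assuming plain federated averaging (a common choice; the paper's exact aggregation rule is not specified here):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One federated-averaging round: aggregate per-client model
    parameter lists, weighted by local dataset size, so raw data
    never leaves the client (illustrative of FDRL-style aggregation)."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    # For each parameter tensor, take the size-weighted average across clients.
    return [
        sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

For example, averaging one-tensor models `[1, 1]` and `[3, 3]` with dataset sizes 1 and 3 yields `[2.5, 2.5]`, since the larger client contributes three quarters of the update.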
Joint Online Optimization of Power Allocation and Task Scheduling for Data Offloading in LEO Satellite Networks
Lijun He; Zheyuan Li; Juncheng Wang; Ziye Jia; Yanting Wang; Chau Yuen; Zhu Han
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 5018-5037
Pub Date: 2025-12-17. DOI: 10.1109/TNSE.2025.3645282

The rapid expansion of Low Earth Orbit (LEO) satellites inevitably leads to explosive growth of space data in LEO Satellite Networks (LSNs). The stochastic nature of space-data arrivals and the intrinsically time-varying satellite-ground links in LSNs pose significant challenges for offloading substantial volumes of space data from LSNs to ground stations. To overcome these challenges, we systematically study the joint online optimization of power allocation and task scheduling for data offloading in LSNs. First, we remove the mean-rate queue-stability constraint from the formulated joint online optimization problem and leverage Lyapunov optimization to decouple it into a set of per-time-slot subproblems. Each subproblem is then divided into a task scheduling problem and a power allocation problem. Subsequently, we derive a closed-form optimal solution for the power allocation problem and a multi-armed-bandit-based quasi-optimal solution for the task scheduling problem. Furthermore, we extend these solutions to address the original joint online optimization problem. Through theoretical analysis, we show that the proposed algorithms consistently attain sublinear time-averaged regret. Extensive simulation results demonstrate that the proposed algorithms outperform other benchmarks.
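The bandit-based scheduling step can be illustrated with a standard UCB1 index policy, where each arm might correspond to a candidate ground station. This is a sketch of the general technique; the paper's actual bandit algorithm and regret analysis may differ:

```python
import math

class UCB1:
    """Minimal UCB1 bandit: pick the arm maximizing the empirical mean
    plus an exploration bonus sqrt(2 ln t / n_a). An illustrative
    stand-in for a bandit-based task-scheduling step."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        # Play each arm once before applying the confidence bound.
        for a, c in enumerate(self.counts):
            if c == 0:
                return a
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n
```

A scheduler would call `select()` each slot, observe the achieved offloading reward, and feed it back via `update()`; the logarithmic bonus is what yields sublinear regret for UCB1.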
Multi-Source Localization Based on Graph Representation Learning and Bayesian Optimization
Zhangfei Zhou; Youguo Wang; Qiqing Zhai; Jun Yan
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 4815-4832
Pub Date: 2025-12-16. DOI: 10.1109/TNSE.2025.3644931

Source localization, the inverse problem of diffusion processes, is crucial for tracking social rumors, identifying epidemic spreaders, and detecting computer viruses. Multi-source localization based on snapshot observations has garnered significant attention due to its low cost and ease of data acquisition. However, challenges such as ill-posedness and heavy dependence on diffusion models hinder effective solutions. Existing methods often rely on deterministic techniques that must search the entire graph space, struggle to effectively encode topological information, and are limited to a single diffusion model. To address these limitations, we propose Source Localization based on Representation Learning and Bayesian Optimization (SL-RLBO), a generic framework that quantifies source uncertainty via Monte Carlo simulation. Specifically, we first develop a novel algorithm to simultaneously estimate the diffusion parameters and time from a single snapshot. We then use a multi-source reverse infection algorithm to identify candidate sources and leverage graph representation learning to capture latent topological features. Finally, we formulate an objective function applicable to various diffusion models and optimize it efficiently using Bayesian optimization. Extensive experiments and case studies on two synthetic and six real-world datasets show that SL-RLBO consistently outperforms four state-of-the-art baselines across different diffusion models, reducing error distance by an average of 18.94%.
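The Monte Carlo idea of quantifying source uncertainty can be sketched as scoring a candidate source set by how closely simulated diffusions reproduce the observed snapshot. The SI simulator, Jaccard objective, and all parameter values below are assumptions for illustration; the paper's objective and diffusion models differ in detail:

```python
import random

def simulate_si(adj, sources, beta, steps, rng):
    """One SI diffusion run: each infected node infects each susceptible
    neighbour with probability beta per step. Returns the infected set."""
    infected = set(sources)
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

def snapshot_likelihood(adj, candidates, observed, beta=0.3, steps=3,
                        runs=200, seed=0):
    """Monte Carlo score of a candidate source set: average Jaccard
    similarity between simulated infection sets and the observed
    snapshot. Higher means the candidates explain the snapshot better."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(runs):
        sim = simulate_si(adj, candidates, beta, steps, rng)
        score += len(sim & observed) / len(sim | observed)
    return score / runs
```

On a path graph 0-1-2-3-4 with observed snapshot {0, 1, 2}, a candidate at node 0 scores well above a candidate at node 4, since simulations from 4 always infect nodes outside the snapshot.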
Adaptive Large Language Model for Task Orchestration in 6G Space-Air-Ground Integrated Computing Power Networks
Wang Li; Fengxiao Tang; Ming Zhao; Masako Omachi; Nei Kato
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 4847-4862
Pub Date: 2025-12-15. DOI: 10.1109/TNSE.2025.3644352

With the rapid deployment of 5G and the advancement of 6G research, traditional network architectures face challenges in meeting the demands of massive data transmission and low-latency computing. Computing Power Networks (CPNs) integrate communication and computation resources to support emerging applications efficiently. Meanwhile, the Space-Air-Ground Integrated Network (SAGIN), a core 6G architecture, provides global coverage and multi-layer coordination. This paper proposes SAGIN-CPN, a heterogeneous network architecture that combines SAGIN and CPN, and introduces an adaptive Large Language Model (LLM)-based task orchestration scheme (TOLLM). TOLLM exploits the strengths of LLMs in dynamic environment perception, reasoning, and decision making. By incorporating a multi-objective optimization strategy, it enables intelligent scheduling of heterogeneous nodes in SAGIN-CPN and achieves efficient joint optimization of task latency and energy consumption. Simulation results validate the effectiveness of the proposed method in enhancing Quality of Experience (QoE). This work presents a generalizable and intelligent solution for large-scale task management in future 6G networks.
Generative Chaotic Hybrid Multi-Objective Optimization Approach for Satellite-UAV Cognitive Radio Networks
Yanheng Liu; Ruichen Xu; Dalin Li; Jinliang Gao; Rui Ma; Hao Wu; Zemin Sun; Jiahui Li; Geng Sun
IEEE Transactions on Network Science and Engineering, vol. 13, pp. 4760-4778
Pub Date: 2025-12-15. DOI: 10.1109/TNSE.2025.3644304

In this paper, we consider a cognitive radio satellite-UAV downlink communication system, which leverages cognitive radio technology to optimize spectrum utilization. Specifically, the scenario involves a low Earth orbit (LEO) satellite sharing spectrum with a UAV swarm, where both systems communicate with ground users simultaneously. The coexistence of satellite and UAV downlink channels introduces significant interference, making it challenging to maintain both communication efficiency and energy efficiency. We formulate this as a multi-objective optimization problem (MOP) that aims to maximize the total transmission rates of both satellite and UAV users while minimizing the energy consumption of the UAV swarm. This MOP is NP-hard owing to its large-scale decision variables and conflicting objectives such as interference mitigation and energy efficiency. To tackle these challenges, we propose a generative chaotic hybrid multi-objective hiking optimization algorithm (GCHMHOA). The algorithm incorporates several enhancements: chaos-based population initialization for better global exploration, generative population evolution using diffusion models to maintain diversity, and genetic operators to handle sequentially encoded decision variables. Simulation results demonstrate that the proposed GCHMHOA outperforms various state-of-the-art benchmark algorithms and achieves superior convergence and solution diversity. Specifically, GCHMHOA achieves approximately 48% higher satellite transmission rate, 11% higher UAV-swarm transmission rate, and 4% lower energy consumption than the best baseline algorithm.
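Chaos-based population initialization is commonly realized with a logistic map, which spreads initial candidates over the search space more evenly than naive random draws. A minimal sketch; the specific map, `x0`, and `r` below are assumptions, not taken from the paper:

```python
import numpy as np

def chaotic_init(pop_size, dim, lower, upper, x0=0.7, r=4.0):
    """Logistic-map chaotic initialization: iterate x <- r*x*(1-x) to
    generate a sequence in (0, 1), then scale it into the search
    bounds. x0 must avoid the map's fixed points (e.g. 0, 0.5, 0.75)."""
    seq = np.empty(pop_size * dim)
    x = x0
    for i in range(seq.size):
        x = r * x * (1.0 - x)  # logistic map, chaotic at r = 4
        seq[i] = x
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    return lower + seq.reshape(pop_size, dim) * (upper - lower)
```

Each row of the returned array is one candidate solution, ready to be evolved by the algorithm's generative and genetic operators.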
After natural disasters, such as earthquakes or tsunamis, terrestrial communication networks often become inoperative due to infrastructure collapse. Simultaneously, damage to roads and transportation systems inevitably isolates different parts of the affected area, making it challenging for emergency vehicles to reach critical locations and deploy mobile Base Stations (BSs). In such scenarios, Unmanned Aerial Vehicles (UAVs) serve as a flexible and efficient solution. With the capability to establish temporary communication links, UAVs can provide emergency coverage for ground entities. In this paper, we propose a Dynamic Priority-based UAV-assisted Vehicular Ad-hoc Network (VANET) Routing (DPUVR) protocol for post-disaster message transmission. Specifically, DPUVR is a trajectory-based method for controlling the direction of message forwarding. DPUVR utilizes a multi-attribute decision-making method to adaptively evaluate the message delivery capability of candidate nodes (in this paper, nodes refer to both UAVs and vehicles), taking into account trajectory similarity, surplus energy, link survival time, remaining distance cost and queuing delay. In addition, we propose a dynamic prioritization delivery model. It evaluates the priority of messages in node buffers, selects appropriate candidate nodes and then chooses the best relay for message forwarding to trigger timely and efficient message delivery. Extensive simulation results show that DPUVR significantly outperforms other baseline methods in terms of delivery ratio, overhead, average delivery latency and average buffering time.
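The multi-attribute relay evaluation described in this abstract could be sketched as a weighted score over normalized attributes. The attribute names, weights, and candidate values below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of multi-attribute candidate scoring in the spirit
# of DPUVR's relay selection; weights and attribute values are invented
# for illustration only.

def score_candidate(attrs, weights):
    """Weighted sum of attributes normalized to [0, 1].

    attrs: dict mapping attribute name -> (value, higher_is_better)
    weights: dict mapping attribute name -> weight (weights sum to 1)
    """
    total = 0.0
    for name, (value, higher_is_better) in attrs.items():
        v = value if higher_is_better else 1.0 - value  # invert cost-type attributes
        total += weights[name] * v
    return total

# Example: compare a UAV and a vehicle as candidate relays.
weights = {"trajectory_similarity": 0.3, "surplus_energy": 0.2,
           "link_survival": 0.2, "distance_cost": 0.2, "queuing_delay": 0.1}

uav = {"trajectory_similarity": (0.9, True), "surplus_energy": (0.6, True),
       "link_survival": (0.8, True), "distance_cost": (0.3, False),
       "queuing_delay": (0.2, False)}
vehicle = {"trajectory_similarity": (0.5, True), "surplus_energy": (0.9, True),
           "link_survival": (0.4, True), "distance_cost": (0.5, False),
           "queuing_delay": (0.1, False)}

best = max([("uav", uav), ("vehicle", vehicle)],
           key=lambda kv: score_candidate(kv[1], weights))
print(best[0])
```

In this toy setup the UAV wins on trajectory similarity and link survival, which outweigh the vehicle's energy and delay advantages.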
{"title":"A Novel UAV-Assisted VANET Routing Protocol for Post-Disaster Emergency Communications","authors":"Zhijie Fan;Mansi Zhang;Yue Cao;Zilong Liu;Omprakash Kaiwartya;Yasir Javed;Faisal Bashir Hussain","doi":"10.1109/TNSE.2025.3644432","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3644432","url":null,"abstract":"After natural disasters, such as earthquakes or tsunamis, terrestrial communication networks often become inoperative due to infrastructure collapse. Simultaneously, damage to roads and transportation systems inevitably isolates different parts of the affected area, making it challenging for emergency vehicles to reach critical locations and deploy mobile Base Stations (BSs). In such scenarios, Unmanned Aerial Vehicles (UAVs) serve as a flexible and efficient solution. With the capability to establish temporary communication links, UAVs can provide emergency coverage for ground entities. In this paper, we propose a Dynamic Priority-based UAV-assisted Vehicular Ad-hoc Network (VANET) Routing (DPUVR) protocol for post-disaster message transmission. Specifically, DPUVR is a trajectory-based method for controlling the direction of message forwarding. DPUVR utilizes a multi-attribute decision-making method to adaptively evaluate the message delivery capability of candidate nodes (in this paper, nodes refer to both UAVs and vehicles), taking into account trajectory similarity, surplus energy, link survival time, remaining distance cost and queuing delay. In addition, we propose a dynamic prioritization delivery model. It evaluates the priority of messages in node buffers, selects appropriate candidate nodes and then chooses the best relay for message forwarding to trigger timely and efficient message delivery. 
Extensive simulation results show that DPUVR significantly outperforms other baseline methods in terms of delivery ratio, overhead, average delivery latency and average buffering time.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4863-4882"},"PeriodicalIF":7.9,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-15 DOI: 10.1109/TNSE.2025.3644438
Ziqi Chen;Jun Du;Chunxiao Jiang;Xiangwang Hou;Zhu Han;H. Vincent Poor
With the rapid development of the low-altitude economy, privacy protection has become a significant challenge in unmanned aerial vehicle (UAV) networks. Federated learning (FL) provides a concrete framework for addressing privacy concerns in low-altitude networks by enabling training without exposing raw data. However, there remains a risk of data leakage during aggregation of parameter updates from local models in the FL framework. Existing approaches have introduced differential privacy (DP) to mitigate this issue, but adding DP noise can degrade the performance of the training process. To further enhance the efficiency and accuracy of model training, we propose a novel framework based on DP and adaptive sparsity for FL, named DP-FedAS. On the one hand, this framework reduces communication and training overhead through an adaptive sparsity module. On the other hand, it mitigates privacy errors caused by DP noise by reducing the noise introduced during global aggregation via sparsity, thereby alleviating the performance degradation. Furthermore, we provide detailed theoretical proofs for the convergence of the proposed algorithm and the privacy guarantees it offers. Simulation results validate that DP-FedAS improves global model accuracy by 20% and reduces communication cost by 23%, while maintaining a robust level of privacy protection. The proposed framework strikes an optimal balance among communication efficiency, privacy preservation, and model performance.
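The core intuition above, that sparsifying updates reduces how much DP noise enters the global model, can be sketched in a few lines. This is a generic top-k + Gaussian-mechanism illustration; the function names, clipping norm, noise scale, and k are assumptions, not the DP-FedAS algorithm itself.

```python
# Illustrative sketch of pairing top-k sparsification with Gaussian DP
# noise in federated averaging; all parameters are hypothetical and not
# taken from the DP-FedAS paper.
import numpy as np

def sparsify_top_k(update, k):
    """Keep only the k largest-magnitude coordinates of an update."""
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

def clip(update, clip_norm):
    """Scale the update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_sparse_aggregate(updates, k, clip_norm, sigma, rng):
    """Average sparsified, clipped client updates and add Gaussian noise.

    Sparsity shrinks the effective support of each update, so fewer
    noisy coordinates propagate into the global model -- the intuition
    behind combining DP with adaptive sparsity.
    """
    processed = [clip(sparsify_top_k(u, k), clip_norm) for u in updates]
    mean = np.mean(processed, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / len(updates), size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(5)]
agg = dp_sparse_aggregate(updates, k=3, clip_norm=1.0, sigma=0.5, rng=rng)
print(agg.shape)
```

A real deployment would also track the privacy budget (epsilon, delta) consumed per round, which this sketch omits.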
{"title":"Differential Privacy-Based Adaptive Sparse Federated Learning in UAV Networks","authors":"Ziqi Chen;Jun Du;Chunxiao Jiang;Xiangwang Hou;Zhu Han;H. Vincent Poor","doi":"10.1109/TNSE.2025.3644438","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3644438","url":null,"abstract":"With the rapid development of the low-altitude economy, privacy protection has become a significant challenge in unmanned aerial vehicle (UAV) networks. Federated learning (FL) provides a concrete framework for addressing privacy concerns in low-altitude networks by enabling training without exposing raw data. However, there remains a risk of data leakage during aggregation of parameter updates from local models in the FL framework. Existing approaches have introduced differential privacy (DP) to mitigate this issue, but adding DP noise can degrade the performance of the training process. To further enhance the efficiency and accuracy of model training, we propose a novel framework based on DP and adaptive sparsity for FL, named DP-FedAS. On the one hand, this framework reduces communication and training overhead through an adaptive sparsity module. On the other hand, it mitigates privacy errors caused by DP noise by reducing the noise introduced during global aggregation via sparsity, thereby alleviating the performance degradation. Furthermore, we provide detailed theoretical proofs for the convergence of the proposed algorithm and the privacy guarantees it offers. Simulation results validate that DP-FedAS improves global model accuracy by 20% and reduces communication cost by 23%, while maintaining a robust level of privacy protection. 
The proposed framework strikes an optimal balance among communication efficiency, privacy preservation, and model performance.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"5128-5144"},"PeriodicalIF":7.9,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}