Pub Date: 2025-12-03 | DOI: 10.1109/TNSE.2025.3639629
Wanyu Xiang;Chen Han;Zhi Lin;Yusheng Li;Yifu Sun;Xin Lin
Joint communication and computing power network (JCCPN) has emerged as a promising architecture for 6G wireless networks owing to its low-latency communication and efficient computing services. However, existing works have not fully considered the spatiotemporal mismatch between computing power supply and practical traffic distribution, leading to failure risks or resource waste. Specifically, the mismatch arises from two aspects: 1) collaborative inference tasks and communication transmission tasks compete for the same data links; 2) fixed computing power allocation struggles to meet dynamic computing demand. This paper focuses on the mismatch problem in JCCPNs and formulates a joint optimization model for inference links and computing nodes. The joint optimization model is theoretically decoupled into two submodels, efficiently addressing the interdependencies between links and nodes. We then propose an inference-subgraph driven multi-agent deep reinforcement learning (IsMADRL) algorithm for JCCPN, consisting of two stages. In the first stage, we construct an inference subgraph based on an ordinal potential game (OPG) to separate computing and transmission data flows, safeguarding collaborative inference tasks. In the second stage, a multi-agent deep reinforcement learning (MADRL) framework is employed on the inference subgraph to allocate computing power dynamically, meeting the varying traffic distribution. Simulation results show that several MADRL architectures all exhibit excellent adaptability and effectiveness in complex JCCPNs.
{"title":"Inference-Subgraph Driven Multi-Agent DRL for Joint Resource Orchestration in Communication and Computing Power Network","authors":"Wanyu Xiang;Chen Han;Zhi Lin;Yusheng Li;Yifu Sun;Xin Lin","doi":"10.1109/TNSE.2025.3639629","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3639629","url":null,"abstract":"Joint communication and computing power network (JCCPN) has emerged as a promising architecture of the 6G wireless networks due to low latency communication and efficient computing services. However, the existing works have not fully considered the spatiotemporal mismatch between computing power supply and practical traffic distribution, leading to failure risks or resource waste. Specifically, the mismatch arises from two aspects, i.e., 1) the competition for the data links between collaborative inference tasks and communication transmission tasks; 2) the fixed computing power allocation struggles to meet dynamic computing demand. This paper focuses on the mismatch problem in JCCPNs, and formulated a joint optimization model for inference links and computing nodes. The joint optimization model was theoretically decoupled into two submodels, efficiently addressing interdependencies between links and nodes. Then, we proposed an inference-subgraph driven multi-agent deep reinforcement learning (IsMADRL) algorithm for JCCPN, consisting of two stages. At the first stage, we formulated an inference subgraph based on ordinal potential game (OPG) to separate computing and transmission data flows, ensuring collaborative inference tasks. At the second one, multi-agent deep reinforcement learning (MADRL) framework is employed on the inference-subgraph to allocate computing power dynamically, meeting the varying traffic distribution. Simulation results show that several MADRL architectures all exhibit excellent adaptability and effectiveness in complex JCCPNs.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4616-4635"},"PeriodicalIF":7.9,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated sensing and communication (ISAC) enables simultaneous sensing and data transmission with the assistance of unmanned aerial vehicles (UAVs) in emergency disaster relief and inspection scenarios. However, the impact of sensing uncertainty on communication performance has not been systematically investigated. In this paper, we propose a novel UAV-aided ISAC framework that explicitly accounts for the uncertain location sensing error (LSE). To characterize the LSE more realistically, we derive the Cramér-Rao bound (CRB) and use it as the variance parameter of the considered uncertain LSE models, instead of adopting the conventional unit-variance assumption. Then, we analytically reveal the inherent coupling between the LSE and the achievable communication rate. Considering three practical LSE distributions, namely ellipsoidal, Gaussian, and arbitrary distributions, we formulate three robust communication and sensing power allocation problems and develop tractable solutions using the $\mathcal{S}$-Procedure with alternating optimization ($\mathcal{S}$-AO) method, the Bernstein-type inequality with successive convex approximation (BI-SCA) method, and the conditional value-at-risk (CVaR) with AO (CVaR-AO) method. Simulation results validate the theoretical coupling, demonstrate the robustness of the proposed schemes, and reveal sensing-communication trade-offs, providing valuable insights for robust UAV-aided ISAC system design.
{"title":"Robust Power Allocation for UAV-Aided ISAC Systems With Uncertain Location Sensing Errors","authors":"Junchang Sun;Shuai Ma;Ruixin Yang;Hang Li;Youlong Wu;Tingting Yang;Gang Xu;Shiyin Li;Chengjie Gu","doi":"10.1109/TNSE.2025.3639596","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3639596","url":null,"abstract":"Integrated sensing and communication (ISAC) enables simultaneous sensing and data transmission with the assistance of unmanned aerial vehicles (UAVs) in emergency disaster relief and inspects scenarios. However, the impact of sensing uncertainty on communication performance has not been systematically investigated. In this paper, we propose a novel UAV-aided ISAC framework that explicitly accounts for the uncertainty location sensing error (LSE). To characterize LSE more realistically, we derive the Cramér-Rao bound (CRB) and use it as the variance parameter for the considered uncertainty LSE models, instead of adopting the conventional unit-variance assumption. Then, we analytically reveal the inherent coupling relationship between LSE and achievable communication rate. Considering three practical LSE distributions, namely, ellipsoidal, Gaussian, and arbitrary distributions, we formulate three robust communication and sensing power allocation problems and develop tractable solutions using the <inline-formula><tex-math>${mathcal {S}}$</tex-math></inline-formula>-Procedure with alternating optimization (<inline-formula><tex-math>${mathcal {S}}$</tex-math></inline-formula>-AO) method, Bernstein-type inequality with successive convex approximation (BI-SCA) method, and conditional value-at-risk (CVaR) with AO (CVaR-AO) method. Simulation results validate the theoretical coupling, demonstrate the robustness of the proposed schemes, and reveal sensing-communication trade-offs, providing valuable insights for robust UAV-aided ISAC system design.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"6402-6417"},"PeriodicalIF":7.9,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | DOI: 10.1109/TNSE.2025.3638895
Yanyi Nie;Fengyi Wang;Lingjie Fan;Yu Chen;Sheng Su;Yanbing Liu;Tao Lin;Chun Yang;Wei Wang
Achieving an optimal balance between public health and economic interests by accurately capturing the relationship between lockdown policies, epidemic outcomes, and economic costs is a significant challenge. Existing methods lack detailed simulation of individual behaviors, fail to respond promptly to unforeseen circumstances, and cannot ensure the long-term effectiveness of strategies, resulting in poor precision and adaptability. To address these issues, we propose an epidemic-evolutionary game co-evolution model. This model employs evolutionary game theory to describe the dynamic adjustments of individual mobility and regional management policies based on infection and economic costs, and utilizes a metapopulation model to capture population movement and epidemic spread. The microscopic Markov chain approach is used to describe the epidemic spread induced by population movement and to analyze Nash equilibria and evolutionarily stable strategies. Experimental results show that our model can intuitively reflect the complex relationship between individual mobility, regional management policies, infection rates, and economic costs. We find that the interests of governing agencies and individuals are aligned. Driven by economic costs, individuals are nevertheless inclined to work outside even in the face of high infection rates. Additionally, the model can identify stable optimal mobility strategies under different economic costs and determine the balance point between lockdown and opening, without predefining optimization objectives.
{"title":"The Health-Economy Trade-Off During the Global Pandemic","authors":"Yanyi Nie;Fengyi Wang;Lingjie Fan;Yu Chen;Sheng Su;Yanbing Liu;Tao Lin;Chun Yang;Wei Wang","doi":"10.1109/TNSE.2025.3638895","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3638895","url":null,"abstract":"Achieving an optimal balance between public health and economic interests by accurately capturing the relationship between lockdown policies, epidemic outcomes, and economic costs is a significant challenge. Existing methods lack detailed simulation of individual behaviors, fail to respond promptly to unforeseen circumstances, and cannot ensure the long-term effectiveness of strategies, resulting in poor precision and adaptability. To address these issues, we propose an epidemic-evolutionary game co-evolution model. This model employs evolutionary game theory to describe the dynamic adjustments of individual mobility and regional management policies based on infection and economic costs, and utilizes a metapopulation to capture population movement and epidemic spread. The microscopic Markov chain approach is utilized to describe epidemic spread induced by population movement and analyze Nash equilibrium and evolutionarily stable strategies. Experimental results show that our model can intuitively reflect the complex relationship between individual mobility, regional management policies, infection rates, and economic costs. We find that the interests of governing agencies and individuals are aligned. Influenced by economic costs, individuals are instead inclined to work outside in the face of high infection rates. Additionally, the model can identify stable optimal mobility travel strategies under different economic costs and determine the balance point between lockdown and opening, without predefining optimisation objectives.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"3611-3624"},"PeriodicalIF":7.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | DOI: 10.1109/TNSE.2025.3638854
Xuecai Bao;Fugui Liu;Fenghui Zhang;Kun Yang
Uncrewed aerial vehicles (UAVs) enhance mobile edge computing (MEC) coverage, but in remote emergency scenarios limited battery life and scarce spectrum exacerbate interference, link instability, and end-to-end delay. To address these issues, we propose a joint trajectory and delay-minimization framework that integrates laser-beamed wireless power transfer (LWPT) with UAV-enabled non-orthogonal multiple access (NOMA) MEC. First, we present a practical system architecture where a ground laser-powered beacon (PB) continuously recharges the UAV during flight, enabling persistent aerial patrols that concurrently offer wireless charging and computation services to ground users. Second, we formulate a unified mixed-integer nonconvex optimization problem that jointly optimizes the UAV trajectory, task offloading ratios, PB power distribution, and user-scheduling policy under energy-causality, NOMA interference, and flight-dynamics constraints. Third, to address the resulting non-convexity, we develop a hierarchical decomposition and alternating-optimization method: the original problem is decomposed into trajectory and resource-allocation subproblems and solved using convex approximations and efficient scheduling algorithms to obtain practical solutions. Fourth, extensive simulations demonstrate that the proposed LWPT-assisted NOMA UAV-MEC scheme substantially reduces total system delay while improving energy efficiency and throughput compared with conventional OMA-MEC baselines and five recent heuristic algorithms.
{"title":"Joint Trajectory and Resource Optimization for Delay Minimization of UAV-Enabled NOMA-MEC System With LWPT","authors":"Xuecai Bao;Fugui Liu;Fenghui Zhang;Kun Yang","doi":"10.1109/TNSE.2025.3638854","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3638854","url":null,"abstract":"Uncrewed aerial vehicles (UAVs) enhance mobile edge computing (MEC) coverage, but in remote emergency scenarios limited battery life and scarce spectrum exacerbate interference, link instability, and end-to-end delay. To address these issues, we propose a joint trajectory and delay-minimization framework that integrates laser-beamed wireless power transfer (LWPT) with UAV-enabled non-orthogonal multiple access (NOMA) MEC. First, we present a practical system architecture where a ground laser-powered beacon (PB) continuously recharges the UAV during flight, enabling persistent aerial patrols that concurrently offer wireless charging and computation services to ground users. Second, we formulate a unified mixed-integer nonconvex optimization problem that jointly optimizes the UAV trajectory, task offloading ratios, PB power distribution, and user-scheduling policy under energy-causality, NOMA interference, and flight-dynamics constraints. Third, to address the resulting non-convexity, we develop a hierarchical decomposition and alternating-optimization method: the original problem is decomposed into trajectory and resource-allocation subproblems and solved using convex approximations and efficient scheduling algorithms to obtain practical solutions. Fourth, extensive simulations demonstrate that the proposed LWPT-assisted NOMA UAV-MEC scheme substantially reduces total system delay while improving energy efficiency and throughput compared with conventional OMA-MEC baselines and five recent heuristic algorithms.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4669-4688"},"PeriodicalIF":7.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1109/TNSE.2025.3638785
Yukai Wang;Qisheng Huang;Long Shi;Zhe Wang;Shaoyong Guo;Hao Wang
Distributed renewable energy (DRE) systems, such as solar panels, wind turbines, and small-scale hydroelectric systems, are increasingly participating in electricity markets. The unpredictable nature of renewable energy has a significant impact on the strategic offering decisions of DRE producers in two-settlement electricity markets. Furthermore, small-scale DRE producers face challenges, such as minimum size threshold requirements, that prevent them from participating in wholesale electricity markets. Driven by these issues, this work proposes a blockchain-aided coalitional game framework to enable cooperative renewable offering strategies of distributed producers, wherein these producers are incentivized to form a grand coalition to participate in electricity markets and share real-time balancing risks. Moreover, it is verified that the grand coalition is optimal for maximizing the total profit of the producers, indicating the benefit of cooperation. Obtaining the core of the coalition is challenging due to its huge computational complexity. Nevertheless, a closed-form profit allocation mechanism is constructed and proved to be in the core of the coalition. This indicates that none of these producers has an incentive to leave the grand coalition. Furthermore, we design a smart contract to automate the coalition formation and profit allocation processes of DRE producers on the blockchain. Finally, numerical studies are conducted to validate the established theoretical results. Simulation results show that the proposed approach increases individual utility for all participants and improves the system's overall profit by up to 9.4% compared with the independent baseline.
{"title":"Blockchain-Aided Cooperative Market Offering for Distributed Renewable Energy Producers","authors":"Yukai Wang;Qisheng Huang;Long Shi;Zhe Wang;Shaoyong Guo;Hao Wang","doi":"10.1109/TNSE.2025.3638785","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3638785","url":null,"abstract":"Distributed renewable energy (DRE) systems, such as solar panels, wind turbines, and small-scale hydroelectric systems, are increasingly participating in electricity markets. The unpredictable nature of renewable energy imposes a significant impact on the strategic offering decisions of DRE producers in two-settlement electricity markets. Furthermore, small-scale DRE producers face challenges, such as minimum size threshold requirements, that prevent them from participating in wholesale electricity markets. Driven by these issues, this work proposes a blockchain-aided coalitional game framework to enable the cooperative renewable offering strategies of distributed producers, wherein these producers are incentivized to form a grand coalition to participate in electricity markets and share real-time balancing risks. Moreover, it is verified that the grand coalition is optimal for maximizing the total profit of the producers, indicating the benefit of cooperation. It is challenging to obtain the core of the coalition due to the huge computational complexity. Nevertheless, a closed-form profit allocation mechanism is constructed and proved to be in the core of the coalition. This indicates that none of these producers has an incentive to leave the grand coalition. Furthermore, we design a smart contract to automate the coalition formation and profit allocation processes of DRE producers on the blockchain. Finally, numerical studies are conducted to validate the established theoretical results. Simulation results show that the proposed approach increases individual utility for all participants and improves the system's overall profit by up to 9.4% compared with the independent baseline.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4578-4595"},"PeriodicalIF":7.9,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25 | DOI: 10.1109/TNSE.2025.3637030
Jingjie Xin;Xin Li;Daniel Kilper;Shanguo Huang
In multi-access edge computing (MEC) networks interconnected by metro optical networks, distributed inference is a promising technique to guarantee user experience for deep neural network (DNN) inference tasks while balancing the load of edge servers. It can partition an entire DNN model into multiple sequentially connected DNN blocks and offload them to distributed edge servers for processing. However, since the number and location of partitioning points are uncertain, the inference delay may be unacceptable due to long transmission delay if DNN inference tasks are divided into too many DNN blocks. Moreover, the computing capacity of edge servers is limited. The inference delay may also be unacceptable due to inadequate computing resources if target edge servers for DNN blocks are heavily loaded or overloaded. In order to accept more DNN inference tasks using limited computing resources, this paper proposes a load-balance-guaranteed DNN distributed inference offloading (LBG-DDIO) scheme to achieve flexible partitioning and offloading, where the partitioning and offloading decisions are determined by jointly considering the inference delay and the imbalanced degree of load (IDL). An efficient heuristic algorithm is developed to determine each DNN block according to the corresponding finish time and IDL, and the selection of target edge servers for DNN blocks is also optimized. LBG-DDIO is compared with four benchmarks, and the simulation results prove that LBG-DDIO can achieve a high acceptance ratio while keeping the load balanced.
{"title":"Load-Balance-Guaranteed DNN Distributed Inference Offloading in MEC Networks Interconnected by Metro Optical Networks","authors":"Jingjie Xin;Xin Li;Daniel Kilper;Shanguo Huang","doi":"10.1109/TNSE.2025.3637030","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3637030","url":null,"abstract":"In multi-access edge computing (MEC) networks interconnected by metro optical networks, distributed inference is a promising technique to guarantee user experience for deep neural network (DNN) inference tasks while balancing the load of edge servers. It can partition an entire DNN model into multiple sequentially connected DNN blocks and offload them to distributed edge servers for processing. However, since the number and location of partitioning points are uncertain, the inference delay may be unacceptable due to long transmission delay if DNN inference tasks are divided into too many DNN blocks. Moreover, the computing capacity of edge servers is limited. The inference delay may also be unacceptable due to inadequate computing resources if target edge servers for DNN blocks are heavily loaded or overloaded. In order to accept more DNN inference tasks using limited computing resources, this paper proposes a load-balance-guaranteed DNN distributed inference offloading (LBG-DDIO) scheme to achieve flexible partitioning and offloading, where the partitioning and offloading decisions are determined by jointly considering the inference delay and the imbalanced degree of load (IDL). An efficient heuristic algorithm is developed to determine each DNN block according to the corresponding finish time and IDL, and the selection of target edge servers for DNN blocks is also optimized. LBG-DDIO is compared with four benchmarks, and the simulation results prove that LBG-DDIO can achieve a high acceptance ratio while keeping the load balanced.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"3391-3408"},"PeriodicalIF":7.9,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence (AI) is expected to serve as a foundational capability across the entire lifecycle of 6G networks, spanning design, deployment, and operation. This article proposes a native AI-driven air interface architecture built around two core characteristics: compression and adaptation. On one hand, compression enables the system to understand and extract essential semantic information from the source data, focusing on task relevance rather than symbol-level accuracy. On the other hand, adaptation allows the air interface to dynamically transmit semantic information across diverse tasks, data types, and channel conditions, ensuring scalability and robustness. This article first introduces the native AI-driven air interface architecture, then discusses representative enabling methodologies, followed by a case study on semantic communication in 6G non-terrestrial networks. Finally, it presents a forward-looking discussion on the future of native AI in 6G, outlining key challenges and research opportunities.
{"title":"Way to Build Native AI-Driven 6G Air Interface: Principles, Roadmap, and Outlook","authors":"Ping Zhang;Kai Niu;Yiming Liu;Zijian Liang;Nan Ma;Xiaodong Xu;Wenjun Xu;Mengying Sun;Yinqiu Liu;Xiaoyun Wang;Ruichen Zhang","doi":"10.1109/TNSE.2025.3636923","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3636923","url":null,"abstract":"Artificial intelligence (AI) is expected to serve as a foundational capability across the entire lifecycle of 6G networks, spanning design, deployment, and operation. This article proposes a native AI-driven air interface architecture built around two core characteristics: compression and adaptation. On one hand, compression enables the system to understand and extract essential semantic information from the source data, focusing on task relevance rather than symbol-level accuracy. On the other hand, adaptation allows the air interface to dynamically transmit semantic information across diverse tasks, data types, and channel conditions, ensuring scalability and robustness. This article first introduces the native AI-driven air interface architecture, then discusses representative enabling methodologies, followed by a case study on semantic communication in 6G non-terrestrial networks. Finally, it presents a forward-looking discussion on the future of native AI in 6G, outlining key challenges and research opportunities.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"3551-3565"},"PeriodicalIF":7.9,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25 | DOI: 10.1109/TNSE.2025.3637123
Jiacheng Hou;Mourad Elhadef;Amiya Nayak
With the proliferation of mobile users and wireless devices, networks are faced with a significant burden due to the explosion of data traffic. The high volume and short lifetime of data pose unique challenges for efficient data management and delivery. To address these challenges, we introduce a proactive caching placement strategy. Specifically, we propose a “spatial-temporal graph attention network-soft actor-critic” (STGAN-SAC)-based caching placement algorithm. This algorithm is developed to optimize edge caching efficiency in a decentralized manner and enable caching decisions without the need for prior knowledge of content popularities. In addition, our approach jointly considers content popularity and freshness. Our experimental evaluations consistently demonstrate the superior performance of STGAN-SAC compared to two state-of-the-art caching strategies, DDRQN and DDGARQN. STGAN-SAC consistently achieves cache hit ratios that exceed existing solutions by a noteworthy margin.
{"title":"Intelligent Edge Caching Strategies for Optimized Content Delivery","authors":"Jiacheng Hou;Mourad Elhadef;Amiya Nayak","doi":"10.1109/TNSE.2025.3637123","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3637123","url":null,"abstract":"With the proliferation of mobile users and wireless devices, networks are faced with a significant burden due to the explosion of data traffic. The high volume and short lifetime of data pose unique challenges for efficient data management and delivery. To address these challenges, we introduce a proactive caching placement strategy. Specifically, we propose a “spatial-temporal graph attention network-soft actor-critic” (STGAN-SAC)-based caching placement algorithm. This algorithm is developed to optimize edge caching efficiency in a decentralized manner and enable caching decisions without the need for prior knowledge of content popularities. In addition, our approach jointly considers content popularity and freshness. Our experimental evaluations consistently demonstrate the superior performance of STGAN-SAC compared to two state-of-the-art caching strategies, DDRQN and DDGARQN. STGAN-SAC consistently achieves cache hit ratios that exceed existing solutions by a noteworthy margin.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"3580-3595"},"PeriodicalIF":7.9,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-24 | DOI: 10.1109/TNSE.2025.3636073
Rashika Raina;Nidhi Simmons;David E. Simmons;Michel Daoud Yacoub;Trung Q. Duong
In the next generation communications and networks, machine learning (ML) models are expected to deliver not only highly accurate predictions, but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. In this paper, we study the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We begin by establishing key theoretical properties of this system’s outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected output conditioned on it being below the classification threshold. In contrast, when only a single resource is available, the system’s OP equals the model’s overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We further demonstrate that post-processing calibration cannot improve the system’s minimum achievable OP, as it does not introduce additional information about future channel states. Additionally, we show that well-calibrated models are part of a broader class of predictors that necessarily improve OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using post-processing calibration techniques, namely, Platt scaling and isotonic regression. As part of this framework, the predictor is trained using an outage loss function specifically designed for this system. Furthermore, this analysis is performed on Rayleigh fading channels with temporal correlation captured by Clarke’s 2D model, which accounts for receiver mobility. Notably, the outage investigated refers to the required resource failing to achieve the transmission capacity requested by the user.
{"title":"To Trust or Not to Trust: On Calibration in ML-Based Resource Allocation for Wireless Networks","authors":"Rashika Raina;Nidhi Simmons;David E. Simmons;Michel Daoud Yacoub;Trung Q. Duong","doi":"10.1109/TNSE.2025.3636073","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3636073","url":null,"abstract":"In the next generation communications and networks, machine learning (ML) models are expected to deliver not only highly accurate predictions, but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. In this paper, we study the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We begin by establishing key theoretical properties of this system’s outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected output conditioned on it being below the classification threshold. In contrast, when only a single resource is available, the system’s OP equals the model’s overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We further demonstrate that post-processing calibration cannot improve the system’s minimum achievable OP, as it does not introduce additional information about future channel states. Additionally, we show that well-calibrated models are part of a broader class of predictors that necessarily improve OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using post-processing calibration techniques, namely, Platt scaling and isotonic regression. As part of this framework, the predictor is trained using an outage loss function specifically designed for this system. Furthermore, this analysis is performed on Rayleigh fading channels with temporal correlation captured by Clarke’s 2D model, which accounts for receiver mobility. Notably, the outage investigated refers to the required resource failing to achieve the transmission capacity requested by the user.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"5961-5977"},"PeriodicalIF":7.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146081990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we investigate a dependency-aware task scheduling problem in connected autonomous vehicle (CAV) networks. Specifically, each CAV task consists of multiple dependent subtasks, which can be distributed to nearby vehicles or a roadside unit for processing. Since frequent subtask scheduling may increase communication overhead, a scheduling scheme that simplifies task dependencies is designed, incorporating a subtask merging mechanism to reduce the complexity of dependent task scheduling. We formulate a long-term joint subtask scheduling and resource allocation optimization problem to minimize the average task completion delay while guaranteeing system stability. Lyapunov optimization is then utilized to decouple the long-term problem into multiple instantaneous deterministic problems. To capture the dynamics of the vehicular environment and the randomness of task arrivals, the problem is reformulated as a parameterized action Markov decision process. To overcome the inefficient exploration of single-step deterministic policies under sparse rewards, we propose a novel diffusion-based hybrid proximal policy optimization algorithm, integrating the diffusion model with deep reinforcement learning. Instead of relying on the original policy network, a diffusion policy is used to generate continuous actions, which improves the expressiveness of the policy in capturing multimodal action distributions and enhances decision-making over long horizons through multi-step refinement. Extensive simulation results demonstrate that the proposed algorithm can reduce task completion delay by 6.9%–12.1% compared to state-of-the-art benchmarks.
{"title":"Dependency-Aware CAV Task Scheduling via Diffusion-Based Reinforcement Learning","authors":"Xiang Cheng;Wen Wu;Ying Wang;Zhi Mao;Yongguang Lu;Ping Dong","doi":"10.1109/TNSE.2025.3636287","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3636287","url":null,"abstract":"In this paper, we investigate a dependency-aware task scheduling problem in connected autonomous vehicle (CAV) networks. Specifically, each CAV task consists of multiple dependent subtasks, which can be distributed to nearby vehicles or roadside unit for processing. Since frequent subtasks scheduling may increase communication overhead, a scheduling scheme that simplifies task dependencies is designed, incorporating a subtask merging mechanism to reduce the complexity of dependent task scheduling. We formulate a long-term joint subtask scheduling and resource allocation optimization problem to minimize the average tasks completion delay while guaranteeing system stability. Therefore, Lyapunov optimization is utilized to decouple the long-term problem as a multiple instantaneous deterministic problem. To capture the dynamics of vehicular environment and randomness of task arrivals, the problem is reformulated as a parameterized action Markov decision process. To overcome the issue that inefficient exploration of single-step deterministic policies in sparse reward, we propose a novel diffusion-based hybrid proximal policy optimization algorithm, integrating the diffusion model with deep reinforcement learning. Instead of relying on the original policy network, diffusion policy is used to generate continuous actions, which aims to improve the expressiveness of the policy in capturing multimodal action distributions and enhancing decision-making over long horizons through multi-step refinement. Extensive simulation results demonstrate that the proposed algorithm can reduce task completion delay by 6.9%–12.1% compared to state-of-the-art benchmarks.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4797-4814"},"PeriodicalIF":7.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}