
IEEE Transactions on Network Science and Engineering: Latest Articles

Inference-Subgraph Driven Multi-Agent DRL for Joint Resource Orchestration in Communication and Computing Power Network
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-12-03 · DOI: 10.1109/TNSE.2025.3639629
Wanyu Xiang;Chen Han;Zhi Lin;Yusheng Li;Yifu Sun;Xin Lin
Joint communication and computing power network (JCCPN) has emerged as a promising architecture for 6G wireless networks owing to its low-latency communication and efficient computing services. However, existing works have not fully considered the spatiotemporal mismatch between computing power supply and practical traffic distribution, leading to failure risks or resource waste. Specifically, the mismatch arises from two aspects: 1) the competition for data links between collaborative inference tasks and communication transmission tasks; 2) the inability of fixed computing power allocation to meet dynamic computing demand. This paper focuses on the mismatch problem in JCCPNs and formulates a joint optimization model for inference links and computing nodes. The joint optimization model is theoretically decoupled into two submodels, efficiently addressing the interdependencies between links and nodes. We then propose an inference-subgraph driven multi-agent deep reinforcement learning (IsMADRL) algorithm for JCCPN, consisting of two stages. In the first stage, we formulate an inference subgraph based on an ordinal potential game (OPG) to separate computing and transmission data flows, ensuring collaborative inference tasks. In the second stage, a multi-agent deep reinforcement learning (MADRL) framework is employed on the inference subgraph to allocate computing power dynamically, meeting the varying traffic distribution. Simulation results show that several MADRL architectures all exhibit excellent adaptability and effectiveness in complex JCCPNs.
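The OPG stage rests on a standard property of potential games: unilateral best responses decrease a shared potential function, so round-robin updates reach a pure Nash equilibrium in finitely many steps. As an illustration only (the paper's subgraph construction is more involved), here is a minimal congestion-game sketch in which agents choose among shared links; the cost model and all names are hypothetical:

```python
def best_response_dynamics(n_agents, n_links, max_rounds=100):
    """Toy congestion game: each agent picks one link and pays that link's
    load. Congestion games admit an exact potential function, so round-robin
    best responses converge to a pure Nash equilibrium."""
    choice = [0] * n_agents                      # all agents start on link 0
    for _ in range(max_rounds):
        changed = False
        for i in range(n_agents):
            # Load on each link, excluding agent i itself.
            loads = [0] * n_links
            for j, c in enumerate(choice):
                if j != i:
                    loads[c] += 1
            best = min(range(n_links), key=lambda k: loads[k])
            # Cost after joining a link is its load + 1; move only if strictly better.
            if loads[best] + 1 < loads[choice[i]] + 1:
                choice[i] = best
                changed = True
        if not changed:                          # no agent wants to deviate
            return choice
    return choice
```

With 6 agents and 3 links, the dynamics settle into a balanced 2/2/2 allocation, mirroring how a potential-game stage can separate competing flows across links.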
Citations: 0
Robust Power Allocation for UAV-Aided ISAC Systems With Uncertain Location Sensing Errors
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-12-03 · DOI: 10.1109/TNSE.2025.3639596
Junchang Sun;Shuai Ma;Ruixin Yang;Hang Li;Youlong Wu;Tingting Yang;Gang Xu;Shiyin Li;Chengjie Gu
Integrated sensing and communication (ISAC) enables simultaneous sensing and data transmission with the assistance of unmanned aerial vehicles (UAVs) in emergency disaster relief and inspection scenarios. However, the impact of sensing uncertainty on communication performance has not been systematically investigated. In this paper, we propose a novel UAV-aided ISAC framework that explicitly accounts for the uncertain location sensing error (LSE). To characterize the LSE more realistically, we derive the Cramér-Rao bound (CRB) and use it as the variance parameter of the considered uncertain-LSE models, instead of adopting the conventional unit-variance assumption. Then, we analytically reveal the inherent coupling relationship between the LSE and the achievable communication rate. Considering three practical LSE distributions, namely ellipsoidal, Gaussian, and arbitrary distributions, we formulate three robust communication and sensing power allocation problems and develop tractable solutions using the $\mathcal{S}$-Procedure with alternating optimization ($\mathcal{S}$-AO) method, the Bernstein-type inequality with successive convex approximation (BI-SCA) method, and the conditional value-at-risk (CVaR) with AO (CVaR-AO) method. Simulation results validate the theoretical coupling, demonstrate the robustness of the proposed schemes, and reveal sensing-communication trade-offs, providing valuable insights for robust UAV-aided ISAC system design.
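The CVaR-AO branch builds on the standard conditional value-at-risk measure. As a self-contained sketch (not the paper's formulation), the empirical CVaR at level alpha is the mean of the worst (1 - alpha) fraction of loss samples, and the same quantity arises as the minimum of the Rockafellar-Uryasev objective t + E[max(L - t, 0)] / (1 - alpha), which is what makes CVaR tractable inside convex programs:

```python
def empirical_cvar(losses, alpha):
    """CVaR as the mean of the worst (1 - alpha) fraction of loss samples."""
    s = sorted(losses, reverse=True)
    k = max(1, int(round(len(s) * (1 - alpha))))
    return sum(s[:k]) / k

def cvar_rockafellar(losses, alpha):
    """Same quantity via the Rockafellar-Uryasev objective
    t + E[max(L - t, 0)] / (1 - alpha), minimized over candidate thresholds t
    (the minimizing t is the value-at-risk)."""
    n = len(losses)
    def obj(t):
        return t + sum(max(l - t, 0.0) for l in losses) / (n * (1.0 - alpha))
    return min(obj(t) for t in sorted(losses))
```

Both views agree on the sample set 1..10 at alpha = 0.8 (the worst two losses, 9 and 10, average to 9.5); in a robust power-allocation problem the second form is embedded as a convex constraint.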
Citations: 0
The Health-Economy Trade-Off During the Global Pandemic
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-12-01 · DOI: 10.1109/TNSE.2025.3638895
Yanyi Nie;Fengyi Wang;Lingjie Fan;Yu Chen;Sheng Su;Yanbing Liu;Tao Lin;Chun Yang;Wei Wang
Achieving an optimal balance between public health and economic interests by accurately capturing the relationship between lockdown policies, epidemic outcomes, and economic costs is a significant challenge. Existing methods lack detailed simulation of individual behaviors, fail to respond promptly to unforeseen circumstances, and cannot ensure the long-term effectiveness of strategies, resulting in poor precision and adaptability. To address these issues, we propose an epidemic-evolutionary game co-evolution model. This model employs evolutionary game theory to describe the dynamic adjustments of individual mobility and regional management policies based on infection and economic costs, and utilizes a metapopulation model to capture population movement and epidemic spread. The microscopic Markov chain approach is utilized to describe epidemic spread induced by population movement and analyze Nash equilibrium and evolutionarily stable strategies. Experimental results show that our model can intuitively reflect the complex relationship between individual mobility, regional management policies, infection rates, and economic costs. We find that the interests of governing agencies and individuals are aligned. Influenced by economic costs, individuals are instead inclined to work outside in the face of high infection rates. Additionally, the model can identify stable optimal mobility travel strategies under different economic costs and determine the balance point between lockdown and opening, without predefining optimization objectives.
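The evolutionarily stable strategies referenced above are the rest points of replicator dynamics, where a strategy's share grows when its fitness exceeds the population average. A minimal two-strategy Euler-integration sketch (a generic textbook model, not the paper's co-evolution system; the payoff matrix is hypothetical):

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of two-strategy replicator dynamics; x is the share of
    the population playing strategy 0, payoff[i][j] is i's payoff against j."""
    f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)
    fbar = x * f0 + (1 - x) * f1                 # population-average fitness
    return x + dt * x * (f0 - fbar)

def evolve(x, payoff, steps=20000, dt=0.01):
    """Iterate until the share (numerically) settles at a stable state."""
    for _ in range(steps):
        x = replicator_step(x, payoff, dt)
    return x
```

For a coordination-type payoff [[1, 0], [0, 2]] the population converges to whichever pure equilibrium's basin it starts in (the basin boundary sits at x = 2/3), which is the mechanism behind "stable optimal mobility strategies" under different cost regimes.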
Citations: 0
Joint Trajectory and Resource Optimization for Delay Minimization of UAV-Enabled NOMA-MEC System With LWPT
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-12-01 · DOI: 10.1109/TNSE.2025.3638854
Xuecai Bao;Fugui Liu;Fenghui Zhang;Kun Yang
Uncrewed aerial vehicles (UAVs) enhance mobile edge computing (MEC) coverage, but in remote emergency scenarios limited battery life and scarce spectrum exacerbate interference, link instability, and end-to-end delay. To address these issues, we propose a joint trajectory and delay-minimization framework that integrates laser-beamed wireless power transfer (LWPT) with UAV-enabled non-orthogonal multiple access (NOMA) MEC. First, we present a practical system architecture where a ground laser-powered beacon (PB) continuously recharges the UAV during flight, enabling persistent aerial patrols that concurrently offer wireless charging and computation services to ground users. Second, we formulate a unified mixed-integer nonconvex optimization problem that jointly optimizes the UAV trajectory, task offloading ratios, PB power distribution, and user-scheduling policy under energy-causality, NOMA interference, and flight-dynamics constraints. Third, to address the resulting non-convexity, we develop a hierarchical decomposition and alternating-optimization method: the original problem is decomposed into trajectory and resource-allocation subproblems and solved using convex approximations and efficient scheduling algorithms to obtain practical solutions. Fourth, extensive simulations demonstrate that the proposed LWPT-assisted NOMA UAV-MEC scheme substantially reduces total system delay while improving energy efficiency and throughput compared with conventional OMA-MEC baselines and five recent heuristic algorithms.
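The hierarchical decomposition in the third step is an instance of alternating optimization: fix one block of variables, solve the now-tractable subproblem for the other, and repeat until the objective stops improving. A toy sketch on a coupled quadratic where both subproblems have closed forms (the objective is hypothetical; in the paper each subproblem would be a convex approximation of the trajectory or resource-allocation problem):

```python
def alternating_opt(x=0.0, y=0.0, tol=1e-10, max_iter=1000):
    """Alternating optimization on f(x, y) = (x - 1)^2 + (y - 2)^2 + x*y:
    with one variable fixed, the other subproblem is convex with a
    closed-form minimizer, so the two updates are interleaved until the
    objective stops decreasing."""
    f = lambda a, b: (a - 1) ** 2 + (b - 2) ** 2 + a * b
    prev = f(x, y)
    for _ in range(max_iter):
        x = 1 - y / 2          # argmin over x with y fixed (df/dx = 0)
        y = 2 - x / 2          # argmin over y with x fixed (df/dy = 0)
        cur = f(x, y)
        if prev - cur < tol:   # monotone descent has stalled: converged
            break
        prev = cur
    return x, y, cur
```

The iterates contract geometrically to the joint minimizer (x, y) = (0, 2) with f = 1; the same descent-and-stall loop structure applies when the closed forms are replaced by convex-approximation solves.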
Citations: 0
Blockchain-Aided Cooperative Market Offering for Distributed Renewable Energy Producers
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-11-28 · DOI: 10.1109/TNSE.2025.3638785
Yukai Wang;Qisheng Huang;Long Shi;Zhe Wang;Shaoyong Guo;Hao Wang
Distributed renewable energy (DRE) systems, such as solar panels, wind turbines, and small-scale hydroelectric systems, are increasingly participating in electricity markets. The unpredictable nature of renewable energy has a significant impact on the strategic offering decisions of DRE producers in two-settlement electricity markets. Furthermore, small-scale DRE producers face challenges, such as minimum size threshold requirements, that prevent them from participating in wholesale electricity markets. Driven by these issues, this work proposes a blockchain-aided coalitional game framework to enable the cooperative renewable offering strategies of distributed producers, wherein these producers are incentivized to form a grand coalition to participate in electricity markets and share real-time balancing risks. Moreover, it is verified that the grand coalition is optimal for maximizing the total profit of the producers, indicating the benefit of cooperation. Obtaining the core of the coalition is challenging due to its high computational complexity. Nevertheless, a closed-form profit allocation mechanism is constructed and proved to be in the core of the coalition. This indicates that none of these producers has an incentive to leave the grand coalition. Furthermore, we design a smart contract to automate the coalition formation and profit allocation processes of DRE producers on the blockchain. Finally, numerical studies are conducted to validate the established theoretical results. Simulation results show that the proposed approach increases individual utility for all participants and improves the system's overall profit by up to 9.4% compared with the independent baseline.
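To make the coalitional-game machinery concrete, here is a different, classical allocation rule, the Shapley value, computed by enumerating join orders. It is shown purely to illustrate how a coalition's value gets split among producers; it is not the paper's closed-form mechanism, need not lie in the core in general, and the value function below is hypothetical:

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value by averaging each player's marginal contribution
    over all join orders (tractable only for small coalitions)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical superadditive value function: pooling three producers'
# output reduces balancing risk, so the grand coalition is worth the most.
values = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 20,
          frozenset('C'): 30, frozenset('AB'): 40, frozenset('AC'): 50,
          frozenset('BC'): 60, frozenset('ABC'): 90}
v = lambda s: values[frozenset(s)]
```

The allocation is efficient by construction: the three shares sum exactly to the grand-coalition value of 90.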
Citations: 0
Load-Balance-Guaranteed DNN Distributed Inference Offloading in MEC Networks Interconnected by Metro Optical Networks
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-11-25 · DOI: 10.1109/TNSE.2025.3637030
Jingjie Xin;Xin Li;Daniel Kilper;Shanguo Huang
In multi-access edge computing (MEC) networks interconnected by metro optical networks, distributed inference is a promising technique to guarantee user experience for deep neural network (DNN) inference tasks while balancing the load of edge servers. It can partition an entire DNN model into multiple sequentially connected DNN blocks and offload them to distributed edge servers for processing. However, since the number and location of partitioning points are uncertain, the inference delay may be unacceptable due to long transmission delay if DNN inference tasks are divided into too many DNN blocks. Moreover, the computing capacity of edge servers is limited. The inference delay may also be unacceptable due to inadequate computing resources if target edge servers for DNN blocks are heavily loaded or overloaded. In order to accept more DNN inference tasks using limited computing resources, this paper proposes a load-balance-guaranteed DNN distributed inference offloading (LBG-DDIO) scheme to achieve flexible partitioning and offloading, where the partitioning and offloading decisions are determined by jointly considering the inference delay and the imbalanced degree of load (IDL). An efficient heuristic algorithm is developed to determine each DNN block according to the corresponding finish time and IDL, and the selection of target edge servers for DNN blocks is also optimized. LBG-DDIO is compared with four benchmarks, and the simulation results prove that LBG-DDIO can achieve a high acceptance ratio while keeping the load balanced.
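The core tension the abstract describes, more cuts mean more transmission delay, while oversized blocks overload a server, can be captured by a small dynamic program over cut points. This is a hypothetical delay model for illustration, not the paper's LBG-DDIO heuristic (which also optimizes the load-imbalance degree and target-server selection):

```python
def min_delay_partition(compute, out_size, cap, bw=1.0):
    """DP over cut points for a chain of DNN layers: each block's total
    compute must fit one server's capacity `cap`; cutting before layer j
    ships layer j-1's output over a link of bandwidth `bw`. Minimizes the
    sum of compute delay and transmission delay."""
    n = len(compute)
    best = [float('inf')] * (n + 1)   # best[i]: min delay for first i layers
    best[0] = 0.0
    for i in range(1, n + 1):
        block = 0.0
        for j in range(i, 0, -1):               # candidate block = layers j..i
            block += compute[j - 1]
            if block > cap:                     # block no longer fits a server
                break
            trans = out_size[j - 2] / bw if j > 1 else 0.0
            best[i] = min(best[i], best[j - 1] + trans + block)
    return best[n]
```

For four layers of cost 2 with inter-layer output sizes [1, 5, 1] and a per-server cap of 4, the DP cuts around the cheap tensors and avoids the size-5 one, giving total delay 10 versus 13 for the naive even split.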
Citations: 0
Way to Build Native AI-Driven 6G Air Interface: Principles, Roadmap, and Outlook
IF 7.9 · CAS Zone 2, Computer Science · Q1 ENGINEERING, MULTIDISCIPLINARY · Pub Date: 2025-11-25 · DOI: 10.1109/TNSE.2025.3636923
Ping Zhang;Kai Niu;Yiming Liu;Zijian Liang;Nan Ma;Xiaodong Xu;Wenjun Xu;Mengying Sun;Yinqiu Liu;Xiaoyun Wang;Ruichen Zhang
Artificial intelligence (AI) is expected to serve as a foundational capability across the entire lifecycle of 6G networks, spanning design, deployment, and operation. This article proposes a native AI-driven air interface architecture built around two core characteristics: compression and adaptation. On one hand, compression enables the system to understand and extract essential semantic information from the source data, focusing on task relevance rather than symbol-level accuracy. On the other hand, adaptation allows the air interface to dynamically transmit semantic information across diverse tasks, data types, and channel conditions, ensuring scalability and robustness. This article first introduces the native AI-driven air interface architecture, then discusses representative enabling methodologies, followed by a case study on semantic communication in 6G non-terrestrial networks. Finally, it presents a forward-looking discussion on the future of native AI in 6G, outlining key challenges and research opportunities.
Citations: 0
Intelligent Edge Caching Strategies for Optimized Content Delivery
IF 7.9 CAS Zone 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-11-25 DOI: 10.1109/TNSE.2025.3637123
Jiacheng Hou;Mourad Elhadef;Amiya Nayak
With the proliferation of mobile users and wireless devices, networks are faced with a significant burden due to the explosion of data traffic. The high volume and short lifetime of data pose unique challenges for efficient data management and delivery. To address these challenges, we introduce a proactive caching placement strategy. Specifically, we propose a “spatial-temporal graph attention network-soft actor-critic” (STGAN-SAC)-based caching placement algorithm. This algorithm is developed to optimize edge caching efficiency in a decentralized manner and enable caching decisions without the need for prior knowledge of content popularities. In addition, our approach jointly considers content popularity and freshness. Our experimental evaluations consistently demonstrate the superior performance of STGAN-SAC compared to two state-of-the-art caching strategies, DDRQN and DDGARQN. STGAN-SAC consistently achieves cache hit ratios that exceed existing solutions by a noteworthy margin.
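The abstract above says the approach "jointly considers content popularity and freshness." As a toy illustration of that idea only — not the STGAN-SAC algorithm itself — the following sketch scores each content item by its request count discounted exponentially with age, then greedily fills the cache with the top-scoring items. The `half_life` parameter and the catalog values are invented for the example.

```python
from dataclasses import dataclass
import heapq
import time

@dataclass
class Content:
    cid: str
    requests: int      # observed request count (popularity proxy)
    created: float     # generation timestamp

def utility(c: Content, now: float, half_life: float = 60.0) -> float:
    """Illustrative caching utility: popularity discounted by content age.

    The exponential age discount models "freshness"; half_life is a free
    parameter chosen for this sketch, not a value from the paper.
    """
    age = now - c.created
    return c.requests * 0.5 ** (age / half_life)

def place(cache_size: int, catalog: list[Content], now: float) -> list[str]:
    """Greedy placement: keep the cache_size highest-utility items."""
    return [c.cid for c in
            heapq.nlargest(cache_size, catalog, key=lambda c: utility(c, now))]

now = time.time()
catalog = [
    Content("a", requests=100, created=now - 300),  # popular but stale
    Content("b", requests=40,  created=now - 10),   # moderately popular, fresh
    Content("c", requests=5,   created=now - 5),    # fresh but unpopular
]
print(place(2, catalog, now))  # → ['b', 'c']: staleness outweighs raw popularity
```

Note how item "a", despite the highest request count, is evicted once its age discount (0.5^5 ≈ 0.03) is applied — the joint popularity-freshness view changes the placement relative to popularity alone.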
{"title":"Intelligent Edge Caching Strategies for Optimized Content Delivery","authors":"Jiacheng Hou;Mourad Elhadef;Amiya Nayak","doi":"10.1109/TNSE.2025.3637123","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3637123","url":null,"abstract":"With the proliferation of mobile users and wireless devices, networks are faced with a significant burden due to the explosion of data traffic. The high volume and short lifetime of data pose unique challenges for efficient data management and delivery. To address these challenges, we introduce a proactive caching placement strategy. Specifically, we propose a “spatial-temporal graph attention network-soft actor-critic” (STGAN-SAC)-based caching placement algorithm. This algorithm is developed to optimize edge caching efficiency in a decentralized manner and enable caching decisions without the need for prior knowledge of content popularities. In addition, our approach jointly considers content popularity and freshness. Our experimental evaluations consistently demonstrate the superior performance of STGAN-SAC compared to two state-of-the-art caching strategies, DDRQN and DDGARQN. STGAN-SAC consistently achieves cache hit ratios that exceed existing solutions by a noteworthy margin.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"3580-3595"},"PeriodicalIF":7.9,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
To Trust or Not to Trust: On Calibration in ML-Based Resource Allocation for Wireless Networks
IF 7.9 CAS Zone 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-11-24 DOI: 10.1109/TNSE.2025.3636073
Rashika Raina;Nidhi Simmons;David E. Simmons;Michel Daoud Yacoub;Trung Q. Duong
In the next generation communications and networks, machine learning (ML) models are expected to deliver not only highly accurate predictions, but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. In this paper, we study the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We begin by establishing key theoretical properties of this system’s outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected output conditioned on it being below the classification threshold. In contrast, when only a single resource is available, the system’s OP equals the model’s overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We further demonstrate that post-processing calibration cannot improve the system’s minimum achievable OP, as it does not introduce additional information about future channel states. Additionally, we show that well-calibrated models are part of a broader class of predictors that necessarily improve OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using post-processing calibration techniques, namely, Platt scaling and isotonic regression. As part of this framework, the predictor is trained using an outage loss function specifically designed for this system. Furthermore, this analysis is performed on Rayleigh fading channels with temporal correlation captured by Clarke’s 2D model, which accounts for receiver mobility. Notably, the outage investigated refers to the required resource failing to achieve the transmission capacity requested by the user.
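The two post-processing calibration techniques named in the abstract — Platt scaling and isotonic regression — can be demonstrated on synthetic data with scikit-learn. This sketch is not the paper's outage predictor: the deliberately miscalibrated score model (true outage probability equal to the squared score) and the expected-calibration-error (ECE) binning are assumptions made for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic miscalibrated "outage predictor": raw score s in [0, 1], but the
# true outage probability is s**2, so the raw score over-states the risk.
s = rng.uniform(0.0, 1.0, size=20000)
y = rng.binomial(1, s ** 2)          # 1 = outage occurred

# Platt scaling: fit a 1-D logistic regression on held-out scores.
platt = LogisticRegression()
platt.fit(s.reshape(-1, 1), y)
p_platt = platt.predict_proba(s.reshape(-1, 1))[:, 1]

# Isotonic regression: monotone, non-parametric recalibration map.
iso = IsotonicRegression(out_of_bounds="clip")
p_iso = iso.fit_transform(s, y)

def ece(p, y, bins=10):
    """Expected calibration error: bin-weighted |mean confidence - empirical rate|."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (p >= lo) & (p < hi)
        if m.any():
            err += m.mean() * abs(p[m].mean() - y[m].mean())
    return err

print(f"ECE raw:      {ece(s, y):.3f}")
print(f"ECE Platt:    {ece(p_platt, y):.3f}")
print(f"ECE isotonic: {ece(p_iso, y):.3f}")
```

Both recalibrated outputs shrink the ECE of the raw scores substantially, which is exactly the "well-calibrated confidence" property the paper analyzes — and, consistent with its theory, recalibration only remaps the scores; it adds no new information about the channel.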
{"title":"To Trust or Not to Trust: On Calibration in ML-Based Resource Allocation for Wireless Networks","authors":"Rashika Raina;Nidhi Simmons;David E. Simmons;Michel Daoud Yacoub;Trung Q. Duong","doi":"10.1109/TNSE.2025.3636073","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3636073","url":null,"abstract":"In the next generation communications and networks, machine learning (ML) models are expected to deliver not only highly accurate predictions, but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. In this paper, we study the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We begin by establishing key theoretical properties of this system’s outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected output conditioned on it being below the classification threshold. In contrast, when only a single resource is available, the system’s OP equals the model’s overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We further demonstrate that post-processing calibration cannot improve the system’s minimum achievable OP, as it does not introduce additional information about future channel states. Additionally, we show that well-calibrated models are part of a broader class of predictors that necessarily improve OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using post-processing calibration techniques, namely, Platt scaling and isotonic regression. As part of this framework, the predictor is trained using an outage loss function specifically designed for this system. Furthermore, this analysis is performed on Rayleigh fading channels with temporal correlation captured by Clarke’s 2D model, which accounts for receiver mobility. Notably, the outage investigated refers to the required resource failing to achieve the transmission capacity requested by the user.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"5961-5977"},"PeriodicalIF":7.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146081990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dependency-Aware CAV Task Scheduling via Diffusion-Based Reinforcement Learning
IF 7.9 CAS Zone 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-11-24 DOI: 10.1109/TNSE.2025.3636287
Xiang Cheng;Wen Wu;Ying Wang;Zhi Mao;Yongguang Lu;Ping Dong
In this paper, we investigate a dependency-aware task scheduling problem in connected autonomous vehicle (CAV) networks. Specifically, each CAV task consists of multiple dependent subtasks, which can be distributed to nearby vehicles or roadside unit for processing. Since frequent subtasks scheduling may increase communication overhead, a scheduling scheme that simplifies task dependencies is designed, incorporating a subtask merging mechanism to reduce the complexity of dependent task scheduling. We formulate a long-term joint subtask scheduling and resource allocation optimization problem to minimize the average tasks completion delay while guaranteeing system stability. Therefore, Lyapunov optimization is utilized to decouple the long-term problem as a multiple instantaneous deterministic problem. To capture the dynamics of vehicular environment and randomness of task arrivals, the problem is reformulated as a parameterized action Markov decision process. To overcome the issue that inefficient exploration of single-step deterministic policies in sparse reward, we propose a novel diffusion-based hybrid proximal policy optimization algorithm, integrating the diffusion model with deep reinforcement learning. Instead of relying on the original policy network, diffusion policy is used to generate continuous actions, which aims to improve the expressiveness of the policy in capturing multimodal action distributions and enhancing decision-making over long horizons through multi-step refinement. Extensive simulation results demonstrate that the proposed algorithm can reduce task completion delay by 6.9%–12.1% compared to state-of-the-art benchmarks.
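The abstract states that "Lyapunov optimization is utilized to decouple the long-term problem as a multiple instantaneous deterministic problem." A minimal drift-plus-penalty sketch of that decoupling is shown below, under invented parameters (the weight `V`, the service-rate/cost menu, and the arrival model are all assumptions for illustration, not values from the paper): a virtual queue tracks constraint backlog, and each slot solves a small deterministic minimization instead of the long-term problem.

```python
import random

random.seed(1)

# Drift-plus-penalty sketch: at each slot, pick a service rate b (with an
# associated delay cost) to minimize V * cost - Q * b, where Q is a virtual
# queue accumulating unserved arrivals. V trades off cost against backlog.
V = 10.0
RATES = [(0.0, 0.0), (1.0, 1.0), (2.0, 3.0)]  # (service rate b, delay cost)

Q = 0.0
total_cost, total_backlog, T = 0.0, 0.0, 10_000
for _ in range(T):
    a = random.uniform(0.0, 1.5)               # random task arrivals this slot
    # Per-slot deterministic rule replacing the long-term problem:
    b, cost = min(RATES, key=lambda rc: V * rc[1] - Q * rc[0])
    Q = max(Q + a - b, 0.0)                    # virtual-queue (backlog) update
    total_cost += cost
    total_backlog += Q

print(f"avg cost: {total_cost / T:.2f}, avg backlog: {total_backlog / T:.2f}")
```

The virtual queue stays bounded (the system is stable) while the per-slot rule never looks ahead — which is the point of the decoupling: greedy slot-by-slot decisions provably control the long-term average.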
{"title":"Dependency-Aware CAV Task Scheduling via Diffusion-Based Reinforcement Learning","authors":"Xiang Cheng;Wen Wu;Ying Wang;Zhi Mao;Yongguang Lu;Ping Dong","doi":"10.1109/TNSE.2025.3636287","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3636287","url":null,"abstract":"In this paper, we investigate a dependency-aware task scheduling problem in connected autonomous vehicle (CAV) networks. Specifically, each CAV task consists of multiple dependent subtasks, which can be distributed to nearby vehicles or roadside unit for processing. Since frequent subtasks scheduling may increase communication overhead, a scheduling scheme that simplifies task dependencies is designed, incorporating a subtask merging mechanism to reduce the complexity of dependent task scheduling. We formulate a long-term joint subtask scheduling and resource allocation optimization problem to minimize the average tasks completion delay while guaranteeing system stability. Therefore, Lyapunov optimization is utilized to decouple the long-term problem as a multiple instantaneous deterministic problem. To capture the dynamics of vehicular environment and randomness of task arrivals, the problem is reformulated as a parameterized action Markov decision process. To overcome the issue that inefficient exploration of single-step deterministic policies in sparse reward, we propose a novel diffusion-based hybrid proximal policy optimization algorithm, integrating the diffusion model with deep reinforcement learning. Instead of relying on the original policy network, diffusion policy is used to generate continuous actions, which aims to improve the expressiveness of the policy in capturing multimodal action distributions and enhancing decision-making over long horizons through multi-step refinement. Extensive simulation results demonstrate that the proposed algorithm can reduce task completion delay by 6.9%–12.1% compared to state-of-the-art benchmarks.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4797-4814"},"PeriodicalIF":7.9,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0