Multi-agent deep reinforcement learning-based joint channel selection and power control method
Weiwei Bai, Guoqiang Zheng, Weibing Xia, Yu Mu, Yujun Xue
Computers & Electrical Engineering, Volume 123, Article 110147, published 2025-02-12
DOI: 10.1016/j.compeleceng.2025.110147
https://www.sciencedirect.com/science/article/pii/S0045790625000904
Abstract
Aiming at the problem of system performance degradation caused by dynamic spectrum access in underlay mode within cognitive radio networks, we propose a multi-agent deep reinforcement learning-based joint channel selection and power control (MA-JCSPC) method. The method formulates underlay spectrum access as a joint channel selection and power control optimization problem and recasts it as a multi-agent Markov decision process. By designing a multi-agent deep reinforcement learning framework with centralized training and decentralized execution, the channel selection and power control strategies of the secondary users are optimized. In this process, a nonlinear reward function is designed by introducing a penalty term, and a novel initial action selection strategy based on an action guidance term is employed to address the problems of sparse rewards and ineffective exploration. Simulation results demonstrate that the MA-JCSPC method surpasses the compared methods in convergence, resource allocation rationality, and throughput. Compared with the centralized deep reinforcement learning (C-DRL) method, the proposed method achieves average improvements of 6.7% and 9.1% in the sum throughput of the secondary users under variations in the primary user's throughput requirement and in the number of secondary users, respectively.
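The sketch below illustrates the two mechanisms highlighted in the abstract, under concrete forms that the abstract does not specify: a nonlinear reward built from the secondary user's throughput minus a squared penalty on violations of the primary user's throughput requirement, and an initial action selection rule that draws exploratory actions in proportion to a guidance score rather than uniformly. All function names and parameters (penalty_weight, epsilon, guidance_scores) and the squared-penalty form are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def shaped_reward(su_throughput, pu_throughput, pu_throughput_req,
                  penalty_weight=5.0):
    """Nonlinear reward: secondary-user throughput minus a penalty term that
    activates when the primary user's throughput requirement is violated.
    The squared growth of the penalty is an assumption for illustration."""
    violation = max(0.0, pu_throughput_req - pu_throughput)
    return su_throughput - penalty_weight * violation ** 2

def guided_initial_action(q_values, guidance_scores, epsilon=0.3, rng=None):
    """Initial action selection biased by an action-guidance term.

    Instead of uniform random exploration, exploratory actions are sampled in
    proportion to a guidance score (e.g. sensed channel quality), which is one
    plausible reading of the 'action guidance term' in the abstract."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        probs = guidance_scores / guidance_scores.sum()
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))

if __name__ == "__main__":
    # Toy usage: 4 candidate (channel, power-level) actions for one agent.
    q = np.zeros(4)                             # untrained Q-values
    guidance = np.array([0.1, 0.5, 0.3, 0.1])   # hypothetical channel-quality scores
    print(guided_initial_action(q, guidance))
    print(shaped_reward(su_throughput=2.4, pu_throughput=1.8,
                        pu_throughput_req=2.0))
```

In this reading, the penalty term discourages actions that degrade the primary user's service, while the guidance-weighted exploration reduces wasted episodes on clearly poor channel/power choices early in training; both are sketches of the ideas, not the authors' implementation.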
Journal overview:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.