A deep reinforcement learning-based power control scheme for the 5G wireless systems

China Communications · Pub Date: 2023-10-01 · DOI: 10.23919/jcc.ea.2021-0523.202302 · Impact Factor 3.1 · JCR Q2 (Telecommunications) · CAS Tier 3 (Computer Science)
Renjie Liang, Haiyang Lyu, Jiancun Fan

Abstract

In the fifth generation (5G) wireless system, a closed-loop power control (CLPC) scheme based on a deep Q-learning network (DQN) is introduced to intelligently adjust the transmit power of the base station (BS), which can drive the user equipment (UE) received signal-to-interference-plus-noise ratio (SINR) into a target threshold range. However, the power control (PC) action selected by the DQN does not accurately match the fluctuations of the wireless environment, because the experience replay characteristic of the conventional DQN scheme can leave the target deep neural network (DNN) insufficiently trained. As a result, the Q-value of a sub-optimal PC action may exceed that of the optimal one. To solve this problem, we propose an improved DQN scheme: we add an additional DNN to the conventional DQN and set a shorter training interval to speed up the training of the DNN so that it is fully trained. The proposed scheme thereby ensures that the Q-value of the optimal action remains the maximum. After multiple episodes of training, the proposed scheme generates PC actions that more accurately match the fluctuations of the wireless environment, so the UE received SINR reaches the target threshold range faster and stays more stable. The simulation results show that the proposed scheme outperforms the conventional schemes.
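To make the closed-loop idea concrete, the following is a minimal toy sketch of reinforcement-learning-based power control: an agent nudges the BS transmit power so the UE received SINR settles near a target. Tabular Q-learning stands in for the paper's DQN, and the channel model, target value, and all constants are illustrative assumptions, not values from the paper.

```python
import random

SINR_TARGET = 10.0           # dB, assumed target SINR (illustrative)
ACTIONS = [-1.0, 0.0, 1.0]   # candidate transmit-power steps in dB
N_STATES = 21                # quantized SINR error: -10 .. +10 dB

def quantize(err_db):
    """Map the SINR error (dB) to a discrete state index 0..20."""
    return int(max(-10, min(10, round(err_db))) + 10)

# Q-table in place of the paper's deep neural network
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

random.seed(0)
alpha, gamma, eps = 0.2, 0.9, 0.1
tx_power, sinr = 0.0, 0.0
history = []
for t in range(3000):
    s = quantize(SINR_TARGET - sinr)
    if random.random() < eps:                  # epsilon-greedy exploration
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
    tx_power += ACTIONS[a]
    fading = random.gauss(8.0, 1.0)            # toy fluctuating channel gain (dB)
    sinr = tx_power + fading                   # toy link: SINR = power + gain
    reward = -abs(sinr - SINR_TARGET)          # penalize distance from target
    s2 = quantize(SINR_TARGET - sinr)
    # standard one-step Q-learning update
    Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
    history.append(sinr)

avg_late = sum(history[-500:]) / 500
print(f"late-episode mean SINR: {avg_late:.1f} dB")  # should settle near the target
```

Under these assumptions, the learned policy raises the power when the SINR sits below the target and holds it otherwise; the paper's contribution of a second, faster-trained DNN addresses the function-approximation errors that a simple table like this does not exhibit.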
Source journal: China Communications (Engineering/Technology – Telecommunications)
CiteScore: 8.00
Self-citation rate: 12.20%
Articles published: 2868
Average review time: 8.6 months
About the journal: China Communications (ISSN 1673-5447) is an English-language monthly journal cosponsored by the China Institute of Communications (CIC) and the IEEE Communications Society (IEEE ComSoc). It is aimed at readers in industry, universities, research and development organizations, and government agencies in the field of Information and Communications Technologies (ICTs) worldwide. The journal's main objective is to promote academic exchange in the ICTs sector and publish high-quality papers that contribute to the global ICTs industry. It provides instant access to the latest articles and papers, presenting leading-edge research achievements, tutorial overviews, and descriptions of significant practical applications of technology. China Communications has been indexed in SCIE (Science Citation Index-Expanded) since January 2007, and all articles have been available in the IEEE Xplore digital library since January 2013.
Latest articles in this journal:
- Secure short-packet transmission in uplink massive MU-MIMO assisted URLLC under imperfect CSI
- IoV and blockchain-enabled driving guidance strategy in complex traffic environment
- Multi-source underwater DOA estimation using PSO-BP neural network based on high-order cumulant optimization
- An overview of interactive immersive services
- Performance analysis in SWIPT-based bidirectional D2D communications in cellular networks