{"title":"强化学习用于 D2D 通信中的频谱预测和 EE 最大化","authors":"S. Maity, K. Sinha, B. Sinha, Reema Kumari","doi":"10.1109/SPCOM55316.2022.9840772","DOIUrl":null,"url":null,"abstract":"This paper proposes a reinforcement learning (RL) based Q-learning to address the issues of joint spectrum prediction (SP) and device-to-device (D2D) data communication in cognitive radio (CR) framework. An optimization problem is formulated that addresses energy efficiency (EE) maximization of D2D communications under the constraints of its total transmit power and a certain data transmission rate while meeting an interference threshold and cooperation rate in primary user (PU) transmission. The high accuracy in SP offers reward as an improvement on EE while a compulsion of meeting an interference threshold and a penalty on PU data transmission are made based on the relative degree of wrong prediction. A large set of simulation results shows that the proposed method offers 30% gain in EE while 20% reduction in data collision with PU over the existing works.","PeriodicalId":246982,"journal":{"name":"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement Learning for Spectrum Prediction and EE Maximization in D2D Communication\",\"authors\":\"S. Maity, K. Sinha, B. Sinha, Reema Kumari\",\"doi\":\"10.1109/SPCOM55316.2022.9840772\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a reinforcement learning (RL) based Q-learning to address the issues of joint spectrum prediction (SP) and device-to-device (D2D) data communication in cognitive radio (CR) framework. 
An optimization problem is formulated that addresses energy efficiency (EE) maximization of D2D communications under the constraints of its total transmit power and a certain data transmission rate while meeting an interference threshold and cooperation rate in primary user (PU) transmission. The high accuracy in SP offers reward as an improvement on EE while a compulsion of meeting an interference threshold and a penalty on PU data transmission are made based on the relative degree of wrong prediction. A large set of simulation results shows that the proposed method offers 30% gain in EE while 20% reduction in data collision with PU over the existing works.\",\"PeriodicalId\":246982,\"journal\":{\"name\":\"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPCOM55316.2022.9840772\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPCOM55316.2022.9840772","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper proposes a reinforcement learning (RL) based Q-learning approach to address the joint problems of spectrum prediction (SP) and device-to-device (D2D) data communication in a cognitive radio (CR) framework. An optimization problem is formulated that maximizes the energy efficiency (EE) of D2D communications under constraints on total transmit power and a minimum data transmission rate, while meeting an interference threshold and a cooperation rate for primary user (PU) transmission. High SP accuracy is rewarded through improved EE, while a penalty on PU data transmission, scaled by the relative degree of mis-prediction, enforces the interference threshold. Extensive simulation results show that the proposed method offers a 30% gain in EE and a 20% reduction in data collisions with the PU compared to existing works.
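The reward/penalty structure described in the abstract can be sketched with a minimal tabular Q-learning loop. This is an illustrative sketch only: the paper's actual state and action spaces, reward shaping, and hyperparameters are not given in the abstract, so the channel model, the `ee_gain` and `collision_penalty` values, and the epsilon-greedy parameters below are all assumptions.

```python
import random

# Assumed hyperparameters (not specified in the abstract)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_CHANNELS = 4
ACTIONS = (0, 1)  # agent's predicted PU occupancy: 0 = idle, 1 = busy

# Q-table: Q[state][action]; for simplicity the state is the channel index
Q = [[0.0, 0.0] for _ in range(N_CHANNELS)]

def choose_action(state):
    """Epsilon-greedy prediction of the channel's occupancy."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])

def reward(predicted, actual, ee_gain=1.0, collision_penalty=2.0):
    """Reward accurate spectrum prediction; penalize collisions with the PU."""
    if predicted == actual:
        return ee_gain             # correct SP -> EE improvement as reward
    if predicted == 0 and actual == 1:
        return -collision_penalty  # D2D transmits on a busy channel: collision
    return -0.5                    # idle channel left unused: missed opportunity

def q_update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

# Toy training loop over a synthetic PU occupancy trace
random.seed(0)
occupancy = [[random.randint(0, 1) for _ in range(N_CHANNELS)]
             for _ in range(500)]
for t in range(len(occupancy)):
    for ch in range(N_CHANNELS):
        a = choose_action(ch)
        r = reward(a, occupancy[t][ch])
        q_update(ch, a, r, ch)
```

The key design point mirrored from the abstract is the asymmetric penalty: a wrong "idle" prediction that collides with the PU is punished more heavily than a merely wasted opportunity, which steers the learned policy toward respecting the interference threshold.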