{"title":"Dynamic Channel Access and Power Control via Deep Reinforcement Learning","authors":"Ziyang Lu, M. C. Gursoy","doi":"10.1109/VTCFall.2019.8891391","DOIUrl":null,"url":null,"abstract":"Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements with power control.","PeriodicalId":6713,"journal":{"name":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","volume":"88 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VTCFall.2019.8891391","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning in addressing dynamic control problems, we employ deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate its performance under different utilities and reward mechanisms. We compare against optimal centralized strategies that require complete channel information and combine weighted minimum mean square error (WMMSE) based power control with an exhaustive search over all channel selection policies. We highlight the performance improvements achieved with power control.
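
To illustrate the joint decision structure described in the abstract, the following is a minimal sketch (not the authors' implementation) of how a deep Q-network can handle channel access and power control together: each discrete action index encodes a (channel, power-level) pair, and an epsilon-greedy policy selects over this joint action space. The network size, state dimension, and the counts of channels and power levels below are illustrative assumptions.

```python
# Minimal sketch of a joint (channel, power-level) DQN action space.
# All sizes and hyperparameters are assumptions for illustration only.

import torch
import torch.nn as nn

N_CHANNELS = 4        # assumed number of available channels
N_POWER_LEVELS = 5    # assumed number of discrete transmit-power levels
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS
STATE_DIM = 16        # assumed size of the local observation vector

class QNetwork(nn.Module):
    """Small MLP mapping a local observation to Q-values over joint actions."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(qnet: QNetwork, state: torch.Tensor, epsilon: float):
    """Epsilon-greedy choice of a joint (channel, power-level) action."""
    if torch.rand(1).item() < epsilon:
        action = torch.randint(N_ACTIONS, (1,)).item()
    else:
        with torch.no_grad():
            action = qnet(state).argmax().item()
    channel = action // N_POWER_LEVELS      # which channel to access
    power_level = action % N_POWER_LEVELS   # which transmit-power level to use
    return channel, power_level

# Example usage with a random observation:
qnet = QNetwork(STATE_DIM, N_ACTIONS)
obs = torch.randn(STATE_DIM)
print(select_action(qnet, obs, epsilon=0.1))
```

Encoding the two decisions as one flat discrete action keeps the Q-network output small (N_CHANNELS x N_POWER_LEVELS values) while still letting the agent learn channel selection and transmit power jointly; the reward and state design, which the paper evaluates under different utilities, is not reproduced here.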