Dynamic Channel Access and Power Control via Deep Reinforcement Learning

Ziyang Lu, M. C. Gursoy
{"title":"基于深度强化学习的动态通道访问和功率控制","authors":"Ziyang Lu, M. C. Gursoy","doi":"10.1109/VTCFall.2019.8891391","DOIUrl":null,"url":null,"abstract":"Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements with power control.","PeriodicalId":6713,"journal":{"name":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","volume":"88 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Dynamic Channel Access and Power Control via Deep Reinforcement Learning\",\"authors\":\"Ziyang Lu, M. C. Gursoy\",\"doi\":\"10.1109/VTCFall.2019.8891391\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. 
We highlight the performance improvements with power control.\",\"PeriodicalId\":6713,\"journal\":{\"name\":\"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)\",\"volume\":\"88 1\",\"pages\":\"1-5\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VTCFall.2019.8891391\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VTCFall.2019.8891391","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13

Abstract

Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements with power control.
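To make the joint action structure concrete, below is a minimal deep Q-learning sketch (in PyTorch), not the authors' implementation: the discrete action space is the cross-product of channel indices and quantized transmit-power levels, so a single Q-value argmax jointly selects which channel to access and at which power to transmit. The state definition, network sizes, exploration rate, and reward are illustrative assumptions.

```python
# Minimal deep Q-learning sketch for joint channel access and power control.
# All dimensions, the state definition, and the reward are assumptions for illustration.
import random
import torch
import torch.nn as nn

NUM_CHANNELS = 4          # assumed number of orthogonal channels
NUM_POWER_LEVELS = 5      # assumed quantization of the transmit power
NUM_ACTIONS = NUM_CHANNELS * NUM_POWER_LEVELS
STATE_DIM = 16            # e.g., recent channel choices + measured SINR/interference (assumed)
GAMMA = 0.9               # discount factor
EPSILON = 0.1             # exploration rate

class QNetwork(nn.Module):
    """MLP mapping the observed state to one Q-value per (channel, power-level) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state):
    """Epsilon-greedy selection over the joint (channel, power) action space."""
    if random.random() < EPSILON:
        action = random.randrange(NUM_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax().item())
    channel = action // NUM_POWER_LEVELS       # which channel to access
    power_level = action % NUM_POWER_LEVELS    # which quantized power to use
    return action, channel, power_level

def td_update(state, action, reward, next_state):
    """One-step Q-learning update; the reward would be the chosen utility, e.g. achieved rate."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A full agent would typically add an experience-replay buffer and a periodically updated target network; they are omitted here to keep the sketch short.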
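The centralized baseline combines exhaustive search over channel assignments with WMMSE-based power control. For reference, here is a sketch of the standard scalar WMMSE iteration for a K-user single-antenna interference channel; it follows the generic formulation rather than the paper's code, so the weights, initialization, and iteration count are assumptions.

```python
# Generic scalar WMMSE power-control iteration (standard formulation; details assumed).
import numpy as np

def wmmse_power(h, p_max, sigma2, weights=None, num_iters=100):
    """h[k, j]: channel gain magnitude from transmitter j to receiver k.
    Returns per-user transmit powers (a local optimum of the weighted sum rate)."""
    K = h.shape[0]
    alpha = np.ones(K) if weights is None else weights
    v = np.sqrt(p_max) * np.ones(K)              # transmit amplitudes, start at full power
    for _ in range(num_iters):
        # MMSE receive coefficients
        rx_power = sigma2 + (h ** 2) @ (v ** 2)  # total received power at each receiver
        u = np.diag(h) * v / rx_power
        # MSE weights
        w = 1.0 / (1.0 - u * np.diag(h) * v)
        # Transmit-amplitude update, projected onto the power constraint
        denom = (h.T ** 2) @ (alpha * w * u ** 2)
        v = alpha * w * u * np.diag(h) / denom
        v = np.clip(v, 0.0, np.sqrt(p_max))
    return v ** 2                                # powers p_k = v_k^2
```

For each candidate channel assignment, such an exhaustive-search baseline would run this iteration on the resulting interference channel and keep the assignment with the highest utility.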