Deep Reinforcement Learning for NextG Radio Access Network Slicing With Spectrum Coexistence

Yi Shi;Maice Costa;Tugba Erpek;Yalin E. Sagduyu
IEEE Networking Letters, vol. 5, no. 3, pp. 149-153. DOI: 10.1109/LNET.2023.3284665
Published: 2023-06-09. Available at https://ieeexplore.ieee.org/document/10147283/
Citations: 5

Abstract

Reinforcement learning (RL) is applied for dynamic admission control and resource allocation in NextG radio access network slicing. When sharing the spectrum with an incumbent user (that dynamically occupies frequency-time blocks), communication and computational resources are allocated to slicing requests, each with priority (weight), throughput, latency, and computational requirements. RL maximizes the total weight of granted requests over time beyond myopic, greedy, random, and first come, first served solutions. As the state-action space grows, Deep Q-network effectively admits requests and allocates resources as a low-complexity solution that is robust to sensing errors in detecting the incumbent user activity.
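The admission-control idea in the abstract can be illustrated with a minimal, hypothetical sketch: an agent observes each slicing request's priority weight and resource demand, and learns whether to grant or deny it so as to maximize the total granted weight under a resource budget. This uses tabular Q-learning as a toy stand-in for the paper's Deep Q-network; `CAPACITY`, `WEIGHTS`, and `DEMANDS` are invented toy values, not the paper's parameters, and the episode structure is an assumption for illustration only.

```python
import random

random.seed(0)

CAPACITY = 10            # resource blocks per episode (assumed toy value)
WEIGHTS = (1, 2, 3)      # request priorities (assumed)
DEMANDS = (1, 2, 4)      # blocks each request needs (assumed)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over states (free_blocks, weight, demand)."""
    Q = {}  # state -> [Q_deny, Q_grant]
    q = lambda s: Q.setdefault(s, [0.0, 0.0])
    for _ in range(episodes):
        reqs = [(random.choice(WEIGHTS), random.choice(DEMANDS))
                for _ in range(8)]
        free = CAPACITY
        for i, (w, d) in enumerate(reqs):
            s = (free, w, d)
            # epsilon-greedy action: 0 = deny, 1 = grant
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = int(q(s)[1] > q(s)[0])
            r = 0.0
            if a == 1 and d <= free:      # grant succeeds only if it fits
                r, free = float(w), free - d
            if i + 1 < len(reqs):         # bootstrap from the next request
                target = r + gamma * max(q((free, *reqs[i + 1])))
            else:
                target = r
            q(s)[a] += alpha * (target - q(s)[a])
    return Q

def granted_weight(Q, reqs):
    """Total weight granted by the greedy policy derived from Q."""
    free, total = CAPACITY, 0
    for w, d in reqs:
        qs = Q.get((free, w, d), [0.0, 0.0])
        if qs[1] > qs[0] and d <= free:
            total, free = total + w, free - d
    return total
```

Because the total demand of a typical episode exceeds `CAPACITY`, the learned policy must trade off weight against demand rather than admit first come, first served; the full paper additionally models incumbent-user activity, latency, and computational constraints, none of which this sketch captures.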