Policy network-based dual-agent deep reinforcement learning for multi-resource task offloading in multi-access edge cloud networks

Feng Chuan, Zhang Xu, Pengchao Han, Tianchun Ma, Xiaoxue Gong
{"title":"Policy network-based dual-agent deep reinforcement learning for multi-resource task offloading in multi-access edge cloud networks","authors":"Feng Chuan, Zhang Xu, Pengchao Han, Tianchun Ma, Xiaoxue Gong","doi":"10.23919/JCC.fa.2023-0383.202404","DOIUrl":null,"url":null,"abstract":"The Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the networks. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation resources, network resources, radio resources, and location-based resources, to provide multidimensional resources for intelligent applications in 5/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. It is a challenging problem to offload multi-resource task requests to the edge cloud aiming at maximizing benefits due to the heterogeneity of resources provided by devices. To address this issue, we mathematically model the task requests with multiple subtasks. Then, the problem of task offloading of multi-resource task requests is proved to be NP-hard. Furthermore, we propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL) based on the policy network, to optimize the benefits generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that the proposed algorithm can effectively improve the benefit of task offloading with higher resource utilization compared with baseline algorithms.","PeriodicalId":504777,"journal":{"name":"China Communications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"China Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/JCC.fa.2023-0383.202404","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the edge of the network. By bringing computation and storage capabilities closer to end-users and connected devices, MEC networks can support a wide range of applications. MEC networks can also leverage various types of resources, including computation resources, network resources, radio resources, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Because of the heterogeneity of the resources provided by devices, offloading multi-resource task requests to the edge cloud so as to maximize the resulting benefit is a challenging problem. To address this issue, we mathematically model task requests composed of multiple subtasks. We then prove that the offloading problem for multi-resource task requests is NP-hard. Furthermore, we propose a novel policy-network-based Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL) to optimize the benefit generated by offloading multi-resource task requests in MEC networks. Finally, simulation results show that, compared with baseline algorithms, the proposed algorithm effectively improves the benefit of task offloading while achieving higher resource utilization.
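
For readers unfamiliar with policy-network-based offloading, the toy sketch below illustrates the general mechanism the abstract refers to: a policy network scores candidate edge nodes for each subtask of a multi-resource request, samples an offloading target, and is updated by a policy gradient in the direction that increases the obtained benefit. This is a minimal, self-contained illustration under simplified assumptions (a linear softmax policy, random features, and a made-up benefit function); it is not the paper's NF_L_DA_DRL algorithm, which uses two agents and node/link features of the MEC network.

```python
# Illustrative sketch only: a REINFORCE-style softmax policy that assigns each
# subtask of a multi-resource request to an edge node. All names, features, and
# the reward model below are simplified assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 4      # candidate edge nodes
FEATURE_DIM = 6    # per-(subtask, node) feature vector, e.g. demand vs. residual capacity

# Single-layer softmax policy: logits = features @ W.
W = rng.normal(scale=0.1, size=(FEATURE_DIM, 1))


def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()


def select_node(subtask_node_features):
    """Sample an offloading target (edge node index) for one subtask."""
    logits = subtask_node_features @ W          # shape (NUM_NODES, 1)
    probs = softmax(logits.ravel())
    action = rng.choice(NUM_NODES, p=probs)
    return action, probs


def toy_benefit(action, subtask_node_features):
    """Stand-in reward: larger when the chosen node's features 'fit' the subtask."""
    return float(subtask_node_features[action].sum())


def train(episodes=200, lr=0.01):
    """Policy-gradient updates that push the policy toward higher-benefit placements."""
    global W
    for _ in range(episodes):
        # One episode = one task request consisting of three subtasks.
        feats = rng.random((3, NUM_NODES, FEATURE_DIM))
        grads, rewards = [], []
        for sub in feats:
            action, probs = select_node(sub)
            rewards.append(toy_benefit(action, sub))
            # Gradient of log pi(a|s) for a linear-softmax policy:
            # phi(a) - sum_b pi(b) * phi(b).
            one_hot = np.zeros(NUM_NODES)
            one_hot[action] = 1.0
            grads.append(sub.T @ (one_hot - probs))
        episode_return = sum(rewards)           # undiscounted total benefit
        for g in grads:
            W += lr * episode_return * g.reshape(W.shape)


if __name__ == "__main__":
    train()
    print("trained policy weights:", W.ravel())
```

A real implementation would replace the random features and toy reward with the actual node and link states of the MEC network and the benefit model defined in the paper, and would split the decision across two cooperating agents as the proposed algorithm does.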