Research on Task Offloading Based on Deep Reinforcement Learning for Internet of Vehicles

Yaoping Zeng, Yanwei Hu, Ting Yang
DOI: 10.1145/3573942.3573987
Published: 2022-09-23
Venue: Proceedings of the 2022 5th International Conference on Artificial Intelligence and Pattern Recognition
Citations: 0

Abstract

Mobile Edge Computing (MEC) is a promising technology that facilitates computational offloading and resource allocation in the Internet of Vehicles (IoV) environment. When a mobile device cannot meet its own data-processing demands, the task is offloaded to an MEC server, which effectively relieves network pressure, meets multi-task computing requirements, and ensures quality of service (QoS). For a multi-user, multi-MEC-server scenario, this paper proposes a Q-learning task offloading strategy based on an improved deep reinforcement learning policy (IDRLP) to obtain an optimal strategy for task offloading and resource allocation. Simulation results suggest that, compared with other benchmark schemes, the proposed algorithm achieves better performance in terms of delay, energy consumption, and weighted system cost, even across different numbers of tasks, users, and data sizes.
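The core mechanism described in the abstract, a Q-learning agent deciding whether each task runs locally or is offloaded to an MEC server so as to minimize a weighted delay/energy cost, can be sketched as a minimal toy. This is an illustration, not the paper's IDRLP algorithm: the state discretization, the cost function, and all hyperparameters below are assumptions made up for the example.

```python
import random

# Toy tabular Q-learning for a binary offloading decision:
# action 0 = execute locally, action 1 = offload to a MEC server.
# The state is a discretized task-size level; step_cost is a
# hypothetical stand-in for a weighted delay/energy objective.

N_LEVELS = 4                 # discretized task-size states (assumed)
ACTIONS = (0, 1)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed hyperparameters

def step_cost(state, action):
    """Hypothetical weighted cost: local execution grows with task
    size; offloading pays a fixed transmission overhead instead."""
    if action == 0:
        return 1.0 + 2.0 * state     # local: scales with task size
    return 2.0 + 0.5 * state         # offload: overhead + slow growth

q = [[0.0, 0.0] for _ in range(N_LEVELS)]

random.seed(0)
for _ in range(5000):
    s = random.randrange(N_LEVELS)   # a task of random size arrives
    # epsilon-greedy over costs (we minimize, so pick the min-Q action)
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = min(ACTIONS, key=lambda x: q[s][x])
    cost = step_cost(s, a)
    s_next = random.randrange(N_LEVELS)      # next task is independent
    # Standard Q-update, with min (not max) because lower cost is better
    q[s][a] += ALPHA * (cost + GAMMA * min(q[s_next]) - q[s][a])

# Learned policy: small tasks stay local, larger tasks get offloaded
policy = [min(ACTIONS, key=lambda a: q[s][a]) for s in range(N_LEVELS)]
print(policy)
```

Under these assumed costs the agent learns to keep the smallest tasks local (where local execution is cheapest) and to offload the larger ones (where the fixed transmission overhead is outweighed by the savings), which is the qualitative behavior the abstract attributes to the learned offloading strategy.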