Dual-timescale resource management for multi-type caching placement and multi-user computation offloading in Internet of Vehicle

IF 3.5 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Simulation Modelling Practice and Theory | Pub Date: 2024-10-23 | DOI: 10.1016/j.simpat.2024.103025
Dun Cao, Bo Peng, Yubin Wang, Fayez Alqahtani, Jinyu Zhang, Jin Wang
{"title":"Dual-timescale resource management for multi-type caching placement and multi-user computation offloading in Internet of Vehicle","authors":"Dun Cao ,&nbsp;Bo Peng ,&nbsp;Yubin Wang ,&nbsp;Fayez Alqahtani ,&nbsp;Jinyu Zhang ,&nbsp;Jin Wang","doi":"10.1016/j.simpat.2024.103025","DOIUrl":null,"url":null,"abstract":"<div><div>In Internet of Vehicle (IoV), edge computing can effectively reduce task processing delays and meet the real-time needs of connected-vehicle applications. However, since the requirements for caching and computing resources vary across heterogeneous vehicle requests, a new challenge is posed on the resource management in the three-tier cloud–edge–end architecture, particularly when multi users offload tasks in the same time. Our work comprehensively considers various scenarios involving the deployment of multiple caching types from multi-users and the distinct time scales of offloading and updating, then builds a joint optimization caching placement, computation offloading and computational resource allocation model, aiming to minimize overall latency. Meanwhile, to better solving the model, we propose the Multi-node Collaborative Caching, Offloading, and Resource Allocation Algorithm (MCCO-RAA). MCCO-RAA utilizes dual time scales to optimize the problem: employing a Bellman optimization idea-based multi-node collaborative greedy caching placement strategy at large time scales, and a computational offloading and resource allocation strategy based on a two-tier iterative Deep Deterministic Policy Gradient (DDPG) and cooperative game at small time scales. Experimental results demonstrate that our proposed scheme achieves a 28% reduction in overall system latency compared to the baseline scheme, with smoother latency variations under different parameters.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"138 ","pages":"Article 103025"},"PeriodicalIF":3.5000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Simulation Modelling Practice and Theory","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1569190X24001394","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

In the Internet of Vehicles (IoV), edge computing can effectively reduce task processing delays and meet the real-time needs of connected-vehicle applications. However, since the requirements for caching and computing resources vary across heterogeneous vehicle requests, a new challenge is posed for resource management in the three-tier cloud–edge–end architecture, particularly when multiple users offload tasks at the same time. Our work comprehensively considers scenarios involving the deployment of multiple caching types from multiple users and the distinct time scales of offloading and updating, and builds a joint optimization model for caching placement, computation offloading, and computational resource allocation, aiming to minimize overall latency. Meanwhile, to better solve this model, we propose the Multi-node Collaborative Caching, Offloading, and Resource Allocation Algorithm (MCCO-RAA). MCCO-RAA optimizes the problem on dual time scales: a multi-node collaborative greedy caching placement strategy based on the Bellman optimization idea at the large time scale, and a computation offloading and resource allocation strategy based on a two-tier iterative Deep Deterministic Policy Gradient (DDPG) and a cooperative game at the small time scale. Experimental results demonstrate that our proposed scheme achieves a 28% reduction in overall system latency compared to the baseline scheme, with smoother latency variations under different parameters.
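To make the dual-timescale structure concrete, the sketch below shows a minimal control loop in that spirit: a cache placement step that runs once per large-timescale frame, and a per-slot offloading decision inside each frame. This is not the paper's MCCO-RAA: the delay numbers, content model, and helper names are invented for illustration, the popularity-density greedy rule stands in for the Bellman-idea-based multi-node collaborative caching strategy, and a myopic minimum-delay rule stands in for the two-tier iterative DDPG with cooperative game.

```python
"""Minimal sketch of a dual-timescale caching/offloading loop (illustrative only).

Assumptions: toy delay constants, known content popularity, a single edge node.
The greedy and myopic rules below are simplified stand-ins for the paper's
large-timescale collaborative caching and small-timescale DDPG components.
"""
import random
from dataclasses import dataclass, field


@dataclass
class Content:
    cid: int
    size_mb: float      # cached data size
    popularity: float   # request probability (assumed known here)


@dataclass
class EdgeNode:
    capacity_mb: float
    cached: set = field(default_factory=set)


def greedy_cache_placement(node: EdgeNode, catalog: list) -> None:
    """Large timescale: refill the edge cache greedily by popularity per MB."""
    node.cached.clear()
    used = 0.0
    for c in sorted(catalog, key=lambda c: c.popularity / c.size_mb, reverse=True):
        if used + c.size_mb <= node.capacity_mb:
            node.cached.add(c.cid)
            used += c.size_mb


def offload_decision(node: EdgeNode, content: Content):
    """Small timescale: pick the lowest-delay target for one request (toy model)."""
    delays = {
        "local": 8.0,                                        # ms, on-vehicle compute
        "edge": 2.0 if content.cid in node.cached else 6.0,  # cache miss penalty
        "cloud": 12.0,                                       # backhaul + cloud compute
    }
    return min(delays.items(), key=lambda kv: kv[1])


def run(num_frames: int = 3, slots_per_frame: int = 5) -> None:
    random.seed(0)
    catalog = [Content(i, random.uniform(5, 20), random.random()) for i in range(30)]
    node = EdgeNode(capacity_mb=60.0)
    for frame in range(num_frames):
        greedy_cache_placement(node, catalog)        # large-timescale cache update
        total = 0.0
        for _ in range(slots_per_frame):             # small-timescale offloading
            req = random.choices(catalog, weights=[c.popularity for c in catalog])[0]
            _, delay = offload_decision(node, req)
            total += delay
        print(f"frame {frame}: cached {len(node.cached)} items, "
              f"mean delay {total / slots_per_frame:.2f} ms")


if __name__ == "__main__":
    run()
```

The point of the sketch is only the timing structure: cache contents change slowly (once per frame), while offloading targets are chosen every slot against the current cache state, which is the separation the paper exploits.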
Source Journal
Simulation Modelling Practice and Theory
Category: Engineering & Technology - Computer Science, Interdisciplinary Applications
CiteScore: 9.80
Self-citation rate: 4.80%
Articles per year: 142
Review time: 21 days
Journal description: The journal Simulation Modelling Practice and Theory provides a forum for original, high-quality papers dealing with any aspect of systems simulation and modelling. The journal aims at being a reference and a powerful tool to all those professionally active and/or interested in the methods and applications of simulation. Submitted papers will be peer reviewed and must significantly contribute to modelling and simulation in general or use modelling and simulation in application areas. Paper submission is solicited on:
• theoretical aspects of modelling and simulation including formal modelling, model-checking, random number generators, sensitivity analysis, variance reduction techniques, experimental design, meta-modelling, methods and algorithms for validation and verification, selection and comparison procedures etc.;
• methodology and application of modelling and simulation in any area, including computer systems, networks, real-time and embedded systems, mobile and intelligent agents, manufacturing and transportation systems, management, engineering, biomedical engineering, economics, ecology and environment, education, transaction handling, etc.;
• simulation languages and environments including those specific to distributed computing, grid computing, high performance computers or computer networks, etc.;
• distributed and real-time simulation, simulation interoperability;
• tools for high performance computing simulation, including dedicated architectures and parallel computing.
Latest articles in this journal
A mixed crowd movement model incorporating chasing behavior
Quality matters: A comprehensive comparative study of edge computing simulators
Advanced FOPoP technology in heterogeneous integration: Finite element analysis with element birth and death technique
Improvement and performance analysis of constitutive model for rock blasting damage simulation
Cost optimization and ANFIS computing for M/M/(R+c)/N queue under admission control policy and server breakdown