Multiagent Reinforcement Learning-Based Multimodel Running Latency Optimization in Vehicular Edge Computing Paradigm

IF 4.0 · JCR Q1 (Computer Science, Information Systems) · CAS Tier 3 (Computer Science) · IEEE Systems Journal · Pub Date: 2024-09-04 · DOI: 10.1109/JSYST.2024.3407213
Peisong Li; Ziren Xiao; Xinheng Wang; Muddesar Iqbal; Pablo Casaseca-de-la-Higuera
IEEE Systems Journal, vol. 18, no. 4, pp. 1860–1870. Journal Article. https://ieeexplore.ieee.org/document/10664612/
Citations: 0

Abstract

With the advancement of edge computing, more and more intelligent applications are being deployed at the edge, in proximity to end devices, to provide in-vehicle services. However, implementing some complex services requires the collaboration of multiple AI models to handle and analyze various types of sensory data. In this context, the simultaneous scheduling and execution of multiple model-inference tasks is an emerging scenario that faces many challenges. One of the major challenges is reducing the completion time of time-sensitive services. To solve this problem, this article proposes a multiagent reinforcement learning-based multimodel inference task scheduling method, with a newly designed reward function that jointly optimizes the overall running time and load imbalance. First, the multiagent proximal policy optimization (MAPPO) algorithm is used to design the task scheduling method. Second, the designed method generates near-optimal task scheduling decisions and dynamically allocates inference tasks to different edge applications based on their status and task characteristics. Third, an assessment index, quality of method, is defined, and the proposed method is compared with five benchmark methods. Experimental results reveal that the proposed method reduces the running time of multimodel inference by at least 25%, approaching the optimal solution.
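The abstract describes a reward function that jointly penalizes overall running time and load imbalance, but does not give its formula. A minimal illustrative sketch, assuming a weighted sum of makespan (the busiest application's load) and the standard deviation of per-application load, with hypothetical weights `ALPHA` and `BETA`:

```python
import numpy as np

# Hypothetical weights; the paper's actual coefficients are not given in the abstract.
ALPHA, BETA = 1.0, 0.5

def reward(app_loads):
    """Sketch of a joint reward over per-edge-application load.

    app_loads: total inference time currently assigned to each edge application.
    Returns a negative cost, so balanced, fast schedules score higher.
    """
    loads = np.asarray(app_loads, dtype=float)
    makespan = loads.max()   # overall running time is bounded by the busiest application
    imbalance = loads.std()  # load imbalance across edge applications
    return -(ALPHA * makespan + BETA * imbalance)
```

Under this sketch, a balanced assignment such as `[3, 3, 3]` scores higher than a skewed one such as `[9, 0, 0]`, which is the behavior the joint objective in the abstract calls for.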
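The abstract also names an assessment index, "quality of method" (QoM), without defining it. One plausible normalization, offered here purely as an assumption, compares a scheduler's achieved running time against the optimal (e.g., exhaustive-search) running time, so that a value of 1.0 means the method matched the optimum:

```python
def quality_of_method(optimal_time, achieved_time):
    """Hypothetical QoM: ratio of optimal to achieved running time (1.0 = optimal).

    The paper's actual definition may differ; this is only a sketch of how such
    an index could make methods with different absolute runtimes comparable.
    """
    if achieved_time <= 0 or optimal_time <= 0:
        raise ValueError("running times must be positive")
    return optimal_time / achieved_time
```

For example, a scheduler that takes 12.5 s on a workload whose optimal schedule takes 10 s would score 0.8 under this definition.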
Source journal: IEEE Systems Journal (Engineering & Technology – Telecommunications)
CiteScore: 9.80
Self-citation rate: 6.80%
Articles per year: 572
Review time: 4.9 months
About the journal: This publication provides a systems-level, focused forum for application-oriented manuscripts that address complex systems and systems-of-systems of national and global significance. It aims to encourage and facilitate cooperation and interaction among IEEE Societies with systems-level and systems-engineering interests, and to attract non-IEEE contributors and readers from around the globe. The IEEE Systems Council's role is to address issues in new ways that are not solvable within the domains of existing IEEE or other societies or global organizations, because these problems do not fit within traditional hierarchical boundaries. For example, disaster response, such as that triggered by Hurricane Katrina, tsunamis, or volcanic eruptions, is not solvable by pure engineering solutions; the paradigm must be changed and enlarged to include systems issues.