Graph Convolutional Reinforcement Learning for Dependent Task Allocation in Edge Computing

Shiyao Ding, Donghui Lin, Xingxuan Zhou
{"title":"Graph Convolutional Reinforcement Learning for Dependent Task Allocation in Edge Computing","authors":"Shiyao Ding, Donghui Lin, Xingxuan Zhou","doi":"10.1109/ICA54137.2021.00011","DOIUrl":null,"url":null,"abstract":"In edge computing, an important problem is how to allocate dependent tasks to resource-limited edge servers, where some tasks can only be performed after accomplishing some other tasks. Most related studies assume that server status remains unchanged, which might be invalid in some real-world scenarios. Thus, this paper studies the new problem of how to dynamically allocate dependent tasks in resource-limited edge computing. This problem poses two challenges: 1) how to cope with dynamic changes in server status and task arrival, and 2) how to handle the dependency information for decisionmaking in task allocation. Our solution is a graph convolutional reinforcement learning-based task-allocation agent consisting of an encoding part and a decision-making part. The encoding part represents the dependent tasks as directed acyclic graphs and employs a graph convolutional network (GCN) to embed the dependency information of the tasks. It can effectively deal with the dependency and so permit decision-making. The decision-making part formulates the task allocation problem as a Markov decision process to cope with the dynamic changes. Specially, the agent employs deep reinforcement learning to achieve dynamic decision-making for task allocation with the target of optimizing some metric (e.g., minimizing delay costs and energy cost). Experiments verify that our algorithm offers significantly better performance than the existing algorithms examined.","PeriodicalId":273320,"journal":{"name":"2021 IEEE International Conference on Agents (ICA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Agents (ICA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICA54137.2021.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In edge computing, an important problem is how to allocate dependent tasks to resource-limited edge servers, where some tasks can only be performed after other tasks have been completed. Most related studies assume that server status remains unchanged, which may not hold in some real-world scenarios. This paper therefore studies the new problem of dynamically allocating dependent tasks in resource-limited edge computing. The problem poses two challenges: 1) how to cope with dynamic changes in server status and task arrival, and 2) how to handle the dependency information when making task-allocation decisions. Our solution is a task-allocation agent based on graph convolutional reinforcement learning, consisting of an encoding part and a decision-making part. The encoding part represents the dependent tasks as directed acyclic graphs and employs a graph convolutional network (GCN) to embed the tasks' dependency information, so that the dependencies can be handled effectively during decision-making. The decision-making part formulates the task-allocation problem as a Markov decision process to cope with the dynamic changes. Specifically, the agent employs deep reinforcement learning to make dynamic task-allocation decisions with the goal of optimizing a given metric (e.g., minimizing delay and energy costs). Experiments verify that our algorithm offers significantly better performance than the existing algorithms examined.
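The abstract describes two components: a GCN encoder over the task dependency DAG and a deep-RL decision maker over server states. The paper itself provides no code, so the snippet below is only a minimal NumPy sketch of how such a pipeline could be wired together. Every name, dimension, feature, and weight is an illustrative assumption; the learned networks (the GCN weights and the Q-function) are replaced by untrained random placeholders just to show the data flow.

```python
# Minimal sketch (not the authors' implementation) of the two-part agent the
# abstract describes: a GCN encoder over the task DAG, plus a Q-style head that
# picks a server for a ready task. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# --- Encoding part: dependent tasks as a DAG, embedded with one GCN layer ---
# adjacency[i, j] = 1 means task j can only start after task i finishes.
num_tasks, feat_dim, embed_dim = 5, 4, 8
adjacency = np.array([[0, 1, 1, 0, 0],
                      [0, 0, 0, 1, 0],
                      [0, 0, 0, 1, 0],
                      [0, 0, 0, 0, 1],
                      [0, 0, 0, 0, 0]], dtype=float)
task_features = rng.normal(size=(num_tasks, feat_dim))  # e.g. workload, data size

# Symmetrically normalized adjacency with self-loops, as in a standard GCN layer.
a_hat = adjacency + adjacency.T + np.eye(num_tasks)
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt

w_gcn = rng.normal(size=(feat_dim, embed_dim))           # placeholder GCN weights
task_embed = np.maximum(a_norm @ task_features @ w_gcn, 0.0)  # ReLU(A_hat X W)

# --- Decision-making part: MDP state -> Q-values over candidate servers ---
num_servers = 3
server_state = rng.normal(size=(num_servers, 2))  # e.g. remaining CPU, queue length

def q_values(task_vec, servers, w_q):
    """Score every server for one ready task (stand-in for the learned Q-network)."""
    pairs = np.hstack([np.tile(task_vec, (len(servers), 1)), servers])
    return pairs @ w_q

w_q = rng.normal(size=(embed_dim + 2,))           # placeholder Q-head weights
ready_task = 0                                    # a task whose predecessors are done
action = int(np.argmax(q_values(task_embed[ready_task], server_state, w_q)))
print(f"allocate task {ready_task} to server {action}")
```

In the actual agent, the GCN weights and the Q-head would be trained end to end with deep reinforcement learning against a reward reflecting the delay and energy costs of each allocation, and the server states and ready-task set would be updated as the MDP evolves; here they are fixed random values only to illustrate the data flow from DAG encoding to allocation decision.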