DRLQ: A Deep Reinforcement Learning-based Task Placement for Quantum Cloud Computing

Hoa T. Nguyen, Muhammad Usman, Rajkumar Buyya
arXiv:2407.02748 · arXiv - CS - Emerging Technologies · Published 2024-07-03

Abstract

The quantum cloud computing paradigm presents unique challenges in task placement due to the dynamic and heterogeneous nature of quantum computation resources. Traditional heuristic approaches fall short in adapting to the rapidly evolving landscape of quantum computing. This paper proposes DRLQ, a novel Deep Reinforcement Learning (DRL)-based technique for task placement in quantum cloud computing environments, addressing the optimization of task completion time and quantum task scheduling efficiency. It leverages the Deep Q Network (DQN) architecture, enhanced with the Rainbow DQN approach, to create a dynamic task placement strategy. This approach is one of the first in the field of quantum cloud resource management, enabling adaptive learning and decision-making for quantum cloud environments and effectively optimizing task placement based on changing conditions and resource availability. We conduct extensive experiments using the QSimPy simulation toolkit to evaluate the performance of our method, demonstrating substantial improvements in task execution efficiency and a reduction in the need to reschedule quantum tasks. Our results show that utilizing the DRLQ approach for task placement can significantly reduce total quantum task completion time by 37.81% to 72.93% and prevent task rescheduling attempts compared to other heuristic approaches.
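The abstract describes a DQN-based agent that observes the state of the quantum cloud and selects a backend for each incoming task. The paper's actual architecture (Rainbow DQN trained in QSimPy) is not reproduced here; the following is only a minimal sketch of the core idea — a Q-network mapping a task/backend feature vector to per-backend value estimates, with epsilon-greedy placement. All feature names, dimensions, and the untrained two-layer network are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state encoding (assumption, not the paper's exact features):
# task features (qubits required, circuit depth) concatenated with
# per-backend features (qubit capacity, queue length) for each backend.
N_BACKENDS = 3
STATE_DIM = 2 + 2 * N_BACKENDS
HIDDEN = 16

# Randomly initialised two-layer Q-network, standing in for a trained DQN.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_BACKENDS))

def q_values(state):
    """Forward pass: state vector -> estimated Q-value per backend."""
    h = np.maximum(0.0, state @ W1)  # ReLU hidden layer
    return h @ W2

def place_task(state, epsilon=0.1):
    """Epsilon-greedy placement: usually exploit argmax Q, sometimes explore.
    Training (omitted here) would reward low task completion time."""
    if rng.random() < epsilon:
        return int(rng.integers(N_BACKENDS))
    return int(np.argmax(q_values(state)))

# Example: a 5-qubit, depth-20 task; backends given as (capacity, queue_len).
task = [5, 20]
backends = [(7, 3), (27, 10), (5, 1)]
state = np.array(task + [f for b in backends for f in b], dtype=float)
action = place_task(state, epsilon=0.0)
print("place task on backend", action)
```

In the full method, the Q-network would be trained with the Rainbow DQN extensions the abstract mentions, and the reward would penalise completion time and rescheduling; this sketch only shows the placement decision itself.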