Deep Reinforcement Learning Based Rendering Service Placement for Cloud Gaming in Mobile Edge Computing Systems

Yongqiang Gao, Zhihan Li
{"title":"Deep Reinforcement Learning Based Rendering Service Placement for Cloud Gaming in Mobile Edge Computing Systems","authors":"Yongqiang Gao, Zhihan Li","doi":"10.1109/COMPSAC57700.2023.00073","DOIUrl":null,"url":null,"abstract":"In recent years, the advancement of 4G/5G network technologies and smart devices has led to an increasing demand for smooth, massively multiplayer online games on mobile terminals. These games necessitate high performance and heavy workloads, often consuming substantial amounts of computing and storage resources while imposing strict latency requirements. However, due to the limited resources of end devices, such tasks cannot be efficiently and independently executed. The traditional solution typically involves processing gaming tasks at centralized cloud servers. However, this approach introduces issues such as bandwidth pressure, high latency, load imbalance, and elevated costs. Recently, mobile edge computing (MEC) has gained popularity, and its low-latency capabilities can be integrated with cloud gaming to enhance the gaming performance experience. In this paper, we explore the offloading and placement of rendering services in a scenario that combines MEC with cloud gaming. We propose a model-free algorithm based on deep reinforcement learning to learn the optimal task offloading and placement policy, which optimizes a combination of four metrics: latency, cost, bandwidth, and load balancing. Additionally, the algorithm predicts future bandwidth using LSTM, significantly improving the player's gaming experience and fairness. 
Simulation results demonstrate that our proposed task placement strategy outperforms state-of-the-art methods applied to similar problems.","PeriodicalId":296288,"journal":{"name":"2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COMPSAC57700.2023.00073","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, advances in 4G/5G network technologies and smart devices have driven growing demand for smooth, massively multiplayer online games on mobile terminals. These games demand high performance and impose heavy workloads, consuming substantial computing and storage resources while subjecting the system to strict latency requirements. However, because end devices have limited resources, such tasks cannot be executed efficiently on the devices alone. The traditional solution processes gaming tasks at centralized cloud servers, but this approach introduces bandwidth pressure, high latency, load imbalance, and elevated costs. Recently, mobile edge computing (MEC) has gained popularity, and its low-latency capabilities can be combined with cloud gaming to enhance the gaming experience. In this paper, we explore the offloading and placement of rendering services in a scenario that combines MEC with cloud gaming. We propose a model-free deep reinforcement learning algorithm that learns the optimal task offloading and placement policy, optimizing a combination of four metrics: latency, cost, bandwidth, and load balancing. Additionally, the algorithm predicts future bandwidth using an LSTM, significantly improving players' gaming experience and fairness. Simulation results demonstrate that our proposed task placement strategy outperforms state-of-the-art methods on similar problems.
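The combined objective described above — latency, cost, bandwidth, and load balancing folded into a single signal for the reinforcement-learning agent — can be sketched as a weighted scalar reward. The weights, normalization constants, and function name below are illustrative assumptions for this sketch; the paper does not publish its exact reward formulation here.

```python
def placement_reward(latency_ms, cost, bandwidth_mbps, server_loads,
                     w_lat=0.3, w_cost=0.2, w_bw=0.3, w_bal=0.2):
    """Hypothetical scalar reward combining the four metrics named in the
    abstract. Weights and normalization constants are illustrative only."""
    # Lower latency and cost are better, so they contribute negatively.
    lat_term = -latency_ms / 100.0
    cost_term = -cost / 10.0
    # Higher available bandwidth is better.
    bw_term = bandwidth_mbps / 50.0
    # Load balance: penalize the spread between busiest and idlest server.
    bal_term = -(max(server_loads) - min(server_loads))
    return (w_lat * lat_term + w_cost * cost_term
            + w_bw * bw_term + w_bal * bal_term)

# A balanced, low-latency placement should score above an imbalanced,
# high-latency one, steering the agent toward better placements.
good = placement_reward(20, 1.0, 40, [0.5, 0.5, 0.5])
bad = placement_reward(120, 5.0, 10, [0.9, 0.1, 0.5])
```

In a model-free setup such as the one the paper describes, a reward of this shape would be computed after each placement decision and fed back to the agent, with the weights tuned to trade off the four metrics.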