Efficient Resource Allocation Policy for Cloud Edge End Framework by Reinforcement Learning
Chun-An Yang, Hongli Xu, Shixiao Fan, Xuan Cheng, Minghui Liu, Xiaomin Wang
2022 IEEE 8th International Conference on Computer and Communications (ICCC), published 2022-12-09
DOI: 10.1109/ICCC56324.2022.10065844
Citations: 0
Abstract
Recently, Mobile Edge Cloud Computing (MECC) has emerged as a promising partial-offloading paradigm for providing computing services. However, designing computation resource allocation policies for the MECC network inevitably encounters a challenging delay-sensitive two-queue optimization problem. Specifically, the coupled computation resource allocation of the edge processing queue and the cloud processing queue makes it difficult to guarantee end-to-end delay requirements. This study investigates this problem under stochastic computation request arrivals, stochastic service times, and dynamic computation resources. We first model the MECC network as a two-stage tandem queue consisting of two sequential computation processing queues, each with multiple servers. A Deep Reinforcement Learning (DRL) algorithm is then applied to learn a computation speed adjustment policy for the tandem queue, which provides end-to-end delay guarantees for multiple mobile applications while preventing overuse of the total computation resources of the edge and cloud servers. Finally, extensive simulation results demonstrate that our approach achieves better performance than alternatives in dynamic network environments.
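To illustrate the tandem-queue model described above, the following is a minimal sketch of the end-to-end delay in a two-stage tandem queue, simplified to single-server M/M/1 stages (the paper considers multi-server queues and learns the service rates with DRL; the function name and parameters here are illustrative, not from the paper). By Burke's theorem, the departure process of a stable M/M/1 stage is Poisson at the arrival rate, so the two stages decouple and their mean sojourn times add.

```python
def tandem_mm1_delay(lam, mu_edge, mu_cloud):
    """Mean end-to-end sojourn time of a two-stage M/M/1 tandem queue.

    lam      : Poisson arrival rate of computation requests
    mu_edge  : service (computation) rate of the edge stage
    mu_cloud : service (computation) rate of the cloud stage

    Each stable M/M/1 stage contributes a mean sojourn time of
    1 / (mu - lam), and the stage delays add along the tandem.
    """
    assert lam < mu_edge and lam < mu_cloud, "both stages must be stable"
    return 1.0 / (mu_edge - lam) + 1.0 / (mu_cloud - lam)


# Raising either stage's computation speed lowers the end-to-end delay,
# which is the lever the learned speed-adjustment policy controls.
delay = tandem_mm1_delay(lam=1.0, mu_edge=2.0, mu_cloud=3.0)
```

The coupling the abstract refers to is visible here: the same end-to-end delay target can be met by many (mu_edge, mu_cloud) pairs, so the policy must trade edge resources against cloud resources under a total-resource budget.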