Authors: Yibiao Fan, Xiaowei Cai
Journal: EURASIP Journal on Advances in Signal Processing, vol. 23, no. 1
DOI: 10.1186/s13634-024-01142-2
Published: 2024-04-08 (Journal Article)
A deep reinforcement approach for computation offloading in MEC dynamic networks
In this study, we investigate the challenges associated with dynamic time-slot server selection in mobile edge computing (MEC) systems. We consider the fluctuating nature of user access at edge servers and the various factors that influence server workload, including offloading policies, offloading ratios, users' transmission power, and the servers' reserved capacity. To streamline edge server selection with an eye on long-term optimization, we cast the problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL)-based algorithm as a solution. Our approach learns the selection strategy by analyzing the performance of server selections in previous iterations. Simulation results show that our DRL-based algorithm surpasses the benchmark schemes, achieving the lowest average latency.
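The abstract does not give the paper's network architecture or training details, but the core idea (cast per-slot server selection as an MDP whose reward penalizes latency, and learn the policy from past selections) can be illustrated with a simpler reinforcement-learning analogue. The sketch below uses tabular Q-learning in place of a deep network, and every specific (number of servers, load levels, the toy latency model and load dynamics) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 3        # hypothetical number of edge servers
N_LOAD_LEVELS = 4    # discretized workload levels per server (assumed)
EPISODES = 5000

# Q-table over (joint discretized load state, chosen server).
n_states = N_LOAD_LEVELS ** N_SERVERS
Q = np.zeros((n_states, N_SERVERS))

def encode(loads):
    """Map a tuple of per-server load levels to a single state index."""
    idx = 0
    for lv in loads:
        idx = idx * N_LOAD_LEVELS + lv
    return idx

def step(loads, action):
    """Toy environment: latency grows with the chosen server's load."""
    latency = 1.0 + loads[action] + 0.1 * rng.standard_normal()
    # Chosen server gets busier; the others drain (simplified dynamics).
    new = [min(lv + 1, N_LOAD_LEVELS - 1) if i == action
           else max(lv - 1, 0) for i, lv in enumerate(loads)]
    return new, -latency  # reward is negative latency

alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
for _ in range(EPISODES):
    loads = list(rng.integers(0, N_LOAD_LEVELS, N_SERVERS))
    for _ in range(20):  # time slots per episode
        s = encode(loads)
        a = int(rng.integers(N_SERVERS)) if rng.random() < eps \
            else int(np.argmax(Q[s]))
        loads, r = step(loads, a)
        s2 = encode(loads)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

# The greedy policy should route load toward the lightest server.
best = int(np.argmax(Q[encode([3, 0, 3])]))
print(best)
```

In the paper's setting the state space is continuous and high-dimensional, which is why a deep network approximates the Q-function (or policy) instead of a table; the update rule, however, has the same shape as the temporal-difference step above.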
Journal introduction:
The aim of the EURASIP Journal on Advances in Signal Processing is to highlight the theoretical and practical aspects of signal processing in new and emerging technologies. The journal is directed as much at the practicing engineer as at the academic researcher. Authors of articles with novel contributions to the theory and/or practice of signal processing are welcome to submit their articles for consideration.