Spectrum-Aware Mobile Edge Computing for UAVs Using Reinforcement Learning
Babak Badnava, Taejoon Kim, Kenny Cheung, Zaheer Ali, M. Hashemi
2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 376-380, December 2021
DOI: 10.1145/3453142.3491414
Citations: 4
Abstract
We consider the problem of task offloading by unmanned aerial vehicles (UAVs) using mobile edge computing (MEC). In this context, each UAV decides whether to offload its computation task to a more powerful MEC server (e.g., a base station) or to perform the task locally. In this paper, we propose a spectrum-aware decision-making framework such that each agent can dynamically select one of the available channels for offloading. To this end, we develop a deep reinforcement learning (DRL) framework in which each UAV either selects a channel for task offloading or performs the computation locally. For the numerical results, which are based on a deep Q-network (DQN), we consider a combination of energy consumption and task completion time as the reward. Simulation results based on low-band, mid-band, and high-band channels demonstrate that the DQN agents efficiently learn the environment and dynamically adjust their actions to maximize the long-term reward.
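As a rough sketch of the decision structure described in the abstract, the Python snippet below encodes the agent's action space (local computation versus offloading over one of K channels) and a reward that combines energy consumption and task completion time, followed by a standard epsilon-greedy action selection. The weighting factor `beta`, the helper names, and the linear cost combination are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch based only on the abstract: action 0 = compute locally;
# action k (1..K) = offload the task over channel k (e.g., low-, mid-, or
# high-band). The weighting `beta` is an assumption, not the paper's model.

K = 3                    # number of available channels (assumed: low/mid/high band)
NUM_ACTIONS = K + 1      # 0: local computation, 1..K: offload on channel k

def reward(energy_j: float, completion_s: float, beta: float = 0.5) -> float:
    """Negative weighted sum of energy consumption and task completion time,
    so maximizing long-term reward minimizes both costs (weights assumed)."""
    return -(beta * energy_j + (1.0 - beta) * completion_s)

def epsilon_greedy(q_values: np.ndarray, eps: float) -> int:
    """Standard epsilon-greedy policy over per-action Q-value estimates,
    as a DQN agent would use during training."""
    if np.random.rand() < eps:
        return int(np.random.choice(len(q_values)))
    return int(np.argmax(q_values))

if __name__ == "__main__":
    q = np.array([-2.1, -1.4, -0.9, -1.7])  # example Q-estimates per action
    a = epsilon_greedy(q, eps=0.1)
    print(f"chosen action: {a} (0 = local, 1..{K} = offload on channel)")
```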