{"title":"Prediction Based Sub-Task Offloading in Mobile Edge Computing","authors":"Kitae Kim, Jared Lynskey, S. Kang, C. Hong","doi":"10.1109/ICOIN.2019.8718183","DOIUrl":null,"url":null,"abstract":"Mobile Edge Cloud Computing has been developed and introduced to provide low-latency service in close proximity to users. In this environment., resource constrained UE (user equipment) incapable to execute complex applications (i.e VR/AR., Deep Learning., Image Processing Applications) can dynamically offload computationally demanding tasks to neighboring MEC nodes. To process tasks even faster with MEC nodes., we can divide one task into several sub-tasks and offload to multiple MEC nodes simultaneously., thereby each sub-task will be processed in parallel. In this paper., we predict the total processing duration of each task on each candidate MEC node using Linear Regression. According to the previously observed state of each MEC node., we offload sub-tasks to their respective edge node. We also developed a monitoring module at core cloud. The results show a decrease in execution duration when we offload an entire application to one edge node compared with local execution.","PeriodicalId":422041,"journal":{"name":"2019 International Conference on Information Networking (ICOIN)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Information Networking (ICOIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOIN.2019.8718183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
Mobile Edge Cloud Computing has been developed and introduced to provide low-latency services in close proximity to users. In this environment, resource-constrained user equipment (UE) that is incapable of executing complex applications (e.g., VR/AR, deep learning, and image processing applications) can dynamically offload computationally demanding tasks to neighboring MEC nodes. To process tasks even faster with MEC nodes, we can divide one task into several sub-tasks and offload them to multiple MEC nodes simultaneously, so that each sub-task is processed in parallel. In this paper, we predict the total processing duration of each task on each candidate MEC node using linear regression. Based on the previously observed state of each MEC node, we offload sub-tasks to their respective edge nodes. We also developed a monitoring module at the core cloud. The results show a decrease in execution duration when we offload an entire application to one edge node, compared with local execution.
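To illustrate the idea described in the abstract, the sketch below shows one way such a scheme could look in practice: a linear-regression model per candidate MEC node predicts processing duration from previously observed node state, and each sub-task is then greedily assigned to the node with the smallest predicted duration. This is not the authors' implementation; the feature set (CPU load, queue length, sub-task size), the node names, the sample history, and the greedy assignment rule are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): per-node linear regression
# over monitored state, then greedy sub-task-to-node assignment by
# smallest predicted processing duration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monitoring history per MEC node:
# features = [cpu_load (%), queue_length, sub_task_size (MB)], target = duration (ms)
history = {
    "mec-1": (np.array([[30, 2, 10], [55, 4, 12], [70, 6, 8], [40, 3, 15]]),
              np.array([120.0, 210.0, 260.0, 170.0])),
    "mec-2": (np.array([[20, 1, 10], [60, 5, 12], [80, 7, 8], [35, 2, 15]]),
              np.array([100.0, 230.0, 300.0, 150.0])),
}

# Fit one regressor per candidate node from its observed state/duration pairs.
models = {node: LinearRegression().fit(X, y) for node, (X, y) in history.items()}

def predict_duration(node, cpu_load, queue_len, size_mb):
    """Predicted processing duration (ms) of a sub-task on the given node."""
    return float(models[node].predict([[cpu_load, queue_len, size_mb]])[0])

def assign_subtasks(subtasks, node_states):
    """Greedily map each sub-task to the node with the lowest predicted duration."""
    assignment = {}
    for task_id, size_mb in subtasks:
        best = min(
            node_states,
            key=lambda n: predict_duration(n, *node_states[n], size_mb),
        )
        assignment[task_id] = best
    return assignment

# Example: current (cpu_load, queue_length) as reported by a monitoring module.
states = {"mec-1": (45, 3), "mec-2": (65, 5)}
print(assign_subtasks([("sub-0", 10), ("sub-1", 14)], states))
```

In this toy setup the per-node models stand in for the paper's prediction step, and the monitoring snapshot in `states` plays the role of the core-cloud monitoring module; a real deployment would retrain or update the regressors as new observations arrive.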