{"title":"基于强化学习的工业物联网环境下最优任务卸载决策","authors":"S. Koo, Yujin Lim","doi":"10.1109/ECICE52819.2021.9645710","DOIUrl":null,"url":null,"abstract":"In the Industrial Internet of Things (IIoT), various types of tasks are processed for the small quantity batch production. But there are many challenges due to the limited battery lifespan and computational capabilities of devices. To overcome the limitations, Mobile Edge Computing (MEC) has been introduced. In MEC, a task offloading technique to execute the tasks attracts much attention. A MEC server (MECS) has limited computational capability, which increases the burden on the server and a cellular network if a larger number of tasks are offloaded to the server. It can reduce the quality of service for task execution. Thus, offloading between nearby devices through device-to-device (D2D) communication is drawing attention. We propose the optimal task offloading decision strategy in the MEC and D2D communication architecture. We aim to minimize the energy consumption of devices and task execution delay under delay constraints. To solve the problem, we adopt Q-learning algorithm as one of Reinforcement Learning (RL). Simulation results show that the proposed algorithm outperforms the other methods in terms of energy consumption of devices and task execution delay.","PeriodicalId":176225,"journal":{"name":"2021 IEEE 3rd Eurasia Conference on IOT, Communication and Engineering (ECICE)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Optimal Task Offloading Decision in IIoT Enviornments Using Reinforcement Learning\",\"authors\":\"S. Koo, Yujin Lim\",\"doi\":\"10.1109/ECICE52819.2021.9645710\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the Industrial Internet of Things (IIoT), various types of tasks are processed for the small quantity batch production. But there are many challenges due to the limited battery lifespan and computational capabilities of devices. To overcome the limitations, Mobile Edge Computing (MEC) has been introduced. In MEC, a task offloading technique to execute the tasks attracts much attention. A MEC server (MECS) has limited computational capability, which increases the burden on the server and a cellular network if a larger number of tasks are offloaded to the server. It can reduce the quality of service for task execution. Thus, offloading between nearby devices through device-to-device (D2D) communication is drawing attention. We propose the optimal task offloading decision strategy in the MEC and D2D communication architecture. We aim to minimize the energy consumption of devices and task execution delay under delay constraints. To solve the problem, we adopt Q-learning algorithm as one of Reinforcement Learning (RL). 
Simulation results show that the proposed algorithm outperforms the other methods in terms of energy consumption of devices and task execution delay.\",\"PeriodicalId\":176225,\"journal\":{\"name\":\"2021 IEEE 3rd Eurasia Conference on IOT, Communication and Engineering (ECICE)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 3rd Eurasia Conference on IOT, Communication and Engineering (ECICE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ECICE52819.2021.9645710\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 3rd Eurasia Conference on IOT, Communication and Engineering (ECICE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECICE52819.2021.9645710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimal Task Offloading Decision in IIoT Enviornments Using Reinforcement Learning
In the Industrial Internet of Things (IIoT), various types of tasks are processed for small-quantity batch production, but the limited battery lifespan and computational capabilities of devices pose many challenges. To overcome these limitations, Mobile Edge Computing (MEC) has been introduced, and task offloading techniques for executing tasks in MEC have attracted much attention. However, a MEC server (MECS) has limited computational capability, so offloading a large number of tasks to the server increases the burden on both the server and the cellular network, which can degrade the quality of service for task execution. Offloading between nearby devices through device-to-device (D2D) communication is therefore drawing attention. We propose an optimal task offloading decision strategy for a combined MEC and D2D communication architecture, aiming to minimize the energy consumption of devices and the task execution delay under delay constraints. To solve this problem, we adopt Q-learning, a Reinforcement Learning (RL) algorithm. Simulation results show that the proposed algorithm outperforms other methods in terms of device energy consumption and task execution delay.
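The abstract names Q-learning as the decision method but gives no implementation details. As a rough sketch only, the following Python code illustrates tabular Q-learning over a three-way offloading choice (local execution, the MECS, or a D2D neighbor), with a reward that penalizes energy, delay, and deadline violations. The state discretization, transition stub, reward weights, and hyperparameters are all placeholder assumptions, not the authors' model.

```python
# Minimal tabular Q-learning sketch for an offloading decision
# (local / MEC server / D2D neighbor). Illustration only: the state
# space, costs, and dynamics below are assumptions, not the paper's model.
import random
from collections import defaultdict

ACTIONS = ["local", "mecs", "d2d"]       # hypothetical action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # assumed hyperparameters

# Q[state] is a list of action values, initialized to zero on first access.
Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def reward(energy, delay, deadline, w=0.5, penalty=10.0):
    """Negative weighted cost of energy and delay, with an extra
    penalty when the task misses its deadline (placeholder weights)."""
    cost = w * energy + (1.0 - w) * delay
    if delay > deadline:
        cost += penalty
    return -cost

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = Q[state]
    return values.index(max(values))

def update(state, action, r, next_state):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

def step(state, action):
    """Toy environment transition (pure placeholder): returns the
    energy/delay cost of the chosen target and a random next state."""
    energy = {"local": 3.0, "mecs": 1.0, "d2d": 1.5}[ACTIONS[action]]
    delay = {"local": 1.0, "mecs": 2.0, "d2d": 1.5}[ACTIONS[action]]
    delay += random.random()           # random channel/queueing variation
    next_state = random.randrange(4)   # 4 coarse load states, arbitrary
    return energy, delay, next_state

state = 0
for episode in range(1000):
    action = choose_action(state)
    energy, delay, next_state = step(state, action)
    update(state, action, reward(energy, delay, deadline=2.5), next_state)
    state = next_state
```

Tabular Q-learning fits this setting because the action set (where to execute a task) is small and discrete; the main modeling choices, left open by the abstract, are how to discretize device and network state and how to weight energy against delay in the reward.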