{"title":"面向LoRaWAN优化的Q-Learning ADR代理","authors":"Rodrigo Carvalho, F. Al-Tam, N. Correia","doi":"10.1109/IAICT52856.2021.9532518","DOIUrl":null,"url":null,"abstract":"LoRaWAN has emerged as one of the most popular technologies in the LPWAN industry due to its low cost and straightforward management. Despite its relatively simple architecture, LoRaWAN is able to optimize energy, data rate, and time on-air by means of an adaptive data rate mechanism. In this paper, a reinforcement learning agent is designed to contrast with the central ADR component. This new agent operates seamlessly to all end nodes while still reacting quickly to changes. A comparative analysis between the classic ADR and the proposed RL-based ADR agent is done using discrete event simulation. Results show that the new ADR mechanism can determine the best configuration and that the proposed reward function fits the intended learning process.","PeriodicalId":416542,"journal":{"name":"2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Q-Learning ADR Agent for LoRaWAN Optimization\",\"authors\":\"Rodrigo Carvalho, F. Al-Tam, N. Correia\",\"doi\":\"10.1109/IAICT52856.2021.9532518\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"LoRaWAN has emerged as one of the most popular technologies in the LPWAN industry due to its low cost and straightforward management. Despite its relatively simple architecture, LoRaWAN is able to optimize energy, data rate, and time on-air by means of an adaptive data rate mechanism. In this paper, a reinforcement learning agent is designed to contrast with the central ADR component. This new agent operates seamlessly to all end nodes while still reacting quickly to changes. A comparative analysis between the classic ADR and the proposed RL-based ADR agent is done using discrete event simulation. 
Results show that the new ADR mechanism can determine the best configuration and that the proposed reward function fits the intended learning process.\",\"PeriodicalId\":416542,\"journal\":{\"name\":\"2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IAICT52856.2021.9532518\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAICT52856.2021.9532518","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: LoRaWAN has emerged as one of the most popular technologies in the LPWAN industry due to its low cost and straightforward management. Despite its relatively simple architecture, LoRaWAN is able to optimize energy, data rate, and time on-air by means of an adaptive data rate (ADR) mechanism. In this paper, a reinforcement learning agent is designed as an alternative to the central ADR component. This new agent operates transparently to all end nodes while still reacting quickly to changes. A comparative analysis between the classic ADR and the proposed RL-based ADR agent is carried out using discrete event simulation. Results show that the new ADR mechanism can determine the best configuration and that the proposed reward function fits the intended learning process.
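The abstract names the technique (a Q-learning agent replacing the network server's ADR logic) but not its state, action, or reward design. The following is a minimal sketch of what such an agent could look like; the SNR-bucket state, the action set of (spreading factor, TX power) pairs, and the reward shape trading delivery against airtime and energy are illustrative assumptions, not the authors' formulation.

```python
# Sketch of a tabular Q-learning ADR agent for LoRaWAN (assumed design:
# state = coarse uplink-SNR bucket, action = (SF, TX power) pair, reward
# favors delivered frames at low spreading factor and low transmit power).
import random
from collections import defaultdict

SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]  # LoRa SF7..SF12
TX_POWERS_DBM = [2, 5, 8, 11, 14]          # typical EU868 power steps

ACTIONS = [(sf, p) for sf in SPREADING_FACTORS for p in TX_POWERS_DBM]

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration probability (assumed)

q_table = defaultdict(float)  # (state, action) -> Q-value, defaults to 0.0

def snr_state(snr_db: float) -> int:
    """Discretize measured uplink SNR into 5 dB buckets (assumed state)."""
    return int(max(-20.0, min(10.0, snr_db)) // 5)

def choose_action(state: int):
    """Epsilon-greedy selection over (SF, TX power) pairs."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def reward(delivered: bool, sf: int, tx_power_dbm: int) -> float:
    """Assumed reward: penalize loss; otherwise prefer low SF and low power."""
    if not delivered:
        return -1.0
    return 1.0 - 0.05 * (sf - 7) - 0.01 * tx_power_dbm

def update(state: int, action, r: float, next_state: int) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next
                                         - q_table[(state, action)])
```

In a deployment, such an agent would presumably sit at the network server alongside the classic ADR logic it replaces: each uplink yields an SNR observation and a delivery outcome, update() is called once per frame, and the chosen (SF, TX power) pair is pushed to the end node via the usual MAC-layer parameter commands.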