Handling Coexistence of LoRa with Other Networks through Embedded Reinforcement Learning
Sezana Fahmida, Venkata Prashant Modekurthy, Mahbubur Rahman, Abusayeed Saifullah
Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation
Published: 2023-05-09
DOI: 10.1145/3576842.3582383 (https://doi.org/10.1145/3576842.3582383)
Citations: 1
Abstract
The rapid growth of various Low-Power Wide-Area Network (LPWAN) technologies in the limited spectrum brings forth the challenge of their coexistence. Today, LPWANs are not equipped to handle this impending challenge. It is difficult to employ sophisticated media access control protocols on low-power nodes, and coexistence-handling techniques designed for WiFi or traditional short-range wireless networks will not work for LPWANs. Due to their long range, LPWAN nodes can be subject to an unprecedented number of hidden nodes, requiring highly energy-efficient techniques to handle such coexistence. In this paper, we address the coexistence problem for LoRa, a leading LPWAN technology. To improve the performance of a LoRa network coexisting with many independent networks, we propose the design of a novel embedded learning agent based on lightweight reinforcement learning at LoRa nodes. This is done by developing a Q-learning framework while ensuring minimal memory and computation overhead at LoRa nodes. The framework exploits transmission acknowledgments as feedback from the network, based on which a node makes transmission decisions. To our knowledge, this is the first Q-learning approach for handling coexistence of low-power networks. Considering various coexistence scenarios of a LoRa network, we evaluate our approach through experiments indoors and outdoors. The outdoor results show that our Q-learning approach achieves, on average, a 46% improvement in packet reception rate while reducing energy consumption by 66% in a LoRa network. In indoor experiments, we have observed coexistence scenarios where a current LoRa network loses all packets, while our approach enables a 99% packet reception rate with up to a 90% reduction in energy consumption.
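The abstract describes a lightweight Q-learning agent that uses transmission acknowledgments as its only feedback signal. The paper's actual state and action design is not given in the abstract, so the following is a minimal illustrative sketch under assumed choices: a stateless (bandit-style) formulation where each action is a hypothetical (channel, spreading factor) pair and the reward is +1 when an ACK arrives and -1 otherwise. The channel list, spreading factors, and link model below are all invented for illustration.

```python
import random

# Hypothetical transmission parameters (not from the paper).
CHANNELS = [0, 1, 2]            # example LoRa channel indices
SPREADING_FACTORS = [7, 9, 12]  # example spreading factor choices
ACTIONS = [(ch, sf) for ch in CHANNELS for sf in SPREADING_FACTORS]


class QLearningNode:
    """Sketch of an ACK-driven Q-learning transmitter with one
    Q-value per action, keeping memory overhead minimal."""

    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha = alpha        # learning rate
        self.epsilon = epsilon    # exploration probability
        self.q = {a: 0.0 for a in ACTIONS}

    def choose_action(self):
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore to track a changing environment.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, ack_received):
        # Reward derived solely from the transmission acknowledgment.
        reward = 1.0 if ack_received else -1.0
        # Stateless Q-update (no next-state term in the bandit setting).
        self.q[action] += self.alpha * (reward - self.q[action])


# Usage: toy link model where channel 1 suffers heavy coexistence
# interference (20% ACK rate) and the others are mostly clear (90%).
node = QLearningNode()
for _ in range(2000):
    ch, sf = node.choose_action()
    ack = random.random() < (0.2 if ch == 1 else 0.9)
    node.update((ch, sf), ack)

best = max(node.q, key=node.q.get)
print("learned best action (channel, SF):", best)
```

After enough transmissions the agent's Q-values steer it away from the congested channel, which is the intuition behind the reported packet-reception and energy gains: fewer transmissions are wasted on links that will not be acknowledged.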