{"title":"Improving performance of WSNs in IoT applications by transmission power control and adaptive learning rates in reinforcement learning","authors":"Arunita Chaukiyal","doi":"10.1007/s11235-024-01191-w","DOIUrl":null,"url":null,"abstract":"<p>The paper investigates the effect of controlling the transmission power used for communication of data packets at physical layer to prolong longevity of network and adaptive learning rates in a reinforcement-learning algorithm working at network layer for dynamic and quick decision making. A routing protocol is proposed for data communication, which works in tandem with physical layer, to improve performance of Wireless Sensor Networks used in IoT applications. The proposed methodology employs Q-learning, a form of reinforcement learning algorithm at network layer. Here, an agent at each sensor node employs the Q-learning algorithm to decide on an agent which is to be used as packet forwarder and also helps in mitigating energy-hole problem. On the other hand, the transmission power control method saves agents’ battery energy by determining the appropriate power level to be used for packet transmission, and also achieving reduction in overhearing among neighboring agents. An agent derives its learning rate from its environment comprising of its neighboring agents. Each agents determines its own learning rate by using the hop distance to sink, and the residual energy (RE) of neighboring agents. The proposed method uses a higher learning rate at first, which is gradually decreased with the reduction in energy levels of agents over time. The proposed protocol is simulated to work in high-traffic scenarios with multiple source-sink pairs, which is a common feature of IoT applications in the monitoring and surveillance domain. 
Based on the NS3 simulation results, the proposed strategy significantly improved network performance in comparison with other routing protocols using Q-learning.</p>","PeriodicalId":51194,"journal":{"name":"Telecommunication Systems","volume":"13 1","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Telecommunication Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11235-024-01191-w","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
The paper investigates the effect of controlling the transmission power used for data-packet communication at the physical layer to prolong network longevity, together with adaptive learning rates in a reinforcement-learning algorithm operating at the network layer for dynamic, quick decision making. A routing protocol is proposed for data communication, working in tandem with the physical layer, to improve the performance of Wireless Sensor Networks used in IoT applications. The proposed methodology employs Q-learning, a form of reinforcement learning, at the network layer. An agent at each sensor node uses the Q-learning algorithm to select a neighboring agent as the packet forwarder, which also helps mitigate the energy-hole problem. In addition, the transmission power control method conserves agents' battery energy by determining the appropriate power level for each packet transmission, which also reduces overhearing among neighboring agents. An agent derives its learning rate from its environment, comprising its neighboring agents: each agent determines its own learning rate from its hop distance to the sink and the residual energy (RE) of neighboring agents. The proposed method starts with a higher learning rate, which is gradually decreased as agents' energy levels fall over time. The proposed protocol is simulated in high-traffic scenarios with multiple source-sink pairs, a common feature of IoT applications in the monitoring and surveillance domain. Based on the NS3 simulation results, the proposed strategy significantly improved network performance compared with other Q-learning-based routing protocols.
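The abstract does not give the exact formulas, but the two mechanisms it describes can be sketched as follows. This is a minimal illustrative sketch, not the author's implementation: `adaptive_learning_rate` assumes a plausible rule in which the rate decays with neighbors' residual energy and with hop distance to the sink, `q_update` is the standard Q-learning update, and `select_tx_power` picks the lowest power level whose range reaches the chosen forwarder. All function names, the decay constants, and the power-level table are hypothetical.

```python
def adaptive_learning_rate(residual_energy, initial_energy, hop_distance,
                           alpha_max=0.9, alpha_min=0.1):
    # Hypothetical rule: the rate starts near alpha_max when neighbors are
    # fully charged and decays with their residual energy; nodes farther
    # from the sink (larger hop_distance) learn slightly more slowly.
    energy_ratio = residual_energy / initial_energy        # in [0, 1]
    alpha = alpha_max * energy_ratio / (1.0 + 0.1 * hop_distance)
    return max(alpha_min, min(alpha_max, alpha))

def q_update(q, state, action, reward, next_state, alpha, gamma=0.9):
    # Standard Q-learning update, using the node's adaptive alpha.
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

def select_tx_power(distance_m, power_levels):
    # power_levels: list of (dBm, range_m) pairs sorted by increasing power.
    # Choose the lowest level whose range covers the forwarder's distance,
    # saving battery energy and reducing overhearing; fall back to the
    # highest level if the neighbor is out of every shorter range.
    for dbm, range_m in power_levels:
        if range_m >= distance_m:
            return dbm
    return power_levels[-1][0]
```

Under these assumptions, a node recomputes its learning rate each round from its neighbors' advertised RE, applies the Q-update when a forwarding decision is rewarded, and transmits at the minimum sufficient power level.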
About the journal
Telecommunication Systems is a journal covering all aspects of the modeling, analysis, design, and management of telecommunication systems. It publishes high-quality articles applying analytic and quantitative tools to these areas, including:
Performance Evaluation of Wide Area and Local Networks;
Network Interconnection;
Wired, Wireless, Ad Hoc and Mobile Networks;
Impact of New Services (economic and organizational impact);
Fiber Optics and Photonic Switching;
DSL, ADSL, Cable TV and Their Impact;
Design and Analysis Issues in Metropolitan Area Networks;
Networking Protocols;
Dynamics and Capacity Expansion of Telecommunication Systems;
Multimedia Based Systems, Their Design Configuration and Impact;
Configuration of Distributed Systems;
Pricing for Networking and Telecommunication Services;
Performance Analysis of Local Area Networks;
Distributed Group Decision Support Systems;
Configuring Telecommunication Systems with Reliability and Availability;
Cost Benefit Analysis and Economic Impact of Telecommunication Systems;
Standardization and Regulatory Issues;
Security, Privacy and Encryption in Telecommunication Systems;
Cellular, Mobile and Satellite Based Systems.