Mojtaba Farmani, Saman Farnam, Razieh Mohammadi, Zahra Shirmohammadi
{"title":"D2PG: deep deterministic policy gradient based for maximizing network throughput in clustered EH-WSN","authors":"Mojtaba Farmani, Saman Farnam, Razieh Mohammadi, Zahra Shirmohammadi","doi":"10.1007/s11276-024-03767-5","DOIUrl":null,"url":null,"abstract":"<p>Wireless sensor networks are considered one of the effective technologies in various applications, responsible for monitoring and sensing. In these networks, sensors are powered by batteries with limited energy capacity. Consequently, the required energy for the sensors is obtained from the surrounding environment using energy harvesters. However, these environmental resources are unpredictable, making power management a critical issue that demands careful consideration. Reinforcement Learning (RL) algorithms offer an efficient solution for throughput management in these networks, enabling the adjustment of data rates for nodes based on the network’s energy conditions. Nevertheless, previous throughput management methods based on RL algorithms suffer from one of the key challenges: discretizing the state space does not guarantee the maximum improvement in throughput the network. Therefore, this paper proposes a method called Deep Deterministic Policy Gradient-Based for Maximizing Network Throughput (D2PG), which utilizes a Deep Reinforcement Learning algorithm known as Deep Deterministic Policy Gradient and introduces a novel reward function. This method can lead to maximizing the data transmission rate and enhancing network throughput across the entire network through continuous state space optimization among sensor energy consumption. The D2PG method is evaluated and compared with RL, RL-new, and Deep Q-Network methods, resulting in throughput enhancements of 15.3%, 12.9%, and 5.7%, respectively, in the network’s throughput. 
Additionally, the new reward function demonstrates superior performance in terms of data rate proportionality concerning the energy level.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"43 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wireless Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11276-024-03767-5","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Wireless sensor networks are among the key enabling technologies for a wide range of monitoring and sensing applications. In these networks, sensors are powered by batteries with limited energy capacity, so the energy they require is harvested from the surrounding environment. However, these environmental resources are unpredictable, making power management a critical issue that demands careful consideration. Reinforcement Learning (RL) algorithms offer an efficient solution for throughput management in these networks, enabling the adjustment of each node's data rate according to the network's energy conditions. Nevertheless, previous RL-based throughput management methods suffer from a key limitation: discretizing the state space does not guarantee the maximum improvement in the network's throughput. Therefore, this paper proposes a method called Deep Deterministic Policy Gradient-Based for Maximizing Network Throughput (D2PG), which employs the Deep Deterministic Policy Gradient deep reinforcement learning algorithm and introduces a novel reward function. By optimizing over a continuous state space of sensor energy consumption, this method maximizes the data transmission rate and enhances throughput across the entire network. The D2PG method is evaluated against the RL, RL-new, and Deep Q-Network methods, achieving throughput improvements of 15.3%, 12.9%, and 5.7%, respectively. Additionally, the new reward function demonstrates superior performance in keeping the data rate proportional to the energy level.
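The abstract states that the proposed reward function favors keeping a node's data rate proportional to its current energy level. The paper's actual reward is not given here, so the sketch below is only a minimal illustration of that idea: it assumes a hypothetical reward that peaks when the chosen rate matches an energy-proportional target and decays with the mismatch. All names and constants (`max_rate`, `max_energy`) are assumptions, not the authors' formulation.

```python
def energy_proportional_reward(data_rate: float, energy_level: float,
                               max_rate: float = 250.0,
                               max_energy: float = 1.0) -> float:
    """Illustrative reward: highest (0.0) when the data rate equals the
    energy-proportional target, more negative as the mismatch grows.

    This is a sketch of the 'data rate proportional to energy level'
    property described in the abstract, not the paper's actual reward.
    """
    # Target rate scales linearly with the node's remaining energy.
    target_rate = max_rate * (energy_level / max_energy)
    # Penalize deviation from the target, normalized by the maximum rate.
    return -abs(data_rate - target_rate) / max_rate


# A node at 50% energy is rewarded most for transmitting near half the
# maximum rate; pushing the full rate at low energy is penalized.
print(energy_proportional_reward(125.0, 0.5))  # best case: 0.0
print(energy_proportional_reward(250.0, 0.5))  # mismatch: -0.5
```

In a DDPG setting this kind of reward is well suited to a continuous action space, since the actor can output any rate in [0, max_rate] rather than picking from a discretized set.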
Journal Introduction
The wireless communication revolution is bringing fundamental changes to data networking and telecommunication, and is making integrated networks a reality. By freeing the user from the cord, personal communications networks, wireless LANs, mobile radio networks, and cellular systems hold the promise of fully distributed mobile computing and communications, any time, anywhere.
Focusing on the networking and user aspects of the field, Wireless Networks provides a global forum for archival-value contributions documenting these fast-growing areas of interest. The journal publishes refereed articles dealing with research, experience, and management issues of wireless networks. Its aim is to allow the reader to benefit from the experience, problems, and solutions described.