Improving performance of WSNs in IoT applications by transmission power control and adaptive learning rates in reinforcement learning

IF 1.7 · CAS Tier 4 (Computer Science) · JCR Q3 (Telecommunications) · Telecommunication Systems · Pub Date: 2024-07-21 · DOI: 10.1007/s11235-024-01191-w
Arunita Chaukiyal

Abstract

This paper investigates two complementary mechanisms for improving the performance of Wireless Sensor Networks (WSNs) used in IoT applications: controlling the transmission power used for data-packet communication at the physical layer to prolong network lifetime, and adapting the learning rate of a reinforcement-learning algorithm at the network layer for dynamic, rapid decision making. A routing protocol is proposed for data communication that works in tandem with the physical layer. The protocol employs Q-learning, a form of reinforcement learning, at the network layer: an agent at each sensor node uses Q-learning to select a neighboring agent as the packet forwarder, which also helps mitigate the energy-hole problem. The transmission power control method, in turn, conserves agents' battery energy by determining the appropriate power level for each packet transmission, which also reduces overhearing among neighboring agents. Each agent derives its learning rate from its environment, i.e., its neighboring agents: it uses its hop distance to the sink and the residual energy (RE) of those neighbors. The method starts with a high learning rate, which is gradually decreased as the agents' energy levels decline over time. The protocol is simulated in high-traffic scenarios with multiple source-sink pairs, a common feature of IoT applications in the monitoring and surveillance domain. The NS3 simulation results show that the proposed strategy significantly improves network performance compared with other Q-learning-based routing protocols.
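The abstract does not give the authors' exact formulas, but the two mechanisms it describes can be sketched in Python under stated assumptions. Below, `adaptive_learning_rate` is a hypothetical schedule in which the learning rate starts high and decays with the neighborhood's residual-energy ratio and with hop distance to the sink; `q_update` is the standard Q-learning update with the chosen next hop playing the role of the action; and `select_tx_power` picks the lowest power level whose nominal range reaches the neighbor, reducing both energy use and overhearing. All function names, power levels, ranges, and the decay formula are illustrative assumptions, not the paper's method.

```python
def adaptive_learning_rate(hop_distance, residual_energy, initial_energy,
                           alpha_max=0.9, alpha_min=0.1):
    """Hypothetical schedule: alpha is high when neighbors have plenty of
    energy and the agent is close to the sink, and decays toward alpha_min
    as residual energy drops or hop distance grows."""
    energy_ratio = residual_energy / initial_energy      # in (0, 1]
    distance_factor = 1.0 / (1.0 + hop_distance)         # closer to sink -> larger
    return alpha_min + (alpha_max - alpha_min) * energy_ratio * distance_factor

def q_update(q_table, node, next_hop, reward, alpha, gamma=0.8):
    """Standard Q-learning update; the next-hop neighbor is the 'action'.
    q_table maps node -> {neighbor: Q-value}."""
    best_future = max(q_table[next_hop].values()) if q_table[next_hop] else 0.0
    q_table[node][next_hop] += alpha * (
        reward + gamma * best_future - q_table[node][next_hop])

def select_tx_power(distance_m, power_levels_dbm=(-5, 0, 5, 10),
                    ranges_m=(20, 40, 70, 100)):
    """Transmission power control: choose the lowest power level whose
    nominal range covers the neighbor's distance (illustrative levels)."""
    for power, reach in zip(power_levels_dbm, ranges_m):
        if distance_m <= reach:
            return power
    return power_levels_dbm[-1]
```

In this sketch, a fully charged agent one hop from the sink would use alpha = 0.1 + 0.8 × 1.0 × 0.5 = 0.5, and the rate falls smoothly as neighbors drain, matching the abstract's "higher learning rate at first, gradually decreased" behavior.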


Source journal: Telecommunication Systems (Engineering/Technology, Telecommunications)
CiteScore: 5.40
Self-citation rate: 8.00%
Articles per year: 105
Review time: 6.0 months
About the journal: Telecommunication Systems is a journal covering all aspects of modeling, analysis, design and management of telecommunication systems. The journal publishes high quality articles dealing with the use of analytic and quantitative tools for the modeling, analysis, design and management of telecommunication systems covering: Performance Evaluation of Wide Area and Local Networks; Network Interconnection; Wire, wireless, Adhoc, mobile networks; Impact of New Services (economic and organizational impact); Fiberoptics and photonic switching; DSL, ADSL, cable TV and their impact; Design and Analysis Issues in Metropolitan Area Networks; Networking Protocols; Dynamics and Capacity Expansion of Telecommunication Systems; Multimedia Based Systems, Their Design Configuration and Impact; Configuration of Distributed Systems; Pricing for Networking and Telecommunication Services; Performance Analysis of Local Area Networks; Distributed Group Decision Support Systems; Configuring Telecommunication Systems with Reliability and Availability; Cost Benefit Analysis and Economic Impact of Telecommunication Systems; Standardization and Regulatory Issues; Security, Privacy and Encryption in Telecommunication Systems; Cellular, Mobile and Satellite Based Systems.
Latest articles from this journal:
- Next-cell prediction with LSTM based on vehicle mobility for 5G mc-IoT slices
- Secure positioning of wireless sensor networks against wormhole attacks
- Safeguarding the Internet of Health Things: advancements, challenges, and trust-based solution
- Optimized task offloading for federated learning based on β-skeleton graph in edge computing
- Noise robust automatic speaker verification systems: review and analysis