Throughput Maximization in Delay-Critical and Energy-Aware SW-UAV-WNs Using Q-Learning

IEEE Open Journal of the Communications Society | IF 6.3 | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2024-11-12 | DOI: 10.1109/OJCOMS.2024.3496740
Sreenivasa Reddy Yeduri, Neha Sharma, Om Jee Pandey, Linga Reddy Cenkeramaddi
{"title":"利用 Q 学习实现延迟关键型和能量感知型 SW-UAV-WN 的吞吐量最大化","authors":"Sreenivasa Reddy Yeduri;Neha Sharma;Om Jee Pandey;Linga Reddy Cenkeramaddi","doi":"10.1109/OJCOMS.2024.3496740","DOIUrl":null,"url":null,"abstract":"Unmanned aerial vehicles (UAVs) are getting significant attention from both researchers and the industry due to their wide range of applications. Remote sensing is one such application, in which UAVs are deployed to sense remote areas and transmit the data to a ground station for processing. However, due to the mobility and limited transmission range of UAVs, data transfer requires multiple hops. Nevertheless, the higher the number of hops, the larger the network latency. Thus, there is a need to reduce the number of hops and improve the connectivity. This can be achieved by creating small-world networks (SWNs) that perform better than traditional networks in terms of network evaluation metrics. The SWNs are created by adding shortcuts to the traditional network. In the literature, many theoretical works have been proposed for the creation of SWNs. However, these works add shortcuts randomly into the existing conventional network and fail to account for the costs incurred with the added shortcuts. As a result, these works are ineffective in improving the overall performance of the network. Thus, this work presents a novel reinforcement learning technique that uses a Q-learning algorithm to optimize throughput in delay-critical and energy-aware small-world UAV-assisted wireless networks (SW-UAV-WNs). The proposed algorithm populates the Q-matrix with all possible shortcuts and updates the Q-values based on the reward/penalty. It then adds shortcuts based on descending Q-values until the SW-UAV-WN is established. Through numerical results, we demonstrate that the proposed framework surpasses the conventional SWC approach, canonical particle swarm data delivery method, Low Energy Adaptive Clustering Hierarchy (LEACH), modified LEACH, and conventional shortest path routing method in terms of network latency, lifetime, packet delivery ratio, and throughput. Furthermore, we discuss the effect of different UAV velocities and different heights of the layers in which the UAVs hover on the performance of the proposed approach.","PeriodicalId":33803,"journal":{"name":"IEEE Open Journal of the Communications Society","volume":"5 ","pages":"7228-7243"},"PeriodicalIF":6.3000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10750848","citationCount":"0","resultStr":"{\"title\":\"Throughput Maximization in Delay-Critical and Energy-Aware SW-UAV-WNs Using Q-Learning\",\"authors\":\"Sreenivasa Reddy Yeduri;Neha Sharma;Om Jee Pandey;Linga Reddy Cenkeramaddi\",\"doi\":\"10.1109/OJCOMS.2024.3496740\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unmanned aerial vehicles (UAVs) are getting significant attention from both researchers and the industry due to their wide range of applications. Remote sensing is one such application, in which UAVs are deployed to sense remote areas and transmit the data to a ground station for processing. However, due to the mobility and limited transmission range of UAVs, data transfer requires multiple hops. Nevertheless, the higher the number of hops, the larger the network latency. Thus, there is a need to reduce the number of hops and improve the connectivity. 
This can be achieved by creating small-world networks (SWNs) that perform better than traditional networks in terms of network evaluation metrics. The SWNs are created by adding shortcuts to the traditional network. In the literature, many theoretical works have been proposed for the creation of SWNs. However, these works add shortcuts randomly into the existing conventional network and fail to account for the costs incurred with the added shortcuts. As a result, these works are ineffective in improving the overall performance of the network. Thus, this work presents a novel reinforcement learning technique that uses a Q-learning algorithm to optimize throughput in delay-critical and energy-aware small-world UAV-assisted wireless networks (SW-UAV-WNs). The proposed algorithm populates the Q-matrix with all possible shortcuts and updates the Q-values based on the reward/penalty. It then adds shortcuts based on descending Q-values until the SW-UAV-WN is established. Through numerical results, we demonstrate that the proposed framework surpasses the conventional SWC approach, canonical particle swarm data delivery method, Low Energy Adaptive Clustering Hierarchy (LEACH), modified LEACH, and conventional shortest path routing method in terms of network latency, lifetime, packet delivery ratio, and throughput. Furthermore, we discuss the effect of different UAV velocities and different heights of the layers in which the UAVs hover on the performance of the proposed approach.\",\"PeriodicalId\":33803,\"journal\":{\"name\":\"IEEE Open Journal of the Communications Society\",\"volume\":\"5 \",\"pages\":\"7228-7243\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10750848\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Communications Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10750848/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Communications Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10750848/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Unmanned aerial vehicles (UAVs) are getting significant attention from both researchers and the industry due to their wide range of applications. Remote sensing is one such application, in which UAVs are deployed to sense remote areas and transmit the data to a ground station for processing. However, due to the mobility and limited transmission range of UAVs, data transfer requires multiple hops. Nevertheless, the higher the number of hops, the larger the network latency. Thus, there is a need to reduce the number of hops and improve the connectivity. This can be achieved by creating small-world networks (SWNs) that perform better than traditional networks in terms of network evaluation metrics. The SWNs are created by adding shortcuts to the traditional network. In the literature, many theoretical works have been proposed for the creation of SWNs. However, these works add shortcuts randomly into the existing conventional network and fail to account for the costs incurred with the added shortcuts. As a result, these works are ineffective in improving the overall performance of the network. Thus, this work presents a novel reinforcement learning technique that uses a Q-learning algorithm to optimize throughput in delay-critical and energy-aware small-world UAV-assisted wireless networks (SW-UAV-WNs). The proposed algorithm populates the Q-matrix with all possible shortcuts and updates the Q-values based on the reward/penalty. It then adds shortcuts based on descending Q-values until the SW-UAV-WN is established. Through numerical results, we demonstrate that the proposed framework surpasses the conventional SWC approach, canonical particle swarm data delivery method, Low Energy Adaptive Clustering Hierarchy (LEACH), modified LEACH, and conventional shortest path routing method in terms of network latency, lifetime, packet delivery ratio, and throughput. Furthermore, we discuss the effect of different UAV velocities and different heights of the layers in which the UAVs hover on the performance of the proposed approach.
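
The abstract describes the learning loop only at a high level: populate a Q-matrix with all candidate shortcuts, update each Q-value from a reward or penalty that reflects the delay and energy cost of the added link, and then insert shortcuts in descending order of Q-value until the small-world topology is established. The Python sketch below is a minimal, hypothetical rendering of that idea, not the authors' implementation: the edge-level Q-table, the epsilon-greedy selection rule, the placeholder reward_fn, and the fixed shortcut budget are all illustrative assumptions, since the paper's actual state/action formulation, reward terms, and stopping criterion are not given in the abstract.

```python
import random
from itertools import combinations

def q_learning_shortcuts(nodes, base_links, reward_fn=None, episodes=500,
                         alpha=0.1, gamma=0.9, epsilon=0.2, budget=5):
    """Learn Q-values for candidate shortcuts, then add the highest-valued ones."""
    # Candidate actions: every node pair not already linked in the base topology.
    candidates = [e for e in combinations(sorted(nodes), 2) if e not in base_links]
    q = {e: 0.0 for e in candidates}  # the "Q-matrix" over all possible shortcuts

    links = set(base_links)
    for _ in range(episodes):
        # Epsilon-greedy choice of which candidate shortcut to evaluate next.
        if random.random() < epsilon:
            edge = random.choice(candidates)
        else:
            edge = max(candidates, key=q.get)
        # Reward/penalty for this shortcut, e.g. latency reduction minus the
        # energy cost of the extra link; reward_fn is a placeholder for the
        # paper's delay- and energy-aware metric, which the abstract omits.
        r = reward_fn(links, edge) if reward_fn else random.uniform(-1.0, 1.0)
        q[edge] += alpha * (r + gamma * max(q.values()) - q[edge])

    # Add shortcuts in descending order of learned Q-value until the
    # small-world network is "established" (here: a fixed shortcut budget).
    for edge in sorted(candidates, key=q.get, reverse=True)[:budget]:
        links.add(edge)
    return links, q

# Example: a 6-node ring topology with the random placeholder reward.
ring = {tuple(sorted((i, (i + 1) % 6))) for i in range(6)}
topology, q_values = q_learning_shortcuts(range(6), ring)
```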
Source journal: IEEE Open Journal of the Communications Society
CiteScore: 13.70
Self-citation rate: 3.80%
Articles published: 94
Review time: 10 weeks
Journal overview: The IEEE Open Journal of the Communications Society (OJ-COMS) is an open access, all-electronic journal that publishes original high-quality manuscripts on advances in the state of the art of telecommunications systems and networks. The papers in IEEE OJ-COMS are included in Scopus. Submissions reporting new theoretical findings (including novel methods, concepts, and studies) and practical contributions (including experiments and development of prototypes) are welcome. Additionally, survey and tutorial articles are considered. IEEE OJ-COMS received its debut impact factor of 7.9 according to the Journal Citation Reports (JCR) 2023. The journal covers science, technology, applications, and standards for information organization, collection, and transfer using electronic, optical, and wireless channels and networks. Specific areas covered include: systems and network architecture, control and management; protocols, software, and middleware; quality of service, reliability, and security; modulation, detection, coding, and signaling; switching and routing; mobile and portable communications; terminals and other end-user devices; networks for content distribution and distributed computing; and communications-based distributed resources control.