Enhanced reinforcement learning-based two-way transmit-receive directional antennas neighbor discovery in wireless ad hoc networks

IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Ad Hoc Networks · Pub Date: 2024-10-23 · DOI: 10.1016/j.adhoc.2024.103689
Zongheng Wei, Huakun Wu, Zhiyong Lin, Qingji Wen, Lili Zheng, Jianfeng Wen, Hai Liu
{"title":"无线 ad hoc 网络中基于强化学习的双向收发定向天线邻居发现功能","authors":"Zongheng Wei ,&nbsp;Huakun Wu ,&nbsp;Zhiyong Lin ,&nbsp;Qingji Wen ,&nbsp;Lili Zheng ,&nbsp;Jianfeng Wen ,&nbsp;Hai Liu","doi":"10.1016/j.adhoc.2024.103689","DOIUrl":null,"url":null,"abstract":"<div><div>The utilization of directional antennas for neighbor discovery in wireless ad hoc networks brings notable benefits, such as extended transmission range, reduced transmission interference, and enhanced antenna gain. However, when nodes use directional antennas for neighbor discovery, the communication range is limited, resulting in a lack of knowledge of potential neighbors. Hence, it is necessary to design a special antenna direction switching strategy for neighbor discovery based on directional antennas. Traditional methods of switching antenna directions are often random or follow predefined sequences, overlooking the historical knowledge of sector exploration for antenna directions. In contrast, existing machine learning approaches aim to leverage observed historical knowledge to adjust antenna directions for faster neighbor discovery. Nonetheless, the latency of neighbor discovery is still high because the node cannot fully utilize the observed historical knowledge (<em>i.e.</em>., only using the knowledge observed by the node in transmission mode, ignoring the knowledge observed by the node in reception mode). Meanwhile, the corresponding reward and penalty mechanisms are still not detailed enough (<em>i.e.</em>., these reward and penalty mechanisms only consider the sectors of discovered and undiscovered neighboring nodes, ignoring the scenario of sectors that have been rewarded). In this paper, the neighbor discovery process is modeled as a reinforcement learning-based learning automaton. We propose an enhanced reinforcement learning-based two-way transmit-receive directional antennas neighbor discovery algorithm, called ERTTND. The algorithm consists of a two-way transmit-receive reinforcement learning mechanism (TTRL) and an enhanced reward-and-penalty mechanism (ERAP). This algorithm leverages insights from nodes in transmission and reception modes to refine their tactical decisions. Then, through an enriched reward-and-penalty framework, nodes optimize their strategies, thus expediting neighbor discovery based on directional antennas in wireless ad hoc networks. Simulation results demonstrate that compared to existing representative algorithms, the proposed ERTTND algorithm can achieve over 30% savings in terms of average discovery delay and energy consumption.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103689"},"PeriodicalIF":4.4000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhanced reinforcement learning-based two-way transmit-receive directional antennas neighbor discovery in wireless ad hoc networks\",\"authors\":\"Zongheng Wei ,&nbsp;Huakun Wu ,&nbsp;Zhiyong Lin ,&nbsp;Qingji Wen ,&nbsp;Lili Zheng ,&nbsp;Jianfeng Wen ,&nbsp;Hai Liu\",\"doi\":\"10.1016/j.adhoc.2024.103689\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The utilization of directional antennas for neighbor discovery in wireless ad hoc networks brings notable benefits, such as extended transmission range, reduced transmission interference, and enhanced antenna gain. 
However, when nodes use directional antennas for neighbor discovery, the communication range is limited, resulting in a lack of knowledge of potential neighbors. Hence, it is necessary to design a special antenna direction switching strategy for neighbor discovery based on directional antennas. Traditional methods of switching antenna directions are often random or follow predefined sequences, overlooking the historical knowledge of sector exploration for antenna directions. In contrast, existing machine learning approaches aim to leverage observed historical knowledge to adjust antenna directions for faster neighbor discovery. Nonetheless, the latency of neighbor discovery is still high because the node cannot fully utilize the observed historical knowledge (<em>i.e.</em>., only using the knowledge observed by the node in transmission mode, ignoring the knowledge observed by the node in reception mode). Meanwhile, the corresponding reward and penalty mechanisms are still not detailed enough (<em>i.e.</em>., these reward and penalty mechanisms only consider the sectors of discovered and undiscovered neighboring nodes, ignoring the scenario of sectors that have been rewarded). In this paper, the neighbor discovery process is modeled as a reinforcement learning-based learning automaton. We propose an enhanced reinforcement learning-based two-way transmit-receive directional antennas neighbor discovery algorithm, called ERTTND. The algorithm consists of a two-way transmit-receive reinforcement learning mechanism (TTRL) and an enhanced reward-and-penalty mechanism (ERAP). This algorithm leverages insights from nodes in transmission and reception modes to refine their tactical decisions. Then, through an enriched reward-and-penalty framework, nodes optimize their strategies, thus expediting neighbor discovery based on directional antennas in wireless ad hoc networks. Simulation results demonstrate that compared to existing representative algorithms, the proposed ERTTND algorithm can achieve over 30% savings in terms of average discovery delay and energy consumption.</div></div>\",\"PeriodicalId\":55555,\"journal\":{\"name\":\"Ad Hoc Networks\",\"volume\":\"167 \",\"pages\":\"Article 103689\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ad Hoc Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1570870524003007\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ad Hoc Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570870524003007","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The utilization of directional antennas for neighbor discovery in wireless ad hoc networks brings notable benefits, such as extended transmission range, reduced transmission interference, and enhanced antenna gain. However, when nodes use directional antennas for neighbor discovery, the communication range is limited, resulting in a lack of knowledge of potential neighbors. Hence, it is necessary to design a special antenna direction switching strategy for neighbor discovery based on directional antennas. Traditional methods of switching antenna directions are often random or follow predefined sequences, overlooking the historical knowledge of sector exploration for antenna directions. In contrast, existing machine learning approaches aim to leverage observed historical knowledge to adjust antenna directions for faster neighbor discovery. Nonetheless, the latency of neighbor discovery is still high because the node cannot fully utilize the observed historical knowledge (i.e., it only uses the knowledge observed in transmission mode, ignoring the knowledge observed in reception mode). Meanwhile, the corresponding reward and penalty mechanisms are still not detailed enough (i.e., they only consider the sectors of discovered and undiscovered neighboring nodes, ignoring sectors that have already been rewarded). In this paper, the neighbor discovery process is modeled as a reinforcement learning-based learning automaton. We propose an enhanced reinforcement learning-based two-way transmit-receive directional antennas neighbor discovery algorithm, called ERTTND. The algorithm consists of a two-way transmit-receive reinforcement learning mechanism (TTRL) and an enhanced reward-and-penalty mechanism (ERAP). The algorithm leverages observations from nodes in both transmission and reception modes to refine their decisions. Then, through an enriched reward-and-penalty framework, nodes optimize their strategies, thus expediting neighbor discovery based on directional antennas in wireless ad hoc networks. Simulation results demonstrate that, compared to existing representative algorithms, the proposed ERTTND algorithm achieves over 30% savings in average discovery delay and energy consumption.
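The abstract models sector selection as a learning automaton that reinforces antenna directions where neighbors are found and penalizes unproductive ones, drawing on observations from both transmission and reception modes. As a rough illustration of that learning-automaton idea only, the sketch below uses a generic linear reward-penalty update over a per-sector probability vector; the class and parameter names (SectorAutomaton, reward_rate, penalty_rate) are assumptions of this sketch, not the paper's actual TTRL or ERAP mechanisms.

```python
import random

# Minimal sketch of a learning-automaton-style sector selector for directional
# neighbor discovery, using a generic linear reward-penalty update. This is an
# illustrative assumption, NOT the ERTTND/TTRL/ERAP algorithm from the paper.

class SectorAutomaton:
    def __init__(self, num_sectors, reward_rate=0.2, penalty_rate=0.05):
        self.n = num_sectors
        self.a = reward_rate    # step size when a beacon exchange succeeds
        self.b = penalty_rate   # step size when a slot yields no neighbor
        # No prior knowledge: point at each sector with equal probability.
        self.p = [1.0 / num_sectors] * num_sectors

    def choose_sector(self):
        # Sample the antenna direction for the next slot from the current vector.
        return random.choices(range(self.n), weights=self.p)[0]

    def update(self, sector, discovered):
        """Reinforce the chosen sector on a discovery, penalize it otherwise.

        In the two-way spirit of the paper, `discovered` should reflect what the
        node observed in BOTH its transmission and reception slots for this
        sector, not only what it learned while transmitting.
        """
        if discovered:
            # Reward: move probability mass toward the successful sector.
            self.p = [pi + self.a * (1.0 - pi) if i == sector
                      else (1.0 - self.a) * pi
                      for i, pi in enumerate(self.p)]
        else:
            # Penalty: shift mass away from the unproductive sector and
            # redistribute it evenly over the remaining sectors.
            self.p = [(1.0 - self.b) * pi if i == sector
                      else self.b / (self.n - 1) + (1.0 - self.b) * pi
                      for i, pi in enumerate(self.p)]


if __name__ == "__main__":
    # Toy run: a single hidden neighbor sits in sector 3 of an 8-sector antenna,
    # and a slot succeeds with 60% probability when the node points at it.
    la = SectorAutomaton(num_sectors=8)
    for slot in range(200):
        s = la.choose_sector()
        found = (s == 3) and (random.random() < 0.6)
        la.update(s, found)
    print([round(pi, 3) for pi in la.p])  # mass should concentrate on sector 3
```

In this toy run the probability mass gradually concentrates on the sector where beacon exchanges succeed, which is the qualitative behavior a reward-and-penalty mechanism is intended to produce; the paper's ERAP is described as further accounting for sectors that have already been rewarded.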
Source journal: Ad Hoc Networks (Engineering & Technology, Telecommunications)
CiteScore: 10.20
Self-citation rate: 4.20%
Articles per year: 131
Review time: 4.8 months
Journal description: The Ad Hoc Networks is an international and archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking areas. The Ad Hoc Networks considers original, high quality and unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
Mobile and Wireless Ad Hoc Networks
Sensor Networks
Wireless Local and Personal Area Networks
Home Networks
Ad Hoc Networks of Autonomous Intelligent Systems
Novel Architectures for Ad Hoc and Sensor Networks
Self-organizing Network Architectures and Protocols
Transport Layer Protocols
Routing protocols (unicast, multicast, geocast, etc.)
Media Access Control Techniques
Error Control Schemes
Power-Aware, Low-Power and Energy-Efficient Designs
Synchronization and Scheduling Issues
Mobility Management
Mobility-Tolerant Communication Protocols
Location Tracking and Location-based Services
Resource and Information Management
Security and Fault-Tolerance Issues
Hardware and Software Platforms, Systems, and Testbeds
Experimental and Prototype Results
Quality-of-Service Issues
Cross-Layer Interactions
Scalability Issues
Performance Analysis and Simulation of Protocols
Latest articles from this journal
Cross-layer UAV network routing protocol for spectrum denial environments
Editorial Board
JamBIT: RL-based framework for disrupting adversarial information in battlefields
Wireless sensor networks and machine learning centric resource management schemes: A survey
V2X application server and vehicle centric distribution of commitments for V2V message authentication