Pub Date : 2024-11-22 | DOI: 10.1016/j.adhoc.2024.103715
M. Mikus, Ja. Konecny, P. Krömer, K. Bancik, Ji. Konecny, J. Choutka, M. Prauzek
This study presents an in-depth analysis of the computational costs associated with the application of an Evolutionary Fuzzy Rule-based (EFR) energy management system for Internet of Things (IoT) devices. In energy-harvesting IoT nodes, energy management is critical for sustaining long-term operation. The proposed EFR approach integrates fuzzy logic and genetic programming to autonomously control energy consumption based on available resources. The study evaluates the system’s computational performance, particularly focusing on processing time, RAM and flash memory usage across various hardware configurations. Different compiler optimization levels and floating-point unit (FPU) settings were also explored, comparing standard and pre-compiled algorithms. The results reveal computational times ranging from 2.43 to 5.23 ms, RAM usage peaking at 6.23 kB, and flash memory consumption between 19 kB and 32 kB. A significant reduction in computational overhead is achieved with optimized compiler settings and hardware FPU, highlighting the feasibility of deploying EFR-based energy management systems in low-power, resource-constrained IoT environments. The findings demonstrate the trade-offs between computational efficiency and energy management, with particular benefits observed in scenarios requiring real-time control in remote and energy-limited environments.
{"title":"Analysis of the computational costs of an evolutionary fuzzy rule-based internet-of-things energy management approach","authors":"M. Mikus , Ja. Konecny , P. Krömer , K. Bancik , Ji. Konecny , J. Choutka , M. Prauzek","doi":"10.1016/j.adhoc.2024.103715","DOIUrl":"10.1016/j.adhoc.2024.103715","url":null,"abstract":"<div><div>This study presents an in-depth analysis of the computational costs associated with the application of an Evolutionary Fuzzy Rule-based (EFR) energy management system for Internet of Things (IoT) devices. In energy-harvesting IoT nodes, energy management is critical for sustaining long-term operation. The proposed EFR approach integrates fuzzy logic and genetic programming to autonomously control energy consumption based on available resources. The study evaluates the system’s computational performance, particularly focusing on processing time, RAM and flash memory usage across various hardware configurations. Different compiler optimization levels and floating-point unit (FPU) settings were also explored, comparing standard and pre-compiled algorithms. The results reveal computational times ranging from 2.43 to 5.23 ms, RAM usage peaking at 6.23 kB, and flash memory consumption between 19 kB and 32 kB. A significant reduction in computational overhead is achieved with optimized compiler settings and hardware FPU, highlighting the feasibility of deploying EFR-based energy management systems in low-power, resource-constrained IoT environments. The findings demonstrate the trade-offs between computational efficiency and energy management, with particular benefits observed in scenarios requiring real-time control in remote and energy-limited environments.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"168 ","pages":"Article 103715"},"PeriodicalIF":4.4,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142719647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-20 | DOI: 10.1016/j.adhoc.2024.103699
Zhu Sifeng, Song Zhaowei, Zhu Hai, Qiao Rui
Structured large-scale tasks pose well-recognized challenges to resource-sensitive intelligent transportation systems, particularly the need to reduce delay and energy consumption during caching and offloading. To address these challenges and improve the quality of service for vehicular users, this paper proposes a cloud–edge-end collaboration caching strategy (CACCSC) based on structured task content awareness. Dependencies among task fragments are modeled through fuzzy judgment criteria. In addition, a system delay model, an energy consumption model, and an edge server load balancing model are developed, together with a multi-objective optimization model that integrates system delay, energy consumption, and edge server load-balancing variance. To solve this multi-objective optimization problem, an adaptive multi-objective optimization algorithm (MDE-NSGA-III) is developed, combining an enhanced Differential Evolution algorithm with improvements to the NSGA-III algorithm. Simulation experiments demonstrate that, when the number of users in the system reaches 35, the system delay, energy consumption, and load-balancing variance of the proposed MDE-NSGA-III scheme are 6.1%, 6.6%, and 25% lower than those of NSGA-III; 15.8%, 10%, and 41.7% lower than those of NSGA-II; and 62.7%, 20.7%, and 8.3% lower than those of PeEA.
{"title":"Efficient slicing scheme and cache optimization strategy for structured dependent tasks in intelligent transportation scenarios","authors":"Zhu Sifeng , Song Zhaowei , Zhu Hai , Qiao Rui","doi":"10.1016/j.adhoc.2024.103699","DOIUrl":"10.1016/j.adhoc.2024.103699","url":null,"abstract":"<div><div>The challenges posed by structured large-scale tasks to resource-sensitive intelligent transportation systems have been acknowledged, particularly regarding the need to reduce delay and energy consumption during the caching and offloading processes. To address these challenges and improve the quality of service for vehicular users, a cloud–edge-end collaboration caching strategy (CACCSC) based on structured task content awareness was proposed in this paper. The dependencies among task fragments were modeled through fuzzy judgment criteria. In addition, a system delay model, an energy consumption model, and an edge server load balancing model were developed, along with a multi-objective optimization model that integrates system delay, energy consumption, and edge server load balancing variance. To solve this multi-objective optimization problem, an adaptive multi-objective optimization algorithm (MDE-NSGA-III) was developed, which combines an enhanced version of the Differential Evolution algorithm with improvements to the NSGA-III algorithm. Finally, it has been demonstrated through simulation experiments that when the number of users in the system reaches 35, the system delay, energy consumption, and load balancing variance of the MDE-NSGA-III optimization scheme proposed in this paper are 6.1%, 6.6%, and 25% lower than those of the NSGA-III scheme, 15.8%, 10%, and 41.7% lower than those of the NSGA-II scheme, and 62.7%, 20.7%, and 8.3% lower than those of the PeEA scheme.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"168 ","pages":"Article 103699"},"PeriodicalIF":4.4,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142719625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-19 | DOI: 10.1016/j.adhoc.2024.103695
Henok Gashaw, Jamie Wubben, Carlos T. Calafate, Fabrizio Granelli
The steady rise in the use of unmanned aerial vehicles (UAVs) is leading to the development of an ever-growing number of applications. In urban settings, efforts like the U-Space initiative in Europe are striving to standardize and regulate UAV operations. To support these applications and further UAV research, it is essential to thoroughly understand UAV communication, both among UAVs and between UAVs and the ground. Nonetheless, we have identified a lack of studies on communication models, especially in urban areas where obstacles like tall buildings can disrupt communication. This study offers a comprehensive review of current measurement campaigns on channel models for aerial communication. In addition, we conducted experiments on (i) the separation distance between two UAVs, (ii) multi-UAV communication, and (iii) multi-UAV-to-ground communication, using three different city profiles in Spain (Valencia, Barcelona, and Madrid). To accomplish this, we utilized an advanced co-simulation framework that accurately models both UAV mobility (Ardusim) and communication (OMNeT++). Our results regarding UAV-to-UAV communication in a city environment indicate that: (i) the communication range, in our specific experiments, is limited to around 400 m, beyond which the Packet Delivery Ratio (PDR) declines significantly; (ii) different communication models yield similar results; and (iii) UAV-to-UAV communication becomes feasible at higher altitudes (e.g., 120 m), particularly in the presence of tall buildings. With respect to multi-UAV-to-ground communications, we conclude that, again, the altitude of the UAVs is paramount. Furthermore, increasing the number of UAVs providing service to the ground does increase the PDR, but only marginally.
{"title":"Impact of urban environments on FANET communication: A comparative study of propagation models","authors":"Henok Gashaw , Jamie Wubben , Carlos T. Calafate , Fabrizio Granelli","doi":"10.1016/j.adhoc.2024.103695","DOIUrl":"10.1016/j.adhoc.2024.103695","url":null,"abstract":"<div><div>The steady rise in the use of unmanned aerial vehicles (UAVs) is leading to the development of an ever-growing number of applications. In urban settings, efforts like the U-Space initiative in Europe are striving to standardize and regulate the operations of UAVs. To support these applications and further UAV research, it is essential to thoroughly understand UAV communication, both among and between UAVs. Nonetheless, we have identified a lack of studies on communication models, especially in urban areas where obstacles like tall buildings can disrupt communication. This study offers a comprehensive review of current measurement campaigns on channel models for aerial communication. In addition, we conducted experiments on (i) the separation distance between two UAVs, (ii) Multi-UAV communication and (iii) Multi-UAV to ground communication using three different city profiles in Spain (Valencia, Barcelona, and Madrid). To accomplish this, we utilized an advanced co-simulation framework that accurately models both UAV mobility (Ardusim) and communication (OMNeT++). Our results regarding UAV-to-UAV communication in a city environment indicate that: (i) the communication range, in our specific experiments, is limited to around 400 meters. Afterward, the Packet Delivery Ratio (PDR) declines significantly. (ii) Different communication models yield similar results. (iii) UAV-to-UAV communication becomes feasible at higher altitudes (e.g., 120 m), particularly in the presence of tall buildings. With respect to the Multi-UAV to ground communications, we can conclude that again, the altitude of the UAVs is paramount. Furthermore, increasing the number of UAVs providing service to the ground does increase the PDR, but only ever so slightly.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"168 ","pages":"Article 103695"},"PeriodicalIF":4.4,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142747819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-16 | DOI: 10.1016/j.adhoc.2024.103714
Antonino Pagano, Domenico Garlisi, Ilenia Tinnirello, Fabrizio Giuliano, Giovanni Garbo, Mariana Falco, Francesca Cuomo
This survey explores the convergence of Internet of Things (IoT) technologies with Water Distribution Systems (WDSs), focusing on large-scale deployments and the role of edge computing (EC). Effective water management increasingly relies on IoT monitoring, resulting in massive deployments and the generation of Big Data. While previous research has examined these topics individually, this work integrates them into a comprehensive analysis. We systematically reviewed 255 studies on IoT in WDSs, identifying key challenges such as interoperability, scalability, energy efficiency, network coverage, and reliability. We also examined technologies like LPWAN and the growing use of EC for real-time data processing. In large-scale WDS scenarios, where vast amounts of data are generated, we highlighted the importance of technologies like NB-IoT, SigFox, and LoRaWAN due to their low power consumption and wide coverage. Based on our findings, we provide guidelines for sustainable, large-scale IoT deployment in WDSs, emphasizing the need for edge data processing to reduce cloud dependency, improve scalability, and enable smarter cities and digital twins.
{"title":"A survey on massive IoT for water distribution systems: Challenges, simulation tools, and guidelines for large-scale deployment","authors":"Antonino Pagano , Domenico Garlisi , Ilenia Tinnirello , Fabrizio Giuliano , Giovanni Garbo , Mariana Falco , Francesca Cuomo","doi":"10.1016/j.adhoc.2024.103714","DOIUrl":"10.1016/j.adhoc.2024.103714","url":null,"abstract":"<div><div>This survey explores the convergence of Internet of Things (IoT) technologies with Water Distribution Systems (WDSs), focusing on large-scale deployments and the role of edge computing (EC). Effective water management increasingly relies on IoT monitoring, resulting in massive deployments and the generation of Big Data. While previous research has examined these topics individually, this work integrates them into a comprehensive analysis. We systematically reviewed 255 studies on IoT in WDS, identifying key challenges such as interoperability, scalability, energy efficiency, network coverage, and reliability. We also examined technologies like LPWAN and the growing use of EC for real-time data processing. In large-scale WDS scenarios, where vast amounts of data are generated, we highlighted the importance of technologies like NB-IoT, SigFox, and LoRaWAN due to their low power consumption and wide coverage. Based on our findings, we provide guidelines for sustainable, large-scale IoT deployment in WDS, emphasizing the need for edge data processing to reduce cloud dependency, improve scalability, and enable smarter cities and digital twins.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"168 ","pages":"Article 103714"},"PeriodicalIF":4.4,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142719626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-13 | DOI: 10.1016/j.adhoc.2024.103703
Hamid Amiriara, Farid Ashtiani, Mahtab Mirmohseni, Masoumeh Nasiri-Kenari, Behrouz Maham
This paper analyzes the performance of reconfigurable intelligent surface (RIS)-assisted device-to-device (D2D) communication systems, focusing on co-channel interference, a prevalent issue due to the frequency reuse of the sidelink in underlay in-band D2D communications. In contrast to previous studies that either neglect interference or consider it only at the user, our research analyzes the outage probability (OP) of RIS-assisted D2D communication systems considering interference at both the user and the RIS. More specifically, we introduce a novel integral-form expression for an exact analysis of the OP. Additionally, we present a new, accurate approximation for the OP, using gamma distributions to approximate the fading of both the desired and interference links, thereby yielding a closed-form expression. Nevertheless, both derived expressions, i.e., the exact integral form and the approximate closed form, contain special functions, such as Meijer's G-function and the parabolic cylinder function, which complicate real-time OP analysis. To circumvent this, we employ a deep neural network (DNN) for real-time OP prediction, trained with data generated by the exact expression. Moreover, we present a tight upper bound that quantifies the impact of interference on the achievable diversity order and coding gain. We validate the derived expressions through Monte Carlo simulations. Our analysis reveals that while interference does not affect the system's diversity order, it significantly degrades performance by reducing the coding gain. The results further demonstrate that increasing the number of the RIS's reflecting elements is an effective strategy to mitigate the adverse effects of interference on system performance.
{"title":"RIS-assisted D2D communication in the presence of interference: Outage performance analysis and DNN-based prediction","authors":"Hamid Amiriara , Farid Ashtiani , Mahtab Mirmohseni , Masoumeh Nasiri-Kenari , Behrouz Maham","doi":"10.1016/j.adhoc.2024.103703","DOIUrl":"10.1016/j.adhoc.2024.103703","url":null,"abstract":"<div><div>This paper analyzes the performance of reconfigurable intelligent surface (RIS)-assisted device-to-device (D2D) communication systems, focusing on addressing co-channel interference, a prevalent issue due to the frequency reuse of sidelink in the underlay in-band D2D communications. In contrast to previous studies that either neglect interference or consider it only at the user, our research investigates a performance analysis in terms of outage probability (OP) for RIS-assisted D2D communication systems considering the presence of interference at both the user and the RIS. More specifically, we introduce a novel integral-form expression for an exact analysis of OP. Additionally, we present a new accurate approximation expression for OP, using the gamma distributions to approximate the fading of both desired and interference links, thereby yielding a closed-form expression. Nevertheless, both derived expressions, i.e., the exact integral-form and the approximate closed-form, contain special functions, such as Meijer’s G-function and the parabolic cylinder function, which complicate real-time OP analysis. To circumvent this, we employ a deep neural network (DNN) for real-time OP prediction, trained with data generated by the exact expression. Moreover, we present a tight upper bound that quantifies the impact of interference on achievable diversity order and coding gain. We validate the derived expressions through Monte Carlo simulations. Our analysis reveals that while interference does not affect the system’s diversity order, it significantly degrades the performance by reducing the coding gain. The results further demonstrate that increasing the number of RIS’s reflecting elements is an effective strategy to mitigate the adverse effects of the interference on the system performance.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103703"},"PeriodicalIF":4.4,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142702560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-12 | DOI: 10.1016/j.adhoc.2024.103704
Mekala Ratna Raju, Sai Krishna Mothku, Manoj Kumar Somesula, Srilatha Chebrolu
Wireless Sensor Networks (WSNs) have become pivotal in numerous applications, including environmental monitoring, precision agriculture, and disaster response. In the context of urban flood monitoring, utilizing unmanned aerial vehicles (UAVs) presents unique challenges due to the dynamic and unpredictable nature of the environment. The primary challenge is designing strategies that maximize data collection while minimizing the Age of Information (AoI) to ensure timely and accurate decision-making. Efficient data collection is crucial to capturing all relevant information and providing a comprehensive understanding of flood dynamics. Simultaneously, reducing AoI is essential, as outdated data can lead to delayed or incorrect responses, potentially worsening the situation. Addressing these challenges is critical for the effective use of WSNs in urban flood monitoring. Initially, we formulate the problem as a mixed integer non-linear programming (MINLP) problem; it is then solved using a Lagrangian-based branch-and-bound technique after conversion into an unconstrained problem. Then, for large-scale WSNs, we propose a hybrid optimization technique that combines a genetic algorithm with particle swarm optimization to simultaneously maximize data collection and reduce the AoI of the collected data, subject to the energy-consumption constraints of the UAVs. Simulation results demonstrate that our proposed algorithm outperforms existing approaches in terms of both data collection and AoI.
{"title":"Age and energy aware data collection scheme for urban flood monitoring in UAV-assisted Wireless Sensor Networks","authors":"Mekala Ratna Raju , Sai Krishna Mothku , Manoj Kumar Somesula , Srilatha Chebrolu","doi":"10.1016/j.adhoc.2024.103704","DOIUrl":"10.1016/j.adhoc.2024.103704","url":null,"abstract":"<div><div>Wireless Sensor Networks (WSNs) have become pivotal in numerous applications, including environmental monitoring, precision agriculture, and disaster response. In the context of urban flood monitoring, utilizing unmanned aerial vehicles (UAVs) presents unique challenges due to the dynamic and unpredictable nature of the environment. The primary challenges involve designing strategies that maximize data collection while minimizing the Age of Information (AoI) to ensure timely and accurate decision-making. Efficient data collection is crucial to capturing all relevant information and providing a comprehensive understanding of flood dynamics. Simultaneously, reducing AoI is essential, as outdated data can lead to delayed or incorrect responses, potentially worsening the situation. Addressing these challenges is critical for the effective use of WSNs in urban flood monitoring. Initially, we formulate the problem as a mixed integer non-linear programming (MINLP) problem. Further, it is solved using a Lagrangian-based branch and bound technique by converting it into an unconstrained problem. Then, for large-scale WSN, we propose a hybrid optimization technique which combines a genetic algorithm with a particle swarm optimization technique to simultaneously maximize the data collection and reduce the AoI of the collected data with the constraint of energy consumption of the UAVs. Simulation results demonstrate that our proposed algorithm outperforms existing approaches in terms of both data collection and AoI.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"168 ","pages":"Article 103704"},"PeriodicalIF":4.4,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142719624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-09 | DOI: 10.1016/j.adhoc.2024.103702
Siyue Zheng, Xiaojun Zhu, Zhengrui Qin, Chao Dong
Unmanned Aerial Vehicles (UAVs), which connect to one another over wireless networks, are being used in warfare more frequently. Nevertheless, adversarial interference has the potential to disrupt wireless communication, and today's UAV routing methods struggle to handle interference. In this paper, we propose a Cross-Layer UAV Link State Routing protocol, CLUN-LSR, to combat jamming attacks. CLUN-LSR features three designs. First, it obtains real-time spectrum status from the link layer. Such capabilities are provided by many existing radios, especially those in military applications, but are ignored by traditional routing protocols. Second, based on this cross-layer information, CLUN-LSR augments route computation with efficient selection rules, including using the number of two-hop neighbor nodes as a route-selection metric. Third, CLUN-LSR selects nodes that are not in the interference area, thereby reducing network interruptions and improving data transmission efficiency. All table-driven routing protocols can apply CLUN-LSR for better performance. We apply CLUN-LSR to the existing routing protocol MP-OLSR and simulate it using a commercial network simulator. Simulation results show that our routing protocol outperforms existing table-driven routing methods, particularly in terms of packet transmission rate and overall throughput.
{"title":"Cross-layer UAV network routing protocol for spectrum denial environments","authors":"Siyue Zheng , Xiaojun Zhu , Zhengrui Qin , Chao Dong","doi":"10.1016/j.adhoc.2024.103702","DOIUrl":"10.1016/j.adhoc.2024.103702","url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs), which connect to one another over wireless networks, are being used in warfare more frequently. Nevertheless, adversarial interference has the potential to disrupt wireless communication, and the UAV routing methods in use today struggle to handle interference. In this paper, we propose a Cross-Layer UAV Link State Routing protocol, CLUN-LSR, to combat against jamming attacks. CLUN-LSR features three designs. First, it obtains real-time spectrum status from the link layer. Such capabilities are provided by many existing radios, especially the ones in military applications, but are ignored by traditional routing protocols. Second, based on the cross-layer information, CLUN-LSR adds efficient routing functions during routing, including the use of the number of two-hop neighbor nodes as a metric for route selection. Third, CLUN-LSR selects nodes that are not in the interference area, thereby reducing network interruptions and improving data transmission efficiency. All table-driven routing protocols can apply CLUN-LSR for better performance. We apply CLUN-LSR to the existing routing protocol MP-OLSR and simulate it using a commercial network simulator. Simulation results show that our innovative routing protocol demonstrates superior performance compared to existing table-driven routing methods, particularly in terms of packet transmission rate and overall throughput.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103702"},"PeriodicalIF":4.4,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-06 | DOI: 10.1016/j.adhoc.2024.103697
Muhammad Salman, Taehong Lee, Ali Hassan, Muhammad Yasin, Kiran Khurshid, Youngtae Noh
During battlefield operations, military radios (hereafter nodes) exchange information among various units using a mobile ad-hoc network (MANET) due to its infrastructure-less and self-healing capabilities. Adversarial cyberwarfare plays a crucial role in modern combat by disrupting communication between critical nodes (i.e., nodes mainly responsible for propagating important information) to gain dominance over the opposing side. However, determining critical nodes within a complex network is an NP-hard problem. This paper formulates a mathematical model to identify important links and their connected nodes, and presents JamBIT, a reinforcement learning-based framework with an encoder–decoder architecture, for efficiently detecting and jamming critical nodes. The encoder transforms network structures into embedding vectors, while the decoder assigns a score to the embedding vector with the highest reward. Our framework is trained and tested on custom-built MANET topologies using the Named Data Networking (NDN) protocol. JamBIT has been evaluated across various scales and weighting methods for both connected node and network dismantling problems. Our proposed method outperformed existing RL-based baselines, with a 24% performance gain for smaller topologies (50–100 nodes) and 8% for larger ones (400–500 nodes) in connected node problems, and a 7% gain for smaller topologies and 15% for larger ones in network dismantling problems.
{"title":"JamBIT: RL-based framework for disrupting adversarial information in battlefields","authors":"Muhammad Salman , Taehong Lee , Ali Hassan , Muhammad Yasin , Kiran Khurshid , Youngtae Noh","doi":"10.1016/j.adhoc.2024.103697","DOIUrl":"10.1016/j.adhoc.2024.103697","url":null,"abstract":"<div><div>During battlefield operations, military radios (hereafter nodes) exchange information among various units using a mobile ad-hoc network (MANET) due to its infrastructure-less and self-healing capabilities. Adversarial cyberwarfare plays a crucial role in modern combat by disrupting communication between critical nodes (i.e., nodes mainly responsible for propagating important information) to gain dominance over the opposing side. However, determining critical nodes within a complex network is an NP-hard problem. This paper formulates a mathematical model to identify important links and their connected nodes, and presents JamBIT, a reinforcement learning-based framework with an encoder–decoder architecture, for efficiently detecting and jamming critical nodes. The encoder transforms network structures into embedding vectors, while the decoder assigns a score to the embedding vector with the highest reward. Our framework is trained and tested on custom-built MANET topologies using the Named Data Networking (NDN) protocol. JamBIT has been evaluated across various scales and weighting methods for both connected node and network dismantling problems. Our proposed method outperformed existing RL-based baselines, with a 24% performance gain for smaller topologies (50–100 nodes) and 8% for larger ones (400–500 nodes) in connected node problems, and a 7% gain for smaller topologies and 15% for larger ones in network dismantling problems.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103697"},"PeriodicalIF":4.4,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-05 | DOI: 10.1016/j.adhoc.2024.103698
Gururaj S. Kori, Mahabaleshwar S. Kakkasageri, Poornima M. Chanal, Rajani S. Pujar, Vinayak A. Telsang
Wireless Sensor Networks (WSNs) are heterogeneous, distributed networks composed of tiny cognitive, autonomous sensor nodes that integrate a processor, sensors, transceivers, and software. WSNs offer much to the sensing world and are deployed in predefined geographical areas, often beyond human intervention, to perform multiple applications. Sensing, computing, and communication are the main functions of a sensor node. However, WSNs are constrained by limited resources such as power, computational speed, memory, sensing capability, communication range, and bandwidth. When WSNs are shared among multiple tasks and applications, resource management becomes a challenging task. Hence, effective utilization of available resources is a critical issue for prolonging the life span of a sensor network. Current research has explored various methods for resource management in WSNs, but most of these approaches are traditional and often fall short in addressing resource management during real-time applications. Resource management schemes involve resource identification, resource scheduling, resource allocation, and resource utilization and monitoring. This paper aims to fill the gap by reviewing and analysing the latest Computational Intelligence (CI) techniques, particularly Machine Learning (ML) and Artificial Intelligence (AI). AI/ML has been applied to countless routine and complex problems arising in WSN operation and resource management; AI/ML algorithms increase the efficiency of the network and speed up computation through optimized utilization of the available resources. This survey therefore offers a timely perspective on the ramifications of machine learning algorithms for autonomous WSN establishment, operation, and resource management.
{"title":"Wireless sensor networks and machine learning centric resource management schemes: A survey","authors":"Gururaj S. Kori , Mahabaleshwar S. Kakkasageri , Poornima M. Chanal , Rajani S. Pujar , Vinayak A. Telsang","doi":"10.1016/j.adhoc.2024.103698","DOIUrl":"10.1016/j.adhoc.2024.103698","url":null,"abstract":"<div><div>Wireless Sensor Network (WSN) is a heterogeneous, distributed network composed of tiny cognitive, autonomous sensor nodes integrated with processor, sensors, transceivers, and software. WSNs offer much to the sensing world and are deployed in predefined geographical areas that are out of human interventions to perform multiple applications. Sensing, computing, and communication are the main functions of the sensor node. However, WSNs are mainly constrained by limited resources such as power, computational speed, memory, sensing capability, communication range, and bandwidth. WSNs when shared for multiple tasks and applications, resource management becomes a challenging task. Hence, effective utilization of available resources is a critical issue to prolong the life span of sensor network. Current research has explored various methods for resources management in WSNs, but most of these approaches are traditional and often fall short in addressing the resource management issues during real-time applications. Resource management schemes involves in resource identification, resource scheduling, resource allocation, resource utilization and monitoring, etc. This paper aims to fill the gap by reviewing and analysing the latest Computational Intelligence (CI) techniques, particularly Machine Learning (ML) and Artificial Intelligence (AI). AIML has been applied to countless humdrum and complex problems arising in WSN operation and resource management. AIML algorithms increase the efficiency of the network and speed up the computational time with optimized utilization of the available resources. Therefore, this is a timely perspective on the ramifications of machine learning algorithms for autonomous WSN establishment, operation, and resource management.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103698"},"PeriodicalIF":4.4,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-04 | DOI: 10.1016/j.adhoc.2024.103701
Mujahid Muhammad, Ghazanfar Ali Safdar
Safety applications, such as intersection collision warnings and emergency brake warnings, enhance road safety and traffic efficiency through periodic broadcast messages from vehicles and roadside infrastructure. While the Elliptic Curve Digital Signature Algorithm (ECDSA) is a widely used security approach, its performance limitations make it unsuitable for time-critical safety applications. As such, a symmetric cryptography-based technique called Timed Efficient Stream Loss-tolerant Authentication (TESLA) offers a viable alternative. However, applying standard TESLA in the context of vehicle-to-vehicle (V2V) communications has its own challenges. One challenge is the difficulty of distributing authentication information, called commitments, in the highly dynamic V2V environment. In this paper, we propose two novel solutions to this problem: V2X Application Server (VAS)-centric and vehicle-centric. The former is an application-level solution in which a central server, the VAS, selectively unicasts commitments to vehicles; the latter is a reactive scheme in which vehicles themselves periodically broadcast commitments. Extensive simulations are conducted using representatives of the real V2V environment to evaluate the performance of these approaches under different traffic situations, as well as to compare performance with a state-of-the-art distribution solution. The simulation results indicate that the VAS-centric solution is preferable for use in a TESLA-like V2V security scheme. It demonstrates desirable features, including timely delivery of commitments and high distribution efficiency: over 95% of the commitments sent by the VAS are associated with relevant safety messages, compared with the vehicle-centric and state-of-the-art solutions. Formal security analysis, conducted using the Random Oracle Model (ROM), proves the correctness of our proposed distribution schemes. Additionally, an informal security analysis shows the resilience of the proposed schemes against various attacks, including impersonation, replay, and bogus commitment messages.
{"title":"V2X application server and vehicle centric distribution of commitments for V2V message authentication","authors":"Mujahid Muhammad , Ghazanfar Ali Safdar","doi":"10.1016/j.adhoc.2024.103701","DOIUrl":"10.1016/j.adhoc.2024.103701","url":null,"abstract":"<div><div>Safety applications, such as intersection collision warnings and emergency brake warnings, enhance road safety and traffic efficiency through periodic broadcast messages by vehicles and roadside infrastructure. While the Elliptic Curve Digital Signature Algorithm (ECDSA) is a widely used security approach, its performance limitations make it unsuitable for time-critical safety applications. As such, a symmetric cryptography-based technique called Timed Efficient Stream Loss-tolerant Authentication (TESLA) offers a viable alternative. However, applying standard TESLA in the context of vehicle-to-vehicle (V2V) communications has its own challenges. One challenge is the difficulty of distributing authentication information called commitments in the highly dynamic V2V environment. In this paper, we propose two novel solutions to this problem, namely, V2X Application Server (VAS)-centric and vehicle-centric. The former is an application-level solution that involves selective unicasting of commitments to vehicles by a central server, the VAS, and the latter is a reactive scheme that involves the periodic broadcast of commitments by the vehicles themselves. Extensive simulations are conducted using representatives of the real V2V environment to evaluate the performance of these approaches under different traffic situations; as well as performance comparison with a state-of-the-art distribution solution. The simulation results indicate that the VAS-centric solution is preferable for use in a TESLA-like V2V security scheme. It demonstrates desirable features, including timely delivery of commitments and high distribution efficiency, with over 95 % of commitments sent by the VAS are associated with relevant safety messages when compared with the vehicle-centric and state-of-the-art solutions. Formal security analysis, conducted using the Random Oracle Model (ROM), proves the correctness of our proposed distribution schemes. Additionally, an informal security analysis shows the resilience of the proposed schemes against various attacks, including impersonation, replay, and bogus commitment messages.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"167 ","pages":"Article 103701"},"PeriodicalIF":4.4,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}