Summary: The proliferation of network devices capable of gathering, transmitting, and receiving data over the Internet has spurred the widespread adoption of Internet of Things (IoT) devices, particularly in resource-oriented applications. Integrating blockchain, IoT, homomorphic encryption, and federated learning requires balancing computational requirements against real-time performance. Secure key management is crucial to maintaining data privacy and integrity, and compliance with privacy regulations requires careful implementation of privacy-preserving mechanisms in blockchain-enabled IoT environments, which can be subject to various attacks. Addressing these challenges requires interdisciplinary expertise, research, and innovation to develop more efficient and effective privacy-preserving techniques tailored to the unique characteristics of such environments. This research introduces the Modified Homomorphic Encryption Federated-based Adaptive Hybrid Dandelion Search (MHEF-AHDS) algorithm as an effective framework for enhancing security in blockchain-enabled IoT systems. The combination of Modified Homomorphic Encryption (MHE) and Federated Learning (FL) addresses privacy concerns in collaborative, decentralized machine learning environments, enabling secure and adaptable data collaboration while mitigating privacy risks associated with sensitive information. The integration of quantum machine learning into security applications presents a further opportunity for progress and innovation. In this work, the Adaptive Hybrid Dandelion optimization algorithm, featuring an initial search strategy, is employed for hyperparameter optimization, thereby improving the performance of the proposed MHEF-AHDS method. Furthermore, the integration of smart contracts and blockchain-based IoT enhances the overall security of the proposed method.
MHEF-AHDS comprehensively tackles privacy, security, and scalability challenges through robust security measures and privacy enhancements. The performance of the MHEF-AHDS method is evaluated on key metrics such as throughput, latency, scalability, energy consumption, accuracy, precision, recall, and F1-score, and comparative assessments against existing methods gauge its effectiveness in addressing security, privacy, and scalability concerns.
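The MHE-plus-FL pairing rests on additively homomorphic encryption: clients encrypt their model updates, and the server aggregates ciphertexts without ever seeing plaintexts. The sketch below illustrates that principle with a toy Paillier cryptosystem; it is not the paper's MHE scheme, the primes are deliberately tiny and insecure, and the function names are ours.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic) -- illustrative only.
# A real deployment needs large primes and a vetted cryptographic library.
P, Q = 293, 433            # tiny demonstration primes
N = P * Q                  # public modulus
N2 = N * N
G = N + 1                  # standard generator choice g = n + 1
PHI = (P - 1) * (Q - 1)    # phi(n); valid private exponent since gcd(phi, n) = 1
MU = pow(PHI, -1, N)       # modular inverse of phi mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < N) with fresh randomness r."""
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Recover m: L(c^phi mod n^2) * phi^-1 mod n, with L(u) = (u - 1) / n."""
    u = pow(c, PHI, N2)
    return (((u - 1) // N) * MU) % N

def he_sum(ciphertexts):
    """Multiplying ciphertexts yields an encryption of the plaintext sum."""
    acc = 1
    for c in ciphertexts:
        acc = (acc * c) % N2
    return acc
```

In a federated round, each client would quantize its model update to an integer, encrypt it, and the server would call `he_sum` and decrypt only the aggregate, never an individual contribution.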
Raja Anitha and Mahalingam Murugan, "Privacy-preserving collaboration in blockchain-enabled IoT: The synergy of modified homomorphic encryption and federated learning," International Journal of Communication Systems (IF 2.1), published 2024-09-02. DOI: 10.1002/dac.5955
Summary: The channel availability problem is especially acute in mobile ad hoc networks (MANETs) and garners considerable attention in communication networks. Because increased mobile usage can exhaust the channels available for allocation, an improved channel allocation technique is presented to tackle the availability problem. The distributed dynamic channel allocation (DDCA) model is built in this paper using the hybrid memory dragonfly with imperialist competitive (HMDIC) method, which assigns channels to mobile hosts based on optimization logic. The MANET provides a dispersed network within the coverage region in the absence of base station infrastructure. In this setting, the HMDIC optimizer randomly initializes each node, which updates and stores its pbest value using the memory (RAM) dragonfly component with satellite images. The constraint values are then used to construct the cost function, yielding a strong global optimum solution, so the channels are distributed effectively. The HMDIC algorithm builds a novel channel allocation system: it exploits the exploration capability of the Modified Dragonfly Algorithm (MDA) to search each node and uses the imperialist competitive algorithm (ICA) to locate the global best solution. Combining these two tactics accelerates the convergence of the allocation model. In the performance validation, the HMDIC-based DDCA system provides promising results in assigning available channels, thereby enhancing channel reuse efficiency and reducing fractional interference.
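The core of any such allocator is a cost function over channel assignments plus a memory-guided search that keeps the best assignment found. The following is a deliberately simplified stand-in, not the HMDIC method itself: a single-flip search with best-so-far memory, where the cost counts interfering neighbor pairs that share a channel.

```python
import random

# Simplified memory-guided channel allocation (illustrative stand-in for a
# metaheuristic like HMDIC): assign one of `num_channels` channels to each
# node so that neighboring nodes avoid co-channel interference.
def allocate_channels(neighbors, num_nodes, num_channels, iters=500, seed=1):
    rng = random.Random(seed)

    def cost(assign):
        # Count neighbor pairs assigned the same channel (interference).
        return sum(1 for u, v in neighbors if assign[u] == assign[v])

    best = [rng.randrange(num_channels) for _ in range(num_nodes)]  # memory
    best_cost = cost(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(num_nodes)] = rng.randrange(num_channels)  # perturb
        c = cost(cand)
        if c <= best_cost:          # keep equal-or-better allocations
            best, best_cost = cand, c
    return best, best_cost
```

A real HMDIC-style optimizer would replace the single-flip perturbation with dragonfly swarm moves and ICA competition, but the allocate/score/remember loop is the same shape.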
Suganya Rangasamy, Kanmani Ramasamy, and Rajesh Kumar Thangavel, "An effective channel allocation designed using hybrid memory dragonfly with imperialist competitive algorithm in distributed mobile adhoc network," International Journal of Communication Systems (IF 2.1), published 2024-08-31. DOI: 10.1002/dac.5906
S. Madhavi, R. Praveen, S. Jagatheswari, K. Nivitha
Summary: In wireless sensor networks (WSNs), a trusted routing path must be determined to guarantee reliable data dissemination with maximized Quality of Service (QoS). However, sensor nodes may behave uncooperatively in order to conserve energy and remain active in the network. Trust management techniques are essential for alleviating packet-dropping attacks by sensor nodes that intentionally deteriorate network performance. In this paper, a Hybrid ELECTRE and bipolar fuzzy PROMOTHEE-based trust management (HEBFPTM) scheme is proposed to address the impact of packet-dropping attacks and improve QoS in WSNs. It is formulated as a multi-criteria decision analysis that derives a feasible set of parameters from the sensor nodes to determine each node's degree of cooperation in the network. HEBFPTM integrates the ordinal evaluation of mobile nodes into a cardinal procedure using the PROMETHEE method, attaining quantitative and qualitative analysis that aids in identifying the weight of each cooperation criterion through pairwise comparison. It adopts three preference models: partial, complete, and outranking through intervals. Uncertainty is handled using bipolar fuzzy logic, which helps derive the criteria weights and the preference functions used to rank the sensor nodes along the routing path. Experiments with the proposed HEBFPTM in the ns-2 simulator confirmed its efficacy, improving the attack detection rate by 21.38%, reducing the false positive rate by 15.42%, increasing the packet delivery rate by 18.94%, and reducing energy utilization by 19.84% compared with the benchmarked approaches used for investigation.
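The PROMETHEE step described above (pairwise preferences per criterion, weighted aggregation, then a net-flow ranking) can be sketched generically as follows. This is a plain PROMETHEE II computation with a linear preference function, not the paper's full bipolar-fuzzy HEBFPTM; the scores, weights, and thresholds are assumed inputs.

```python
# PROMETHEE II-style net-flow ranking over per-node trust criteria.
# scores[a][k]: value of criterion k for node a (higher is better).
# weights[k] sum to 1; thresholds[k] is a linear preference threshold.
def promethee_rank(scores, weights, thresholds):
    n = len(scores)

    def pref(d, p):
        # Linear preference: 0 for d <= 0, ramping to 1 at difference p.
        return 0.0 if d <= 0 else min(d / p, 1.0)

    # pi[a][b]: aggregated degree to which node a outranks node b.
    pi = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a][b] = sum(
                    w * pref(scores[a][k] - scores[b][k], thresholds[k])
                    for k, w in enumerate(weights)
                )

    # Net flow = average leaving flow minus average entering flow.
    flows = []
    for a in range(n):
        plus = sum(pi[a][b] for b in range(n) if b != a) / (n - 1)
        minus = sum(pi[b][a] for b in range(n) if b != a) / (n - 1)
        flows.append(plus - minus)
    # Nodes sorted by descending net flow: most cooperative first.
    return sorted(range(n), key=lambda a: -flows[a]), flows
```

Nodes with strongly negative net flow would be flagged as likely packet-droppers and excluded from routing paths.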
S. Madhavi, R. Praveen, S. Jagatheswari, and K. Nivitha, "Hybrid ELECTRE and bipolar fuzzy PROMOTHEE-based packet dropping malicious node mitigation technique for improving QoS in WSNs," International Journal of Communication Systems (IF 2.1), published 2024-08-30. DOI: 10.1002/dac.5974
Danish Mehmood Mughal, Tahira Mahboob, Syed Tariq Shah, Sang‐Hyo Kim, Min Young Chung
Summary: Owing to the exponential increase in wireless network services and bandwidth requirements, sharing the radio spectrum among multiple network operators seems inevitable. Enabling efficient spectrum sharing for resource allocation is challenging due to several random factors, especially in multi-operator spectrum sharing. While spectrum sensing can be useful in spectrum-sharing networks, the inherent unreliability of wireless networks leaves a chance of collision, making operators reluctant to use sensing-based mechanisms for spectrum sharing. To circumvent these issues, we take an alternative approach and propose an efficient spectrum-sharing mechanism that leverages a spectrum coordinator (SC) in a multi-operator scenario assisted by deep learning (DL). In our scheme, before each timeslot begins, each operator's base station transmits to the SC the number of resources it requires, based on the number of packets in its queue, along with its list of available channels. After gathering this information from all base stations, the SC distributes the collected state to all of them. Each base station then runs the DL-based spectrum-sharing algorithm to compute the number of resources it can use, given the packets in its own queue and in the queues of the other operators. Leveraging DL, each operator also computes the cost it must pay to other operators for using their resources. We evaluate the proposed network through extensive simulations and show that the proposed DL-based spectrum-sharing mechanism outperforms the conventional spectrum allocation scheme, paving the way for more dynamic and efficient multi-operator spectrum sharing.
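The report/broadcast/compute exchange can be made concrete with a small sketch. Here the learned DL policy is replaced by a naive proportional rule (channels split in proportion to queue backlog); the report format and function name are our assumptions, not the paper's interface.

```python
# Illustrative spectrum-coordinator exchange: each operator's base station
# reports its queue length and free channels; the SC pools the channels and
# each operator takes a share proportional to its backlog. A DL policy, as
# in the paper, would replace this proportional rule with a learned one.
def share_spectrum(reports):
    # reports: {operator: {"queue": int, "free_channels": set of channel ids}}
    pool = set()
    for r in reports.values():
        pool |= r["free_channels"]          # SC pools all advertised channels
    total_backlog = sum(r["queue"] for r in reports.values()) or 1
    channels = sorted(pool)
    alloc, i = {}, 0
    for op, r in sorted(reports.items()):   # deterministic operator order
        take = round(len(channels) * r["queue"] / total_backlog)
        alloc[op] = channels[i:i + take]    # contiguous slice of the pool
        i += take
    return alloc
```

For example, an operator holding three quarters of the total backlog receives three quarters of the pooled channels; the cost each operator owes for channels contributed by others would be computed on top of such an allocation.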
Danish Mehmood Mughal, Tahira Mahboob, Syed Tariq Shah, Sang-Hyo Kim, and Min Young Chung, "Deep learning-based spectrum sharing in next generation multi-operator cellular networks," International Journal of Communication Systems (IF 2.1), published 2024-08-27. DOI: 10.1002/dac.5964
Summary: Sensor networks are the primary and essential components on which the Internet of Things (IoT) is built; IoT enables smart communication, computation, and sensing capabilities. In sensor networks, data are collected by the sensor nodes and sent to the sink along communication paths established collaboratively by the nodes and the sink. Incorporating energy-efficient data-gathering techniques improves the lifetime of these networks. The major contribution of this work is a survey of data aggregation (DA) techniques and of the algorithmic strategies that facilitate and influence network lifetime (NL) in these environments. DA in wireless sensor networks (WSNs), IoT, and cloud computing extends network lifetime, since it enables efficient merging of traffic flows and thereby reduces transmissions and device energy consumption. In sensor networks, data aggregation tree (DAT)-based routing facilitates energy-efficient routing that extends NL. NL maximization using DATs, that is, constructing DATs with optimal NL, is a known NP-complete problem. The survey then reviews the approaches researchers employ to construct DATs and discusses techniques for DAT scheduling. It further explores sensor deployment techniques and discusses real-world scenarios in which NL is influenced by uncertainty in communication links. Finally, the survey highlights achievements in realizing NL improvement using DATs and identifies limitations and research challenges.
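Since optimal-NL tree construction is NP-complete, surveyed schemes typically start from a heuristic baseline. One common baseline, sketched below on our own assumptions rather than any single surveyed algorithm, is a breadth-first shortest-hop tree rooted at the sink: each node forwards its aggregated data to the parent that first reached it.

```python
from collections import deque

# Minimal data aggregation tree (DAT): a BFS shortest-hop tree rooted at
# the sink. Each non-sink node aggregates its children's data and forwards
# one packet to its parent, reducing total transmissions.
def build_dat(adjacency, sink):
    parent = {sink: None}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adjacency[u]:
            if v not in parent:       # first visit = fewest hops to sink
                parent[v] = u         # v forwards aggregated data to u
                q.append(v)
    return parent                     # parent pointers define the tree
```

Lifetime-aware DAT schemes refine this baseline, for instance by preferring parents with higher residual energy so that no single relay node is drained first.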
Preeti A. Kale and Manisha J. Nene, "A survey on network lifetime maximization using data aggregation trees," International Journal of Communication Systems (IF 2.1), published 2024-08-24. DOI: 10.1002/dac.5962
Muhammad Ali Lodhi, Lei Wang, Khalid Mahmood, Arshad Farhad, Jenhui Chen, Saru Kumari
Summary: The long-range wide area network (LoRaWAN) is a standard for the Internet of Things (IoT) because it offers low cost, long range, low energy consumption, and support for massive numbers of end devices (EDs). The adaptive data rate (ADR) mechanism adjusts parameters at both the EDs and the network server (NS), modifying the transmission spreading factor (SF) and transmit power (TP) to minimize packet errors and optimize transmission performance at the NS. The NS-managed ADR aims to provide reliable and energy-efficient resources (e.g., SF and TP) to EDs by monitoring the packets received from them. However, because channel conditions in LoRaWAN change rapidly under mobility, the existing ADR algorithm is unsuitable and causes significant packet loss and retransmissions, increasing energy consumption. In this paper, we enhance ADR by introducing Kalman filter-based ADR (KF-ADR) and moving median-based ADR (Median-ADR), which estimate the optimal SNR while accounting for mobility; the estimate is then used to assign SF and TP to the EDs. Simulation results show that the proposed techniques outperform the legacy ADRs in terms of convergence period, energy consumption, and packet success ratio.
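The KF-ADR idea can be sketched as a scalar Kalman filter smoothing the per-packet SNR, feeding a standard LoRaWAN-style ADR step (lower SF first, then lower TP, 3 dB per step). The required-SNR table and step logic below follow commonly published LoRaWAN ADR values; the paper's exact filter gains and parameters may differ.

```python
# Required demodulation SNR (dB) per spreading factor, as commonly used
# in LoRaWAN ADR implementations.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

class ScalarKalman:
    """One-dimensional Kalman filter tracking a slowly drifting SNR."""
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=1.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r  # state, var, noises

    def update(self, z):
        self.p += self.q                   # predict: uncertainty grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct toward measurement z
        self.p *= (1 - k)
        return self.x

def adr_step(snr_est, sf, tp_dbm, margin_db=10.0, tp_min=2, step_db=3.0):
    """Spend the SNR margin: decrease SF first, then transmit power."""
    n = int((snr_est - REQUIRED_SNR[sf] - margin_db) // step_db)
    while n > 0 and sf > 7:
        sf -= 1; n -= 1                    # faster data rate first
    while n > 0 and tp_dbm > tp_min:
        tp_dbm -= step_db; n -= 1          # then lower transmit power
    return sf, int(tp_dbm)
```

Replacing `ScalarKalman` with a moving median over recent SNR samples gives the Median-ADR variant; in both cases the smoothed estimate, rather than a single noisy maximum, drives `adr_step`.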
Muhammad Ali Lodhi, Lei Wang, Khalid Mahmood, Arshad Farhad, Jenhui Chen, and Saru Kumari, "Enhanced adaptive data rate strategies for energy-efficient Internet of Things communication in LoRaWAN," International Journal of Communication Systems (IF 2.1), published 2024-08-24. DOI: 10.1002/dac.5966
Summary: Wireless sensor networks (WSNs) consist of numerous sensor nodes with limited battery life, computational power, and network capabilities. These sensors are deployed in specific areas to monitor physical environmental parameters. Once collected, the data are processed and transmitted to a base station (BS) via designated routes. Sensing and transmitting consume significant energy, leading to rapid depletion of node batteries and to hot-spot problems; relying on a single route for data transmission can also cause network overhead issues. Enhancing the energy efficiency of WSNs is therefore a persistent challenge that requires improvements in processes such as routing and clustering. Dynamic cluster head (CH) selection is a key approach for optimal path selection and energy conservation. Accordingly, this work presents a novel multiobjective CH selection and routing method for energy-aware data transmission in WSNs. CH selection is carried out using the proposed chronological wild geese optimization (CWGO) technique under multiple constraints: delay, intercluster distance, intracluster distance, link lifetime (LLT), and predicted energy. The nodes' energy is predicted by a deep recurrent neural network (DRNN). The ideal path from node to BS is then identified by CWGO, considering constraints such as predicted energy, delay, distance, and trust. Evaluated on the metrics energy, trust, distance, and delay, the proposed CWGO attains superior values of 0.963 J, 0.700, 19.468 m, and 0.252 s, respectively.
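Multiobjective CH selection of this kind typically scalarizes the constraints into one fitness value that the optimizer minimizes. The sketch below shows that scalarization only; the equal weights are hypothetical, not taken from the paper, and the CWGO search itself is omitted.

```python
# Weighted multiobjective cluster-head fitness over normalized inputs in
# [0, 1]. Lower delay and intra-cluster distance are better; higher
# inter-cluster distance, link lifetime (LLT), and predicted energy are
# better, so those terms are inverted to make this a pure minimization.
def ch_fitness(delay, intra_dist, inter_dist, llt, energy,
               w=(0.2, 0.2, 0.2, 0.2, 0.2)):
    terms = (delay, intra_dist, 1 - inter_dist, 1 - llt, 1 - energy)
    return sum(wi * t for wi, t in zip(w, terms))
```

An optimizer such as CWGO would evaluate this fitness for each candidate node and elect the candidates with the lowest scores as cluster heads.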
Zoren P. Mabunga and Jennifer C. Dela Cruz, "Chronological wild geese optimization algorithm for cluster head selection and routing in wireless sensor network," International Journal of Communication Systems (IF 2.1), published 2024-08-23. DOI: 10.1002/dac.5963
{"title":"An efficient resource scheduling mechanism in LoRaWAN environment using coati optimal Q‐reinforcement learning","authors":"J Uma Mahesh, Judhistir Mahapatro","doi":"10.1002/dac.5965","DOIUrl":"https://doi.org/10.1002/dac.5965","url":null,"abstract":"SummaryIt is estimated that there will be more than 24 billion Internet of Things (IoT) connections in the future as the number of connected IoT devices grows rapidly. Due to characteristics like low power consumption and extensive coverage, low‐power wide area networks (LPWANs) have become particularly relevant for the new paradigm. Long range wide area network (LoRaWAN) is one of the most alluring technological advances in these networks. Although it is one of the most developed LPWAN platforms, there are still unresolved issues, such as capacity limitations. Hence, this research introduces a novel resource scheduling technique for the LoRaWAN network using deep reinforcement learning. Here, the information on the LoRaWAN nodes is learned by the reinforcement technique, and the knowledge is utilized to allocate resources to improve the packet delivery ratio (PDR) performance through a proposed coati optimal Q‐reinforcement learning (CO_QRL) model. Here, Q‐reinforcement learning is utilized to learn the information about nodes, and the coati optimization algorithm (COA) helps to choose the optimal action for enhancing the reward. In the proposed scheduling algorithm, the weighted sum of successfully received packets is treated as a reward, and the server allocates resources to maximize this Q‐reward. The evaluation of the proposed method on PDR, packet success ratio (PSR), packet collision rate (PCR), time, delay, and energy achieved values of 0.917, 0.759, 0.253, 85, 0.029, 7.89, and 10.08, respectively.","PeriodicalId":13946,"journal":{"name":"International Journal of Communication Systems","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
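The CO_QRL record above combines a tabular Q‐learning update with a reward defined as the weighted sum of successfully received packets. A minimal sketch of those two pieces, with illustrative state and action indices that are assumptions rather than the authors' implementation, could look like:

```python
# Hedged sketch of the Q-update and packet-based reward described in the
# CO_QRL abstract. States and actions are simple integer indices here;
# in the paper they would correspond to node configurations and resource
# assignments, and the COA would guide action selection.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

def reward_from_packets(received, weights):
    """Reward as a weighted sum of successfully received packets,
    one (weight, received-count) pair per node."""
    return sum(w * r for w, r in zip(weights, received))
```

With a zero-initialized table, one update with reward 1.0 moves the chosen entry to `alpha * 1.0`, which is the expected bootstrapping behavior.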
{"title":"An efficient hybrid bat sand cat swarm optimization‐based node localization for data quality improvement in wireless sensor networks","authors":"Dasappagounden Pudur Velusamy Soundari, Poongodi Chenniappan","doi":"10.1002/dac.5961","DOIUrl":"https://doi.org/10.1002/dac.5961","url":null,"abstract":"SummaryNode localization in wireless sensor networks (WSNs) ensures that the collected data is contextually accurate, enabling effective monitoring and management of various applications. Recently, there has been a surge in research focused on addressing node localization within WSNs. Emerging trends in this field involve the application of metaheuristic optimization techniques to refine node location determination accuracy. However, existing techniques often struggle with balancing accuracy, energy consumption, network lifetime, and computational efficiency, particularly in challenging WSN environments. Therefore, this research introduces a novel approach called efficient hybrid bat sand cat swarm optimization (EHBSCSO) to address node localization within WSNs. The hybrid method leverages the exploration capabilities of the bat optimization algorithm and the exploitation strengths of the sand cat swarm optimization algorithm. This combination allows for efficient determination of node positions, significantly improving localization accuracy while minimizing energy consumption. The EHBSCSO utilizes the received signal strength indicator (RSSI) and time of flight (ToF) approaches to assess distances among nodes accurately. Accurate node localization directly improves data quality by ensuring spatially precise data collection, reducing communication overhead, and enhancing the overall reliability of the collected data. Compared to conventional methods, the proposed EHBSCSO algorithm demonstrates superior performance, with a mean localization error of 0.18%, energy consumption of 7.2 J, computational time of 8.9 s, and localization time of 0.19 s. These metrics underscore its efficiency and precision. The research indicates that EHBSCSO not only optimizes localization accuracy but also contributes to energy efficiency and faster computational times, addressing key challenges in WSN node localization.","PeriodicalId":13946,"journal":{"name":"International Journal of Communication Systems","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
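The EHBSCSO record above estimates inter-node distances from RSSI and ToF measurements. The standard conversions are the inverted log-distance path-loss model for RSSI and d = c · t for one-way time of flight; the reference values below (RSSI at 1 m, path-loss exponent) are common textbook defaults, not parameters taken from the paper:

```python
# Hedged sketch of RSSI- and ToF-based ranging as used for WSN node
# localization. rssi_d0 is the expected RSSI at reference distance d0,
# and n is the path-loss exponent (assumed values, environment-dependent).

def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.0):
    """Invert the log-distance path-loss model:
    d = d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def tof_to_distance(tof_seconds, c=3.0e8):
    """One-way time-of-flight distance: d = c * t (c = speed of light)."""
    return c * tof_seconds
```

An RSSI 20 dB below the 1 m reference corresponds to 10 m under a free-space exponent of 2, which is a quick sanity check for the model.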
{"title":"Mobility‐compatible cache controlled cluster networking protocol","authors":"Priyank Sunhare, Manju K. Chattopadhyay","doi":"10.1002/dac.5960","DOIUrl":"https://doi.org/10.1002/dac.5960","url":null,"abstract":"SummaryCluster networking protocols are the roots that embed intelligent decision‐making and enhance the lifespan of wireless sensor networks (WSNs). Wireless sensors with limited capabilities face several challenges due to heterogeneous application environments. In particular, mobility‐incorporated sensors often undermine the cluster network's robustness. Many cluster networking protocols have been presented in the past to enhance the network lifespan and data delivery ratio. However, they lack a dedicated and efficient mechanism for mobility assistance, an adequate cluster management process and cluster head selection criteria. To overcome these issues and to achieve uniform energy load distribution, we propose a mobility‐compatible cache controlled cluster networking protocol (MC‐CCCNP) in this paper. It is an energy‐efficient cluster networking protocol that supports sensor movement. Network resource management and routing are controlled distributively by an optimal number of cache nodes. It defines a new strategy for cache node deployment based on neighbour density as well as a weight formula for cluster head selection and cluster formation based on the residual energy, the distance to the base station and the node velocity. It also includes techniques for detaching and reconnecting a mobile node to an appropriate cluster cache if it crosses the cluster boundary. We simulate and compare the performance of our protocol with the centralised energy‐efficient clustering routing, energy‐efficient mobility‐based cluster head selection protocol and dual tier cluster‐based routing protocols over different network configurations with varying mobility, scalability and heterogeneity. The MC‐CCCNP showed remarkable improvements in energy utilisation uniformity and energy consumption. With the improved network lifespan, it also maintains a higher data throughput rate of 95% or more in almost all network configurations.","PeriodicalId":13946,"journal":{"name":"International Journal of Communication Systems","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142196583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
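The MC‐CCCNP record above describes a cluster head weight formula over residual energy, distance to the base station, and node velocity. A minimal sketch of such a weighted score could look like the following; the specific weights, normalization constants, and function names are illustrative assumptions, not the authors' exact formula:

```python
# Hedged sketch of a weighted cluster-head score: higher residual energy
# raises the score, while greater distance to the base station and higher
# node velocity lower it (mobile, distant nodes make poor cluster heads).
# All weights and maxima (w_e, w_d, w_v, e_max, d_max, v_max) are assumed.

def ch_weight(residual_energy, dist_to_bs, velocity,
              e_max=1.0, d_max=100.0, v_max=5.0,
              w_e=0.5, w_d=0.3, w_v=0.2):
    """Normalized weighted combination of the three criteria."""
    return (w_e * (residual_energy / e_max)
            - w_d * (dist_to_bs / d_max)
            - w_v * (velocity / v_max))

def select_cluster_head(nodes):
    """Pick the highest-weight node; nodes are (id, energy, dist, velocity)."""
    return max(nodes, key=lambda n: ch_weight(n[1], n[2], n[3]))[0]
```

A high-energy, near-stationary node close to the base station outscores a depleted, fast-moving one, which matches the selection behavior the abstract describes.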