Empowered Chicken Swarm Optimization with Intuitionistic Fuzzy Trust Model for Optimized Secure and Energy Aware Data Transmission in Clustered Wireless Sensor Networks
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223311
A. Anitha, S. Mythili
– In wireless sensor networks, each sensor node functions autonomously to conduct data transmission, so it is essential to focus on energy dissipation and sensor node lifespan. Although many energy consumption models exist, selecting an optimized cluster head along with an efficient path remains challenging. To address this energy consumption issue effectively, the proposed work is designed as a two-phase model that performs cluster head selection, clustering, and optimized route selection for the secure transmission of data packets with reduced overhead. The scope of the proposed methodology is to choose the most prominent cluster head and assistant cluster head, which aids in prolonging the network lifespan and secures the inter-cluster components from selective forwarding attack (SFA) and black hole attack (BHA). The proposed methodology is Empowered Chicken Swarm Optimization (ECSO) with an Intuitionistic Fuzzy Trust Model (IFTM) for inter-cluster communication. ECSO provides efficient clustering and cluster head selection, while IFTM provides a routing path for inter-cluster single-hop and multi-hop communication that is fast and secure against SFA and BHA. ECSO uses chaos theory to escape local optima in cluster head selection. IFTM incorporates the reliance of neighbourhood nodes, the derived confidence of nodes, the estimated data propagation of nodes, and an element of node trustworthiness to implement security in inter-cluster communication. Experimental results prove that the proposed methodology outperforms existing approaches by increasing packet delivery ratio and throughput while minimizing packet drop ratio and energy consumption.
{"title":"Empowered Chicken Swarm Optimization with Intuitionistic Fuzzy Trust Model for Optimized Secure and Energy Aware Data Transmission in Clustered Wireless Sensor Networks","authors":"A. Anitha, S. Mythili","doi":"10.22247/ijcna/2023/223311","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223311","url":null,"abstract":"– Each sensor node functions autonomously to conduct data transmission in wireless sensor networks. It is very essential to focus on energy dissipation and sensor nodes lifespan. There are many existing energy consumption models, and the problem of selecting optimized cluster head along with efficient path selection is still challenging. To address this energy consumption issue in an effective way the proposed work is designed with a two-phase model for performing cluster head selection, clustering, and optimized route selection for the secure transmission of data packets with reduced overhead. The scope of the proposed methodology is to choose the most prominent cluster head and assistant cluster head which aids in prolonging the network lifespan and also securing the inter-cluster components from selective forwarding attack (SFA) and black hole attack (BHA). The proposed methodology is Empowered Chicken Swarm Optimization (ECSO) with Intuitionistic Fuzzy Trust Model (IFTM) in Inter-Cluster communication. ECSO provides an efficient clustering technique and cluster head selection and IFTM provides a secure and fast routing path from SFA and BHA for Inter-Cluster Single-Hop and Multi-Hop Communication. ESCO uses chaos theory for local optima in cluster head selection. The IFTM incorporates reliance of neighbourhood nodes, derived confidence of nodes, estimation of data propagation of nodes and an element of trustworthiness of nodes are used to implement security in inter-cluster communication. Experimental results prove that the proposed methodology outperforms the existing approaches by increasing packet delivery ratio and throughput, and minimizing packet drop ratio and energy consumption.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47669813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relentless Firefly Optimization-Based Routing Protocol (RFORP) for Securing Fintech Data in IoT-Based Ad-Hoc Networks
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223319
J. Ramkumar, K. S. Jeen Marseline, D. R. Medhunhashini
– The widespread adoption of Internet of Things (IoT) technology and the rise of fintech applications have raised concerns regarding the secure and efficient routing of data in IoT-based ad-hoc networks (IoT-AN). Challenges in this context include vulnerability to security breaches, potential malicious node presence, routing instability, and energy inefficiency. This article proposes the Relentless Firefly Optimization-based Routing Protocol (RFORP) to overcome these issues. Inspired by fireflies’ natural behaviour, RFORP incorporates relentless firefly optimization techniques to enhance packet delivery, malicious node detection, routing stability, and overall network resilience. Simulation results demonstrate RFORP’s superiority over existing protocols, achieving higher packet delivery ratios, accurate malicious node detection, improved routing stability, and significant energy efficiency. The proposed RFORP offers a promising solution for securing fintech data in IoT-AN, providing enhanced performance, reliability, and security while effectively addressing the identified challenges. This research contributes to advancing secure routing protocols in fintech applications and guides network security and protocol selection in IoT environments.
{"title":"Relentless Firefly Optimization-Based Routing Protocol (RFORP) for Securing Fintech Data in IoT-Based Ad-Hoc Networks","authors":"J. Ramkumar, K. S. Jeen Marseline, D. R. Medhunhashini","doi":"10.22247/ijcna/2023/223319","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223319","url":null,"abstract":"– The widespread adoption of Internet of Things (IoT) technology and the rise of fintech applications have raised concerns regarding the secure and efficient routing of data in IoT-based ad-hoc networks (IoT-AN). Challenges in this context include vulnerability to security breaches, potential malicious node presence, routing instability, and energy inefficiency. This article proposes the Relentless Firefly Optimization-based Routing Protocol (RFORP) to overcome these issues. Inspired by fireflies’ natural behaviour, RFORP incorporates relentless firefly optimization techniques to enhance packet delivery, malicious node detection, routing stability, and overall network resilience. Simulation results demonstrate RFORP’s superiority over existing protocols, achieving higher packet delivery ratios, accurate malicious node detection, improved routing stability, and significant energy efficiency. The proposed RFORP offers a promising solution for securing fintech data in IoT-AN, providing enhanced performance, reliability, and security while effectively addressing the identified challenges. This research contributes to advancing secure routing protocols in fintech applications and guides network security and protocol selection in IoT environments.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43068252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel All Members Group Search Optimization Based Data Acquisition in Cloud Assisted Wireless Sensor Network for Smart Farming
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223318
Vuppala Sukanya, Ramachandram S
– In recent times, Wireless Sensor Networks (WSNs) have played an important role in smart farming systems. However, WSN-enabled smart farming (SF) systems need reliable communication to minimize overhead, end-to-end delay, latency, etc. Hence, this work introduces a 3-tiered framework that integrates WSN with edge and cloud computing platforms to acquire, process, and store useful soil data from agricultural lands. Initially, the sensors are deployed randomly throughout the network region to collect information regarding different types of soil components. The sensors are clustered based on distance using a Levy-flight-based K-means clustering algorithm to promote efficient communication. The Tasmanian devil optimization (TDO) algorithm is used to choose the cluster heads (CHs) based on the distance between the node and the edge server, residual energy, and the number of neighbors. Then, the optimal paths to transmit the data are identified using the all members group search optimization (AMGSO) algorithm based on different parameters. Each edge server assesses the quality of the data (QoD) against a set of data quality criteria after receiving the data from the cluster heads. Also, the load across the servers is balanced in order to overcome overloading and underloading issues. Only the legitimate data that receives higher scores in the QoD evaluation is sent to the cloud servers for archival. Using the ICRISAT dataset, the efficiency of the proposed work is evaluated with a number of indicators. For a total of 250 nodes, the average improvement rate attained by the proposed model is 40% in terms of energy consumption, 7% in packet delivery ratio, 38% in network lifetime, and 24% in latency.
{"title":"A Novel All Members Group Search Optimization Based Data Acquisition in Cloud Assisted Wireless Sensor Network for Smart Farming","authors":"Vuppala Sukanya, Ramachandram S","doi":"10.22247/ijcna/2023/223318","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223318","url":null,"abstract":"– Recent times, the Wireless Sensor Networks (WSN) has played an important role in smart farming systems. However, WSN-enabled smart farming (SF) systems need reliable communication to minimize overhead, end-to-end delay, latency etc., Hence, this work introduces a 3-tiered framework based on the integration of WSN with the edge and cloud computing platforms to acquire, process and store useful soil data from agricultural lands. Initially, the sensors are deployed randomly throughout the network region to collect information regarding different types of soil components. The sensors are clustered based on distance using the Levy flight based K-means clustering algorithm to promote efficient communication. The Tasmanian devil optimization (TDO) algorithm is used to choose the cluster heads (CHs) based on the distance among the node and edge server, residual energy, and the number of neighbors. Then, the optimal paths to transmit the data are identified using the all members group search optimization (AMGSO) algorithm based on different parameters. Each edge server assesses the quality of the data (QoD) with respect to some data quality criteria after receiving the data from the edge server. Also, the load across the servers are balanced in order to overcome the overloading and under loading issues. The legitimate data that received higher scores in the QoD evaluation alone is sent to the cloud servers for archival. Using the ICRISAT dataset, the efficiency of the proposed work is evaluated using a number of indicators. The average improvement rate attained by the proposed model in terms of energy consumption is 40%, in terms of packet delivery ratio is 7%, in terms of network lifetime is 38%, and in terms of latency is 24% for a total of 250 nodes.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41592293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TCP Performance Enhancement in IoT and MANET: A Systematic Literature Review
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223313
Sultana Parween, Syed Zeeshan Hussain
– TCP operates as a unicast protocol that prioritizes the reliability of established connections. The protocol allows for the explicit and acknowledged establishment and dissolution of connections, the transmission of data without loss of context or duplication, the management of traffic flows, the avoidance of congestion, and the asynchronous signaling of time-sensitive information. In this research, we use the Systematic Literature Review (SLR) technique to examine and better understand the various methods recently proposed for enhancing TCP performance in IoT and MANET networks. This work aims to assess and classify the current research strategies on TCP performance published between 2016 and 2023 using both analytical and statistical methods. Technical parameters, suggested case studies, and evaluation settings are compared between MANET and IoT to give a taxonomy of TCP performance improvement options based on the content of the studies selected through the SLR procedure. Each study's merits and limitations are outlined, along with suggestions for improvement and areas where further research is needed. This work outlines the basic issues of TCP when it is used in IoT and MANET, and highlights recent approaches for TCP performance enhancement, such as machine learning-based approaches, multi-path TCP, congestion control, buffer management, and route optimization. It also suggests future research directions into the effectiveness of TCP performance in IoT and MANET. The major contribution of this review is a thorough understanding of the latest techniques for enhancing TCP performance in IoT and MANET networks, which can benefit researchers and practitioners in the field of networking.
{"title":"TCP Performance Enhancement in IoT and MANET: A Systematic Literature Review","authors":"Sultana Parween, Syed Zeeshan Hussain","doi":"10.22247/ijcna/2023/223313","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223313","url":null,"abstract":"– TCP operates as a unicast protocol that prioritizes the reliability of established connections. This protocol allows for the explicit and acknowledged establishment and dissolution of connections, the transmission of data without loss of context or duplication, the management of traffic flows, the avoidance of congestion, and the asynchronous signaling of time-sensitive information. In this research, we use the Systematic Literature Review (SLR) technique to examine and better understand the several methods recently given for enhancing TCP performance in IoT and MANET networks. This work aims to assess and classify the current research strategies on TCP performance approaches published between 2016 and 2023 using both analytical and statistical methods. Technical parameters suggested case study and evaluation settings are compared between MANET and IoT to give a taxonomy for TCP performance improvement options based on the content of current studies chosen using the SLR procedure. Each study's merits and limitations are outlined, along with suggestions for improving those studies and areas where further research is needed. This work outlines the basic issues of TCP when it is used in IoT and MANET. It also highlights the recent approaches for TCP performance enhancement, such as machine Learning-based approaches, multi-path TCP, congestion control, buffer management, and route optimization. It also provides the potential for future research directions into the effectiveness of TCP performance in IoT and MANET. The major findings of this review are to provide a thorough understanding of the latest techniques for enhancing TCP performance in the IoT and MANET networks, which can be beneficial for researchers and practitioners in the field of networking.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41892049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoBTSec-RPL: A Novel RPL Attack Detecting Mechanism Using Hybrid Deep Learning Over Battlefield IoT Environment
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223317
K. Kowsalyadevi, N.V. Balaji
– The emerging digital world has recently harnessed the massive power of Internet of Things (IoT) technology, which fuels the growth of many intelligent applications. The Internet of Battlefield Things (IoBT) greatly enables critical information dissemination and efficient war strategy planning with situational awareness. The lightweight Routing Protocol for Low-Power and Lossy Networks (RPL) is critical for successful IoT application deployment. RPL's security features are too weak to protect the IoBT environment, owing to device heterogeneity and open wireless device-to-device communication. Hence, it is crucial to provide strong security to RPL-IoBT against multiple attacks and enhance its performance. This work proposes IoBTSec-RPL, a hybrid Deep Learning (DL)-based multi-attack detection model, to overcome these attacks. The proposed IoBTSec-RPL learns prominent routing attacks and efficiently classifies the attackers. It includes four steps: data collection and preprocessing, feature selection, data augmentation, and attack detection and classification. Initially, the proposed model employs min-max normalization and missing-value imputation to preprocess network packets. Secondly, an enhanced pelican optimization algorithm selects the most suitable features for attack detection through an efficient ranking method. Thirdly, data augmentation uses an auxiliary classifier gated adversarial network to alleviate class imbalance across the multiple attack classes. Finally, the proposed approach detects and classifies the attacks using a hybrid DL model that combines Long Short-Term Memory (LSTM) and a Deep Belief Network (DBN). The performance results reveal that IoBTSec-RPL accurately recognizes multiple RPL attacks in IoT and achieved 98.93% recall. It also improved accuracy by 2.16%, 5.73%, and 6.06% over LGBM, LSTM, and DBN, respectively, for 200K traffic samples.
{"title":"IoBTSec-RPL: A Novel RPL Attack Detecting Mechanism Using Hybrid Deep Learning Over Battlefield IoT Environment","authors":"K. Kowsalyadevi, N.V. Balaji","doi":"10.22247/ijcna/2023/223317","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223317","url":null,"abstract":"– The emerging digital world has recently utilized the massive power of the emerging Internet of Things (IoT) technology that fuels the growth of many intelligent applications. The Internet of Battlefield Things (IoBT) greatly enables critical information dissemination and efficient war strategy planning with situational awareness. The lightweight Routing Protocol for Low-Power and Lossy Networks (RPL) is critical for successful IoT application deployment. RPL has low-security features that are insufficient to protect the IoBT environment due to device heterogeneity and open wireless device-to-device communication. Hence, it is crucial to provide strong security to RPL-IoBT against multiple attacks and enhance its performance. This work proposes IoBTSec-RPL, a hybrid Deep Learning (DL)-based multi-attack detection model, to overcome the attacks. The proposed IoBTSec-RPL learns prominent routing attacks and efficiently classifies the attackers. It includes four steps: data collection and preprocessing, feature selection, data augmentation, and attack detection and classification. Initially, the proposed model employs min-max normalization and missing value imputation to preprocess network packets. Secondly, the enhanced pelican optimization algorithm selects the most suitable features for attack detection through an efficient ranking method. Thirdly, data augmentation utilizes an auxiliary classifier gated adversarial network to alleviate the class imbalance concerns over the multiple attack classes. Finally, the proposed approach successfully detects and classifies the attacks using a hybrid DL model that combines LongShort-Term Memory (LSTM) and Deep Belief Network (DBN). The performance results reveal that the IoBTSec-RPL accurately recognizes the multiple RPL attacks in IoT and accomplished 98.93% recall. It also achieved improved accuracy of 2.16%, 5.73%, and 6.06% than the LGBM, LSTM, and DBN for 200K traffic samples.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45419895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Honey Bee Based Improvised BAT Algorithm for Cloud Task Scheduling
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223310
Abhishek Gupta, H.S. Bhadauria
– By delivering shared data, software, and resources across a network to computers and other devices, the cloud computing paradigm aspires to offer computing as a service rather than a product. Given the technology's rapid development, managing the resource allocation process is essential, and task scheduling techniques are crucial for cloud computing. Scheduling algorithms distribute user tasks to virtual machines and balance the workload against each machine's capacity and the system's overall capacity. The major goal of this work is to offer a load-balancing algorithm that can be used by both cloud consumers and service providers. In this paper, we propose the 'Bat Load' algorithm, which utilizes the Bat algorithm for work scheduling and the Honey Bee algorithm for load balancing. This hybrid approach efficiently addresses the load-balancing problem in cloud computing, optimizing resource allocation, makespan, degree of imbalance, cost, execution time, and processing time. The effectiveness of the Bat Load algorithm is evaluated against other scheduling methods, including the bee load balancer, ant colony optimization (ACO), particle swarm optimization (PSO), and combined ACO and PSO. Through comprehensive experiments and statistical analysis, the Bat Load algorithm demonstrates its superiority in terms of processing cost, total processing time, imbalance degree, and completion time. The results showcase its ability to achieve balanced load distribution and efficient resource allocation in the cloud computing environment, outperforming ACO, PSO, and ACO and PSO with the honey bee load balancer. Our research contributes to addressing scheduling challenges and resource optimization in cloud computing, providing a robust solution for both cloud consumers and service providers.
{"title":"Honey Bee Based Improvised BAT Algorithm for Cloud Task Scheduling","authors":"Abhishek Gupta, H.S. Bhadauria","doi":"10.22247/ijcna/2023/223310","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223310","url":null,"abstract":"– Delivering shared data, software, and resources across a network to computers and other devices, the cloud computing paradigm aspires to offer computing as a service rather than a product. The management of the resource allocation process is essential given the technology's rapid development. For cloud computing, task scheduling techniques are crucial. Use scheduling algorithms to distribute virtual machines to user tasks and balance the workload on each machine's capacity and overall. This task's major goal is to offer a load-balancing algorithm that can be used by both cloud consumers and service providers. In this paper, we propose the ‘Bat Load’ algorithm, which utilizes the Bat algorithm for work scheduling and the Honey Bee algorithm for load balancing. This hybrid approach efficiently addresses the load balancing problem in cloud computing, optimizing resource allocation, make span, degree of imbalance, cost, execution time, and processing time. The effectiveness of the Bat Load algorithm is evaluated in comparison to other scheduling methods, including bee load balancer, ant colony optimization, particle swarm optimization, and ant colony and particle swarm optimization. Through comprehensive experiments and statistical analysis, the Bat Load algorithm demonstrates its superiority in terms of processing cost, total processing time, imbalance degree, and completion time. The results showcase its ability to achieve balanced load distribution and efficient resource allocation in the cloud computing environment, outperforming the existing scheduling methods, including ACO, PSO, and ACO and PSO with the honey bee load balancer. Our research contributes to addressing scheduling challenges and resource optimization in cloud computing, providing a robust solution for both cloud consumers and service providers.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68278500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Self Intermittent Fault Diagnosis in Dense Wireless Sensor Network
Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223315
B. S. Gouda, Sudhakar Das, Trilochan Panigrahi
– A distributed sensor network (DSN) is a grouping of low-power and low-cost sensor nodes (SNs) that are stochastically placed over a large-scale area for monitoring regions and enabling various applications. The quality of service in a DSN is impacted by the sporadic appearance of defective sensor nodes, especially in dense wireless networks, which reduces network performance during communication. In recent years, the majority of fault detection techniques have relied on neighbors' sensing data over the dense sensor network to determine the fault state of SNs; based on these data, self-diagnosis is done using statistics, thresholds, majority voting, hypothesis testing, comparison, or machine learning. As a result, these fault detection algorithms perform poorly on false data positive rate (FDPR), detection data accuracy (DDA), and false data alarm rate (FDAR). Due to high energy expenditure and long detection delay, these approaches are not suitable for large-scale networks. In this paper, an enhanced three-sigma edit test-based distributed self-fault dense diagnosis (DSFDD3SET) algorithm is proposed. The performance of the proposed DSFDD3SET has been evaluated using Python and MATLAB, and the experimental results have been compared with existing distributed self-fault diagnosis algorithms. The experimental results show that DSFDD3SET outperforms the existing algorithms.
{"title":"Distributed Self Intermittent Fault Diagnosis in Dense Wireless Sensor Network","authors":"B. S. Gouda, Sudhakar Das, Trilochan Panigrahi","doi":"10.22247/ijcna/2023/223315","DOIUrl":"https://doi.org/10.22247/ijcna/2023/223315","url":null,"abstract":"– A distributed sensor network (DSN) is a grouping of low-power and low-cost sensor nodes (SNs) that are stochastically placed in a large-scale area for monitoring regions and enabling various applications. The quality of service in DSN is impacted by the sporadic appearance of defective sensor nodes, especially over the dense wireless network. Due to that, sensor nodes are affected, which reduces network performance during communication. In recent years, the majority of the fault detection techniques in use rely on the neighbor's sensing data over the dense sensor network to determine the fault state of SNs, and based on these, the self-diagnosis is done by receiving information on statistics, thresholds, majority voting, hypothetical testing, comparison, or machine learning. As a result, the false data positive rate (FDPR), detection data accuracy (DDA), and false data alarm rate (FDAR) of these defect detection algorithms are low. Due to high energy expenditure and long detection delay these approaches are not suitable for large scale. In this paper, an enhanced three-sigma edit test-based distributed self-fault dense diagnosis (DSFDD3SET) algorithm is proposed. The performance of the proposed DSFDD3SET has been evaluated using Python, and MATLAB. The experimental results of the DSFDD3SET have been compared with the existing distributed self-fault diagnosis algorithm. The experimental results efficacy outperforms the existing algorithms .","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44711179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Improved Rate Adaptive Irregular Low Density Parity Check Encoding for Fifth Generation Networks Using Software Defined Radio
Pub Date: 2023-06-30 | DOI: 10.22247/ijcna/2023/221886
M. Ramakrishnan, Tharini Chandrapragasam
– Low Density Parity Check (LDPC) codes are appropriate for high-data-rate applications like the Internet of Things and 5G communication due to their support for larger block sizes and higher code rates. In this paper an improved LDPC encoding algorithm is proposed to reduce girth-4 short cycles. This reduction helps achieve a lower Bit Error Rate (BER) for various channel models with different code rates and modulation schemes. The proposed work is analyzed for both pseudo-random sequences and audio messages. The simulation results demonstrate that the algorithm achieves a low BER of 10^-8 for a code rate of 0.7 when tested across various code rates. The proposed algorithm also achieves fewer short cycles when compared with the conventional LDPC encoding algorithm. Simulation results were verified by implementing the proposed algorithm on an NI USRP Software Defined Radio. The SDR results confirm that the proposed algorithm provides a low BER with reduced short cycles.
{"title":"Analysis of Improved Rate Adaptive Irregular Low Density Parity Check Encoding for Fifth Generation Networks Using Software Defined Radio","authors":"M. Ramakrishnan, Tharini Chandrapragasam","doi":"10.22247/ijcna/2023/221886","DOIUrl":"https://doi.org/10.22247/ijcna/2023/221886","url":null,"abstract":"– Low Density Parity Check Codes are appropriate for high data rate applications like Internet of Things and 5G communication due to its support for bigger block size and higher code rate. In this paper an improved LDPC encoding algorithm is proposed to reduce girth 4 short cycles. This reduction helps in achieving lesser Bit Error Rate (BER) for various channel models with different code rates and modulation schemes. The proposed work is analyzed both for Pseudo Random sequence and audio messages. The simulation results demonstrate that the algorithm achieves low BER of 𝟏𝟎 −𝟖 for code rate of 0.7 when tested for various code rates. The proposed algorithm also achieves reduced short cycles when compared with conventional LDPC encoding algorithm. Simulation results were verified by implementing the proposed algorithm in NI USRP Software Defined Radio. The SDR results verify that proposed algorithm provide low BER with reduced short cycles.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49486637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amended Hybrid Scheduling for Cloud Computing with Real-Time Reliability Forecasting
Pub Date: 2023-06-30 | DOI: 10.22247/ijcna/2023/221887
Ramya Boopathi, E. S. Samundeeswari
– Cloud computing has emerged as a feasible paradigm to satisfy the computing requirements of high-performance applications through an ideal distribution of tasks to resources. However, attaining multiple scheduling objectives such as throughput, makespan, and resource use remains problematic. To resolve this problem, many Task Scheduling Algorithms (TSAs) have recently been developed using single- or multi-objective metaheuristic strategies. Among them, task scheduling based on a Multi-objective Grey Wolf Optimizer (TSMGWO) handles multiple objectives to discover ideal tasks and assign resources to them. However, it only maximizes resource use and throughput while reducing the makespan; it is also crucial to optimize other parameters such as memory and bandwidth utilization. Hence, this article proposes a hybrid TSA based on a linear matching method and backfilling, which uses memory and bandwidth requirements for effective task scheduling. Initially, a Long Short-Term Memory (LSTM) network is adopted as a meta-learner to predict task runtime reliability. Then, the tasks are divided into predictable and unpredictable queues. The tasks with higher expected runtime are scheduled by a plan-based scheduling approach built on Tuna Swarm Optimization (TSO). The remaining tasks are backfilled by the VIKOR technique. To reduce resource use, a particular fraction of CPU cores is kept for backfilling and modified dynamically depending on the Resource Use Ratio (RUR) of predictable tasks among freshly submitted tasks. Finally, a general simulation reveals that the proposed algorithm outperforms earlier metaheuristic, plan-based, and backfilling TSAs.
{"title":"Amended Hybrid Scheduling for Cloud Computing with Real-Time Reliability Forecasting","authors":"Ramya Boopathi, E. S. Samundeeswari","doi":"10.22247/ijcna/2023/221887","DOIUrl":"https://doi.org/10.22247/ijcna/2023/221887","url":null,"abstract":"– Cloud computing has emerged as the feasible paradigm to satisfy the computing requirements of high-performance applications by an ideal distribution of tasks to resources. But, it is problematic when attaining multiple scheduling objectives such as throughput, makespan, and resource use. To resolve this problem, many Task Scheduling Algorithms (TSAs) are recently developed using single or multi-objective metaheuristic strategies. Amongst, the TS based on a Multi-objective Grey Wolf Optimizer (TSMGWO) handles multiple objectives to discover ideal tasks and assign resources to the tasks. However, it only maximizes the resource use and throughput when reducing the makespan, whereas it is also crucial to optimize other parameters like the utilization of the memory, and bandwidth. Hence, this article proposes a hybrid TSA depending on the linear matching method and backfilling, which uses the memory and bandwidth requirements for effective TS. Initially, a Long Short-Term Memory (LSTM) network is adopted as a meta-learner to predict the task runtime reliability. Then, the tasks are divided into predictable and unpredictable queues. The tasks with higher expected runtime are scheduled by a plan-based scheduling approach based on the Tuna Swarm Optimization (TSO). The remaining tasks are backfilled by the VIKOR technique. To reduce resource use, a particular fraction of CPU cores is kept for backfilling, which is modified dynamically depending on the Resource Use Ratio (RUR) of predictable tasks among freshly submitted tasks. Finally, a general simulation reveals that the proposed algorithm outperforms the earlier metaheuristic, plan-based, and backfilling TSAs.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43653652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Resource Allocation Techniques and Key Performance Indicators (KPIs) for 5G New Radio Networks: A Review
Pub Date: 2023-06-30 | DOI: 10.22247/ijcna/2023/221899
J., Dharmender Kumar, Amandeep
– The demand for 5G networks is growing day by day, but issues remain regarding resource allocation, and the key performance indicators (KPIs) of the 5G network deserve particular focus. This study looks at the assessment of 5G wireless communications as well as the minimal technical performance criteria for 5G network services according to the ITU-R, the Next Generation Mobile Networks alliance, and 3GPP. 5G standards created in 3GPP, the ITU Telecommunication Standardization Sector, the ITU-R Sector, the Internet Engineering Task Force, and the IEEE are covered. In 5G-based wireless communication systems, resource allocation is a key activity. The new systems used in 5G wireless networks must be more dynamic and intelligent if they are to satisfy a range of network requirements simultaneously, which may be accomplished via new wireless technologies and methods. Key characteristics of 5G, such as the waveform, the dynamic slot-based frame structure, massive MIMO, and channel codecs, are explained, along with emerging technologies in the 5G network. Previous research on 5G networks that considered resource allocation in heterogeneous networks is elaborated, along with the KPI requirements for 5G networks. The functionality of 5G is discussed, along with its common and technological challenges. The paper also focuses on the metrics, indicators, and parameters used during resource allocation in 5G, along with machine learning. To move the massive amounts of data that may flow at speeds of up to 100 Gbps/km2, devices need supplementary, well-organized, and widely deployed RATs. To accommodate the expected exponential growth in data flow, 5G RAN radio blocking and resource management solutions would need to handle more than 1,000 times the present traffic level. In addition, all of the information that makes up this traffic must be available and shareable at any time, from any location, and on any device inside the 5G RAN and beyond 4G cellular coverage areas. The need for resource allocation is discussed, along with existing algorithms and technological improvements for resource allocation.
{"title":"Investigating Resource Allocation Techniques and Key Performance Indicators (KPIs) for 5G New Radio Networks: A Review","authors":"J. ., Dharmender Kumar, Amandeep .","doi":"10.22247/ijcna/2023/221899","DOIUrl":"https://doi.org/10.22247/ijcna/2023/221899","url":null,"abstract":"– The demand for 5G networks is growing day by day, but there remain issues regarding resource allocation. Moreover, there is a need to focus on key performance indicators for the 5G network. This study looks at the assessment of 5G wireless communications as well as the minimal technical performance criteria for 5G network services according to the ITU-R, Next Generation Mobile, 3GPP, and Networks. 5G standards that have been created in the 3GPP, ITU-Telecommunication Standardization Sector, ITU-R Sector, Internet Engineering Task Force, and IEEE are covered. In 5G-based wireless communication systems, resource allocation is a key activity that must be done. It is essential for the new systems used in 5G wireless networks to be more dynamic and intelligent if they are going to be able to satisfy a range of network requirements at the same time. This may be accomplished via the use of new wireless technologies and methods. Key characteristics of 5G, such as waveform, dynamic slot-based frame structure, massive MIMO, and channel codecs, have been explained, along with emerging technologies in the 5G network. Previous research related to 5G networks that considered resource allocation in heterogeneous networks is elaborated, along with the requirement of KPIs for 5G networks. The functionality of 5G has been discussed, along with its common and technological challenges. The research paper has also focused on metrics, indicators, and parameters during resource allocation in 5G, along with machine learning. To move the massive amounts of data that may flow at speeds of up to 100 Gbps/km2, these devices need supplementary, well-organized, and widely deployed RATs. To accommodate the expected exponential growth in the data flow, 5G network RAN radio blocking and resource management solutions would need to be able to handle more than 1,000 times the present traffic level. In addition, all of the information that makes up this traffic must be available and shareable at any time, from any location, and using any device inside the 5G RAN and beyond 4G cellular coverage areas. The need for resource allocation is discussed, along with the existing algorithm and improvements made in technology for resource allocation.","PeriodicalId":36485,"journal":{"name":"International Journal of Computer Networks and Applications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41537764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}