
International Journal of Computer Networks and Applications: Latest Publications

Empowered Chicken Swarm Optimization with Intuitionistic Fuzzy Trust Model for Optimized Secure and Energy Aware Data Transmission in Clustered Wireless Sensor Networks
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223311
A. Anitha, S. Mythili
Each sensor node in a wireless sensor network functions autonomously to conduct data transmission, so it is essential to focus on energy dissipation and sensor node lifespan. Many energy consumption models exist, yet selecting an optimized cluster head together with an efficient path remains challenging. To address this energy consumption issue effectively, the proposed work is designed as a two-phase model that performs cluster head selection, clustering, and optimized route selection for the secure transmission of data packets with reduced overhead. The scope of the proposed methodology is to choose the most prominent cluster head and assistant cluster head, which helps prolong the network lifespan and also secures the inter-cluster components against selective forwarding attacks (SFA) and black hole attacks (BHA). The proposed methodology is Empowered Chicken Swarm Optimization (ECSO) with an Intuitionistic Fuzzy Trust Model (IFTM) for inter-cluster communication. ECSO provides an efficient clustering technique and cluster head selection, while IFTM provides a secure, fast routing path that avoids SFA and BHA in inter-cluster single-hop and multi-hop communication. ECSO uses chaos theory to escape local optima in cluster head selection. IFTM combines the reliance of neighbourhood nodes, derived node confidence, estimated data propagation, and a node trustworthiness element to implement security in inter-cluster communication. Experimental results show that the proposed methodology outperforms the existing approaches by increasing packet delivery ratio and throughput while minimizing packet drop ratio and energy consumption.
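To make the trust component concrete, the following sketch scores each neighbour with an intuitionistic fuzzy triple: a membership degree from cooperative forwarding, a non-membership degree from observed drops, and a hesitation margin for unobserved traffic. This is an illustration only, not the authors' exact formulation; the packet counts, the neutral prior, and the half-hesitation defuzzification are assumptions.

```python
# Hypothetical sketch of an intuitionistic fuzzy trust score (IFTM idea):
# each neighbour's behaviour is summarized by a membership degree (evidence
# of cooperation), a non-membership degree (evidence of misbehaviour such
# as dropped packets), and a hesitation margin for the remaining uncertainty.

def ift_score(forwarded: int, dropped: int, unknown: int) -> float:
    """Return a trust score in [0, 1] from observed packet counts."""
    total = forwarded + dropped + unknown
    if total == 0:
        return 0.5  # no evidence yet: neutral trust (assumed prior)
    mu = forwarded / total      # membership (trust evidence)
    nu = dropped / total        # non-membership (distrust evidence)
    pi = 1.0 - mu - nu          # hesitation (unobserved traffic)
    # One common defuzzification: credit half of the hesitation to trust.
    return mu + 0.5 * pi

def pick_relay(observations: dict) -> str:
    """Choose the neighbour with the highest trust score as next hop."""
    return max(observations, key=lambda n: ift_score(*observations[n]))

neighbours = {
    "n1": (90, 5, 5),    # mostly cooperative
    "n2": (10, 80, 10),  # likely black-hole / selective forwarder
    "n3": (50, 10, 40),  # cooperative but less observed
}
best = pick_relay(neighbours)
```

Under this scoring, a black-hole or selective-forwarding neighbour accumulates drop evidence, so its score falls and it is passed over as a relay.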
Citations: 0
Relentless Firefly Optimization-Based Routing Protocol (RFORP) for Securing Fintech Data in IoT-Based Ad-Hoc Networks
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223319
J. Ramkumar, K. S. Jeen Marseline, D. R. Medhunhashini
The widespread adoption of Internet of Things (IoT) technology and the rise of fintech applications have raised concerns about the secure and efficient routing of data in IoT-based ad-hoc networks (IoT-AN). Challenges in this context include vulnerability to security breaches, the potential presence of malicious nodes, routing instability, and energy inefficiency. This article proposes the Relentless Firefly Optimization-based Routing Protocol (RFORP) to overcome these issues. Inspired by the natural behaviour of fireflies, RFORP incorporates relentless firefly optimization techniques to enhance packet delivery, malicious node detection, routing stability, and overall network resilience. Simulation results demonstrate RFORP's superiority over existing protocols, achieving higher packet delivery ratios, accurate malicious node detection, improved routing stability, and significant energy efficiency. The proposed RFORP offers a promising solution for securing fintech data in IoT-AN, providing enhanced performance, reliability, and security while effectively addressing the identified challenges. This research contributes to advancing secure routing protocols in fintech applications and guides network security and protocol selection in IoT environments.
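The firefly mechanism RFORP builds on can be sketched in a few lines: each candidate solution's brightness is its fitness, and dimmer fireflies move toward brighter ones with an attractiveness that decays with distance. The one-dimensional cost function, coefficients, and iteration counts below are illustrative assumptions, not RFORP's actual routing objective.

```python
import math
import random

random.seed(1)

def fitness(x):
    # Hypothetical route-quality surrogate: brightest at x = 3.
    return -(x - 3.0) ** 2

def firefly_search(n=12, iters=80, beta0=1.0, gamma=0.01, alpha=0.1):
    """Minimal 1-D firefly search: dimmer fireflies drift toward brighter ones."""
    xs = [random.uniform(-10.0, 10.0) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fitness(xs[j]) > fitness(xs[i]):
                    r = abs(xs[i] - xs[j])
                    # Attractiveness decays with squared distance.
                    beta = beta0 * math.exp(-gamma * r * r)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (random.random() - 0.5)
    return max(xs, key=fitness)

best = firefly_search()
```

In a routing protocol the "position" would encode a candidate route and the fitness would fold in trust, hop count, and residual energy; the swarm dynamics are the same.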
Citations: 3
A Novel All Members Group Search Optimization Based Data Acquisition in Cloud Assisted Wireless Sensor Network for Smart Farming
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223318
Vuppala Sukanya, Ramachandram S
In recent times, Wireless Sensor Networks (WSNs) have played an important role in smart farming systems. However, WSN-enabled smart farming (SF) systems need reliable communication that minimizes overhead, end-to-end delay, and latency. Hence, this work introduces a three-tiered framework that integrates WSN with edge and cloud computing platforms to acquire, process, and store useful soil data from agricultural lands. Initially, the sensors are deployed randomly throughout the network region to collect information on different types of soil components. The sensors are clustered by distance using a Levy-flight-based K-means clustering algorithm to promote efficient communication. The Tasmanian devil optimization (TDO) algorithm chooses the cluster heads (CHs) based on the distance between a node and the edge server, residual energy, and the number of neighbors. Then, the optimal paths for transmitting the data are identified using the all members group search optimization (AMGSO) algorithm based on different parameters. After receiving the data, each edge server assesses its quality (QoD) against a set of data quality criteria. The load across the servers is also balanced to overcome overloading and underloading issues. Only legitimate data that scores highly in the QoD evaluation is sent to the cloud servers for archival. Using the ICRISAT dataset, the efficiency of the proposed work is evaluated with a number of indicators. For a total of 250 nodes, the proposed model attains average improvements of 40% in energy consumption, 7% in packet delivery ratio, 38% in network lifetime, and 24% in latency.
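The cluster-head criteria can be made concrete with a toy scoring function over the three inputs the paper feeds to TDO. The weights, coordinates, and field size below are invented for illustration; the paper derives the actual trade-off through the TDO metaheuristic rather than a fixed linear score.

```python
import math

# Hypothetical edge-server location in a 100 x 100 field.
EDGE_SERVER = (50.0, 50.0)

def ch_score(node):
    """Illustrative weighted fitness over the paper's three CH criteria:
    residual energy, distance to the edge server, and neighbour count.
    The weights 0.5 / 0.3 / 0.2 are assumptions, not the paper's."""
    d = math.dist(node["pos"], EDGE_SERVER)
    return (0.5 * node["energy"]
            + 0.3 * (node["neighbours"] / 10.0)
            - 0.2 * (d / 100.0))

def elect_cluster_head(cluster):
    """Pick the cluster member with the best combined score."""
    return max(cluster, key=ch_score)

cluster = [
    {"id": "s1", "pos": (10.0, 10.0), "energy": 0.9, "neighbours": 4},
    {"id": "s2", "pos": (45.0, 52.0), "energy": 0.8, "neighbours": 7},
    {"id": "s3", "pos": (48.0, 49.0), "energy": 0.3, "neighbours": 9},
]
head = elect_cluster_head(cluster)
```

Here s1 has the most energy but sits far from the edge server, and s3 is close but nearly depleted, so the balanced candidate s2 wins; a metaheuristic such as TDO explores this same trade-off without fixing the weights in advance.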
Citations: 0
TCP Performance Enhancement in IoT and MANET: A Systematic Literature Review
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223313
Sultana Parween, Syed Zeeshan Hussain
TCP operates as a unicast protocol that prioritizes the reliability of established connections. The protocol allows for the explicit, acknowledged establishment and teardown of connections, the transmission of data without loss of context or duplication, the management of traffic flows, the avoidance of congestion, and the asynchronous signaling of time-sensitive information. In this research, we use the Systematic Literature Review (SLR) technique to examine and better understand the methods recently proposed for enhancing TCP performance in IoT and MANET networks. This work assesses and classifies research on TCP performance approaches published between 2016 and 2023 using both analytical and statistical methods. Technical parameters, suggested case studies, and evaluation settings are compared between MANET and IoT to produce a taxonomy of TCP performance improvement options based on the studies selected through the SLR procedure. Each study's merits and limitations are outlined, along with suggestions for improvement and areas needing further research. The work outlines the basic issues of TCP when it is used in IoT and MANET, and highlights recent approaches to TCP performance enhancement, such as machine-learning-based methods, multi-path TCP, congestion control, buffer management, and route optimization. It also identifies future research directions on the effectiveness of TCP in IoT and MANET. The major contribution of this review is a thorough understanding of the latest techniques for enhancing TCP performance in IoT and MANET networks, which can benefit researchers and practitioners in the field of networking.
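As a reference point for the congestion-control work the review covers, the classic additive-increase/multiplicative-decrease (AIMD) window rule, which most surveyed variants modify for lossy wireless links, can be sketched as a toy model (one event per RTT, window in MSS units):

```python
def aimd(events, mss=1):
    """Toy AIMD congestion window: grow by one MSS per loss-free RTT,
    halve (never below 1) when a loss is signalled. Returns the window
    trace after each round-trip event."""
    cwnd = 1
    trace = []
    for loss in events:
        cwnd = max(1, cwnd // 2) if loss else cwnd + mss
        trace.append(cwnd)
    return trace
```

The IoT/MANET problem the surveyed papers attack is visible even in this sketch: a wireless bit error looks like `loss=True`, so plain AIMD halves the window for congestion that never happened, which is why loss-differentiation and ML-based variants exist.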
Citations: 0
IoBTSec-RPL: A Novel RPL Attack Detecting Mechanism Using Hybrid Deep Learning Over Battlefield IoT Environment
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223317
K. Kowsalyadevi, N.V. Balaji
The emerging digital world has recently harnessed the massive power of Internet of Things (IoT) technology, fueling the growth of many intelligent applications. The Internet of Battlefield Things (IoBT) greatly enables critical information dissemination and efficient war strategy planning with situational awareness. The lightweight Routing Protocol for Low-Power and Lossy Networks (RPL) is critical for successful IoT application deployment. However, RPL's security features are too weak to protect the IoBT environment, given device heterogeneity and open wireless device-to-device communication. Hence, it is crucial to provide strong security to RPL-IoBT against multiple attacks and enhance its performance. This work proposes IoBTSec-RPL, a hybrid Deep Learning (DL)-based multi-attack detection model. IoBTSec-RPL learns prominent routing attacks and efficiently classifies the attackers in four steps: data collection and preprocessing, feature selection, data augmentation, and attack detection and classification. First, the model employs min-max normalization and missing-value imputation to preprocess network packets. Second, an enhanced pelican optimization algorithm selects the most suitable features for attack detection through an efficient ranking method. Third, data augmentation uses an auxiliary classifier gated adversarial network to alleviate class imbalance across the multiple attack classes. Finally, the approach detects and classifies the attacks using a hybrid DL model that combines Long Short-Term Memory (LSTM) and a Deep Belief Network (DBN). Performance results show that IoBTSec-RPL accurately recognizes multiple RPL attacks in IoT, achieving 98.93% recall and improving accuracy by 2.16%, 5.73%, and 6.06% over LGBM, LSTM, and DBN on 200K traffic samples.
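The first pipeline step (min-max normalization plus missing-value imputation) can be sketched as below. Mean imputation is an assumption for illustration, since the abstract does not state which imputation rule is used.

```python
def impute_and_scale(column):
    """Preprocess one numeric packet-feature column: mean-impute missing
    readings (None), then min-max scale the column to [0, 1]."""
    known = [v for v in column if v is not None]
    mean = sum(known) / len(known)              # assumes at least one reading
    filled = [mean if v is None else v for v in column]
    lo, hi = min(filled), max(filled)
    if hi == lo:
        return [0.0] * len(filled)              # constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in filled]
```

Scaling every feature to a common [0, 1] range keeps large-magnitude fields (e.g. byte counts) from dominating the gradient updates of the downstream LSTM/DBN classifier.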
Citations: 0
Honey Bee Based Improvised BAT Algorithm for Cloud Task Scheduling
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223310
Abhishek Gupta, H.S. Bhadauria
By delivering shared data, software, and resources across a network to computers and other devices, the cloud computing paradigm aspires to offer computing as a service rather than a product. Given the technology's rapid development, managing the resource allocation process is essential, and task scheduling techniques are crucial for cloud computing. Scheduling algorithms assign user tasks to virtual machines and balance the workload against each machine's capacity and the system as a whole. The major goal of this work is to offer a load-balancing algorithm that can be used by both cloud consumers and service providers. In this paper, we propose the 'Bat Load' algorithm, which utilizes the Bat algorithm for task scheduling and the Honey Bee algorithm for load balancing. This hybrid approach efficiently addresses the load-balancing problem in cloud computing, optimizing resource allocation, makespan, degree of imbalance, cost, execution time, and processing time. The effectiveness of the Bat Load algorithm is evaluated against other scheduling methods, including the bee load balancer, ant colony optimization (ACO), particle swarm optimization (PSO), and combined ACO and PSO. Through comprehensive experiments and statistical analysis, the Bat Load algorithm demonstrates its superiority in processing cost, total processing time, imbalance degree, and completion time. The results showcase its ability to achieve balanced load distribution and efficient resource allocation in the cloud computing environment, outperforming ACO, PSO, and ACO and PSO with the honey bee load balancer. Our research contributes to addressing scheduling challenges and resource optimization in cloud computing, providing a robust solution for both cloud consumers and service providers.
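A minimal sketch of the honey-bee style rebalancing step: tasks leaving an overloaded VM behave like scout bees and are placed on the least-loaded VM. The overload threshold and the cheapest-task-first migration choice are assumptions for illustration, not the paper's exact policy.

```python
def balance(vms, threshold=1.2):
    """Honey-bee style rebalancing sketch. `vms` maps a VM name to its list
    of task costs. Repeatedly migrate the cheapest task from the most-loaded
    VM to the least-loaded VM, while the source exceeds `threshold` times
    the average load and the move actually reduces the source's excess."""
    avg = sum(sum(tasks) for tasks in vms.values()) / len(vms)
    moved = True
    while moved:
        moved = False
        src = max(vms, key=lambda v: sum(vms[v]))
        dst = min(vms, key=lambda v: sum(vms[v]))
        if src != dst and vms[src] and sum(vms[src]) > threshold * avg:
            task = min(vms[src])                      # migrate the cheapest task first
            if sum(vms[dst]) + task < sum(vms[src]):  # only if it helps
                vms[src].remove(task)
                vms[dst].append(task)
                moved = True
    return vms

vms = balance({"vm1": [5, 5, 5], "vm2": [1]})
```

Each accepted move strictly lowers the source load and never raises the destination above the old source, so the loop terminates; the full Bat Load scheme would pair a step like this with Bat-algorithm task placement.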
Citations: 0
Distributed Self Intermittent Fault Diagnosis in Dense Wireless Sensor Network
Q4 Computer Science | Pub Date: 2023-08-31 | DOI: 10.22247/ijcna/2023/223315
B. S. Gouda, Sudhakar Das, Trilochan Panigrahi
A distributed sensor network (DSN) is a group of low-power, low-cost sensor nodes (SNs) placed stochastically over a large-scale area to monitor regions and enable various applications. Quality of service in a DSN is impacted by the sporadic appearance of defective sensor nodes, especially over a dense wireless network; the affected sensor nodes reduce network performance during communication. In recent years, most fault detection techniques in use have relied on neighbors' sensing data over the dense sensor network to determine the fault state of SNs, and on that basis self-diagnosis is performed using statistics, thresholds, majority voting, hypothesis testing, comparison, or machine learning. As a result, these defect detection algorithms perform poorly in terms of false data positive rate (FDPR), detection data accuracy (DDA), and false data alarm rate (FDAR), and their high energy expenditure and long detection delay make them unsuitable for large-scale deployment. In this paper, an enhanced three-sigma edit-test-based distributed self-fault dense diagnosis (DSFDD3SET) algorithm is proposed. The performance of DSFDD3SET has been evaluated using Python and MATLAB, and its experimental results have been compared with an existing distributed self-fault diagnosis algorithm, which it outperforms in efficacy.
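The edit-test idea can be sketched as a single diagnosis round: a node compares each neighbour's reading against the neighbourhood consensus and flags values deviating by more than three robust standard deviations. The MAD-based sigma below is one common robust choice and is an assumption, not necessarily the paper's exact statistic.

```python
import statistics

def three_sigma_faulty(readings, k=3.0):
    """Flag neighbours whose sensed value deviates from the neighbourhood
    median by more than k robust standard deviations (a simplified,
    single-round sketch of a three-sigma edit test)."""
    med = statistics.median(readings.values())
    # Median absolute deviation, scaled to estimate sigma for Gaussian noise.
    mad = statistics.median(abs(v - med) for v in readings.values())
    sigma = 1.4826 * mad or 1e-9   # guard against a zero MAD
    return {node for node, v in readings.items() if abs(v - med) > k * sigma}

readings = {"a": 10.1, "b": 9.9, "c": 10.0, "d": 25.0, "e": 10.2}
faulty = three_sigma_faulty(readings)
```

Using the median and MAD rather than the mean and standard deviation keeps the consensus itself from being dragged toward an intermittently faulty reading, which matters when faults appear sporadically as the paper describes.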
Citations: 0
Analysis of Improved Rate Adaptive Irregular Low Density Parity Check Encoding for Fifth Generation Networks Using Software Defined Radio 基于软件无线电的第五代网络改进的速率自适应不规则低密度奇偶校验编码分析
Q4 Computer Science Pub Date : 2023-06-30 DOI: 10.22247/ijcna/2023/221886
M. Ramakrishnan, Tharini Chandrapragasam
– Low Density Parity Check (LDPC) codes are appropriate for high data rate applications such as the Internet of Things and 5G communication due to their support for larger block sizes and higher code rates. In this paper, an improved LDPC encoding algorithm is proposed to reduce girth-4 short cycles. This reduction helps achieve a lower Bit Error Rate (BER) across various channel models with different code rates and modulation schemes. The proposed work is analyzed for both pseudo-random sequences and audio messages. The simulation results demonstrate that the algorithm achieves a low BER of 10⁻⁸ at a code rate of 0.7 when tested across various code rates. The proposed algorithm also yields fewer short cycles than conventional LDPC encoding. The simulation results were verified by implementing the proposed algorithm on an NI USRP Software Defined Radio; the SDR results confirm that the proposed algorithm provides a low BER with reduced short cycles.
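The girth-4 cycles the improved encoder targets are easy to count: in the Tanner graph, a length-4 cycle arises whenever two rows of the parity-check matrix H share ones in two or more columns. A minimal Python sketch of this check (illustrative only, not the paper's encoding algorithm):

```python
from itertools import combinations

def count_girth4_cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check
    matrix H (a list of 0/1 rows). Each pair of rows sharing k >= 2
    columns with ones contributes C(k, 2) four-cycles."""
    supports = [{j for j, v in enumerate(row) if v} for row in H]
    cycles = 0
    for r1, r2 in combinations(supports, 2):
        shared = len(r1 & r2)
        cycles += shared * (shared - 1) // 2
    return cycles

# Rows 0 and 1 share columns {0, 1}, so this H has one 4-cycle
H = [[1, 1, 0, 1],
     [1, 1, 1, 0],
     [0, 0, 1, 1]]
print(count_girth4_cycles(H))  # → 1
```

A rate-adaptive construction like the one described above would run a check of this kind while placing ones in H, rejecting placements that create new row pairs with two or more shared columns.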
Citations: 0
Amended Hybrid Scheduling for Cloud Computing with Real-Time Reliability Forecasting 具有实时可靠性预测的云计算改进混合调度
Q4 Computer Science Pub Date : 2023-06-30 DOI: 10.22247/ijcna/2023/221887
Ramya Boopathi, E. S. Samundeeswari
– Cloud computing has emerged as a feasible paradigm to satisfy the computing requirements of high-performance applications through an ideal distribution of tasks to resources. However, attaining multiple scheduling objectives such as throughput, makespan, and resource use simultaneously is challenging. To resolve this problem, many Task Scheduling Algorithms (TSAs) have recently been developed using single- or multi-objective metaheuristic strategies. Among them, the TS based on a Multi-objective Grey Wolf Optimizer (TSMGWO) handles multiple objectives to discover ideal tasks and assign resources to them. However, it only maximizes resource use and throughput while reducing makespan, whereas it is also crucial to optimize other parameters such as memory and bandwidth utilization. Hence, this article proposes a hybrid TSA based on the linear matching method and backfilling, which uses memory and bandwidth requirements for effective TS. Initially, a Long Short-Term Memory (LSTM) network is adopted as a meta-learner to predict task runtime reliability. Then, the tasks are divided into predictable and unpredictable queues. Tasks with higher expected runtimes are scheduled by a plan-based approach built on the Tuna Swarm Optimization (TSO). The remaining tasks are backfilled by the VIKOR technique. To reduce resource use, a particular fraction of CPU cores is kept for backfilling and modified dynamically depending on the Resource Use Ratio (RUR) of predictable tasks among freshly submitted tasks. Finally, a general simulation reveals that the proposed algorithm outperforms the earlier metaheuristic, plan-based, and backfilling TSAs.
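The split into a plan-scheduled queue and a backfill queue with a reserved core fraction can be sketched as follows. This is a simplified illustration; the task fields, the default fraction, and the longest-runtime-first ordering are assumptions, not the paper's exact procedure:

```python
def split_for_backfilling(tasks, total_cores, backfill_fraction=0.2):
    """Partition tasks into a plan-scheduled queue (reliable runtime
    predictions) and a backfill queue, reserving a fraction of CPU
    cores for backfilled tasks.

    Each task is a dict with a 'runtime_pred' key that is None when
    the meta-learner could not predict its runtime reliably."""
    predictable = [t for t in tasks if t['runtime_pred'] is not None]
    unpredictable = [t for t in tasks if t['runtime_pred'] is None]

    # Reserve at least one core for backfilling; in the paper this
    # fraction is adjusted dynamically from the RUR of new tasks
    backfill_cores = max(1, int(total_cores * backfill_fraction))
    plan_cores = total_cores - backfill_cores

    # Higher expected runtime goes to the plan-based scheduler first
    predictable.sort(key=lambda t: t['runtime_pred'], reverse=True)
    return predictable, unpredictable, plan_cores, backfill_cores

tasks = [{'runtime_pred': 5.0}, {'runtime_pred': None}, {'runtime_pred': 9.0}]
plan_q, backfill_q, plan_cores, backfill_cores = split_for_backfilling(tasks, 10)
print(len(plan_q), len(backfill_q), plan_cores, backfill_cores)  # → 2 1 8 2
```

In the full scheme, the plan queue would then be ordered by the TSO-based planner and the backfill queue ranked by VIKOR before dispatch onto their respective core pools.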
Citations: 0
Investigating Resource Allocation Techniques and Key Performance Indicators (KPIs) for 5G New Radio Networks: A Review 研究5G新无线网络的资源分配技术和关键性能指标(kpi):综述
Q4 Computer Science Pub Date : 2023-06-30 DOI: 10.22247/ijcna/2023/221899
J. ., Dharmender Kumar, Amandeep .
– The demand for 5G networks is growing day by day, but issues remain regarding resource allocation, and there is a need to focus on key performance indicators (KPIs) for the 5G network. This study looks at the assessment of 5G wireless communications as well as the minimal technical performance criteria for 5G network services according to the ITU-R, Next Generation Mobile Networks, and 3GPP. 5G standards created in the 3GPP, the ITU Telecommunication Standardization Sector, the ITU-R Sector, the Internet Engineering Task Force, and the IEEE are covered. In 5G-based wireless communication systems, resource allocation is a key activity. The new systems used in 5G wireless networks must be more dynamic and intelligent if they are to satisfy a range of network requirements at the same time, which may be accomplished via new wireless technologies and methods. Key characteristics of 5G, such as the waveform, dynamic slot-based frame structure, massive MIMO, and channel codecs, are explained, along with emerging technologies in the 5G network. Previous research on 5G networks that considered resource allocation in heterogeneous networks is elaborated, along with the KPI requirements for 5G networks. The functionality of 5G is discussed, along with its common and technological challenges. The paper also focuses on the metrics, indicators, and parameters involved in resource allocation in 5G, along with machine learning. To move the massive amounts of data that may flow at speeds of up to 100 Gbps/km², these devices need supplementary, well-organized, and widely deployed RATs. To accommodate the expected exponential growth in data flow, 5G network RAN radio blocking and resource management solutions would need to handle more than 1,000 times the present traffic level. In addition, all of the information that makes up this traffic must be available and shareable at any time, from any location, using any device inside the 5G RAN and beyond 4G cellular coverage areas. The need for resource allocation is discussed, along with existing algorithms and technological improvements for resource allocation.
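As a concrete point of reference for the allocation schemes such a review surveys, a proportional-fair scheduler, a widely used cellular baseline rather than an algorithm from this paper, can be sketched in a few lines:

```python
def proportional_fair_user(inst_rates, avg_rates):
    """Return the index of the user maximizing the ratio of
    instantaneous achievable rate to long-term average rate:
    the classic proportional-fair scheduling metric."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_rates[i])

def update_average(avg, inst, served, alpha=0.1):
    """Exponentially weighted update of a user's average rate,
    crediting the instantaneous rate only when the user was served."""
    return (1 - alpha) * avg + alpha * (inst if served else 0.0)

# The user with the worse channel but much lower average rate wins the slot
print(proportional_fair_user([10.0, 8.0], [10.0, 2.0]))  # → 1
```

The metric trades throughput against fairness: users with strong instantaneous channels are favored, but a user starved for long (low average rate) eventually dominates the ratio and gets scheduled.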
Citations: 0