Connecting the dots between stance and fake news detection with blockchain, proof of reputation, and the Hoeffding bound
Pub Date: 2024-06-27 | DOI: 10.1007/s10586-024-04637-7
Ilhem Salah, Khaled Jouini, Cyril-Alexandre Pachon, Ouajdi Korbaa
Combating fake news is a crucial endeavor, yet the complexity of the task requires multifaceted approaches that transcend singular technological solutions. Traditional fact-checking, often centralized and human-dependent, faces scalability and bias challenges. This paper introduces a novel blockchain-based framework that leverages the wisdom of the crowd for authority-free, scalable, automated, and reputation-driven fact-checking. Within this framework, stance detection acts as an automated means of opinion retrieval, while the Proof of Reputation consensus mechanism fosters an environment where reputable contributors have greater influence in shaping news credibility. Concurrently, the Hoeffding bound allows the system to adapt to evolving contexts. In contrast to Machine Learning-based approaches, our framework limits the need for periodic retraining to update a model's frozen knowledge of the world. An experimental study conducted on real-world data demonstrates that the proposed framework offers a promising and efficient solution to combat the spread of fake news.
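The Hoeffding bound referenced here has the standard form ε = sqrt(R² ln(1/δ) / (2n)): with probability at least 1 - δ, the observed mean of n bounded observations lies within ε of the true mean. The sketch below shows one plausible way such a bound can gate a credibility decision as reputation-weighted stances accumulate; the function names, score encoding, and neutral threshold are illustrative assumptions, not the paper's implementation.

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """Standard Hoeffding bound: with probability 1 - delta, the observed mean
    of n i.i.d. observations spanning `value_range` is within epsilon of the
    true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def credibility_decided(stance_scores: list[float], delta: float = 1e-3) -> bool:
    """Decide whether crowd opinion is stable enough to label a news item.
    stance_scores: hypothetical reputation-weighted stances in [-1, 1]
    (-1 = refutes, +1 = supports). True once the observed mean is separated
    from the neutral threshold 0 by more than the bound."""
    n = len(stance_scores)
    if n == 0:
        return False
    mean = sum(stance_scores) / n
    eps = hoeffding_bound(value_range=2.0, delta=delta, n=n)  # scores span [-1, 1]
    return abs(mean) > eps

# Example: 500 weighted stances leaning "supports"
scores = [0.5] * 400 + [-0.2] * 100
print(credibility_decided(scores))  # True: mean 0.36 exceeds eps ~ 0.17
```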
{"title":"Connecting the dots between stance and fake news detection with blockchain, proof of reputation, and the Hoeffding bound","authors":"Ilhem Salah, Khaled Jouini, Cyril-Alexandre Pachon, Ouajdi Korbaa","doi":"10.1007/s10586-024-04637-7","DOIUrl":"https://doi.org/10.1007/s10586-024-04637-7","url":null,"abstract":"<p>Combating fake news is a crucial endeavor, yet the complexity of the task requires multifaceted approaches that transcend singular technological solutions. Traditional fact-checking, often centralized and human-dependent, faces scalability and bias challenges. This paper introduces a novel blockchain-based framework that leverages the wisdom of the crowd for an authority-free, scalable, automated and reputation-driven fact-checking. Within this framework, stance detection acts as an automated means of opinion retrieval, while the Proof of Reputation consensus mechanism fosters an environment where reputable contributors have greater influence in shaping news credibility. Concurrently, the Hoeffding bound is used to allow the system to adapt to evolving contexts. In contrast to Machine Learning—based approaches, our framework limits the need for periodic retraining to update a model’s frozen knowledge of the world. The experimental study conducted on real-world data demonstrates that the proposed framework offers a promising and efficient solution to combat the spread of fake news.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"84 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RAPTS: resource aware prioritized task scheduling technique in heterogeneous fog computing environment
Pub Date: 2024-06-26 | DOI: 10.1007/s10586-024-04612-2
Mazhar Hussain, Said Nabi, Mushtaq Hussain
The Internet of Things (IoT) is an emerging technology incorporating various hardware devices and software applications to exchange, analyze, and process huge amounts of data. IoT uses cloud and fog infrastructures, comprising hardware and software components such as computing machines, networking equipment, storage, and virtualization elements, which can receive, process, store, and exchange data in real time. A cloud is a centralized system containing large data centres that are far from client devices. However, as IoT generates massive amounts of data, issues such as latency, response time, bandwidth, and execution of tasks within their deadlines arise when data is sent to the cloud for processing. Fog computing, by contrast, is a distributed system consisting of millions of devices located close to client devices; because fog infrastructure is nearer the end-user, it reduces bandwidth consumption and latency. However, maximizing resource utilization, minimizing response time, and ensuring that deadline-constrained tasks complete within their deadlines remain important research problems in fog computing. This research proposes a task scheduling technique called Resource Aware Prioritized Task Scheduling (RAPTS) for a heterogeneous fog computing environment. The aim is to execute deadline-constrained tasks within their deadlines, minimize response time, cost, and makespan, and maximize resource utilization of the fog layer. RAPTS is implemented using iFogSim and its performance is evaluated in terms of response time, resource utilization, task deadlines, cost, and makespan. The results have been compared with state-of-the-art fog schedulers such as RACE (CFP) and RACE (FOP), and reveal that RAPTS achieves up to 29%, 53%, 15%, 11%, and 43% improvement in resource utilization, response time, makespan, cost, and meeting task deadlines, respectively.
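The abstract does not spell out RAPTS's internals, so the following is only a minimal sketch of the general idea of deadline-constrained scheduling on heterogeneous fog nodes: an earliest-deadline-first ordering with capacity-aware placement. All names (Task, FogNode, schedule_edf) and the node/task parameters are hypothetical; this is a textbook baseline, not the RAPTS algorithm itself.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    length_mi: float      # workload in million instructions
    deadline: float       # absolute deadline (s)

@dataclass
class FogNode:
    name: str
    mips: float           # processing capacity
    free_at: float = 0.0  # time the node next becomes available

def schedule_edf(tasks: list[Task], nodes: list[FogNode]):
    """Earliest-deadline-first placement: each task goes to the node that
    finishes it soonest; returns (task, node, finish_time, met_deadline)."""
    plan = []
    for t in sorted(tasks, key=lambda t: t.deadline):          # tight deadlines first
        node = min(nodes, key=lambda n: n.free_at + t.length_mi / n.mips)
        finish = node.free_at + t.length_mi / node.mips
        node.free_at = finish
        plan.append((t.name, node.name, finish, finish <= t.deadline))
    return plan

nodes = [FogNode("fog-1", mips=2000), FogNode("fog-2", mips=1000)]
tasks = [Task("t1", 4000, 3.0), Task("t2", 1500, 1.0), Task("t3", 2500, 4.0)]
for row in schedule_edf(tasks, nodes):
    print(row)  # all three toy tasks meet their deadlines here
```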
{"title":"RAPTS: resource aware prioritized task scheduling technique in heterogeneous fog computing environment","authors":"Mazhar Hussain, Said Nabi, Mushtaq Hussain","doi":"10.1007/s10586-024-04612-2","DOIUrl":"https://doi.org/10.1007/s10586-024-04612-2","url":null,"abstract":"<p>The Internet of Things (IoT) is an emerging technology incorporating various hardware devices and software applications to exchange, analyze, and process a huge amount of data. IoT uses cloud and fog infrastructures, comprising different hardware and software components like computing machines, networking components, storage, and virtualization elements. They can receive, process, store, and exchange data in real time. A cloud is a centralized system containing large data centres that are far from client devices. However, as IoT generates massive amounts of data, issues like latency, response time, execution of tasks within their deadline, and bandwidth arise when data is sent to the cloud for processing. Compared to the cloud, fog computing is vital as a distributed system consisting of millions of devices located at the minimum distance from the client devices. In addition, fog infrastructure reduces bandwidth and latency because it is closer to the end-user. However, maximizing utilization of resources, minimizing response time, and ensuring the completion of deadline-constrained tasks within their deadline are important research problems in fog computing. This research proposes a task scheduling technique called Resource Aware Prioritized Task Scheduling (RAPTS) in a heterogeneous fog computing environment. The aim is to execute deadline-constrained tasks within their deadlines, minimize response time and cost, as well as makespan, and maximize resource utilization of the fog layer. The RAPTS is implemented using iFogSim and its performance is evaluated regarding response time, resource utilization, task deadlines, cost, and makespan. The results have been compared with state-of-the-art fog schedulers like RACE (CFP) and RACE (FOP). The results reveal that the RAPTS have shown up to 29%, 53%, 15%, 11%, and 43% improvement in terms of resource utilization, response time, makespan, cost, and meeting task deadlines, respectively.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"217 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel real-time object detection method for complex road scenes based on YOLOv7-tiny
Pub Date: 2024-06-26 | DOI: 10.1007/s10586-024-04595-0
Yunfa Li, Hui Li
Road object detection is a key technology in intelligent transportation systems, playing a crucial role in ensuring driving safety and enhancing the driving experience. However, due to factors such as weather and visual occlusions, particularly in complex traffic scenes, the recognition rate and accuracy of object detection are often less than satisfactory, far from meeting the application demands of intelligent driving. To address the weak generalization and low regression accuracy of image similarity evaluation metrics, we propose a new anchor box calculation algorithm. Building upon this, to tackle the weak attention and feature-capture capabilities of the backbone network, we propose an improved CA attention mechanism. In addition, to address the low detection accuracy and imprecise positioning of the model in complex traffic scenarios, we propose a new image enhancement module. We select the road traffic dataset BDD100K (Berkeley Deep Drive) as the benchmark evaluation dataset and divide the training and validation sets into six new categories. Through this series of strategies, a new real-time road object detection method suitable for complex traffic scenes is formed. To validate the effectiveness of this method, we conducted a series of experiments. The experimental results demonstrate that our proposed method achieves a mean average precision improvement of 3.61% over the YOLOv7-tiny method.
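The paper's new anchor box calculation algorithm is not detailed in the abstract; for context, the standard baseline it improves on is YOLOv2-style k-means over box widths and heights with 1 - IoU as the distance. A minimal sketch of that baseline follows; the BDD100K loader is a hypothetical placeholder.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between boxes and anchors assuming a shared top-left corner.
    boxes: (N, 2) widths/heights; anchors: (K, 2). Returns (N, K)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0] * boxes[:, 1]
    union = union[:, None] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """k-means with d = 1 - IoU (the YOLOv2 recipe) over (w, h) pairs."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest = max IoU
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = np.median(members, axis=0)     # median is robust
    return anchors[np.argsort(anchors.prod(axis=1))]        # sort by area

# boxes = load_bdd100k_wh(...)  # hypothetical loader returning (w, h) pairs
boxes = np.random.default_rng(1).uniform(8, 256, size=(500, 2))
print(kmeans_anchors(boxes, k=9))
```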
{"title":"A novel real-time object detection method for complex road scenes based on YOLOv7-tiny","authors":"Yunfa Li, Hui Li","doi":"10.1007/s10586-024-04595-0","DOIUrl":"https://doi.org/10.1007/s10586-024-04595-0","url":null,"abstract":"<p>Road object detection is a key technology in intelligent transportation systems, playing a crucial role in ensuring driving safety and enhancing driving experience. However, due to factors such as weather and visual occlusions, particularly in complex traffic scenes, the recognition rate and accuracy of object detection are often less than satisfactory, far from meeting the application demands of intelligent driving. In order to address the issues of weak generalization and low regression accuracy of image similarity evaluation metrics, we propose a new anchor box calculation algorithm. Building upon this, to tackle the problem of weak graphic attention and feature capture capabilities in the backbone network,We propose an improved CA attention mechanism. In addition, to address the issues of low detection accuracy and imprecise positioning of the model in complex traffic scenarios, we propose a new image enhancement module. we select the road traffic dataset BDD(Berkeley Deep Drive)100K as the benchmark evaluation dataset and divide the training and validation sets into six new categories. Through this series of strategies, a new real-time road object detection method suitable for complex traffic scenes is formed. To validate the effectiveness of this method, we conducted a series of experiments. The experimental results demonstrate that our proposed method achieves a mean average precision improvement of 3.61% compared to the YOLOv7-tiny method.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A revolutionary approach to use convolutional spiking neural networks for robust intrusion detection
Pub Date: 2024-06-26 | DOI: 10.1007/s10586-024-04603-3
Yongxing Lin, Xiaoyan Xu, Hongyun Xu
In an era dominated by network connectivity, the reliance on robust and secure networks has become paramount. With the advent of 5G and the Internet of Things, networks are expanding in both scale and complexity, rendering them susceptible to a myriad of cyber threats. This escalating risk encompasses potential breaches of user privacy, unauthorized access to transmitted data, and targeted attacks on the underlying network infrastructure. To safeguard the integrity and security of modern networked societies, the deployment of Network Intrusion Detection Systems is imperative. This paper presents a novel lightweight detection model, Spiking-HCCN, that seamlessly integrates Spiking Neural Networks and Convolutional Neural Networks with advanced algorithmic frameworks. Leveraging this hybrid approach, Spiking-HCCN achieves superior detection accuracy while maintaining efficiency in terms of power consumption and computational resources. Comparative evaluations against state-of-the-art models, including Spiking GCN and Spike-DHS, demonstrate significant performance advantages: Spiking-HCCN outperforms these benchmarks by 24% in detection accuracy, 21% in delay, and 29% in energy efficiency, underscoring its efficacy in fortifying network security in the face of evolving cyber threats.
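Spiking-HCCN's internals are not given in the abstract, but the spiking side of any SNN-CNN hybrid rests on spiking neuron dynamics. As background, here is a minimal leaky integrate-and-fire (LIF) neuron, the most common such model; the parameters are illustrative and this is not the paper's architecture.

```python
import numpy as np

def lif_simulate(current: np.ndarray, dt: float = 1.0, tau: float = 20.0,
                 v_rest: float = 0.0, v_th: float = 1.0, v_reset: float = 0.0):
    """Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + I) / tau.
    Emits a binary spike train; the membrane resets after each spike."""
    v = v_rest
    spikes = np.zeros_like(current)
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)  # Euler step of leak + drive
        if v >= v_th:                          # threshold crossing -> spike
            spikes[t] = 1.0
            v = v_reset
    return spikes

# A constant supra-threshold input produces a regular spike train
i_in = np.full(200, 1.5)
print(int(lif_simulate(i_in).sum()), "spikes over 200 steps")
```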
{"title":"A revolutionary approach to use convolutional spiking neural networks for robust intrusion detection","authors":"Yongxing Lin, Xiaoyan Xu, Hongyun Xu","doi":"10.1007/s10586-024-04603-3","DOIUrl":"https://doi.org/10.1007/s10586-024-04603-3","url":null,"abstract":"<p>In an era dominated by network connectivity, the reliance on robust and secure networks has become paramount. With the advent of 5G and the Internet of Things, networks are expanding in both scale and complexity, rendering them susceptible to a myriad of cyber threats. This escalating risk encompasses potential breaches of user privacy, unauthorized access to transmitted data, and targeted attacks on the underlying network infrastructure. To safeguard the integrity and security of modern networked societies, the deployment of Network Intrusion Detection Systems is imperative. This paper presents a novel lightweight detection model that seamlessly integrates Spiking Neural Networks and Convolutional Neural Networks with advanced algorithmic frameworks. Leveraging this hybrid approach, the proposed model achieves superior detection accuracy while maintaining efficiency in terms of power consumption and computational resources. This paper presents a new style recognition model that seamlessly integrates <b>s</b>piking neural networks and convolutional neural networks with advanced algorithmic frameworks. We call this combined method Spiking-HCCN. Using this hybrid approach, Spiking-HCCN achieves superior detection accuracy while maintaining efficiency in terms of power consumption and computational resources. Comparative evaluations against state-of-the-art models, including Spiking GCN and Spike-DHS, demonstrate significant performance advantages. Spiking-HCCN outperforms these benchmarks by 24% in detection accuracy, 21% in delay, and 29% in energy efficiency, underscoring its efficacy in fortifying network security in the face of evolving cyber threats.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HOGWO: a fog inspired optimized load balancing approach using hybridized grey wolf algorithm
Pub Date: 2024-06-25 | DOI: 10.1007/s10586-024-04625-x
Debashreet Das, Sayak Sengupta, Shashank Mouli Satapathy, Deepanshu Saini
A distributed paradigm, fog computing relocates storage, computation, and services closer to the network's edge, where the data is generated. Alongside these advantages, users expect proper load management in the fog environment. The expansion of the Internet of Things (IoT) has increased the volume of user requests reaching the fog computing layer. Given this growth, Virtual Machines (VMs) in the fog layer become overburdened by user demands, so it is essential to distribute the workload evenly and fairly among a segment's current VMs. Numerous load-management strategies for fog environments have been implemented to date. This study creates a hybridized and optimized approach for load management (HOGWO), in which the population set is generated using the Invasive Weed Optimization (IWO) algorithm and the search itself is driven by the Grey Wolf Optimization (GWO) algorithm. This process ensures cost optimization, increased performance, scalability, and adaptability to any domain, such as healthcare or vehicular traffic management. The efficiency of the enhanced approach is also analyzed in various scenarios to provide a more optimal solution set. The proposed approach is well illustrated and outperforms existing algorithms, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), in terms of cost and load management. According to the testing data, more than 97% of jobs were completed on time, and the hybrid technique outperformed all other approaches in terms of load fluctuation and makespan.
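The GWO component of HOGWO follows the canonical update of Mirjalili et al.: each wolf moves toward the average of three steps dictated by the three best solutions (alpha, beta, delta), with an exploration coefficient that decays over iterations. A sketch of that canonical step is below; in HOGWO the initial population would come from IWO, which is approximated here by a uniform random draw.

```python
import numpy as np

def gwo_step(pop: np.ndarray, fitness, t: int, T: int) -> np.ndarray:
    """One canonical Grey Wolf Optimizer iteration.
    pop: (N, D) positions; fitness: objective; t/T: iteration / budget."""
    order = np.argsort([fitness(x) for x in pop])
    alpha, beta, delta = pop[order[:3]]          # three best wolves
    a = 2.0 - 2.0 * t / T                        # decays linearly from 2 to 0
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - x)           # distance to this leader
            guided.append(leader - A * D)        # step dictated by this leader
        new_pop[i] = np.mean(guided, axis=0)     # average of X1, X2, X3
    return new_pop

# In HOGWO the initial population would come from IWO; here it is random.
sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.uniform(-5, 5, size=(30, 10))
for t in range(200):
    pop = gwo_step(pop, sphere, t, 200)
print(min(sphere(x) for x in pop))
```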
{"title":"HOGWO: a fog inspired optimized load balancing approach using hybridized grey wolf algorithm","authors":"Debashreet Das, Sayak Sengupta, Shashank Mouli Satapathy, Deepanshu Saini","doi":"10.1007/s10586-024-04625-x","DOIUrl":"https://doi.org/10.1007/s10586-024-04625-x","url":null,"abstract":"<p>A distributed archetype, the concept of fog computing relocates the storage, computation, and services closer to the network’s edge, where the data is generated. Despite these advantages, the users expect proper load management in the fog environment. This has expanded the Internet of Things (IoT) field, increasing user requests for the fog computing layer. Given the growth, Virtual Machines (VMs) in the fog layer become overburdened due to user demands. In the fog layer, it is essential to evenly and fairly distribute the workload among the segment’s current VMs. Numerous load-management strategies for fog environments have been implemented up to this point. This study aims to create a hybridized and optimized approach for load management (HOGWO), in which the population set is generated using the Invasive Weed Optimisation (IWO) algorithm. The rest of the functional part is done with the help of the Grey Wolf Optimization (GWO) algorithm. This process ensures cost optimization, increased performance, scalability, and adaptability to any domain, such as healthcare, vehicular traffic management, etc. Also, the efficiency of the enhanced approach is analyzed in various scenarios to provide a more optimal solution set. The proposed approach is well illustrated and outperforms the existing algorithms, such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), etc., in terms of cost and load management. It was found that more than 97% jobs were completed on time, according to the testing data, and the hybrid technique outperformed all other approaches in terms of fluctuation of load and makespan.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Catch fish optimization algorithm: a new human behavior algorithm for solving clustering problems
Pub Date: 2024-06-25 | DOI: 10.1007/s10586-024-04618-w
Heming Jia, Qixian Wen, Yuhao Wang, Seyedali Mirjalili
Inspired by traditional rural fishing methods, this paper proposes a new human-behavior-based metaheuristic optimization algorithm: the Catch Fish Optimization Algorithm (CFOA). The algorithm simulates the process of rural fishermen fishing in ponds, which is divided into two phases. In the exploration phase, the search proceeds in two stages: first, an individual capture stage based on personal experience and intuition, and second, a group capture stage based on human proficiency with tools and collaboration, so the search transitions from independent search to group capture. In the exploitation phase, all fishermen surround the shoal and work together to capture the remaining fish, a collective capture strategy. The CFOA model is built on these two phases. We tested the optimization performance of CFOA on the IEEE CEC 2014 and IEEE CEC 2020 test functions, comparing it with 11 other optimization algorithms, and employed the IEEE CEC 2017 functions to evaluate its overall performance. The experimental results indicate that CFOA exhibits excellent and stable optimization capabilities. Additionally, we applied CFOA to data clustering problems; the final results show that CFOA's overall error rate on clustering problems is less than 20%, yielding a better clustering effect. The comprehensive experimental results show that CFOA performs well across different optimization problems. The CFOA code is available at https://github.com/Meky-1210/CFOA.git.
{"title":"Catch fish optimization algorithm: a new human behavior algorithm for solving clustering problems","authors":"Heming Jia, Qixian Wen, Yuhao Wang, Seyedali Mirjalili","doi":"10.1007/s10586-024-04618-w","DOIUrl":"https://doi.org/10.1007/s10586-024-04618-w","url":null,"abstract":"<p>This paper is inspired by traditional rural fishing methods and proposes a new metaheuristic optimization algorithm based on human behavior: Catch Fish Optimization Algorithm (CFOA). This algorithm simulates the process of rural fishermen fishing in ponds, which is mainly divided into two phases: the exploration phase and the exploitation phase. In the exploration phase, there are two stages to search: first, the individual capture stage based on personal experience and intuition, and second, the group capture stage based on human proficiency in using tools and collaboration. Transition from independent search to group capture during the exploration phase. Exploitation phase: All fishermen will surround the shoal of fish and work together to salvage the remaining fish, a collective capture strategy. CFOA model is based on these two phases. This paper tested the optimization performance of CFOA using IEEE CEC 2014 and IEEE CEC 2020 test functions, and compared it with 11 other optimization algorithms. We employed the IEEE CEC2017 function to evaluate the overall performance of CFOA. The experimental results indicate that CFOA exhibits excellent and stable optimization capabilities overall. Additionally, we applied CFOA to data clustering problems, and the final results demonstrate that CFOA’s overall error rate in processing clustering problems is less than 20%, resulting in a better clustering effect. The comprehensive experimental results show that CFOA exhibits excellent optimization effects when facing different optimization problems. CFOA code is open at https://github.com/Meky-1210/CFOA.git.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive spatio-temporal graph convolutional network with attention mechanism for mobile edge network traffic prediction
Pub Date: 2024-06-25 | DOI: 10.1007/s10586-024-04577-2
Ning Sha, Xiaochun Wu, Jinpeng Wen, Jinglei Li, Chuanhuang Li
In the current era of mobile edge networks, a significant challenge lies in overcoming the constraints of limited edge storage and computational resources. To address these issues, accurate network traffic prediction has emerged as a promising solution. However, due to the intricate spatial and temporal dependencies inherent in mobile edge network traffic, the prediction task remains highly challenging. Recent spatio-temporal neural network algorithms based on graph convolution have shown promising results, but they often rely on pre-defined graph structures or learned parameters. This approach neglects the dynamic nature of short-term relationships, leading to limitations in prediction accuracy. To address these limitations, we introduce Ada-ASTGCN, an innovative attention-based adaptive spatio-temporal graph convolutional network. Ada-ASTGCN dynamically derives an optimal graph structure, considering both long-term stability and short-term bursty evolution, which allows for more precise spatio-temporal network traffic prediction. In addition, we employ an alternative training approach during optimization, replacing the traditional end-to-end training method; this better guides the learning direction of the model and improves prediction performance. To validate the effectiveness of Ada-ASTGCN, we conducted extensive traffic prediction experiments on real-world datasets. The results demonstrate the superior performance of Ada-ASTGCN compared to existing methods, highlighting its ability to accurately predict network traffic in mobile edge networks.
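The abstract does not specify how Ada-ASTGCN parameterizes its adaptive graph; one widely used construction (from Graph WaveNet) derives the adjacency from trainable node embeddings as A = softmax(ReLU(E1 E2ᵀ)). The sketch below shows that construction feeding a single graph convolution; it is a representative baseline, not necessarily the paper's exact mechanism.

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_adjacency(E1: np.ndarray, E2: np.ndarray) -> np.ndarray:
    """Learned graph structure from node embeddings (Graph WaveNet style):
    A = softmax(ReLU(E1 @ E2.T)), row-normalized and fully data-driven."""
    return softmax(np.maximum(E1 @ E2.T, 0.0), axis=-1)

def graph_conv(X: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph convolution layer: H = ReLU(A X W)."""
    return np.maximum(A @ X @ W, 0.0)

# N base-station nodes, F traffic features per node, d-dim embeddings
N, F, d = 64, 8, 16
rng = np.random.default_rng(0)
E1, E2 = rng.normal(size=(N, d)), rng.normal(size=(N, d))  # trainable in practice
X, W = rng.normal(size=(N, F)), rng.normal(size=(F, F))
H = graph_conv(X, adaptive_adjacency(E1, E2), W)
print(H.shape)  # (64, 8): per-node features mixed along the learned graph
```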
{"title":"Adaptive spatio-temporal graph convolutional network with attention mechanism for mobile edge network traffic prediction","authors":"Ning Sha, Xiaochun Wu, Jinpeng Wen, Jinglei Li, Chuanhuang Li","doi":"10.1007/s10586-024-04577-2","DOIUrl":"https://doi.org/10.1007/s10586-024-04577-2","url":null,"abstract":"<p>In the current era of mobile edge networks, a significant challenge lies in overcoming the limitations posed by limited edge storage and computational resources. To address these issues, accurate network traffic prediction has emerged as a promising solution. However, due to the intricate spatial and temporal dependencies inherent in mobile edge network traffic, the prediction task remains highly challenging. Recent spatio-temporal neural network algorithms based on graph convolution have shown promising results, but they often rely on pre-defined graph structures or learned parameters. This approach neglects the dynamic nature of short-term relationships, leading to limitations in prediction accuracy. To address these limitations, we introduce Ada-ASTGCN, an innovative attention-based adaptive spatio-temporal graph convolutional network. Ada-ASTGCN dynamically derives an optimal graph structure, considering both the long-term stability and short-term bursty evolution. This allows for more precise spatio-temporal network traffic prediction. In addition, we employ an alternative training approach during optimization, replacing the traditional end-to-end training method. This alternative training approach better guides the learning direction of the model, leading to improved prediction performance. To validate the effectiveness of Ada-ASTGCN, we conducted extensive traffic prediction experiments on real-world datasets. The results demonstrate the superior performance of Ada-ASTGCN compared to existing methods, highlighting its ability to accurately predict network traffic in mobile edge networks.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An authentication mechanism based on blockchain for IoT environment
Pub Date: 2024-06-24 | DOI: 10.1007/s10586-024-04565-6
Gholam Reza Zargar, Hamid Barati, Ali Barati
The Internet of Things (IoT) is a network where physical objects with unique addresses can connect and communicate with each other through the Internet and telecommunications networks. However, current user authentication methods in this environment are limited by constrained device resources and the need for a lightweight authentication process. Therefore, this paper proposes a mutual authentication protocol for IoT that uses blockchain technology. The proposed protocol achieves a lightweight and secure architecture by using Elliptic-Curve Cryptography, and employs the AVISPA tool and BAN logic for formal and informal security analysis. Compared to previous protocols, the proposed protocol is more efficient in terms of communication and computation costs and more resistant to various attacks.
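The protocol itself is not reproduced in the abstract, but its Elliptic-Curve Cryptography core can be illustrated with a standard ECDH exchange: both parties derive the same session key, and proving knowledge of that key (for example, with a MAC over a fresh nonce) yields mutual authentication. This sketch uses the Python cryptography library; the blockchain anchoring of public keys is only indicated in comments.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_keypair():
    """Each party (IoT device / gateway) holds an ECC keypair; in the paper's
    setting the public keys would be anchored on the blockchain."""
    priv = ec.generate_private_key(ec.SECP256R1())
    return priv, priv.public_key()

def session_key(own_priv, peer_pub) -> bytes:
    """ECDH shared secret stretched into a 256-bit session key."""
    secret = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"iot-mutual-auth").derive(secret)

dev_priv, dev_pub = make_keypair()
gw_priv, gw_pub = make_keypair()
# Both sides derive the same key; demonstrating knowledge of it (e.g., via a
# MAC over a fresh nonce) is what yields mutual authentication.
assert session_key(dev_priv, gw_pub) == session_key(gw_priv, dev_pub)
```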
{"title":"An authentication mechanism based on blockchain for IoT environment","authors":"Gholam Reza Zargar, Hamid Barati, Ali Barati","doi":"10.1007/s10586-024-04565-6","DOIUrl":"https://doi.org/10.1007/s10586-024-04565-6","url":null,"abstract":"<p>The Internet of Things (IoT) is a network where physical objects with unique addresses can connect and communicate with each other through the Internet and telecommunications networks. However, the current methods of user authentication in this environment have limitations due to the need for a lightweight authentication process and limited resources. Therefore, this paper proposes a mutual authentication protocol for IoT that uses blockchain technology. The proposed protocol has a lightweight and secure architecture by using of Elliptic-Curve Cryptography and incorporates the AVISPA tool and BAN logic for formal/informal security analysis. Compared to previous protocols, this proposed protocol is more efficient in terms of communication and computation costs and is more resistant to various attacks.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmanned aerial vehicle assisted communication: applications, challenges, and future outlook
Pub Date: 2024-06-23 | DOI: 10.1007/s10586-024-04631-z
Yilin Li, Yanxian Bi, Jian Wang, Zhiqiang Li, Hongxia Zhang, Peiying Zhang
With the advancement of wireless communication technology, the number of wireless network terminals has exploded and various new business scenarios have emerged. 6G mobile communication technology not only surpasses 5G standards in transmission rate, delay, power consumption, and other performance metrics, but also extends communication coverage to multiple domains such as air, ground, and ocean, greatly promoting research and development of Unmanned Aerial Vehicle (UAV) communication technology. Compared to terrestrial networks, UAV communication offers advantages such as high flexibility and easy deployment. However, many problems and challenges remain in practical applications. In this paper, we first introduce the functions and application scenarios of UAV communication, then discuss current challenges and related technical research, and finally outline future development prospects.
{"title":"Unmanned aerial vehicle assisted communication: applications, challenges, and future outlook","authors":"Yilin Li, Yanxian Bi, Jian Wang, Zhiqiang Li, Hongxia Zhang, Peiying Zhang","doi":"10.1007/s10586-024-04631-z","DOIUrl":"https://doi.org/10.1007/s10586-024-04631-z","url":null,"abstract":"<p>With the advancement of wireless communication technology, the number of wireless network terminals has exploded, and various new business scenarios have emerged. The 6G mobile communication technology not only surpasses 5G standards in terms of transmission rate, delay, power and other performances, but also extends the communication range to multiple fields such as air, ground, ocean, etc., which greatly promotes Unmanned Aerial Vehicle (UAV) communication technology research and development. Compared to terrestrial networks, UAV communication has advantages such as high flexibility and easy deployment. However, there are still many problems and challenges in practical applications. In this paper, we will first introduce the functions and application scenarios of UAV communication, then discuss the current challenges and related technical research, and finally look forward to the future development prospects.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"18 7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid northern goshawk optimization algorithm based on cluster collaboration
Pub Date: 2024-06-23 | DOI: 10.1007/s10586-024-04571-8
Changjun Wu, Qingzhen Li, Qiaohua Wang, Huanlong Zhang, Xiaohui Song
To address the northern goshawk optimization algorithm's (NGO) slow convergence and its tendency to fall into local optima, this paper proposes a hybrid northern goshawk optimization algorithm based on cluster collaboration (HHNGO), which effectively improves convergence speed and alleviates premature convergence. Firstly, piecewise chaotic mapping is used to initialize the population, which distributes the initial population more evenly across the search space and improves the quality of the initial solutions. Secondly, the prey-recognition position update formula from the Harris hawk optimization algorithm is introduced to improve the exploration phase. Meanwhile, a nonlinear factor is added to accelerate the shrinking of the difference between the prey's best position and the average position of the hawk group, reducing the number of iterations needed during the search and improving convergence speed. Finally, a Cauchy mutation strategy is used to perturb the algorithm's best solution, increasing its probability of escaping local optima and enhancing global search capability. Experimental comparisons on 12 standard functions and the CEC-2019 and CEC-2021 test functions are carried out against the PSO, GWO, POA, HHO, NGO, INGO, DFPSO, MGLMRFO, and GMPBSA algorithms, and HHNGO is also applied to PID parameter tuning. The results demonstrate the feasibility and superiority of the proposed method.
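The piecewise chaotic mapping used for initialization is commonly the piecewise linear chaotic map (PWLCM); assuming that standard form, a sketch of chaotic population initialization looks like this, with the chaotic orbit rescaled to the search bounds.

```python
import numpy as np

def pwlcm(x: float, p: float = 0.4) -> float:
    """Piecewise linear chaotic map on (0, 1); p in (0, 0.5) is the control
    parameter. Successive iterates are chaotic and cover (0, 1) evenly."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)          # the map is symmetric about 0.5

def chaotic_population(n: int, dim: int, lo: float, hi: float,
                       x0: float = 0.7, p: float = 0.4) -> np.ndarray:
    """Initialize n candidate solutions in [lo, hi]^dim from a PWLCM orbit,
    spreading the initial population more evenly than i.i.d. uniform draws."""
    pop = np.empty((n, dim))
    x = x0
    for i in range(n):
        for j in range(dim):
            x = pwlcm(x, p)            # advance the chaotic orbit
            pop[i, j] = lo + x * (hi - lo)
    return pop

print(chaotic_population(5, 3, -10.0, 10.0))
```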
{"title":"A hybrid northern goshawk optimization algorithm based on cluster collaboration","authors":"Changjun Wu, Qingzhen Li, Qiaohua Wang, Huanlong Zhang, Xiaohui Song","doi":"10.1007/s10586-024-04571-8","DOIUrl":"https://doi.org/10.1007/s10586-024-04571-8","url":null,"abstract":"<p>To address the problems that the northern goshawk optimization algorithm (NGO) has a slow convergence speed and is highly susceptible to fall into local optimal solutions, this paper proposes a hybrid northern goshawk optimization algorithm based on cluster collaboration (HHNGO), which effectively improves the convergence speed and alleviates the problem of falling into the local optimum. Firstly, piecewise chaotic mapping is used to initialize the population, which makes the initial population more evenly distributed in the search space and improves the quality of the initial solution. Secondly, the prey recognition position update formula in the harris hawk optimization algorithm is introduced to improve the exploration phase. Meanwhile, a nonlinear factor can be added to accelerate the process which reaches the minimum difference between the prey best position and the average position of the eagle group. Thus the iteration number is reduced during the search process, and the convergence speed of the algorithm is improved. Finally, the Cauchy variation strategy is used to perturb the optimal solution of the algorithm. Then, its probability jumping out of the local optimal solution is increased, and the global search capability is enhanced. The experimental comparison is carried out to analyze the 12 standard functions, CEC-2019 and CEC-2021 test functions in HHNGO and PSO, GWO, POA, HHO, NGO, INGO, DFPSO, MGLMRFO, GMPBSA algorithms, and HHNGO is applied in PID parameter rectification. The results prove the feasibility and superiority of the proposed method.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}