Pub Date : 2024-05-27 DOI: 10.1007/s00607-024-01294-x
Mouna Hadj-Kacem, Nadia Bouassida
Code smell identification is crucial in software maintenance. The existing literature mostly focuses on identifying single code smells. In practice, however, a software artefact typically exhibits multiple code smells simultaneously; studies of smell diffuseness suggest that 59% of smelly classes are affected by more than one smell. To address this complexity of real-world projects, we propose a multi-label learning-based approach to identify eight code smells at the class level, i.e. in the most severe software artefacts, which should be prioritised in the refactoring process. In our experiments, we applied 12 algorithms from different multi-label learning methods across 30 open-source Java projects, yielding significant findings. We explored co-occurrences between class-level code smells and examined the impact of label correlations on prediction results. Additionally, we assessed multi-label learning methods to compare data adaptation against algorithm adaptation. Our findings highlight the effectiveness of the Ensemble of Classifier Chains and Binary Relevance in achieving high-performance results.
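The two methods the abstract singles out can be contrasted on toy data. The sketch below is illustrative only (hand-rolled logistic regression, hypothetical metric features and smell labels), not the paper's implementation: Binary Relevance trains one independent classifier per label, while a classifier chain feeds earlier label predictions into later classifiers so correlated smells can inform each other.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=300):
    """Plain batch-gradient-descent logistic regression with a bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # hypothetical class-level metrics
y1 = (X[:, 0] + X[:, 1] > 0).astype(float)    # a first (synthetic) smell
y2 = y1 * (X[:, 2] > 0)                       # a smell that co-occurs with y1

# Binary Relevance: an independent model per label, ignoring co-occurrence.
br = [train_logreg(X, y) for y in (y1, y2)]
br_pred = np.column_stack([predict(w, X) for w in br])

# Classifier Chain: the second model also sees the first label's
# prediction, exploiting the correlation between smells.
w1 = train_logreg(X, y1)
p1 = predict(w1, X)
w2 = train_logreg(np.column_stack([X, p1]), y2)
cc_pred = np.column_stack([p1, predict(w2, np.column_stack([X, p1]))])

exact = lambda P: np.mean((P == np.column_stack([y1, y2])).all(axis=1))
print("BR exact-match:", exact(br_pred))
print("CC exact-match:", exact(cc_pred))
```

An Ensemble of Classifier Chains repeats the chained training over several random label orders and averages the predictions, which reduces sensitivity to the chosen order.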
Title: Multi-label learning for identifying co-occurring class code smells
Pub Date : 2024-05-27 DOI: 10.1007/s00607-024-01295-w
G. Sri vidhya, R. Nagarajan
Advances in technology make IoT devices easy to adopt and adapt. Internet of Things (IoT) devices can interact without human intervention, which enables the creation of smart cities. Nevertheless, security concerns persist within IoT networks. To address this, Software Defined Networking (SDN) has been introduced as a centrally controlled network architecture that can mitigate security issues in IoT devices. Integrating SDN and IoT, however, raises a security concern of its own: Distributed Denial of Service (DDoS) attacks, which target the network controller precisely because the network is centrally controlled. Real-time, high-performance, and precise solutions are necessary to tackle this issue effectively. In recent years, there has been growing interest in applying intelligent deep learning techniques to Network Intrusion Detection Systems (NIDS) in Software-Defined IoT networks (SDN-IoT). The proposed Wireless Network Intrusion Detection System (WNIDS) is an SDN controller component that efficiently monitors and manages smart IoT devices. The WNIDS method analyzes the CSE-CIC-IDS2018 and SDN-IoT datasets to detect and categorize intrusions in the SDN-IoT network. A deep learning model based on Bidirectional LSTM (BiLSTM) underpins the WNIDS and effectively detects intrusions in the SDN-IoT network. The model achieves accuracy rates of 99.97% and 99.96% for binary and multi-class classification, respectively, on the CSE-CIC-IDS2018 dataset. Similarly, on the SDN-IoT dataset, it achieves 95.13% accuracy for binary classification and 92.90% for multi-class classification, showing strong performance on both datasets.
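The bidirectional recurrence at the heart of a BiLSTM layer can be sketched in plain NumPy: one LSTM cell processes the sequence forwards, another processes it backwards, and their final hidden states are concatenated into the feature vector fed to the classifier. Weights here are random and the input is synthetic; this illustrates the mechanism, not the paper's trained WNIDS model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, W, U, b, hidden):
    """Run a single LSTM cell over a sequence; return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ x + U @ h + b                      # all four gates in one affine map
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                          # cell-state update
        h = o * np.tanh(c)                         # hidden-state update
    return h

rng = np.random.default_rng(42)
seq_len, n_features, hidden = 10, 8, 16
xs = rng.normal(size=(seq_len, n_features))        # one network flow as a sequence

def init(rng):
    return (rng.normal(scale=0.1, size=(4 * hidden, n_features)),
            rng.normal(scale=0.1, size=(4 * hidden, hidden)),
            np.zeros(4 * hidden))

h_fwd = lstm_pass(xs, *init(rng), hidden)          # forward direction
h_bwd = lstm_pass(xs[::-1], *init(rng), hidden)    # backward direction
features = np.concatenate([h_fwd, h_bwd])          # BiLSTM output: size 2 * hidden
print(features.shape)
```

Reading each flow in both directions is what lets a BiLSTM condition on packets that arrive later in the flow, which plain unidirectional LSTMs cannot do.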
Title: A novel bidirectional LSTM model for network intrusion detection in SDN-IoT network
Pub Date : 2024-05-27 DOI: 10.1007/s00607-024-01297-8
Dimitrios Papathanasiou, Kostas Kolomvatsos
Context-aware data management has become the focus of several research efforts at the intersection of the Internet of Things (IoT) and Edge Computing (EC). Huge volumes of data captured by IoT devices are processed in EC environments. Although edge nodes undertake data management tasks, they have limited storage and computational resources compared to the Cloud. This motivates intelligent data selection methods capable of deciding which of the collected data should be kept locally, based on the requests of end users and applications. In this paper, we devise a mechanism in which edge nodes learn their own data selection filters and decide the distributed allocation of newly collected data to their peers and/or the Cloud when these data do not conform to the local filters. Our mechanism intends to postpone final decisions on transferring data to the Cloud (e.g., data centers), keeping relevant data as close to end users/applications, and for as long, as possible. The proposed mechanism derives a data-selection map across edge nodes by learning specific data sub-spaces, which facilitates the placement of processing tasks (e.g., analytics queries). This is critical when targeting near-real-time decision making and minimizing every part of the task allocation procedure. We evaluate and compare our approach against baselines and schemes from the literature, showcasing its applicability in pervasive edge computing environments.
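A minimal sketch of the selectivity idea, under stated assumptions: each edge node summarises its local data sub-space as a per-feature interval filter, keeps conforming tuples, and reallocates non-conforming tuples to a matching peer before falling back to the Cloud. The interval filter and the routing rule are illustrative simplifications, not the paper's exact mechanism.

```python
import numpy as np

class EdgeNode:
    def __init__(self, name, history):
        self.name = name
        self.store = []
        # Learn the local sub-space as per-dimension [min, max] bounds.
        self.low = history.min(axis=0)
        self.high = history.max(axis=0)

    def matches(self, x):
        return bool(np.all(x >= self.low) and np.all(x <= self.high))

    def ingest(self, x, peers):
        if self.matches(x):
            self.store.append(x)        # conforms: keep data at this node
            return self.name
        for p in peers:                 # try to keep data at the edge
            if p.matches(x):
                p.store.append(x)
                return p.name
        return "cloud"                  # last resort: ship to the Cloud

rng = np.random.default_rng(1)
a = EdgeNode("A", rng.uniform(0, 1, size=(100, 2)))
b = EdgeNode("B", rng.uniform(2, 3, size=(100, 2)))
print(a.ingest(np.array([0.5, 0.5]), [b]))   # kept locally on A
print(a.ingest(np.array([2.5, 2.5]), [b]))   # reallocated to peer B
print(a.ingest(np.array([9.0, 9.0]), [b]))   # conforms nowhere: Cloud
```

The union of all nodes' filters is exactly the kind of data-selection map that lets a query planner send an analytics task to the node whose sub-space covers the query's predicate.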
Title: Data management and selectivity in collaborative pervasive edge computing
Pub Date : 2024-05-08 DOI: 10.1007/s00607-024-01257-2
Shuyan Cheng, Peng Li, Ruchuan Wang, He Xu
In Differentially Private Federated Learning (DPFL), gradient clipping and random noise addition disproportionately affect statistically heterogeneous data. As a consequence, DPFL has a disparate impact: the accuracy of models trained with DPFL tends to decrease more on such data. If the accuracy of the original model already decreases on heterogeneous data, DPFL may degrade it further. In this work, we study the utility-loss inequality due to differential privacy and compare the convergence of the private and non-private models. Specifically, we analyze the gradient differences caused by statistically heterogeneous data and explain how statistical heterogeneity relates to the effect of privacy on model convergence. In addition, we propose an improved DPFL algorithm, called R-DPFL, that achieves differential privacy at the same cost but with better utility. R-DPFL adjusts the gradient clipping value and the number of selected users at the beginning of training according to the degree of statistical heterogeneity of the data, and weakens the direct proportionality between differential privacy and the gradient difference, thereby reducing the impact of differential privacy on models trained on heterogeneous data. Our experimental evaluation shows the effectiveness of R-DPFL in achieving the same differential privacy cost with satisfactory utility. Our code is publicly available at https://github.com/chengshuyan/R-DPFL.
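The clip-and-noise step that causes this disparate impact is easy to show in isolation. The sketch below is a generic DP aggregation step (not R-DPFL itself; parameter values are illustrative): when client gradients are heterogeneous and exceed the clipping norm, they are shrunk before averaging, so the privatized mean drifts further from the true mean than it does for homogeneous clients.

```python
import numpy as np

def privatize(grads, clip, sigma, rng):
    """Clip each gradient to L2 norm `clip`, average, add Gaussian noise."""
    clipped = [g * min(1.0, clip / np.linalg.norm(g)) for g in grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(scale=sigma * clip / len(grads), size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
homogeneous = [np.array([1.0, 0.0]) for _ in range(10)]
heterogeneous = [np.array([5.0, 0.0]) if i < 5 else np.array([0.0, 5.0])
                 for i in range(10)]

# With clip=1.0, homogeneous updates survive almost intact, while the
# large, divergent gradients of non-IID clients are shrunk 5x before
# averaging: the true mean [2.5, 2.5] becomes roughly [0.5, 0.5].
print(privatize(homogeneous, clip=1.0, sigma=0.1, rng=rng))
print(privatize(heterogeneous, clip=1.0, sigma=0.1, rng=rng))
```

Raising the clipping value for heterogeneous rounds, as R-DPFL does, preserves more of the divergent gradients at the price of proportionally larger noise, which is exactly the trade-off the paper tunes.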
Title: Differentially private federated learning with non-IID data
Advances in smart healthcare systems have introduced the Internet of Things, enabling technologies that improve the quality of medical services. The main goals of these healthcare systems are data security, interaction between entities, efficient data transfer, and sustainability. However, the privacy of patient information is a fundamental problem in smart healthcare systems. Many authentication and key management protocols for healthcare systems exist in the literature, but their security still needs improvement, and even where security is achieved, fast communication and computation remain necessary. In this paper, we introduce a new secure, privacy-enhanced fast authentication and key management scheme that applies effectively to lightweight, resource-constrained devices in healthcare systems. The proposed framework supports quick authentication and efficient key management between entities while minimising computation and communication overheads. We verified the framework both formally and informally using BAN logic, Scyther simulation, and the Drozer tool. The simulation and tool verification show that the proposed system resists well-known attacks while reducing communication and computation costs compared to existing healthcare systems.
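For intuition, a lightweight challenge-response handshake of the kind such schemes build on can be sketched with the Python standard library: a medical device proves knowledge of a pre-shared key via an HMAC over fresh nonces, after which both sides derive a session key. This is a generic textbook construction under stated assumptions, not the paper's protocol, and a real deployment would use an authenticated key-derivation function and replay protection.

```python
import hmac
import hashlib
import secrets

PSK = secrets.token_bytes(32)                      # provisioned pre-shared key

def respond(psk, challenge, nonce):
    """Device-side proof of key possession over both parties' nonces."""
    return hmac.new(psk, challenge + nonce, hashlib.sha256).digest()

# Gateway challenges the device; the device answers with an HMAC tag.
challenge = secrets.token_bytes(16)                # gateway nonce
device_nonce = secrets.token_bytes(16)             # device nonce
tag = respond(PSK, challenge, device_nonce)

# Gateway verifies in constant time, then both sides derive a session key
# bound to this run's nonces (illustrative derivation only).
assert hmac.compare_digest(tag, respond(PSK, challenge, device_nonce))
session_key = hashlib.sha256(PSK + challenge + device_nonce).digest()
print("session key established:", len(session_key), "bytes")
```

Only symmetric primitives (HMAC, SHA-256) appear here, which is why this style of handshake suits resource-constrained IoMT devices better than certificate-based TLS handshakes.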
Title: Secure privacy-enhanced fast authentication and key management for IoMT-enabled smart healthcare systems
Sriramulu Bojjagani, Denslin Brabin, Kalai Kumar, Neeraj Kumar Sharma, Umamaheswararao Batta
Pub Date : 2024-05-07 DOI: 10.1007/s00607-024-01291-0
Pub Date : 2024-05-05 DOI: 10.1007/s00607-024-01292-z
Amro Al-Said Ahmad, Lamis F. Al-Qora’n, Ahmad Zayed
Cloud computing is one of the most popular models for providing computing resources today. Modern dynamic platforms are built to take advantage of the various resources offered by service providers, and ensuring the performance and availability of those resources and services is a crucial problem. Any software system may contain faults that propagate to cause failures. Faults with the potential to contribute to failures are critical because they impair performance and delay responses, which is regarded as a dependability problem. To ensure that critical faults are discovered as soon as possible, their impact on the system must be tested. This empirical study examines the performance and dependability of cloud-native systems using fault injection, one of the chaos engineering techniques. The study explores the effects of injecting various delay times into two cloud-native applications under diverse user loads. The performance of the applications under various numbers of users is measured against these delays, which in turn reflects the dependability of those systems. First, the systems' architectures were identified, and two applications that utilise and incorporate cloud-native services were chosen: a serverless application with two Lambda functions and a containerised microservices application. Second, faults were injected to quantify performance attributes such as throughput and latency. The results of several controlled experiments carried out in real-world cloud environments provide exploratory empirical data, which supported the comparisons and statistical analysis we used to characterise application behaviour under stress. A typical result of this investigation is an overall reduction in performance, embodied in increased latency as delays are injected. A notable result, however, is that at a particular delay, defects and availability problems appear abruptly. These findings highlight the value of chaos engineering in general, and fault injection in particular, for assessing the dependability of cloud-native applications and for finding unpredicted failures that can arise quickly from defects that were not expected to propagate and cause dependability issues.
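The experimental loop described above, injecting a delay fault into a service call and measuring latency and throughput under a given user load, can be sketched as follows. The dummy service, delay values, and sequential load driver are illustrative assumptions, not the study's applications or tooling.

```python
import time

def service():
    return "ok"                       # stand-in for a cloud-native endpoint

def call_with_fault(delay_s):
    """Inject a fixed-delay fault, invoke the service, return (result, latency)."""
    start = time.perf_counter()
    time.sleep(delay_s)               # the injected chaos-engineering fault
    result = service()
    return result, time.perf_counter() - start

def run_load(users, delay_s):
    """Drive `users` sequential requests; return (mean latency, throughput)."""
    t0 = time.perf_counter()
    latencies = [call_with_fault(delay_s)[1] for _ in range(users)]
    elapsed = time.perf_counter() - t0
    return sum(latencies) / users, users / elapsed

for delay in (0.0, 0.01):
    lat, thr = run_load(users=20, delay_s=delay)
    print(f"delay={delay:.2f}s  mean latency={lat:.4f}s  throughput={thr:.1f} req/s")
```

Sweeping the delay while watching for the point where latency growth turns into outright errors is the essence of locating the abrupt availability failures the study reports.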
Title: Exploring the impact of chaos engineering with various user loads on cloud native applications: an exploratory empirical study
Pub Date : 2024-04-22 DOI: 10.1007/s00607-024-01288-9
Xiu Susie Fang, Xianzhi Wang, Quan Z. Sheng, Lina Yao
Title: Generalizing truth discovery by incorporating multi-truth features
Pub Date : 2024-04-18 DOI: 10.1007/s00607-024-01287-w
Hussam N. Fakhouri, Feras M. Awaysheh, Sadi Alawadi, Mohannad Alkhalaileh, Faten Hamad
Swarm intelligence (SI) algorithms are a class of Artificial Intelligence (AI) optimization metaheuristics used for solving complex optimization problems. A key challenge in solving such problems is maintaining the balance between exploration and exploitation so as to find the global optimum and avoid local minima. This paper proposes an innovative SI algorithm called the Four Vector Intelligent Metaheuristic (FVIM) to address this problem. FVIM's search strategy is guided by the four top-performing leaders within a swarm, ensuring a balanced exploration-exploitation trade-off in the search space, avoiding local minima, and mitigating slow convergence. The efficacy of FVIM is evaluated through extensive experiments on two datasets, using both qualitative and quantitative statistical measurements. One dataset contains twenty-three well-known single-objective optimization functions, including fixed-dimensional and multi-modal functions, while the other comprises the CEC2017 functions. Additionally, the Wilcoxon test was computed to validate the significance of the results. The results illustrate FVIM's effectiveness in addressing diverse optimization challenges. Moreover, FVIM has been successfully applied to engineering design problems, such as welded-beam and truss design.
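A four-leader search step can be illustrated on the sphere function. The update rule below (pulling each agent toward the mean of the four best agents with a decaying random perturbation) is an assumption made for illustration; the paper defines FVIM's actual update equations.

```python
import numpy as np

def sphere(x):
    """Classic single-objective benchmark; global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_agents, iters = 5, 20, 200
swarm = rng.uniform(-5, 5, size=(n_agents, dim))

for t in range(iters):
    fitness = np.array([sphere(x) for x in swarm])
    leaders = swarm[np.argsort(fitness)[:4]]       # the four top performers
    a = 1.0 - t / iters                            # shrinking exploration factor
    for i in range(n_agents):
        # Pull each agent halfway toward the leaders' mean, plus a
        # decaying random perturbation for exploration.
        step = leaders.mean(axis=0) - swarm[i]
        swarm[i] += 0.5 * step + a * rng.normal(scale=0.1, size=dim)

best = min(sphere(x) for x in swarm)
print("best fitness:", best)
```

Using four leaders rather than one keeps the pull direction from collapsing onto a single possibly premature solution, which is one common way metaheuristics balance exploration against exploitation.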
Title: Four vector intelligent metaheuristic for data optimization