A hybrid approach of ALNS with alternative initialization and acceptance mechanisms for capacitated vehicle routing problems
Pub Date: 2024-07-01 | DOI: 10.1007/s10586-024-04643-9
Yiğit Çağatay Kuyu, Fahri Vatansever
The vehicle routing problem (VRP) with capacity constraints is a challenging problem that falls into the category of non-deterministic polynomial-time hard (NP-hard) problems. Finding an optimal solution is difficult because of the enormous number of possible route combinations and constraints. Adaptive Large Neighborhood Search (ALNS) has been widely employed to solve VRPs, searching for optimal solutions with a variety of dynamic destroy and repair operators that gradually improve an initial solution. This study investigates six alternative initialization mechanisms and one distinct acceptance criterion for ALNS, since the selection of the initial solution is a crucial factor affecting how efficiently ALNS searches feasible regions. Combining ALNS with these procedures yields seven hybrid variants. To evaluate the performance of the initialization mechanisms and the acceptance criterion, 50 capacitated vehicle routing benchmark instances are employed; high-dimensional problems are also included for a more comprehensive analysis. The improvement in solution accuracy achieved by each variant is reported.
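The destroy-and-repair loop the abstract refers to follows a standard template. Below is a minimal sketch of that generic ALNS skeleton, assuming user-supplied operator callables; the adaptive weight update and the simulated-annealing-style acceptance rule are illustrative choices, not the specific initialization mechanisms or acceptance criterion studied in the paper.

```python
import math
import random

def alns(initial, cost, destroy_ops, repair_ops,
         iters=1000, temp=100.0, cooling=0.995):
    """Generic ALNS: pick destroy/repair operators by adaptive weights,
    accept worse candidates with a simulated-annealing probability."""
    current = best = initial
    weights = {op: 1.0 for op in destroy_ops + repair_ops}
    for _ in range(iters):
        destroy = random.choices(destroy_ops,
                                 [weights[o] for o in destroy_ops])[0]
        repair = random.choices(repair_ops,
                                [weights[o] for o in repair_ops])[0]
        candidate = repair(destroy(current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            reward = 2.0 if delta < 0 else 1.0  # improving operators earn more
            for op in (destroy, repair):        # smoothed weight update
                weights[op] = 0.8 * weights[op] + 0.2 * reward
        if cost(current) < cost(best):
            best = current
        temp *= cooling                         # cool the acceptance rule
    return best
```

The paper's observation that the initial solution matters corresponds to the `initial` argument: every iteration perturbs the incumbent, so a poor starting point can steer the whole search.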
{"title":"A hybrid approach of ALNS with alternative initialization and acceptance mechanisms for capacitated vehicle routing problems","authors":"Yiğit Çağatay Kuyu, Fahri Vatansever","doi":"10.1007/s10586-024-04643-9","DOIUrl":"https://doi.org/10.1007/s10586-024-04643-9","url":null,"abstract":"<p>The vehicle routing problem (VRP) with capacity constraints is a challenging problem that falls into the category of non-deterministic polynomial-time hard (NP-hard) problems. Finding an optimal solution to this problem is difficult as it involves numerous possible route combinations and constraints. The Adaptive Large Neighborhood Search (ALNS) has been widely employed to solve VRPs by searching for optimal solutions using a variety of dynamic destroy and repair operators, which gradually improve the initial solution. This study investigates six alternative initialization mechanisms and one distinct acceptance criterion for ALNS as the selection of an initial solution in ALNS is a crucial factor affecting the efficiency of the search for feasible regions. The process combines ALNS with the aforementioned procedures, resulting in a hybrid of seven methods. To evaluate the performance of the initialization mechanism and acceptance criterion in ALNS, 50 capacitated vehicle routing benchmark instances are employed. High-dimensional problems are also included for more comprehensive analysis. The improvement in the accuracy of the solutions achieved by each variant is reported.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on privacy-preserving federated learning against poisoning attacks
Pub Date: 2024-07-01 | DOI: 10.1007/s10586-024-04629-7
Feng Xia, Wenhao Cheng
Federated learning (FL) is designed to protect the privacy of participants by not allowing direct access to their local datasets and training processes. This limitation hinders the server's ability to verify the authenticity of the model updates sent by participants, making FL vulnerable to poisoning attacks. In addition, gradients exchanged during the FL process can reveal private information about participants' local datasets. There is, however, a tension between improving robustness against poisoning attacks and preserving participants' privacy: privacy-preserving techniques aim to make participants' data indistinguishable from each other, which hinders the detection of abnormal updates based on similarity, so enhancing both aspects simultaneously is challenging. The growing concern for data security and privacy protection motivated this survey. We investigate existing privacy-preserving defense strategies against poisoning attacks in FL. First, we introduce the two major classes of poisoning attacks: data poisoning and model poisoning. Second, we study plaintext-based defense strategies and classify them into two categories: poisoning tolerance and poisoning detection. Third, we investigate how privacy techniques can be combined with traditional detection strategies to defend against poisoning attacks while protecting participants' privacy. Finally, we discuss the open challenges in this area of security and privacy.
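The tolerance/detection split the survey draws can be made concrete with a toy aggregation rule. Below is a minimal sketch, assuming flattened update vectors, that combines a cosine-similarity filter (a detection scheme) with coordinate-wise median aggregation (a tolerance scheme); the threshold and both rules are illustrative, not drawn from the survey.

```python
import numpy as np

def filter_and_aggregate(updates, threshold=0.0):
    """Detection: drop updates whose cosine similarity to the mean update
    direction falls below a threshold. Tolerance: aggregate the survivors
    with the coordinate-wise median instead of the mean."""
    updates = np.asarray(updates)                 # (n_clients, n_params)
    mean_dir = updates.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir) + 1e-12
    sims = updates @ mean_dir / (np.linalg.norm(updates, axis=1) + 1e-12)
    kept = updates[sims >= threshold]
    if kept.size == 0:                            # fallback if all were filtered
        kept = updates
    return np.median(kept, axis=0)
```

The similarity test is exactly what encryption-based privacy protection frustrates: if updates are indistinguishable ciphertexts, `sims` cannot be computed in plaintext, which is the contradiction the survey examines.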
{"title":"A survey on privacy-preserving federated learning against poisoning attacks","authors":"Feng Xia, Wenhao Cheng","doi":"10.1007/s10586-024-04629-7","DOIUrl":"https://doi.org/10.1007/s10586-024-04629-7","url":null,"abstract":"<p>Federated learning (FL) is designed to protect privacy of participants by not allowing direct access to the participants’ local datasets and training processes. This limitation hinders the server’s ability to verify the authenticity of the model updates sent by participants, making FL vulnerable to poisoning attacks. In addition, gradients in FL process can reveal private information about the local dataset of the participants. However, there is a contradiction between improving robustness against poisoning attacks and preserving privacy of participants. Privacy-preserving techniques aim to make their data indistinguishable from each other, which hinders the detection of abnormal data based on similarity. It is challenging to enhance both aspects simultaneously. The growing concern for data security and privacy protection has inspired us to undertake this research and compile this survey. In this survey, we investigate existing privacy-preserving defense strategies against poisoning attacks in FL. First, we introduce two important classifications of poisoning attacks: data poisoning attack and model poisoning attack. Second, we study plaintext-based defense strategies and classify them into two categories: poisoning tolerance and poisoning detection. Third, we investigate how the combination of privacy techniques and traditional detection strategies can be achieved to defend against poisoning attacks while protecting the privacy of the participants. Finally, we also discuss the challenges faced in the area of security and privacy.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid deep learning based enhanced and reliable approach for VANET intrusion detection system
Pub Date: 2024-07-01 | DOI: 10.1007/s10586-024-04634-w
Atul Barve, Pushpinder Singh Patheja
Advances in autonomous transportation technologies have profoundly influenced the evolution of daily commuting and travel. These innovations rely heavily on seamless connectivity, facilitated by applications within intelligent transportation systems that make effective use of Vehicular Ad-hoc Network (VANET) technology. However, the susceptibility of VANETs to malicious activities necessitates robust security measures, notably intrusion detection systems (IDS). This article proposes a model for an IDS capable of collaboratively collecting network data from both vehicular nodes and Roadside Units (RSUs). The proposed IDS uses the VANET distributed denial-of-service dataset and applies K-means clustering to find distinct groups in the simulated VANET architecture. To mitigate the risk of model overfitting, the test data was meticulously curated to ensure its divergence from the training set. A hybrid deep learning approach is then proposed that integrates Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) networks, achieving training, testing, and validation accuracies of 99.56%, 99.49%, and 99.65%, respectively. Compared with the existing state of the art in the same domain, the proposed method raises accuracy by between 0.20% and 4.65%.
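For readers unfamiliar with the CNN-BiLSTM combination, here is a minimal PyTorch sketch of such a hybrid. The layer sizes, kernel width, pooling, and two-class head are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class CnnBiLstmIDS(nn.Module):
    """A 1-D convolution extracts local traffic-feature patterns; a
    bidirectional LSTM models their temporal context; a linear head
    produces class logits (attack vs. normal)."""
    def __init__(self, n_features, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))     # -> (batch, 32, time // 2)
        out, _ = self.bilstm(z.transpose(1, 2))
        return self.head(out[:, -1])         # logits from the last step
```

The design rationale for such hybrids is that the convolution captures local per-packet or per-window patterns while the bidirectional recurrence relates them across the whole flow in both directions.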
{"title":"A hybrid deep learning based enhanced and reliable approach for VANET intrusion detection system","authors":"Atul Barve, Pushpinder Singh Patheja","doi":"10.1007/s10586-024-04634-w","DOIUrl":"https://doi.org/10.1007/s10586-024-04634-w","url":null,"abstract":"<p>Advances in autonomous transportation technologies have profoundly influenced the evolution of daily commuting and travel. These innovations rely heavily on seamless connectivity, facilitated by applications within intelligent transportation systems that make effective use of vehicular Ad- hoc Network (VANET) technology. However, the susceptibility of VANETs to malicious activities necessitates the implementation of robust security measures, notably intrusion detection systems (IDS). The article proposed a model for an IDS capable of collaboratively collecting network data from both vehicular nodes and Roadside Units (RSUs). The proposed IDS makes use of the VANET distributed denial of service dataset. Additionally, the proposed IDS uses a K-means clustering method to find clear groups in the simulated VANET architecture. To mitigate the risk of model overfitting, we meticulously curated test data, ensuring its divergence from the training set. Consequently, a hybrid deep learning approach is proposed by integrating Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) networks. which results in the highest training, testing, and validation accuracy of 99.56, 99.49, and 99.65% respectively. The results of the proposed methodology is compared with the existing state-of-the-art in the same domain, the accuracy of the proposed method is raised by maximum of 4.65% and minimum by 0.20%.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"239 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved scheduling with advantage actor-critic for Storm workloads
Pub Date: 2024-06-29 | DOI: 10.1007/s10586-024-04640-y
Gaoqiang Dong, Jia Wang, Mingjing Wang, Tingting Su
Various resources are the essential elements of data centers, and their utilization is vital to resource managers. Given the persistence, periodicity, and spatial-temporal dependence of stream workloads, a new Storm scheduler based on Advantage Actor-Critic is proposed to improve resource utilization and minimize completion time. A new weighted embedding with a Graph Neural Network is designed to capture the features of a job comprehensively, including the dependencies, types, and positions of its tasks. An improved Advantage Actor-Critic that integrates task selection and executor assignment schedules tasks to executors for better resource utilization, after which the status of tasks and executors is updated for the next scheduling round. Experimental results show that, compared to existing methods, the proposed Storm scheduler improves resource utilization and reduces completion time by almost 17% on the TPC-H dataset and by almost 25% on the Alibaba dataset.
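The learning core of such a scheduler is the standard advantage actor-critic update, sketched minimally below with the GNN job embedding and the Storm-specific state abstracted away; the tensor shapes and discount factor are assumptions, not details from the paper.

```python
import torch

def a2c_losses(log_probs, values, rewards, gamma=0.99):
    """log_probs: list of 0-dim tensors (log pi(a|s)); values: list of
    critic outputs, each of shape (1,); rewards: list of floats.
    The critic's estimate serves as a baseline; the actor is pushed in
    proportion to the advantage (return minus baseline)."""
    returns, g = [], 0.0
    for r in reversed(rewards):              # discounted returns, back to front
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    values = torch.stack(values).squeeze(-1)
    log_probs = torch.stack(log_probs)
    advantage = returns - values.detach()    # how much better than expected
    actor_loss = -(log_probs * advantage).mean()
    critic_loss = (returns - values).pow(2).mean()
    return actor_loss, critic_loss
```

In a scheduling setting the action would be a (task, executor) pair and the reward would penalize completion time, which is how the described scheduler couples task selection with executor assignment.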
{"title":"An improved scheduling with advantage actor-critic for Storm workloads","authors":"Gaoqiang Dong, Jia Wang, Mingjing Wang, Tingting Su","doi":"10.1007/s10586-024-04640-y","DOIUrl":"https://doi.org/10.1007/s10586-024-04640-y","url":null,"abstract":"<p>Various resources as the essential elements of data centers, and their utilization is vital to resource managers. In terms of the persistence, the periodicity and the spatial-temporal dependence of stream workload, a new Storm scheduler with Advantage Actor-Critic is proposed to improve resource utilization for minimizing the completion time. A new weighted embedding with a Graph Neural Network is designed to depend on the features of a job comprehensively, which includes the dependence, the types and the positions of tasks in a job. An improved Advantage Actor-Critic integrating task chosen and executor assignment is proposed to schedule tasks to executors in order to better resource utilization. Then the status of tasks and executors are updated for the next scheduling. Compared to existing methods, experimental results show that the proposed Storm scheduler improves resource utilization. The completion time is reduced by almost 17% on the TPC-H data set and reduced by almost 25% on the Alibaba data set.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"239 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
File chunking towards on-chain storage: a blockchain-based data preservation framework
Pub Date: 2024-06-29 | DOI: 10.1007/s10586-024-04646-6
Muhammed Tmeizeh, Carlos Rodríguez-Domínguez, María Visitación Hurtado-Torres
The current wave of decentralized systems, powered by blockchain technology, can act as data vaults: once data is stored, it stays preserved, which makes blockchain one of the most promising safe and immutable storage methods. The authors of this research propose an on-chain storage framework that stores files inside blockchain transactions using file transforming, chunking, and encoding techniques. The study investigates the performance of on-chain file storage in a simulated blockchain network environment, deploying test files of varying sizes. Performance metrics, including the time consumed in chunking, encoding, and distributing chunks among block transactions, were measured and analyzed to assess the framework's performance. The results show that selecting an appropriate chunk size significantly influences the overall performance of the system. We also explore the implications of these findings and offer suggestions for improving performance within the framework.
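A minimal sketch of the chunk-and-reassemble step such a framework needs is shown below. The fixed chunk size and base64 text payloads are illustrative assumptions about how file bytes might be fitted into transaction fields, not the paper's transforming and encoding scheme.

```python
import base64

def chunk_file(path, chunk_size=4096):
    """Split a file into fixed-size chunks, each encoded as a text
    payload small enough to embed in one transaction."""
    chunks, index = [], 0
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            chunks.append({"index": index,   # preserves reassembly order
                           "payload": base64.b64encode(block).decode("ascii")})
            index += 1
    return chunks

def reassemble(chunks):
    ordered = sorted(chunks, key=lambda c: c["index"])
    return b"".join(base64.b64decode(c["payload"]) for c in ordered)
```

The chunk-size sensitivity the paper reports is visible even here: smaller chunks mean more transactions and more per-chunk overhead, while larger chunks may exceed what a single transaction can carry.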
{"title":"File chunking towards on-chain storage: a blockchain-based data preservation framework","authors":"Muhammed Tmeizeh, Carlos Rodríguez-Domínguez, María Visitación Hurtado-Torres","doi":"10.1007/s10586-024-04646-6","DOIUrl":"https://doi.org/10.1007/s10586-024-04646-6","url":null,"abstract":"<p>The growing popularity of the most current wave of decentralized systems, powered by blockchain technology, which act as data vaults and preserve data, ensures that, once stored, it stays preserved, considered to be one of the most promising safe and immutable storage methods. The authors of this research suggest an on-chain storage framework that stores files inside blockchain transactions using file transforming, chunking, and encoding techniques. This study investigates the performance of on-chain file storage using a simulated network blockchain environment. Test files of varying sizes were deployed. Performance metrics, including consumed time in chunking, encoding, and distributing chunks among block transactions, were measured and analyzed. An analysis of the collected data was conducted to assess the framework’s performance. The result showed that selecting the appropriate chunk size significantly influences the overall performance of the system. We also explored the implications of our findings and offered suggestions for improving performance within the framework.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing security and scalability by AI/ML workload optimization in the cloud
Pub Date: 2024-06-28 | DOI: 10.1007/s10586-024-04641-x
Sabina Priyadarshini, Tukaram Namdev Sawant, Gitanjali Bhimrao Yadav, J. Premalatha, Sanjay R. Pawar
The pervasive adoption of Artificial Intelligence (AI) and Machine Learning (ML) applications has exponentially increased the demand for efficient resource allocation, workload scheduling, and parallel computing capabilities in cloud environments. This research addresses the critical need to enhance both the scalability and the security of AI/ML workloads in cloud computing settings. The study emphasizes the optimization of resource allocation strategies to accommodate the diverse requirements of AI/ML workloads: efficient allocation ensures that computational resources are used judiciously, avoiding the bottlenecks and latency issues that could hinder the performance of AI/ML applications. The research explores advanced parallel computing techniques to harness the full potential of cloud infrastructure, enhancing the speed and efficiency of AI/ML computations. Because robust security measures are crucial to safeguard sensitive data and models processed in the cloud, the research delves into secure multi-party computation and encryption techniques, alongside scheduling and allocation methods such as the hybrid HEFT-PSO-GA algorithm, a heuristic function for the Adaptive Batch Stream Scheduling (ABSS) module, parallel resource allocation, and the Kuhn–Munkres algorithm, all tailored for AI/ML workloads to ensure confidentiality and integrity throughout the computation lifecycle. To validate the proposed methodologies, the research employs extensive simulations and real-world experiments; the proposed ABSS_SSMM method achieves the highest accuracy and throughput, at 98% and 94%, respectively. The contributions of this research extend to the broader cloud computing and AI/ML communities. By providing scalable and secure solutions, the study aims to empower cloud service providers, enterprises, and researchers to adopt AI/ML technologies with confidence. The findings are anticipated to inform the design and implementation of next-generation cloud platforms that seamlessly support the evolving landscape of AI/ML applications, fostering innovation and driving the adoption of intelligent technologies in diverse domains.
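Of the named components, the Kuhn–Munkres (Hungarian) algorithm is a well-defined building block for the allocation step. A minimal sketch using SciPy's implementation follows; the task-to-resource cost matrix is hypothetical, made up purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical matrix: cost[i][j] = cost of running task i on resource j.
cost = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])

rows, cols = linear_sum_assignment(cost)  # Kuhn-Munkres / Hungarian method
print(list(zip(rows, cols)))              # optimal task -> resource pairs
print(cost[rows, cols].sum())             # minimal total assignment cost
```

In a workload-optimization pipeline the matrix entries would encode estimated execution cost or latency, and the solver returns the assignment minimizing their sum in polynomial time.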
{"title":"Enhancing security and scalability by AI/ML workload optimization in the cloud","authors":"Sabina Priyadarshini, Tukaram Namdev Sawant, Gitanjali Bhimrao Yadav, J. Premalatha, Sanjay R. Pawar","doi":"10.1007/s10586-024-04641-x","DOIUrl":"https://doi.org/10.1007/s10586-024-04641-x","url":null,"abstract":"<p>The pervasive adoption of Artificial Intelligence (AI) and Machine Learning (ML) applications has exponentially increased the demand for efficient resource allocation, workload scheduling, and parallel computing capabilities in cloud environments. This research addresses the critical need for enhancing both the scalability and security of AI/ML workloads in cloud computing settings. The study emphasizes the optimization of resource allocation strategies to accommodate the diverse requirements of AI/ML workloads. Efficient resource allocation ensures that computational resources are utilized judiciously, avoiding bottlenecks and latency issues that could hinder the performance of AI/ML applications. The research explores advanced parallel computing techniques to harness the full possible cloud infrastructure, enhancing the speed and efficiency of AI/ML computations. The integration of robust security measures is crucial to safeguard sensitive data and models processed in the cloud. The research delves into secure multi-party computation and encryption techniques like the Hybrid Heft Pso Ga algorithm, Heuristic Function for Adaptive Batch Stream Scheduling Module (ABSS) and allocation of resources parallel computing and Kuhn–Munkres algorithm tailored for AI/ML workloads, ensuring confidentiality and integrity throughout the computation lifecycle. To validate the proposed methodologies, the research employs extensive simulations and real-world experiments. The proposed ABSS_SSMM method achieves the highest accuracy and throughput values of 98% and 94%, respectively. The contributions of this research extend to the broader cloud computing and AI/ML communities. By providing scalable and secure solutions, the study aims to empower cloud service providers, enterprises, and researchers to leverage AI/ML technologies with confidence. The findings are anticipated to inform the design and implementation of next-generation cloud platforms that seamlessly support the evolving landscape of AI/ML applications, fostering innovation and driving the adoption of intelligent technologies in diverse domains.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"155 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A weighted multi-view clustering via sparse graph learning
Pub Date: 2024-06-28 | DOI: 10.1007/s10586-024-04636-8
Jie Zhou, Runxin Zhang
Multi-view clustering exploits the diversity of different views and fuses them to produce a more accurate and robust partition than single-view clustering. A key problem in multi-view clustering research is how to weight each view reasonably according to its contribution. In this paper, we propose a weighted multi-view clustering model via sparse graph learning to cope with the allocation of different views. The idea is to assign different, rather than equal, view weights in order to learn a high-quality shared similarity matrix. By assigning a weight to each view, the proposed method accounts for the heterogeneous clustering capacity of different views during fusion, so that the distinctive features of each view are fully exploited, improving multi-view clustering performance. Moreover, the proposed method obtains cluster indicators directly by imposing low-rank constraints, without any post-processing. In addition, because the model is built on a sparse graph, outliers and noise in each view are handled well and the robustness of the algorithm is effectively guaranteed. Finally, extensive experiments on benchmark datasets of different sizes show that the performance of our algorithm is quite satisfactory. The code of our proposed method is publicly available at https://github.com/zhoujie05/A-weighted-multi-view-clustering-via-sparse-graph-learning.
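The weighted-fusion idea can be illustrated with a minimal sketch: per-view similarity graphs combined with view weights into one shared graph, then clustered spectrally. Here the weights are fixed inputs and the graph is dense, whereas the paper's model learns the weights and a sparse, low-rank graph jointly; this sketch shows only the fusion principle.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def weighted_fusion_clustering(views, weights, n_clusters):
    """views: list of (n_samples, n_features_v) arrays, one per view.
    Fuse per-view RBF similarity graphs with the given view weights,
    then cluster the shared graph spectrally."""
    n = views[0].shape[0]
    fused = np.zeros((n, n))
    for X, w in zip(views, weights):
        fused += w * rbf_kernel(X)           # weighted sum of view graphs
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="precomputed")
    return model.fit_predict(fused)
```

Giving an informative view a larger `w` lets its similarity structure dominate the fused graph, which is the intuition behind weighting views by contribution rather than equally.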
{"title":"A weighted multi-view clustering via sparse graph learning","authors":"Jie Zhou, Runxin Zhang","doi":"10.1007/s10586-024-04636-8","DOIUrl":"https://doi.org/10.1007/s10586-024-04636-8","url":null,"abstract":"<p>Multi-view clustering considers the diversity of different views and fuses these views to produce a more accurate and robust partition than single-view clustering. It is a key problem of multi-view clustering research to allocate each view reasonably based on its contribution value. In this paper, we propose a weighted multi-view clustering model via sparse graph learning to cope with allocation of different views. The proposed idea is to assign different view weights instead of equal view weights to learn a high-quality shared similarity matrix for multi-view clustering. In our new proposed method, it can consider the clustering capacity heterogeneity of different views in fusion by assigning a weight for each view so that each view special feature are fully excavated, and improve the performance of multi-view clustering. Moreover, our proposed method can directly obtained cluster indicators by imposing low rank constraints without any post-processing operations. In addition, our model is proposed based on sparse graph, so that the outliers and noise in each view data are well handled and the robustness of the algorithm is effectively guaranteed. Finally, numerous experimental results are conducted on different sizes benchmark datasets, and show that the performance of our algorithm is quite satisfactory. The code of our proposed method is publicly available at https://github.com/zhoujie05/A-weighted-multi-view-clustering-via-sparse-graph-learning.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MTV-SCA: multi-trial vector-based sine cosine algorithm
Pub Date: 2024-06-28 | DOI: 10.1007/s10586-024-04602-4
Mohammad H. Nadimi-Shahraki, Shokooh Taghian, Danial Javaheri, Ali Safaa Sadiq, Nima Khodadadi, Seyedali Mirjalili
The sine cosine algorithm (SCA) is a metaheuristic that employs the characteristics of the sine and cosine trigonometric functions. SCA's deficiencies include a tendency to get trapped in local optima, an exploration-exploitation imbalance, and poor accuracy, which limit its effectiveness on complex optimization problems. To address these limitations, this study proposes a multi-trial vector-based sine cosine algorithm (MTV-SCA). In MTV-SCA, a sufficient number of search strategies incorporating three control parameters are adapted through a multi-trial vector (MTV) approach to achieve specific objectives during the search process. The major contribution of this study is employing four distinct search strategies, each adapted to preserve the equilibrium between exploration and exploitation and to avoid premature convergence during optimization. The strategies utilize different sinusoidal and cosinusoidal parameters to improve the algorithm's performance. The effectiveness of MTV-SCA was evaluated on the CEC 2018 benchmark functions and compared to state-of-the-art and well-established algorithms, CEC 2017 winner algorithms, and recent optimization algorithms. The results demonstrate that MTV-SCA outperforms the traditional SCA and other optimization algorithms in terms of convergence speed, accuracy, and the capability to avoid premature convergence. Moreover, the Friedman and Wilcoxon signed-rank tests were employed to statistically analyze the experimental results, validating that MTV-SCA significantly surpasses the comparative algorithms. The real-world applicability of the algorithm is also demonstrated by optimizing six non-convex constrained engineering design problems. The experimental results indicate that MTV-SCA can effectively handle complex optimization challenges.
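For context, the canonical SCA position update that MTV-SCA builds on is sketched below; the multi-trial-vector strategies themselves are not shown. As in the original SCA, r1 decreases linearly over iterations, shifting the search from exploration toward exploitation.

```python
import numpy as np

def sca_step(X, best, t, T, a=2.0):
    """One basic SCA update: X <- X + r1*sin(r2)*|r3*P - X| with
    probability 0.5, else the cosine form; P is the best solution."""
    r1 = a - t * (a / T)                      # linearly shrinking step scale
    r2 = 2 * np.pi * np.random.rand(*X.shape)
    r3 = 2 * np.random.rand(*X.shape)
    r4 = np.random.rand(*X.shape)             # sine/cosine switch
    step = np.abs(r3 * best - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,
                    X + r1 * np.cos(r2) * step)
```

The local-optima trapping the paper targets stems from every individual being pulled toward the same destination P; the multi-trial-vector approach diversifies this by applying different search strategies to different sub-populations.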
{"title":"MTV-SCA: multi-trial vector-based sine cosine algorithm","authors":"Mohammad H. Nadimi-Shahraki, Shokooh Taghian, Danial Javaheri, Ali Safaa Sadiq, Nima Khodadadi, Seyedali Mirjalili","doi":"10.1007/s10586-024-04602-4","DOIUrl":"https://doi.org/10.1007/s10586-024-04602-4","url":null,"abstract":"<p>The sine cosine algorithm (SCA) is a metaheuristic algorithm that employs the characteristics of sine and cosine trigonometric functions. SCA’s deficiencies include a tendency to get trapped in local optima, exploration–exploitation imbalance, and poor accuracy, which limit its effectiveness in solving complex optimization problems. To address these limitations, a multi-trial vector-based sine cosine algorithm (MTV-SCA) is proposed in this study. In MTV-SCA, a sufficient number of search strategies incorporating three control parameters are adapted through a multi-trial vector (MTV) approach to achieve specific objectives during the search process. The major contribution of this study is employing four distinct search strategies, each adapted to preserve the equilibrium between exploration and exploitation and avoid premature convergence during optimization. The strategies utilize different sinusoidal and cosinusoidal parameters to improve the algorithm’s performance. The effectiveness of MTV-SCA was evaluated using benchmark functions of CEC 2018 and compared to state-of-the-art, well-established, CEC 2017 winner algorithms and recent optimization algorithms. The results demonstrate that the MTV-SCA outperforms the traditional SCA and other optimization algorithms in terms of convergence speed, accuracy, and the capability to avoid premature convergence. Moreover, the Friedman and Wilcoxon signed-rank tests were employed to statistically analyze the experimental results, validating that the MTV-SCA significantly surpasses other comparative algorithms. The real-world applicability of this algorithm is also demonstrated by optimizing six non-convex constrained optimization problems in engineering design. The experimental results indicate that MTV-SCA can effectively handle complex optimization challenges.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"290 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advanced cost-aware Max–Min workflow tasks allocation and scheduling in cloud computing systems
Pub Date: 2024-06-27 | DOI: 10.1007/s10586-024-04594-1
Mostafa Raeisi-Varzaneh, Omar Dakkak, Yousef Fazea, Mohammed Golam Kaosar
Cloud computing has emerged as an efficient distribution platform in modern distributed computing, offering scalability and flexibility. Task scheduling is one of its most crucial aspects: the scheduling mechanism aims to reduce cost and makespan and to determine which virtual machine (VM) should execute each task. The problem is widely acknowledged to be nondeterministic polynomial-time (NP) complete, necessitating the development of efficient solutions. This paper presents an innovative approach to task scheduling and allocation within cloud computing systems, focusing on improving both the efficiency and the cost-effectiveness of task execution, with specific emphasis on optimizing makespan and resource utilization. This is achieved through an Advanced Max–Min Algorithm, which builds upon traditional methodologies to significantly improve performance metrics such as makespan, waiting time, and resource utilization. The Max–Min algorithm was selected for its ability to strike a balance between task execution time and resource utilization, making it a suitable candidate for the challenges of cloud task scheduling. A further key contribution is the integration of a cost-aware algorithm into the scheduling framework, which manages task execution costs so that they align with user requirements while operating within the constraints of cloud service providers; the proposed method dynamically adjusts task allocation based on cost considerations, enhancing the overall economic efficiency of cloud deployments. The findings demonstrate that the proposed Advanced Max–Min Algorithm outperforms the traditional Max–Min, Min–Min, and SJF algorithms with respect to makespan, waiting time, and resource utilization.
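A minimal sketch of the classical Max–Min heuristic that the advanced variant extends: among unscheduled tasks, compute each task's best completion time, then commit the task whose best completion time is largest. The cost-aware adjustments described in the paper are not shown here.

```python
def max_min_schedule(exec_time, n_vms):
    """exec_time[i][j]: runtime of task i on VM j.
    Returns a task -> VM assignment built by the Max-Min rule."""
    ready = [0.0] * n_vms                     # when each VM becomes free
    unscheduled = set(range(len(exec_time)))
    assignment = {}
    while unscheduled:
        # best (earliest-finishing) VM for every unscheduled task
        best = {t: min(range(n_vms),
                       key=lambda j: ready[j] + exec_time[t][j])
                for t in unscheduled}
        # Max-Min: commit the task whose best completion time is LARGEST
        task = max(unscheduled,
                   key=lambda t: ready[best[t]] + exec_time[t][best[t]])
        vm = best[task]
        ready[vm] += exec_time[task][vm]
        assignment[task] = vm
        unscheduled.remove(task)
    return assignment
```

Scheduling the longest tasks first lets the many short tasks fill in around them, which is the balance between execution time and utilization that motivated choosing Max–Min as the base heuristic.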
{"title":"Advanced cost-aware Max–Min workflow tasks allocation and scheduling in cloud computing systems","authors":"Mostafa Raeisi-Varzaneh, Omar Dakkak, Yousef Fazea, Mohammed Golam Kaosar","doi":"10.1007/s10586-024-04594-1","DOIUrl":"https://doi.org/10.1007/s10586-024-04594-1","url":null,"abstract":"<p>Cloud computing has emerged as an efficient distribution platform in modern distributed computing offering scalability and flexibility. Task scheduling is considered as one of the main crucial aspects of cloud computing. The primary purpose of the task scheduling mechanism is to reduce the cost and makespan and determine which virtual machine (VM) needs to be selected to execute the task. It is widely acknowledged as a nondeterministic polynomial-time complete problem, necessitating the development of an efficient solution. This paper presents an innovative approach to task scheduling and allocation within cloud computing systems. Our focus lies on improving both the efficiency and cost-effectiveness of task execution, with a specific emphasis on optimizing makespan and resource utilization. This is achieved through the introduction of an Advanced Max–Min Algorithm, which builds upon traditional methodologies to significantly enhance performance metrics such as makespan, waiting time, and resource utilization. The selection of the Max–Min algorithm is rooted in its ability to strike a balance between task execution time and resource utilization, making it a suitable candidate for addressing the challenges of cloud task scheduling. Furthermore, a key contribution of this work is the integration of a cost-aware algorithm into the scheduling framework. This algorithm enables the effective management of task execution costs, ensuring alignment with user requirements while operating within the constraints of cloud service providers. The proposed method adjusts task allocation based on cost considerations dynamically. Additionally, the presented approach enhances the overall economic efficiency of cloud computing deployments. The findings demonstrate that the proposed Advanced Max–Min Algorithm outperforms the traditional Max–Min, Min–Min, and SJF algorithms with respect to makespan, waiting time, and resource utilization.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research and optimization of task scheduling algorithm based on heterogeneous multi-core processor
Pub Date: 2024-06-27 | DOI: 10.1007/s10586-024-04606-0
Junnan Liu, Yifan Liu, Yongkang Ding
Heterogeneous multi-core processors can switch between different types of cores to execute tasks, which offers greater scope for efficient system operation and improved computing power. Current research focuses on heterogeneous multiprocessor systems that pursue high performance or low power consumption to reduce system energy consumption. However, some studies have shown that excessive voltage reduction may increase transient failure rates, reducing system reliability. This paper studies the energy-optimal scheduling problem of heterogeneous multiprocessor systems (HMSS) with dynamic voltage and frequency scaling (DVFS) under minimum-time and reliability constraints, and proposes an improved wild horse optimization algorithm (OIWHO) that improves the efficiency of heterogeneous task scheduling and shortens task completion time. The algorithm uses opposition-based learning and chaos perturbation strategies, combined with crossover strategies, to balance exploration and exploitation, further improving the performance of OIWHO. Compared with previous work, the proposed algorithm has clear advantages over existing algorithms. Experimental results show that the average computing time of the OIWHO algorithm is 12.58%, 11.42%, 7.53%, 4.20%, and 3.21% faster than DRNN-BWO, PSO, GWO-GA, GACSH, and OIWOAH, respectively; especially on large-scale problems, our algorithm takes less time than the others.
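The opposition-based learning component mentioned above has a standard formulation, sketched minimally below: each candidate competes with its opposite point and the better one survives. The fitness function and bounds are placeholders, and this is only one ingredient, not the full OIWHO algorithm.

```python
import numpy as np

def opposition_step(population, lower, upper, fitness):
    """For each candidate x, evaluate the opposite point lower+upper-x
    and keep whichever has the better (lower) fitness."""
    opposite = lower + upper - population        # element-wise opposition
    f_pop = np.apply_along_axis(fitness, 1, population)
    f_opp = np.apply_along_axis(fitness, 1, opposite)
    keep_opp = (f_opp < f_pop)[:, None]          # broadcast per-row choice
    return np.where(keep_opp, opposite, population)
```

Sampling opposite points roughly doubles the coverage of the search space per generation at the cost of extra fitness evaluations, which is why it is commonly paired with perturbation strategies to escape local optima.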
{"title":"Research and optimization of task scheduling algorithm based on heterogeneous multi-core processor","authors":"Junnan Liu, Yifan Liu, Yongkang Ding","doi":"10.1007/s10586-024-04606-0","DOIUrl":"https://doi.org/10.1007/s10586-024-04606-0","url":null,"abstract":"<p>Heterogeneous multi-core processor has the ability to switch between different types of cores to perform tasks, which provides more space and possibility for realizing efficient operation of computer system and improving computer computing power. Current research focuses on heterogeneous multiprocessor systems with high performance or low power consumption to reduce system energy consumption. However, some studies have shown that excessive voltage reduction may lead to an increase in transient failure rates, reducing system reliability. This paper studies the energy optimal scheduling problem of HMSS with DVFS under the constraints of minimum time and reliability, and proposes an improved wild horse optimization algorithm (OIWHO), which improves the efficiency of heterogeneous task scheduling and shortens the task completion time. The algorithm uses the learning and chaos perturbation strategies based on opposition and crossover strategies to balance the search and utilization capabilities, and can further improve the performance of OIWHO. Compared with previous work, our proposed algorithm has more advantages than existing algorithms. Experimental results show that the average computing time of OIWHO algorithm is 12.58%, 11.42%, 7.53%, 4.20% and 3.21% faster than DRNN-BWO, PSO, GWO-GA, GACSH and OIWOAH, respectively. Especially when solving large-scale problems, our algorithm takes less time than other algorithms.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"111 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}