In the IIoT, billions of devices continually produce information that is extremely diverse, variable, and large-scale, presenting significant hurdles for interpretation and analysis. Additionally, issues with data transmission, scaling, computation, and storage can result in data anomalies that significantly affect IIoT applications. This work presents a novel anomaly detection framework for the IIoT in the context of the challenges posed by vast, heterogeneous, and complex data streams. This paper proposes a two-stage multi-variate approach composed of a long short-term memory (LSTM) network and a random forest (RF) classifier. Our approach leverages the LSTM's superior temporal pattern recognition capabilities in multi-variate time-series data and the exceptional classification accuracy of the RF model. By integrating the strengths of the LSTM and RF models, our method not only provides precise predictions but also effectively discriminates between anomalies and normal occurrences, even in imbalanced datasets. We evaluated our model on two real-world datasets comprising periodic and non-periodic, short-term, and long-term temporal dependencies. Comparative studies indicate that our proposed method outperforms well-established alternatives in anomaly detection, highlighting its potential application in the IIoT environment.
{"title":"PULSE: Proactive uncovering of latent severe anomalous events in IIoT using LSTM-RF model","authors":"Sangeeta Sharma, Priyanka Verma, Nitesh Bharot, Amish Ranpariya, Rakesh Porika","doi":"10.1007/s10586-024-04653-7","DOIUrl":"https://doi.org/10.1007/s10586-024-04653-7","url":null,"abstract":"<p>In the IIoT, billions of devices continually provide information that is extremely diverse, variable, and large-scale and presents significant hurdles for interpretation and analysis. Additionally, issues about data transmission, scaling, computation, and storage can result in data anomalies that significantly affect IIoT applications. This work presents a novel anomaly detection framework for the IIoT in the context of the challenges posed by vast, heterogeneous, and complex data streams. This paper proposes a two-staged multi-variate approach employing a composition of long short-term memory (LSTM) and a random forest (RF) Classifier. Our approach leverages the LSTM’s superior temporal pattern recognition capabilities in multi-variate time-series data and the exceptional classification accuracy of the RF model. By integrating the strengths of LSTM and RF models, our method provides not only precise predictions but also effectively discriminates between anomalies and normal occurrences, even in imbalanced datasets. We evaluated our model on two real-world datasets comprising periodic and non-periodic, short-term, and long-term temporal dependencies. Comparative studies indicate that our proposed method outperforms well-established alternatives in anomaly detection, highlighting its potential application in the IIoT environment.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The study of continuous natural or industrial phenomena in time and space calls for new kinds of wireless sensor networks. A virtual sensor network (VSN) is a wireless camera network intended to overcome the limitations of traditional wireless sensor networks in terms of the ability to store, process, and communicate data in a 3D region of interest. In this paper, we propose a Starlings Model-based Virtual Sensor Network for Coverage, Connectivity, and Data Communication (SM-VSN-3C) to ensure 3D coverage of temporally and spatially continuous 3D phenomena. Starlings are a good example of a VSN in nature. We therefore simulate the 3D movement of starlings in the sky, ensure the associated permanent coverage, and handle communication, using the behavioral model proposed by Reynolds to simulate flock movement. We demonstrate the efficiency of the proposed network (SM-VSN-3C) in terms of communication and continuous coverage in time and space (3D). When simulating large and dense VSNs, there are two challenges in terms of coverage and communication: How can a set of VSN-Starlings (VSN-Birds) be tracked efficiently in terms of coverage? In such a dense environment, how can a single starling be tracked in terms of communication and data routing?
{"title":"Sm-vsn-3c: a new Starlings model-based virtual sensor networks for coverage, connectivity, and data ccommunication","authors":"Adda Boualem, Marwane Ayaida, Cyril de Runz, Hisham Kholidy, Hichem Sedjelmaci","doi":"10.1007/s10586-024-04554-9","DOIUrl":"https://doi.org/10.1007/s10586-024-04554-9","url":null,"abstract":"<p>The study of continuous natural or industrial phenomena in time and space requires the emergence of new wireless sensor networks. A virtual sensor network (VSN) is a wireless camera network that appears to overcome the limitations of traditional wireless sensor networks in terms of the ability to store, process, and communicate data in a 3D region of interest. In this paper, we proposed a Starlings Model-based virtual Sensor Network for Coverage, Connectivity, and Data Communication (SM-VSN-3C) to ensure 3D coverage of temporally and spatially continuous 3D phenomena. Starlings are a good example of a VSN in nature. We therefore simulate the 3D movement of the stars in the sky, ensure the associated permanent coverage, and communicate with the Renault model. Use the behavioral model proposed by Reynolds to simulate herd movement. We’ve demonstrated the efficiency of the proposed network (SM-VSN-3C) in terms of communication and continuous coverage in time and space (3D). When simulating large and dense VSNs, there are two challenges in terms of coverage and communication: How to efficiently track a set of VSN-Starlings (VSN-Birds) in terms of coverage? In such a dense environment, how can a single Starling be tracked in terms of communication and data routing?\u0000</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phishing attacks are among the biggest cybersecurity threats in the digital world. Attackers exploit users by impersonating real, authentic websites to obtain sensitive information such as passwords and bank statements. One common technique in these attacks is the use of malicious URLs. These malicious URLs mimic legitimate URLs, misleading users into interacting with malicious websites. This practice, URL phishing, poses a serious threat to internet security, emphasizing the need for advanced detection methods. We therefore aim to enhance phishing URL detection by using machine learning and deep learning models, leveraging a set of low-level URL features derived from n-gram analysis. In this paper, we present a method for detecting malicious URLs using statistical features extracted from n-grams. These n-grams are extracted from the hexadecimal representation of URLs. We conducted four experiments: the first three used machine learning with the statistical features extracted from these n-grams, and the fourth fed the n-grams directly into deep learning models to evaluate their effectiveness. We also used Explainable AI (XAI) to explore the extracted features and evaluate their importance and role in phishing detection. A key advantage of our method is its ability to reduce the number of features required and to reduce training time by using fewer features after applying XAI techniques. This stands in contrast to previous work, which relies on high-level URL features and needs pre-processing and a large number of features (87 high-level URL-based features). Our technique uses only statistical features extracted from n-grams and the n-grams themselves, without the need for any high-level features. Our method is evaluated across different n-gram lengths (2, 4, 6, and 8), aiming to optimize detection accuracy. In the first experiment, we focused on extracting and using 12 common statistical features such as the mean and median; the XGBoost model achieved the highest accuracy, 82.41%, using 8-gram features. In the second experiment, we expanded the feature set with an additional 13 features, bringing the feature count to 25; XGBoost again achieved the highest accuracy, 86.40%. The improvement continued in the third experiment, where an additional 16 character-count features raised XGBoost's accuracy to 88.15%. In the fourth experiment, we fed the n-gram representations directly into deep learning models, and the Convolutional Neural Network (CNN) model achieved the highest accuracy, 94.09%. We also applied the XAI techniques SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Through the explanations provided by these XAI methods, we were able to determine the most important features in our feature set, enabling a reduction in the number of features used.
{"title":"Exploring low-level statistical features of n-grams in phishing URLs: a comparative analysis with high-level features","authors":"Yahya Tashtoush, Moayyad Alajlouni, Firas Albalas, Omar Darwish","doi":"10.1007/s10586-024-04655-5","DOIUrl":"https://doi.org/10.1007/s10586-024-04655-5","url":null,"abstract":"<p>Phishing attacks are the biggest cybersecurity threats in the digital world. Attackers exploit users by impersonating real, authentic websites to obtain sensitive information such as passwords and bank statements. One common technique in these attacks is using malicious URLs. These malicious URLs mimic legitimate URLs, misleading users into interacting with malicious websites. This practice, URL phishing, presents a big threat to internet security, emphasizing the need for advanced detection methods. So we aim to enhance phishing URL detection by using machine learning and deep learning models, leveraging a set of low-level URL features derived from n-gram analysis. In this paper, we present a method for detecting malicious URLs using statistical features extracted from n-grams. These n-grams are extracted from the hexadecimal representation of URLs. We employed 4 experiments in our paper. The first 3 experiments used machine learning with the statistical features extracted from these n-grams, and the fourth experiment used these grams directly with deep learning models to evaluate their effectiveness. Also, we used Explainable AI (XAI) to explore the extracted features and evaluate their importance and role in phishing detection. A key advantage of our method is its ability to reduce the number of features required and reduce the training time by using fewer features after applying XAI techniques. This stands in contrast to the previous study, which relies on high-level URL features and needs pre-processing and a high number of features (87 high-level URL-based features). So our technique only uses statistical features extracted from n-grams and the n-gram itself, without the need for any high-level features. Our method is evaluated across different n-gram lengths (2, 4, 6, and 8), aiming to optimize detection accuracy. We conducted four experiments in our study. In the first experiment, we focused on extracting and using 12 common statistical features like mean, median, etc. In the first experiment, the XGBoost model achieved the highest accuracy using 8-gram features with 82.41%. In the second experiment, we expanded the feature set and extracted an additional 13 features, so our feature count became 25. XGBoost in the second experiment achieved the highest accuracy with 86.40%. Accuracy improvement continued in the third experiment, we extracted an additional 16 features (character count features), and these features increased XGBoost accuracy to 88.15% in the third experiment. In the fourth experiment, we directly fed n-gram representations into deep learning models. The Convolutional Neural Network (CNN) model achieved the highest accuracy of 94.09% in experiment four. Also, we applied XAI techniques, SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). 
Through the explanation provided by XAI methods, we were able to determine the most important features in our feature set, enabling a reductio","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
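As a rough illustration of the low-level feature idea described above, the sketch below hex-encodes a URL, slices it into overlapping n-grams, and computes a few simple statistics over their integer values; the overlapping slicing and the particular statistics are assumptions and cover only a small subset of the paper's feature sets.

```python
# Hedged sketch: statistical features over hexadecimal n-grams of a URL.
import numpy as np

def hex_ngrams(url: str, n: int = 4):
    """Hexadecimal representation of the URL, split into overlapping n-grams."""
    hex_str = url.encode("utf-8").hex()
    return [hex_str[i:i + n] for i in range(len(hex_str) - n + 1)]

def ngram_stats(url: str, n: int = 4):
    """A handful of statistics over the integer values of the n-grams
    (mean, median, std, min, max, range) -- an illustrative subset only."""
    vals = np.array([int(g, 16) for g in hex_ngrams(url, n)], dtype=float)
    return {
        "mean": vals.mean(), "median": np.median(vals), "std": vals.std(),
        "min": vals.min(), "max": vals.max(), "range": vals.max() - vals.min(),
    }

# Hypothetical example URL; the resulting dict would feed a classifier such as XGBoost.
print(ngram_stats("http://paypal-secure-login.example.com/verify", n=4))
```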
"Empowering e-learning approach by the use of federated edge computing"
Pub Date: 2024-07-03. DOI: 10.1007/s10586-024-04567-4
Nouha Arfaoui, Amel Ksibi, Nouf Abdullah Almujally, Ridha Ejbali
Federated learning (FL) is a decentralized approach to training machine learning models. In the traditional architecture, training requires gathering the whole dataset centrally, which threatens the privacy of sensitive data. FL was proposed to overcome these limits. The principle of FL is to train machine learning models locally on individual devices instead of gathering all the data on a central server; only the updated models are shared and aggregated. E-learning, in turn, uses electronic/digital technology to deliver educational content in order to facilitate learning. It became popular with the advancement of the internet and digital devices, especially after the COVID-19 pandemic. In this work, we propose an e-learning recommendation system based on an FL architecture that suggests suitable courses to learners. Because of the large number of connected learners looking for online courses, FL encounters a communication bottleneck. This can increase the computational load, lengthen aggregation time, saturate resources, and so on. As a solution, we propose exploiting edge computing so that aggregation is performed first in the edge layer and then in the central server, thereby reducing the need for continuous data transmission to the server and enabling faster inference while preserving the security and privacy of the data. The experiments carried out prove the effectiveness of our approach in solving the problem addressed in this work.
{"title":"Empowering e-learning approach by the use of federated edge computing","authors":"Nouha Arfaoui, Amel Ksibi, Nouf Abdullah Almujally, Ridha Ejbali","doi":"10.1007/s10586-024-04567-4","DOIUrl":"https://doi.org/10.1007/s10586-024-04567-4","url":null,"abstract":"<p>Federated learning (FL) is a decentralized approach to training machine learning model. In the traditional architecture, the training requires getting the whole data what causes a threat to the privacy of the sensitive data. FL was proposed to overcome the cited limits. The principal of FL revolves around training machine learning models locally on individual devices instead of gathering all the data in a central server, and only the updated models are shared and aggregated. Concerning e-learning, it is about using electronic/digital technology to deliver educational content in order to facilitate the learning. It becomes popular with the advancement of the internet and digital devices mainly after the COVID-19. In this work, we propose an e-learning recommendation system based on FL architecture where we can propose suitable courses to the learner. Because of the important number of connected learners looking for online courses, the FL encounters a problem: bottleneck communication. This situation can cause the increase of the computational load, the longer time of the aggregation, the saturation of the resources, etc. As solution, we propose using the edge computing potentials so that the aggregation will be performed first in the edge layer then in the central server, reducing hence, the need for continuous data transmission to the server and enabling a faster inference while keeping the security and privacy of the data. The experiments carried out prove the effectiveness of our approach in solving the problem addressed in this work.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An anonymous authentication with blockchain assisted ring-based homomorphic encryption for enhancing security in cloud computing"
Pub Date: 2024-07-02. DOI: 10.1007/s10586-024-04617-x
Pranav Shrivastava, Bashir Alam, Mansaf Alam
Nowadays, the need for cloud computing has increased due to the exponential growth in information transmission. Cybercriminals are persistent in their efforts to breach cloud environments, even with security measures in place to protect data stored in the cloud. To address this challenge, an enhanced authentication approach is needed for stronger security. In order to protect user privacy and anonymity in cloud environments, the study presents a novel technique called Hyperelliptic Curve-based Anonymous Ring Signature (HCARS). Moreover, blockchain technology is utilized to securely record timestamps and cryptographic keys. The hashing functions in the blockchain system employ the SHA-256 and SHA-512 algorithms. Furthermore, utilizing the Ring Learning with Errors (RLWE) problem, an Nth-degree Truncated Polynomial Ring Units (NTRU)-based Fully Homomorphic Encryption (NTRU-FHE) scheme encrypts sensitive data and ensures its integrity. A comparative study between the proposed method and current approaches is conducted through experimental verification in Java. The results demonstrate that the proposed approach outperforms existing techniques, achieving an encryption time of 6.75 s for an input size of 75 and a decryption time of 5.128 s for the same input size. Similarly, the signature generation time is 125 ms for 100 received messages, the block generation time is 10.8 s for 450 blocks, the throughput is 98 MB/s for a record size of 16,384, and the total computational time is 403 ms for 20 messages. These results demonstrate the superior performance of the HCARS approach, with significantly reduced encryption, decryption, and signature generation times, as well as improved throughput and computational efficiency. The HCARS approach thus makes it considerably easier to ensure the security and privacy of cloud-based systems in the face of evolving cyber threats.
{"title":"An anonymous authentication with blockchain assisted ring-based homomorphic encryption for enhancing security in cloud computing","authors":"Pranav Shrivastava, Bashir Alam, Mansaf Alam","doi":"10.1007/s10586-024-04617-x","DOIUrl":"https://doi.org/10.1007/s10586-024-04617-x","url":null,"abstract":"<p>Nowadays, the need for cloud computing has increased due to the exponential growth in information transmission. Cybercriminals are persistent in their efforts to breach cloud environments, even with security measures in place to protect data stored in the cloud. To address this challenge, an enhanced authentication approach is needed for enhanced security. In order to protect user privacy and anonymity in cloud environments, the study presents a novel technique called Hyperelliptic Curve-based Anonymous Ring Signature (HCARS). Moreover, Blockchain technology is utilized to securely record timestamps and cryptographic keys. The hashing functions in the Blockchain system employ SHA 256 and SHA 512 algorithms. Furthermore, utilizing Ring Learning with Error (RLWE) problems, an Nth degree Truncated Polynomial Ring Units (NTRU)-Based Fully Homomorphic Encryption (NTRU-FHE) Scheme encrypts sensitive data and ensures its integrity. A comparative study between the proposed method and current approaches is done through experimental verification utilizing Java. The results demonstrate that the proposed approach outperforms existing techniques, achieving an encryption time of 6.75 s for an input size of 75 and a decryption time of 5.128 s for the same input size. Similarly, the signature generation time is 125 ms for 100 received messages, block generation time of 10.8 s for 450 blocks, throughput of 98 MB/sec for a record size of 16,384, and total computational time of 403 ms for 20 messages. The results demonstrate the superior performance of the HCARS approach, with significantly reduced encryption, decryption, and signature generation times, as well as improved throughput and computational efficiency. Securing the security and privacy of cloud-based systems in the face of changing cyber threats has been made much easier with the help of the HCARS approach.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"355 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Service selection based on blockchain smart contracts in cloud-edge environment"
Pub Date: 2024-07-02. DOI: 10.1007/s10586-024-04647-5
Yingying Ning, Jing Li, Ming Zhu, Chuanxi Liu
The rapid integration of cloud computing and edge computing has brought the cloud-edge environment into the spotlight in information technology. Within this context, the selection of high-quality and reliable services is crucial to meet the needs of users. However, ensuring the reliability of service information is a challenge due to its vulnerability to tampering. This paper proposes a method for service selection in the cloud-edge environment based on blockchain smart contracts. By leveraging blockchain technology, the method achieves decentralized and trustworthy service selection. Through smart contracts, user interactions are securely recorded, significantly reducing the risk of information tampering and enhancing information reliability. Additionally, the Arithmetic Optimization Algorithm is improved for service selection on the blockchain by introducing mutation and crossover operations. Experimental results demonstrate that this method effectively prevents tampering with service information and improves the utility value of the selected services compared to traditional methods and the metaheuristic algorithms considered.
{"title":"Service selection based on blockchain smart contracts in cloud-edge environment","authors":"Yingying Ning, Jing Li, Ming Zhu, Chuanxi Liu","doi":"10.1007/s10586-024-04647-5","DOIUrl":"https://doi.org/10.1007/s10586-024-04647-5","url":null,"abstract":"<p>The rapid integration of cloud computing and edge computing has brought the cloud-edge environment into the spotlight in information technology. Within this context, the selection of high-quality and reliable services is crucial to meet the needs of users. However, ensuring the reliability of service information is a challenge due to its vulnerability to tampering. This research paper proposes a method for service selection in the cloud-edge environment based on blockchain smart contracts. By leveraging blockchain technology, this method achieves decentralized and trustworthy service selection. Through smart contracts, user interactions are securely recorded, significantly reducing the risk of information tampering and enhancing information reliability. Additionally, the Arithmetic Optimization Algorithm is improved for service selection on the blockchain by introducing mutation and crossover operations. Experimental results demonstrate that this method effectively prevents tampering with service information and improves the utility value of selected services compared to traditional methods and metaheuristic algorithms mentioned.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"136 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Metaheuristic algorithms and their applications in wireless sensor networks: review, open issues, and challenges"
Pub Date: 2024-07-02. DOI: 10.1007/s10586-024-04619-9
Essam H. Houssein, Mohammed R. Saad, Youcef Djenouri, Gang Hu, Abdelmgeid A. Ali, Hassan Shaban
Metaheuristic algorithms have wide applicability, particularly in wireless sensor networks (WSNs), due to their superior skill in solving and optimizing many issues in different domains. However, WSNs suffer from several issues, such as deployment, localization, sink node placement, energy efficiency, and clustering. Unfortunately, these issues negatively affect the already limited energy of WSNs; therefore, the need to employ metaheuristic algorithms is inevitable to alleviate the harm imposed by these issues on the lifespan and performance of the network. Some associated issues regarding WSNs are modelled as single- or multi-objective optimization problems. Single-objective problems have one optimal solution, whereas multi-objective problems have multiple competing desirable solutions, the so-called non-dominated solutions. Several optimization strategies based on metaheuristic algorithms are available to address various types of optimization concerns relating to WSN deployment, localization, sink node placement, energy efficiency, and clustering. This review reports and discusses the literature on single- and multi-objective metaheuristics and their evaluation criteria, WSN architectures and definitions, and applications of metaheuristics in WSN deployment, localization, sink node placement, energy efficiency, and clustering. It also proposes definitions for these terms and reports on some ongoing difficulties linked to these topics. Furthermore, this review outlines the open issues, challenges, and future trends that apply to metaheuristic algorithms (single- and multi-objective) and WSN difficulties, as well as the significant efforts that are necessary to improve WSN efficiency.
{"title":"Metaheuristic algorithms and their applications in wireless sensor networks: review, open issues, and challenges","authors":"Essam H. Houssein, Mohammed R. Saad, Youcef Djenouri, Gang Hu, Abdelmgeid A. Ali, Hassan Shaban","doi":"10.1007/s10586-024-04619-9","DOIUrl":"https://doi.org/10.1007/s10586-024-04619-9","url":null,"abstract":"<p>Metaheuristic algorithms have wide applicability, particularly in wireless sensor networks (WSNs), due to their superior skill in solving and optimizing many issues in different domains. However, WSNs suffer from several issues, such as deployment, localization, sink node placement, energy efficiency, and clustering. Unfortunately, these issues negatively affect the already limited energy of the WSNs; therefore, the need to employ metaheuristic algorithms is inevitable to alleviate the harm imposed by these issues on the lifespan and performance of the network. Some associated issues regarding WSNs are modelled as single and multi-objective optimization issues. Single-objective issues have one optimal solution, and the other has multiple desirable solutions that compete, the so-called non-dominated solutions. Several optimization strategies based on metaheuristic algorithms are available to address various types of optimization concerns relating to WSN deployment, localization, sink node placement, energy efficiency, and clustering. This review reports and discusses the literature research on single and multi-objective metaheuristics and their evaluation criteria, WSN architectures and definitions, and applications of metaheuristics in WSN deployment, localization, sink node placement, energy efficiency, and clustering. It also proposes definitions for these terms and reports on some ongoing difficulties linked to these topics. Furthermore, this review outlines the open issues, challenge paths, and future trends that can be applied to metaheuristic algorithms (single and multi-objective) and WSN difficulties, as well as the significant efforts that are necessary to improve WSN efficiency.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The complex networks of manufacturers, suppliers, retailers, and customers that make up today's pharmaceutical supply chain span the globe. The traditional pharma supply chain lacks transparency. Moreover, the global nature of this industry makes it vulnerable to problems caused by a lack of transparency, distrust among involved entities, and reluctance to share data. Such lack of transparency mainly raises concerns regarding forgery of pharmaceutical product supply records and counterfeiting of drugs. Supply chain traceability, that is, following a product's journey from its manufacturing facility to its final consumers, is critically important and demands a high level of traceability, authenticity, and efficiency. This study proposes a blockchain-based secure and efficient traceable supply chain infrastructure for pharmaceutical products. Smart contracts are at the heart of the proposed solution, tracking how all entities supply products and recording the relevant events. Thus, all the involved entities can stay up to date on the latest state and guarantee a supply that is secure against any supply record forgery and counterfeit pharmaceutical products. In addition, we replicate the records in many chunks and use parallel search over the blockchain network to achieve efficient traceability of the stored records. A comprehensive security analysis with standard theoretical proofs ensures the computational infeasibility of breaking the proposed model. Further, a detailed performance analysis with test simulations shows the practicability of the proposed model.
{"title":"Blockchain enabled secure pharmaceutical supply chain framework with traceability: an efficient searchable pharmachain approach","authors":"Rahul Mishra, Dharavath Ramesh, Nazeeruddin Mohammad, Bhaskar Mondal","doi":"10.1007/s10586-024-04626-w","DOIUrl":"https://doi.org/10.1007/s10586-024-04626-w","url":null,"abstract":"<p>The complex networks of manufacturers, suppliers, retailers, and customers that make up today’s pharmaceutical supply chain span worldwide. As it is, there needs to be more transparency in the traditional pharma supply chain. Also, the global nature of this industry makes it vulnerable to problems caused by a lack of transparency, distrust among involved entities, and reluctance to share data. Such lack of transparency mainly causes concerns regarding pharmaceutical product supply record forgery and counterfeiting of drugs. Supply chain traceability, which means following a product’s journey from its manufacturing facility to its final consumers, is critically important, as it necessitates traceability, authenticity, and efficiency at a high level. This study proposes a blockchain-based secure and efficient traceable supply chain infrastructure for pharmaceutical products. Smart contracts are at the heart of the proposed solution, which tracks how all entities supply and record relevant events. Thus, all the involved entities can stay up-to-date on the latest state and guarantee a secure supply against any supply record forgery and counterfeit pharmaceutical products. In addition, we replicate the records in many chunks and use parallel search to achieve efficient traceability, which searches the stored records efficiently on the blockchain network. The comprehensive security analysis with standard theoretical proofs ensures the computational infeasibility of the proposed model. Further, the detailed performance analysis with test simulations shows the practicability of the proposed model.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Saver: a proactive microservice resource scheduling strategy based on STGCN"
Pub Date: 2024-07-01. DOI: 10.1007/s10586-024-04615-z
Yi Jiang, Jin Xue, Kun Hu, Tianxiang Chen, Tong Wu
As container technology and microservices mature, applications increasingly shift to microservices and cloud deployment. The growing scale of microservices complicates resource scheduling. Traditional methods, based on fixed thresholds, are simple but lead to resource waste and adapt poorly to traffic spikes. To address this problem, we design a new resource scheduling strategy, Saver, for the container cloud platform, which combines a microservice request prediction model, a microservice performance evaluation model that predicts SLO (Service Level Objective) violations, and a heuristic algorithm that solves for the optimal resource schedule of the cluster. We deploy the open-source microservices project sock-shop in a Kubernetes cluster to evaluate Saver. Experimental results show that Saver saves 7.9% of CPU resources and 13% of the instances, and reduces the SLO violation rate by 31.2% compared to the K8s autoscaler.
{"title":"Saver: a proactive microservice resource scheduling strategy based on STGCN","authors":"Yi Jiang, Jin Xue, Kun Hu, Tianxiang Chen, Tong Wu","doi":"10.1007/s10586-024-04615-z","DOIUrl":"https://doi.org/10.1007/s10586-024-04615-z","url":null,"abstract":"<p>As container technology and microservices mature, applications increasingly shift to microservices and cloud deployment. Growing microservices scale complicates resource scheduling. Traditional methods, based on fixed thresholds, are simple but lead to resource waste and poor adaptability to traffic spikes. To address this problem, we design a new resource scheduling strategy Saver based on the container cloud platform, which combines a microservice request prediction model with a microservice performance evaluation model that predicts SLO (Service Level Objective) violations and a heuristic algorithm to solve the optimal resource scheduling for the cluster. We deploy the microservices open-source project sock-shop in a Kubernetes cluster to evaluate Saver. Experimental results show that Saver saves 7.9% of CPU resources, 13% of the instances, and reduces the SLO violation rate by 31.2% compared to K8s autoscaler.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Visual identification of sleep spindles in EEG waveform images using deep learning object detection (YOLOv4 vs YOLOX)"
Pub Date: 2024-07-01. DOI: 10.1007/s10586-024-04630-0
Mohammad Fraiwan, Natheer Khasawneh
The electroencephalogram (EEG) is a tool utilized to capture the intricate electrical dynamics within the brain, offering invaluable insights into neural activity. This method is pivotal in identifying potential disruptions in brain cell communication, aiding in the diagnosis of various neurological conditions such as epilepsy and sleep disorders. The examination of EEG waveform morphology and associated characteristics serves as a cornerstone in this diagnostic process. Of particular significance within EEG analysis are sleep spindles, intricate patterns of brain waves implicated in crucial cognitive functions including brain plasticity, learning, memory consolidation, and motor skills. Traditionally, the task of analyzing EEG data has rested upon neurologists, neurosurgeons, or trained medical technicians, a laborious and error-prone endeavor. This study endeavors to revolutionize EEG analysis by leveraging artificial intelligence (AI) methodologies, specifically deep learning object detection techniques, to visually identify and locate sleep spindles within EEG waveform images. The You Only Look Once version 4 (YOLOv4) methodology is employed for this purpose. A diverse array of convolutional neural network architectures is meticulously customized, trained, and evaluated to facilitate feature extraction for the YOLOv4 detector. Furthermore, novel YOLOX detection models are introduced and extensively compared against YOLOv4-based counterparts. The results reveal outstanding performance across various metrics, with both YOLOX and YOLOv4 demonstrating exceptional average precision (AP) scores ranging from 98% to 100% at a 50% bounding box overlap threshold. Notably, when scrutinized under higher threshold values, YOLOX emerges as the superior model, exhibiting heightened accuracy in bounding box predictions with an 84% AP score at an 80% overlap threshold, compared to 72.48% AP for YOLOv4. This remarkable performance, particularly at the standard 50% overlap threshold, marks a significant stride towards meeting the stringent clinical requisites for integrating AI-based solutions into clinical EEG analysis workflows.
{"title":"Visual identification of sleep spindles in EEG waveform images using deep learning object detection (YOLOv4 vs YOLOX)","authors":"Mohammad Fraiwan, Natheer Khasawneh","doi":"10.1007/s10586-024-04630-0","DOIUrl":"https://doi.org/10.1007/s10586-024-04630-0","url":null,"abstract":"<p>The electroencephalogram (EEG) is a tool utilized to capture the intricate electrical dynamics within the brain, offering invaluable insights into neural activity. This method is pivotal in identifying potential disruptions in brain cell communication, aiding in the diagnosis of various neurological conditions such as epilepsy and sleep disorders. The examination of EEG waveform morphology and associated characteristics serves as a cornerstone in this diagnostic process. Of particular significance within EEG analysis are sleep spindles, intricate patterns of brain waves implicated in crucial cognitive functions including brain plasticity, learning, memory consolidation, and motor skills. Traditionally, the task of analyzing EEG data has rested upon neurologists, neurosurgeons, or trained medical technicians, a laborious and error-prone endeavor. This study endeavors to revolutionize EEG analysis by leveraging artificial intelligence (AI) methodologies, specifically deep learning object detection techniques, to visually identify and locate sleep spindles within EEG waveform images. The You Only Look Once (YOLOv4) methodology is employed for this purpose. A diverse array of convolutional neural network architectures is meticulously customized, trained, and evaluated to facilitate feature extraction for the YOLOv4 detector. Furthermore, novel YOLOX detection models are introduced and extensively compared against YOLOv4-based counterparts. The results reveal outstanding performance across various metrics, with both YOLOX and YOLOv4 demonstrating exceptional average precision (AP) scores ranging between 98% to 100% at a 50% bounding box overlap threshold. Notably, when scrutinized under higher threshold values, YOLOX emerges as the superior model, exhibiting heightened accuracy in bounding box predictions with an 84% AP score at an 80% overlap threshold, compared to 72.48% AP for YOLOv4. This remarkable performance, particularly at the standard 50% overlap threshold, signifies a significant stride towards meeting the stringent clinical requisites for integrating AI-based solutions into clinical EEG analysis workflows.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}