
Cluster Computing: Latest Publications

Secure speech-recognition data transfer in the internet of things using a power system and a tried-and-true key generation technique
Pub Date: 2024-07-29 | DOI: 10.1007/s10586-024-04649-3
Zhe Wang, Shuangbai He, Guoan Li

To secure the privacy, confidentiality, and integrity of Speech Data (SD), the concept of secure Speech Recognition (SR) involves accurately recording and comprehending spoken language while employing diverse security processes. As the Internet of Things (IoT) rapidly evolves, the integration of SR capabilities into IoT devices gains significance. However, ensuring the security and privacy of private SD post-integration remains a critical concern. Despite the potential benefits, implementing the proposed Reptile Search Optimized Hidden Markov Model (RSO-HMM) for SR and integrating it with IoT devices may encounter complexities due to diverse device types. Moreover, the challenge of maintaining data security and privacy for assigned SD in practical IoT settings poses a significant hurdle. Ensuring seamless interoperability and robust security measures is essential. We introduce the RSO-HMM for SR, utilizing retrieved aspects as speech data. Gathering a diverse range of SD from speakers with varying linguistic backgrounds enhances the accuracy of the SR system. Preprocessing involves Z-score normalization for robustness and mitigation of outlier effects. The Perceptual Linear Prediction (PLP) technique facilitates efficient extraction of essential acoustic data from speech sources. Addressing data security, Elliptic Curve Cryptography (ECC) is employed for encryption, particularly suited for resource-constrained scenarios. Our study evaluates the SR system using key performance metrics including accuracy, precision, recall, and F1 score. The thorough assessment demonstrates the system's remarkable performance, achieving an impressive accuracy of 96%. The primary objective is to appraise the system's capacity and dependability in accurately transcribing speech signals. By proposing a comprehensive approach that combines the RSO-HMM for SR, data preprocessing techniques, and ECC encryption, this study advocates for the wider adoption of SR technology within the IoT ecosystem. By tackling critical data security concerns, this approach paves the way for a safer and more efficient globally interconnected society, encouraging the broader utilization of SR technology in various applications.
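The preprocessing and decoding steps above can be illustrated compactly. Below is a minimal sketch of Z-score normalization and Viterbi decoding for a discrete-observation HMM, assuming toy feature matrices and a pre-trained model; the reptile-search parameter optimization, PLP extraction, and ECC encryption stages are omitted.

```python
import numpy as np

def zscore_normalize(features: np.ndarray) -> np.ndarray:
    """Normalize each feature dimension to zero mean and unit variance,
    mitigating outlier effects as in the described preprocessing step."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # avoid division by zero
    return (features - mean) / std

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for a discrete-observation HMM.
    obs: sequence of observation indices; log_pi: initial log-probs (N,);
    log_A: transition log-probs (N, N); log_B: emission log-probs (N, M)."""
    N, T = log_pi.shape[0], len(obs)
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (N, N): prev -> next
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                  # backtrack best path
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy demo: PLP-like feature normalization, then decoding a short sequence.
rng = np.random.default_rng(0)
feats = zscore_normalize(rng.normal(5, 2, size=(100, 13)))
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2, 1], log_pi, log_A, log_B))
```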

Citations: 0
DDoS attack detection techniques in IoT networks: a survey
Pub Date: 2024-07-26 | DOI: 10.1007/s10586-024-04662-6
Amir Pakmehr, Andreas Aßmuth, Negar Taheri, Ali Ghaffari

The Internet of Things (IoT) is a rapidly emerging technology that has become more valuable and vital in our daily lives. This technology enables connection and communication between objects and devices and allows these objects to exchange information and perform intelligent operations with each other. However, due to the scale and heterogeneity of the network, the insecurity of many of these devices, and privacy-protection requirements, IoT faces several challenges. In the last decade, distributed denial-of-service (DDoS) attacks in IoT networks have become one of the growing challenges that require serious attention and investigation. DDoS attacks take advantage of the limited resources available on IoT devices, which disrupts the functionality of IoT-connected applications and services. This article comprehensively examines the effects of DDoS attacks in the context of the IoT, which cause significant harm to existing systems. This paper also investigates several solutions to identify and deal with this type of attack. Finally, this study suggests a broad line of research in the field of IoT security, dedicated to examining how to adapt to current challenges and predicting future trends.
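As a concrete illustration of the simplest family of techniques such surveys cover, the sketch below implements a volumetric detection heuristic: flag a source whose request rate within a sliding window exceeds a threshold. The window length and threshold are arbitrary assumptions for illustration, not values taken from the survey.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateDetector:
    """Flag a source as suspicious when its request count inside a
    sliding time window exceeds a fixed threshold (volumetric heuristic)."""

    def __init__(self, window_s: float = 10.0, max_requests: int = 100):
        self.window_s = window_s
        self.max_requests = max_requests
        self.events = defaultdict(deque)  # source id -> request timestamps

    def observe(self, source: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        q = self.events[source]
        q.append(now)
        while q and now - q[0] > self.window_s:  # evict expired timestamps
            q.popleft()
        return len(q) > self.max_requests        # True -> likely attack

detector = RateDetector()
flagged = False
for i in range(150):  # 150 requests in ~1.5 s from one device
    flagged = detector.observe("device-42", now=i * 0.01)
print("flagged:", flagged)
```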

Citations: 0
Enhancing image encryption using chaotic maps: a multi-map approach for robust security and performance optimization
Pub Date: 2024-07-26 | DOI: 10.1007/s10586-024-04672-4
Mostafa Abodawood, Abeer Twakol Khalil, Hanan M. Amer, Mohamed Maher Ata

This paper proposes a model for image encryption that depends on chaotic maps. The scheme uses eight chaotic maps to perform the encryption process: Logistic, Gauss, Circle, Sine, Singer, Piecewise, Tent, and Chebyshev. The two major processes of the suggested model are chaotic confusion and pixel diffusion. Chaotic maps are used to permute the pixel positions during the confusion process; in the diffusion process, the value of each image pixel is changed. To evaluate the suggested model, several performance metrics were used, such as execution time, peak signal-to-noise ratio, entropy, key sensitivity, noise-attack resistance, the number of pixels change rate (NPCR), unified average changing intensity (UACI), histogram analysis, and cross-correlation. According to experimental analysis, images encrypted with the suggested system have correlation coefficient values that are almost zero, an NPCR of 99.6%, a UACI of 32.9%, a key space of 10^80, an execution time of 0.1563 ms, and an entropy of 7.9973; histogram analysis showed that the encrypted images have an almost uniform pixel distribution. All prior results have verified the robustness and efficiency of the suggested algorithm.
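A minimal sketch of the confusion/diffusion pipeline using just the logistic map is shown below; the paper combines eight maps, and the key values here are illustrative placeholders.

```python
import numpy as np

def logistic_sequence(x0: float, r: float, n: int) -> np.ndarray:
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image: np.ndarray, key=(0.61, 3.99, 0.37, 3.97)) -> np.ndarray:
    flat = image.ravel().astype(np.uint8)
    n = flat.size
    # Confusion: permute pixel positions by argsorting a chaotic sequence.
    perm = np.argsort(logistic_sequence(key[0], key[1], n))
    confused = flat[perm]
    # Diffusion: XOR pixel values with a chaotic keystream.
    stream = (logistic_sequence(key[2], key[3], n) * 256).astype(np.uint8)
    return (confused ^ stream).reshape(image.shape)

def decrypt(cipher: np.ndarray, key=(0.61, 3.99, 0.37, 3.97)) -> np.ndarray:
    flat = cipher.ravel().astype(np.uint8)
    n = flat.size
    stream = (logistic_sequence(key[2], key[3], n) * 256).astype(np.uint8)
    confused = flat ^ stream
    perm = np.argsort(logistic_sequence(key[0], key[1], n))
    plain = np.empty_like(confused)
    plain[perm] = confused  # invert the confusion permutation
    return plain.reshape(cipher.shape)

img = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)  # round-trip check
```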

Citations: 0
Empowering bonobo optimizer for global optimization and cloud scheduling problem
Pub Date: 2024-07-24 | DOI: 10.1007/s10586-024-04671-5
Reham R. Mostafa, Fatma A. Hashim, Amit Chhabra, Ghaith Manita, Yaning Xiao

Task scheduling in cloud computing systems is an important and challenging NP-hard problem that involves the decision to allocate resources to tasks in a way that optimizes a performance metric. The complexity of this problem rises due to the size and scale of cloud systems, the heterogeneity of cloud resources and tasks, and the dynamic nature of cloud resources. Metaheuristics are a class of algorithms that have been used effectively to solve NP-hard cloud scheduling problems (CSP). The Bonobo optimizer (BO) is a recent metaheuristic-based optimization algorithm which mimics several interesting reproductive strategies and social behaviours of Bonobos and has shown competitive performance against several state-of-the-art metaheuristics on many optimization problems. Despite its good performance, it still suffers from inherent deficiencies such as imbalanced exploration-exploitation and trapping in local optima. This paper proposes a modified version of the BO algorithm called mBO to solve the cloud scheduling problem while minimizing two important scheduling objectives: makespan and energy consumption. We have incorporated four modifications into the traditional BO, namely (1) a Dimension Learning-Based Hunting (DLH) search strategy, (2) a Transition Factor (TF), (3) Control Randomization (DR), and (4) Control Randomization Direction, which improve performance by helping the algorithm escape local optima and balance exploration-exploitation. The efficacy of mBO is initially tested on the popular standard CEC'20 benchmarks, followed by its application to the CSP problem using real-world supercomputing workloads, namely CEA-Curie and HPC2N. Results and observations reveal that the proposed mBO algorithm outperforms many contemporary metaheuristics by a competitive margin on both the CEC'20 benchmarks and the CSP problem. Quantitatively, for the CSP problem, mBO reduced makespan and energy consumption by 8.20–23.73% and 2.57–11.87%, respectively, against the tested algorithms. For HPC2N workloads, mBO achieved a makespan reduction of 10.99–29.48% and an energy consumption reduction of 3.55–30.65% over the compared metaheuristics.
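The two objectives that mBO minimizes can be made concrete with a small fitness function. The sketch below evaluates a candidate task-to-VM assignment for makespan and energy; the task lengths, VM speeds, and power ratings are invented placeholders, not values from the paper, and any metaheuristic would call such a function to score candidate schedules.

```python
import numpy as np

def evaluate_schedule(assignment, task_mi, vm_mips, vm_power_w):
    """Makespan and energy of mapping tasks to VMs.
    assignment[i] = VM index for task i; task_mi = task lengths in
    million instructions; vm_mips = VM speeds; vm_power_w = active power."""
    busy = np.zeros(len(vm_mips))
    for task, vm in enumerate(assignment):
        busy[vm] += task_mi[task] / vm_mips[vm]   # seconds of work on that VM
    makespan = float(busy.max())                  # finish time of slowest VM
    energy_j = float((busy * vm_power_w).sum())   # E = sum over VMs of P * t
    return makespan, energy_j

rng = np.random.default_rng(0)
task_mi = rng.integers(1_000, 10_000, size=50)
vm_mips = np.array([500, 1_000, 2_000, 4_000])
vm_power = np.array([80.0, 105.0, 135.0, 170.0])
assignment = rng.integers(0, 4, size=50)          # one random candidate solution
print(evaluate_schedule(assignment, task_mi, vm_mips, vm_power))
```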

Citations: 0
Vehicle edge server deployment based on reinforcement learning in cloud-edge collaborative environment
Pub Date: 2024-07-24 | DOI: 10.1007/s10586-024-04659-1
Feiyan Guo, Bing Tang, Ying Wang, Xiaoqing Luo

The rapid development of Internet of Vehicles (IoV) technology has led to a sharp increase in vehicle data. Traditional cloud computing is no longer sufficient to meet the high bandwidth and low latency requirements of IoV tasks, and ensuring the service quality of applications on in-vehicle devices has become challenging. Edge computing technology moves computing tasks from the cloud to edge servers with sufficient computing resources, effectively reducing network congestion and data propagation latency. The integration of edge computing and IoV technology is an effective approach to realizing intelligent applications in IoV. This paper investigates the deployment of vehicle edge servers in a cloud-edge collaborative environment. Taking into consideration vehicular mobility and the computational demands of IoV applications, vehicular edge server deployment within the cloud-edge collaborative framework is formulated as a multi-objective optimization problem with two primary objectives: minimizing service access latency and balancing server workload. To address this problem, a model is established for optimizing the deployment of vehicle edge servers, and a deployment approach named VSPR is proposed. This method integrates hierarchical clustering and reinforcement learning techniques to effectively achieve the desired multi-objective optimization. Experiments are conducted using a real dataset from Shanghai Telecom to comprehensively evaluate the workload balance and service access latency of vehicle edge servers under different deployment methods. Experimental results demonstrate that VSPR achieves an optimized balance between low latency and workload balancing while ensuring service quality, and outperforms the SRL, CQP, K-means, and Random algorithms by 4.76%, 44.59%, 40.78%, and 69.33%, respectively.
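The clustering half of such a deployment can be sketched with SciPy's hierarchical clustering: group access-point locations, place one edge server per cluster centroid, and report both objectives. The coordinates, cluster count, and distance-as-latency proxy are illustrative assumptions, and the reinforcement-learning refinement stage is omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(200, 2))   # access points / road segments
k = 8                                         # number of edge servers to place

Z = linkage(points, method="ward")            # agglomerative clustering tree
labels = fcluster(Z, t=k, criterion="maxclust")  # cluster ids 1..k

# Place one server at each cluster centroid.
servers = np.array([points[labels == c].mean(axis=0) for c in range(1, k + 1)])

# Objective 1: mean access latency, proxied by distance to assigned server.
dists = np.linalg.norm(points - servers[labels - 1], axis=1)
print("mean access distance:", dists.mean())

# Objective 2: workload balance, proxied by spread of assigned load.
load = np.bincount(labels - 1, minlength=k)
print("load per server:", load, "std:", load.std())
```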

Citations: 0
Migration of containers on the basis of load prediction with dynamic inertia weight based PSO algorithm
Pub Date: 2024-07-24 | DOI: 10.1007/s10586-024-04676-0
Shabnam Bawa, Prashant Singh Rana, RajKumar Tekchandani

Due to the necessity of virtualization in a fog environment with limited resources, service providers are challenged to reduce the energy consumption of hosts. The consolidation of virtual machines (VMs) has led to a significant amount of research into the effective management of energy usage, but due to their high computational overhead, existing virtualization techniques may not be suited to minimizing the energy consumption of fog devices. As containers have recently gained popularity for encapsulating fog services, they are an ideal candidate for addressing this issue, particularly on fog devices. In the proposed work, an ensemble model is used for load prediction on hosts to classify them as overloaded, underloaded, or balanced. A container selection algorithm identifies containers for migration when a host becomes overloaded. Additionally, an energy-efficient container migration strategy facilitated by a dynamic inertia weight-based particle swarm optimization (DIWPSO) algorithm is introduced to meet resource demands. This approach entails migrating containers from overloaded hosts to others in order to balance the load and reduce the energy consumption of hosts located on fog nodes. Experimental results demonstrate that load balancing can be achieved at a lower migration cost. The proposed DIWPSO algorithm significantly reduces energy consumption, by 10.89%, through container migration. Moreover, compared to meta-heuristic solutions such as PSO, ABC (Artificial Bee Colony), and E-ABC (Enhanced Artificial Bee Colony), the proposed DIWPSO algorithm shows superior performance across various evaluation parameters.
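The defining ingredient of DIWPSO, a dynamic inertia weight, can be shown in a compact PSO loop. The sketch below linearly decays the inertia weight each iteration and minimizes a stand-in objective; the paper's actual fitness (migration cost plus host energy) is replaced by a placeholder sphere function.

```python
import numpy as np

def dynamic_inertia_pso(cost, dim, n_particles=30, iters=200,
                        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                        lb=-5.0, ub=5.0, seed=0):
    """PSO whose inertia weight decays linearly from w_max to w_min."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters    # dynamic inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.apply_along_axis(cost, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Placeholder cost: sphere function standing in for migration cost + energy.
best, best_f = dynamic_inertia_pso(lambda z: float((z ** 2).sum()), dim=10)
print(best_f)
```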

Citations: 0
A hybrid butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning
Pub Date: 2024-07-21 | DOI: 10.1007/s10586-024-04678-y
Chuan Li, Yanjie Zhu

In response to the issues of local-optima entrapment, slow convergence, and low optimization accuracy in the Butterfly Optimization Algorithm (BOA), this paper proposes a hybrid Butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning (BOANRBO). Firstly, opposition-based learning improves the initialization strategy of the butterfly algorithm to accelerate convergence. Secondly, adaptive perception modal factors are introduced into the original butterfly algorithm, controlling the adjustment rate through the adjustment factor α to enhance the algorithm's global search capability. Then, the exploration probability p is dynamically adjusted based on the algorithm's runtime, increasing or decreasing it according to changes in fitness to achieve a balance between exploration and exploitation. Finally, the exploration capability of BOA is enhanced by incorporating the Newton–Raphson-based optimizer (NRBO), which helps BOA avoid local-optima traps. The optimization performance of BOANRBO is evaluated on 65 standard benchmark functions from CEC-2005, CEC-2017, and CEC-2022, and the obtained optimization results are compared with the performance of 17 other well-known algorithms. Simulation results indicate that on the 12 test functions of CEC-2022, the BOANRBO algorithm achieved 8 optimal results (66.7%); on the 30 test functions of CEC-2017, it obtained 27 optimal results (90%); and on the 23 test functions of CEC-2005, it secured 22 optimal results (95.6%). Additionally, experiments have validated the algorithm's practicality and superior performance on 5 engineering design optimization problems and 2 real-world problems.
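The opposition-based initialization step is small enough to show directly: generate a random population, form its opposite lb + ub - x, and keep the fitter half of the union. The bounds and fitness function below are placeholders, not from the paper.

```python
import numpy as np

def obl_initialize(fitness, pop_size, dim, lb, ub, seed=0):
    """Opposition-based learning initialization: for each random candidate x,
    also evaluate its opposite lb + ub - x, then keep the best pop_size."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (pop_size, dim))
    x_opp = lb + ub - x                      # opposite population
    union = np.vstack([x, x_opp])
    f = np.apply_along_axis(fitness, 1, union)
    best = np.argsort(f)[:pop_size]          # fitter half survives
    return union[best], f[best]

pop, fit = obl_initialize(lambda z: float((z ** 2).sum()),
                          pop_size=20, dim=5, lb=-10.0, ub=10.0)
print(fit[:3])  # best fitness values in the initialized population
```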

Citations: 0
A novel approach for energy consumption management in cloud centers based on adaptive fuzzy neural systems
Pub Date: 2024-07-21 | DOI: 10.1007/s10586-024-04665-3
Hong Huang, Yu Wang, Yue Cai, Hong Wang

Cloud computing enables global access to tool-based IT services, accommodating a wide range of applications across consumer, scientific, and commercial sectors on a pay-per-use model. However, the substantial energy consumption of data centers hosting cloud applications leads to significant operational costs and environmental impact due to carbon emissions. Each day, these centers handle numerous requests from diverse users, necessitating powerful servers and associated peripherals that consume substantial energy. Efficient resource utilization is therefore essential for mitigating energy consumption in cloud centers. In our research, we adopted a novel hybrid approach to dynamically allocate resources in the cloud, focusing on energy reduction and load prediction. Specifically, we employed neural fuzzy systems for load prediction and the ant colony optimization algorithm for virtual machine migration. Comparative analysis against the existing literature demonstrates the effectiveness of our approach: across 810 time periods, our method exhibits an average resource-loss reduction of 21.3% and a 5.6% lower average request denial rate compared to alternative strategies. The suggested methods were assessed using the PlanetLab workload and a CloudSim-based simulator. Moreover, our methodology was validated through comprehensive experiments using the SPECpower benchmark, achieving over 98% accuracy in forecasting energy consumption for the proposed model. These results underscore the practicality and efficiency of our strategy in optimizing cloud resource management while addressing energy efficiency challenges in data center operations.
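The load-prediction half rests on neuro-fuzzy inference. Below is a minimal forward pass of a first-order Sugeno-style fuzzy system over two host metrics (CPU and memory utilization); the membership centers, widths, and rule coefficients are invented for illustration, where a real system would learn them from historical load traces.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of input x in a fuzzy set (center c, width s)."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_predict(cpu, mem, centers, widths, coeffs):
    """First-order Sugeno inference with one rule per (cpu-set, mem-set) pair.
    coeffs[r] = (a, b, d) so rule r outputs a*cpu + b*mem + d."""
    mu_cpu = gauss(cpu, centers[0], widths[0])   # memberships: low/medium/high
    mu_mem = gauss(mem, centers[1], widths[1])
    w = np.outer(mu_cpu, mu_mem).ravel()         # 9 rule firing strengths
    rule_out = coeffs @ np.array([cpu, mem, 1.0])
    return float((w * rule_out).sum() / (w.sum() + 1e-12))  # weighted average

centers = np.array([[0.2, 0.5, 0.8], [0.2, 0.5, 0.8]])  # low / medium / high
widths = np.full((2, 3), 0.15)
coeffs = np.tile([0.6, 0.4, 0.05], (9, 1))              # 9 rules, toy parameters
print(sugeno_predict(cpu=0.72, mem=0.55,
                     centers=centers, widths=widths, coeffs=coeffs))
```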

Citations: 0
Localization of try block and generation of catch block to handle exception using an improved LSTM
Pub Date: 2024-07-20 | DOI: 10.1007/s10586-024-04633-x
Preetesh Purohit, Anuradha Purohit, Vrinda Tokekar

Several contemporary programming languages, including Java, have exception management as a crucial built-in feature. By employing try-catch blocks, it enables developers to handle unusual or unexpected conditions that might arise at runtime. If exception management is neglected or applied improperly, it may result in serious incidents such as equipment failure. With the preceding methodologies, exception handling mechanisms are difficult to implement and computationally expensive. This research introduces an efficient Long Short-Term Memory (LSTM) technique for handling exceptions automatically, which can identify the locations of try blocks and automatically create the catch blocks. A large corpus of Java code is collected from GitHub and split into fragments. For localization of the try block, a Bidirectional LSTM (BiLSTM) is used first as a token-level encoder and then as a statement-level encoder; a Support Vector Machine (SVM) then predicts the try block present in the given source code. For generating a catch block, the BiLSTM is used as an encoder and an LSTM as a decoder, with the SVM used to predict noisy tokens. This encoder-decoder model is trained to minimize its loss functions. The trained model then uses the Black Widow optimization method to forecast the following tokens one by one and generates the entire catch block. The proposed work reaches 85% accuracy for try-block localization and 50% accuracy for catch-block generation. An improved LSTM with an attention mechanism produces an optimal solution compared to the existing techniques, making the proposed method the best choice for handling exceptions.
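The localization stage amounts to per-token binary classification over code tokens with a BiLSTM encoder. A minimal PyTorch sketch of that stage is below; the vocabulary size and layer dimensions are arbitrary, and the statement-level encoder and downstream SVM are omitted.

```python
import torch
import torch.nn as nn

class TryBlockTagger(nn.Module):
    """BiLSTM token-level encoder that scores each code token as
    inside/outside a try block (a sketch of the localization stage)."""

    def __init__(self, vocab_size=10_000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.classify = nn.Linear(2 * hidden, 2)    # in-try / not-in-try

    def forward(self, token_ids):                   # (batch, seq_len)
        h, _ = self.encoder(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        return self.classify(h)                     # per-token class logits

model = TryBlockTagger()
tokens = torch.randint(0, 10_000, (4, 64))          # a batch of token id sequences
logits = model(tokens)
print(logits.shape)                                 # torch.Size([4, 64, 2])
```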

Citations: 0
Causal network construction based on KICA-ECCM for root cause diagnosis of industrial processes
Pub Date: 2024-07-20 | DOI: 10.1007/s10586-024-04663-5
Yayin He, Xiangshun Li

Root cause diagnosis can find the propagation path of faults in a timely manner when a fault occurs, so it is of key significance in the maintenance and fault diagnosis of industrial systems. A commonly used approach to root cause diagnosis is causal analysis. In this work, the Extended Convergent Cross Mapping (ECCM) algorithm, a causal analysis method, is used for root cause diagnosis in industry; however, it has difficulties in dealing with large amounts of steady-state data and in obtaining accurate propagation paths. Therefore, a causal analysis method based on Kernel Independent Component Analysis (KICA) and ECCM is proposed in this study to deal with these problems. First, the KICA algorithm is used to detect faults and obtain the transition-process data. Second, the ECCM algorithm is used to construct causal relationships among variables based on the transition-process data and to build the fault propagation path diagram. Finally, the effectiveness of the proposed KICA-ECCM algorithm is tested on the Tennessee Eastman Process and the Industrial Process Control Test Facility platform. Compared with the ECCM and GC algorithms, the KICA-ECCM algorithm performs better in terms of accuracy and efficiency.
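The core of convergent cross mapping (the base of ECCM) fits in a short sketch: delay-embed one variable, predict the second from distance-weighted nearest neighbors in that shadow manifold, and score the prediction with Pearson correlation. The embedding parameters and toy signals below are illustrative; the KICA fault-detection stage and the "extended" refinements of ECCM are omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def delay_embed(x, dim=3, tau=1):
    """Shadow manifold of x: rows are [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1) * tau - j * tau:
                              (dim - 1) * tau - j * tau + n]
                            for j in range(dim)])

def ccm_score(x, y, dim=3, tau=1):
    """Cross-map skill of estimating y from x's shadow manifold
    (Pearson correlation between y and its neighbor-weighted estimate)."""
    Mx = delay_embed(x, dim, tau)
    y_al = y[(dim - 1) * tau:]                     # align y with embedding rows
    nn = NearestNeighbors(n_neighbors=dim + 2).fit(Mx)
    dist, idx = nn.kneighbors(Mx)
    dist, idx = dist[:, 1:], idx[:, 1:]            # drop each point itself
    w = np.exp(-dist / (dist[:, :1] + 1e-12))      # exponential distance weights
    w /= w.sum(axis=1, keepdims=True)
    y_hat = (w * y_al[idx]).sum(axis=1)            # weighted neighbor average
    return float(np.corrcoef(y_al, y_hat)[0, 1])

t = np.linspace(0, 60, 1_200)
x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
y = np.roll(x, 5)                                  # a lagged copy, as a toy pair
print(ccm_score(x, y))
```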

Citations: 0