
Latest articles from Engineering Applications of Artificial Intelligence

Multi-view deep reciprocal nonnegative matrix factorization
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109508
Multi-view deep matrix factorization has recently gained popularity for extracting high-quality representations from multi-view data, improving processing performance in pattern recognition, data mining, and machine learning. It explores the hierarchical semantics of data by first decomposing the data into basis and representation matrices and then performing a multi-layer decomposition on the representation matrices; however, it ignores the basis matrices, which also contain valuable information about the data. Extracting high-quality bases during the deep representation learning process can facilitate the learning of high-quality representations for multi-view data. To this end, this paper proposes a novel deep nonnegative matrix factorization architecture, named Multi-view Deep Reciprocal Nonnegative Matrix Factorization (MDRNMF), that incorporates high-quality basis extraction, allowing deep representation learning and basis extraction to promote each other. Based on the representations at the top layer, the paper adaptively learns the intrinsic local similarities of the data within each view to capture view-specific information. In addition, to explore high-order data consistency across views, the paper introduces a Schatten p-norm-based low-rank regularization on the similarity tensor stacked from the view-specific similarity matrices. In this way, the proposed method can effectively explore and leverage the view-specific and consistent information of multi-view data simultaneously. Finally, extensive experiments demonstrate the superiority of the proposed model over several state-of-the-art methods.
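The layer-wise decomposition described above can be sketched with a greedy deep NMF, in which each layer further factorizes the representation matrix from the layer above. This is only an illustrative sketch (plain multiplicative-update NMF, with a hypothetical `deep_nmf` helper), not the authors' MDRNMF, which additionally couples basis extraction across views and adds the Schatten p-norm regularization.

```python
import numpy as np

def nmf(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Single-layer NMF X ~ Z @ H via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    Z = rng.random((n, rank)) + eps   # nonnegative basis matrix
    H = rng.random((rank, m)) + eps   # nonnegative representation matrix
    for _ in range(n_iter):
        H *= (Z.T @ X) / (Z.T @ Z @ H + eps)
        Z *= (X @ H.T) / (Z @ H @ H.T + eps)
    return Z, H

def deep_nmf(X, ranks):
    """Greedy multi-layer factorization X ~ Z1 @ Z2 @ ... @ H_L:
    each layer refactorizes the representation from the layer above."""
    bases, H = [], X
    for r in ranks:
        Z, H = nmf(H, r)
        bases.append(Z)
    return bases, H

# Toy nonnegative data matrix standing in for one view.
X = np.abs(np.random.default_rng(1).standard_normal((40, 25)))
bases, H = deep_nmf(X, ranks=[12, 6])
recon = bases[0] @ bases[1] @ H
rel_err = np.linalg.norm(X - recon) / np.linalg.norm(X)
```

Multiplicative updates preserve nonnegativity of both factors, which is what makes the per-layer bases interpretable in the first place.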
Citations: 0
Enhancing particulate matter risk assessment with novel machine learning-driven toxicity threshold prediction
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109531
Airborne particulate matter (PM) poses significant health risks, necessitating accurate toxicity threshold determination for effective risk assessment. This study introduces a novel machine-learning (ML) approach to predict PM toxicity thresholds and identify the key physico-chemical and exposure characteristics. Five machine learning algorithms — logistic regression, support vector classifier, decision tree, random forest, and extreme gradient boosting — were employed to develop predictive models using a comprehensive dataset from existing studies. We developed models using the initial dataset and a class weight approach to address data imbalance. For the imbalanced data, the Random Forest classifier outperformed others with 87% accuracy, 81% recall, and the fewest false negatives (23). In the class weight approach, the Support Vector Classifier minimized false negatives (21), while the Random Forest model achieved superior overall performance with 86% accuracy, 80% recall, and an F1-score of 82%. Furthermore, eXplainable Artificial Intelligence (XAI) techniques, specifically SHAP (SHapley Additive exPlanations) values, were utilized to quantify feature contributions to predictions, offering insights beyond traditional laboratory approaches. This study represents the first application of machine learning for predicting PM toxicity thresholds, providing a robust tool for health risk assessment. The proposed methodology offers a time- and cost-effective alternative to classical laboratory tests, potentially revolutionizing PM toxicity threshold determination in scientific and epidemiological research. This innovative approach has significant implications for shaping regulatory policies and designing targeted interventions to mitigate health risks associated with airborne PM.
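The class-weight strategy and the reported metrics can be made concrete. The snippet below shows the common "balanced" weighting heuristic (w_c = n / (k * n_c)) and a plain recall/F1 computation; it is a minimal illustration with hypothetical helper names, not the paper's pipeline or dataset.

```python
from collections import Counter

def balanced_class_weights(labels):
    """'Balanced' heuristic: weight each class c by n_samples / (n_classes * n_c),
    so the minority class contributes proportionally more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

def recall_f1(y_true, y_pred, positive=1):
    """Recall and F1 for the positive (toxic) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f1

# Imbalanced toy labels: 15% positive ("toxic") samples.
weights = balanced_class_weights([1] * 15 + [0] * 85)
```

With this heuristic the rare class receives a weight well above 1 and the majority class a weight below 1, which is one standard way to reduce false negatives on imbalanced data.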
Citations: 0
A hybrid Convolutional Autoencoder training algorithm for unsupervised bearing health indicator construction
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109477
Conventional Deep Learning (DL) methods for bearing health indicator (HI) construction adopt supervised approaches, requiring expert knowledge of the component degradation trend. Since bearings experience various failure modes, assuming a particular degradation trend for the HI is suboptimal. Unsupervised DL methods are scarce in this domain. They generally maximise the monotonicity of the HI built in the middle layer of an Autoencoder (AE) trained to reconstruct run-to-failure signals. The backpropagation (BP) training algorithm cannot perform this maximisation directly, since the monotonicity of HI subsections corresponding to input sample batches does not guarantee the monotonicity of the whole HI. Existing methods therefore achieve it by searching AE hyperparameters so that BP training to minimise the reconstruction error also yields a highly monotonic HI in the middle layer. This requires expensive search algorithms in which the AE is trained numerous times under various hyperparameter settings, rendering these methods impractical for large datasets. To address this limitation, a small Convolutional Autoencoder (CAE) architecture and a hybrid training algorithm combining Particle Swarm Optimisation and BP are proposed in this work, enabling simultaneous maximisation of the HI monotonicity and minimisation of the reconstruction error. As a result, the HI is built by training the CAE only once. Results from three case studies demonstrate this method's lower computational burden compared to other unsupervised DL methods. Furthermore, the CAE-based HIs outperform the indicators built by equivalent and significantly larger models trained with a BP-based supervised approach, leading to 85% lower remaining useful life prediction errors.
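The quantity an unsupervised HI method maximises can be written down directly. Below is one standard monotonicity measure used in prognostics (the normalised difference between positive and negative increments of the HI sequence); in a hybrid scheme like the one described, it could serve as part of the swarm-optimisation fitness. This is a sketch under that assumption, not the paper's exact objective.

```python
def monotonicity(hi):
    """Monotonicity of a health-indicator sequence, in [0, 1]:
    |#positive increments - #negative increments| / (len(hi) - 1)."""
    diffs = [b - a for a, b in zip(hi, hi[1:])]
    if not diffs:
        return 0.0
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    return abs(pos - neg) / len(diffs)
```

A strictly increasing (or strictly decreasing) HI scores 1, while a saw-tooth sequence scores near 0, which is why batch-wise BP alone cannot optimise it: the score depends on the whole run-to-failure sequence.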
Citations: 0
Multi-stage guided code generation for Large Language Models
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109491
Currently, although Large Language Models (LLMs) have shown significant performance in the field of code generation, their effectiveness in handling complex programming tasks remains limited. This is primarily due to the substantial distance between the problem description and the correct code, making it difficult to ensure accuracy when directly generating code. Human programmers, when faced with a complex programming problem, usually use multiple stages to solve it in order to reduce the difficulty of development. First, they analyze the problem and think about a solution plan, then they design a code architecture based on that plan, and finally they finish writing the detailed code. Based on this, we propose a multi-stage guided code generation strategy that aims to gradually shorten the transformation distance between the problem description and the correct code, thus improving the accuracy of code generation. Specifically, the approach consists of three stages: planning, design and implementation. In the planning phase, the Large Language Model (LLM) generates a solution plan based on the problem description; in the design phase, the code architecture is further designed based on the solution plan; and in the implementation phase, the previous solution plan and code architecture are utilized to guide the LLM in generating the final code. Additionally, we found that existing competition-level code generation benchmarks may overlap with the training data of the Chat Generative Pre-trained Transformer (ChatGPT), posing a risk of data leakage. To validate the above findings and circumvent this risk, we created a competition-level code generation dataset named CodeC, which contains data never used for training ChatGPT. Experimental results show that our method outperforms the most advanced baselines. On the CodeC dataset, our approach achieves a 34.7% relative improvement on the Pass@1 metric compared to the direct generation method of ChatGPT. 
We have published the relevant dataset at https://github.com/hcode666/MSG for further academic research and validation.
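The three-stage planning/design/implementation flow can be sketched as a simple prompt pipeline. `call_llm` below is a stub standing in for whatever model client is used, and the prompt wording is illustrative, not the paper's.

```python
def call_llm(prompt: str) -> str:
    """Stub for a chat-model API call; swap in a real client here."""
    return f"[model output for: {prompt.splitlines()[0][:50]}]"

def multi_stage_generate(problem: str) -> dict:
    # Stage 1: planning - reason about a solution plan for the problem.
    plan = call_llm(f"Devise a step-by-step solution plan for:\n{problem}")
    # Stage 2: design - turn the plan into a code architecture.
    design = call_llm(f"Design a code architecture (functions, data flow) for:\n{plan}")
    # Stage 3: implementation - generate the final code guided by both artifacts,
    # shortening the gap between problem description and correct code.
    code = call_llm(
        "Implement the final code, following the plan and architecture.\n"
        f"Plan:\n{plan}\nArchitecture:\n{design}\nProblem:\n{problem}"
    )
    return {"plan": plan, "design": design, "code": code}
```

Each stage conditions on the outputs of the previous ones, so the model never has to jump directly from problem statement to finished code.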
Citations: 0
Crude oil price forecasting with multivariate selection, machine learning, and a nonlinear combination strategy
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109510
Crude oil price forecasting has long been a research hotspot in energy economics, playing a crucial role in energy supply and economic development. However, the large number of influencing factors poses serious challenges, and existing research leaves room for improvement in terms of an integrated roadmap that combines impact-factor analysis with predictive modelling. This study examines the impact of financial market factors on the crude oil market and proposes a nonlinear combined forecasting framework based on common variables. Four types of daily exogenous financial market variables are introduced, covering ten indicators: commodity prices, exchange rates, stock market indices, and macroeconomic indicators. First, various variable selection methods generate different variable subsets, providing diversity and reliability. Next, the variables common to these subsets are selected as key features for the subsequent models. Then, four models predict crude oil prices using the common features as inputs, yielding one prediction per model. Finally, the nonlinear mechanism of deep learning is introduced to combine the above single-model predictions. Experiments conducted on the West Texas Intermediate and Brent oil price datasets reveal that commodity and foreign exchange factors in financial markets are critical long-term determinants of crude oil market volatility. The proposed model demonstrates strong performance in terms of average absolute percentage error, recorded at 2.9962% and 2.4314% respectively, indicating high forecasting accuracy and robustness. This forecasting framework offers an effective methodology for predicting crude oil prices and enhances understanding of the crude oil market.
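The "common variables" step — keeping only the features that every selection method retains — amounts to a set intersection over the method-specific subsets. A minimal sketch with made-up selector outputs and feature names:

```python
def common_features(*selections):
    """Intersect the feature subsets produced by different selection methods,
    returning the variables every method agreed on, in sorted order."""
    common = set(selections[0])
    for s in selections[1:]:
        common &= set(s)
    return sorted(common)

# Hypothetical outputs of three selection methods over candidate indicators.
lasso_pick = ["gold", "usd_index", "sp500", "cpi"]
mi_pick = ["gold", "usd_index", "wti_lag", "sp500"]
rfe_pick = ["usd_index", "gold", "sp500", "bond_yield"]
keys = common_features(lasso_pick, mi_pick, rfe_pick)
```

Only the features surviving every selector would then feed the four downstream forecasting models.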
Citations: 0
Data-driven drift detection and diagnosis framework for predictive maintenance of heterogeneous production processes: Application to a multiple tapping process
IF 7.5, CAS Tier 2 (Computer Science), Q1 AUTOMATION & CONTROL SYSTEMS, Pub Date: 2024-10-30, DOI: 10.1016/j.engappai.2024.109552
The rise of Industry 4.0 technologies has revolutionized industries, enabled seamless data access, and fostered data-driven methodologies for improving key production processes such as maintenance. Predictive maintenance has notably advanced by aligning decisions with real-time system degradation. However, data-driven approaches confront challenges such as data availability and complexity, particularly at the system level. Most approaches address component-level issues, and system complexity exacerbates these problems. In the realm of predictive maintenance, this paper proposes a framework for drift detection and diagnosis in heterogeneous manufacturing processes. The originality of the paper is twofold. First, it proposes algorithms for detecting drifts and diagnosing heterogeneous processes. Second, the proposed framework leverages several machine learning techniques (e.g., novelty detection, ensemble learning, and continuous learning) and algorithms (e.g., K-Nearest Neighbors, Support Vector Machine, Random Forest, and Long Short-Term Memory) to enable concrete implementation and scalability of drift detection and diagnostics on industrial processes. The effectiveness of the proposed framework is validated through metrics such as accuracy, precision, recall, F1-score, and variance. Furthermore, the paper demonstrates the relevance of combining machine learning and deep learning algorithms in a production process of SEW USOCOME, a French manufacturer of electric gearmotors and a market leader. The results indicate a satisfactory level of accuracy in detecting and diagnosing drifts, and the adaptive learning loop effectively identifies new drift and nominal profiles, thereby validating the robustness of the framework in real industrial settings.
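As a toy illustration of the detection side only (not the paper's algorithms), a drift monitor can compare a rolling window of a process signal against a healthy reference distribution and flag the moment the window mean drifts past a z-score threshold:

```python
from statistics import mean, stdev

def detect_drift(reference, stream, threshold=3.0, window=5):
    """Return the index in `stream` at which the rolling-window mean first
    deviates from the reference mean by more than `threshold` reference
    standard deviations, or None if no drift is flagged."""
    mu, sigma = mean(reference), stdev(reference)
    sigma = sigma or 1e-12  # guard against a constant reference signal
    for i in range(window, len(stream) + 1):
        w = stream[i - window:i]
        if abs(mean(w) - mu) / sigma > threshold:
            return i - 1
    return None
```

Real frameworks replace this univariate z-test with learned novelty detectors, but the contract is the same: a reference of nominal behaviour, a live stream, and an index at which drift is declared.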
Citations: 0
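The drift-detection step described in the abstract above — flagging samples that depart from a nominal operating profile using a K-Nearest Neighbors novelty score — can be sketched in a few lines of standard-library Python. This is an illustrative sketch only, not the paper's implementation; the feature values, `k`, and `threshold` are invented for the example, and the real framework combines this kind of detector with SVM, Random Forest, and LSTM models.

```python
import math

def knn_novelty_score(point, reference, k=3):
    """Mean Euclidean distance from `point` to its k nearest
    neighbours in the nominal `reference` window."""
    dists = sorted(math.dist(point, r) for r in reference)
    return sum(dists[:k]) / k

def detect_drift(stream, reference, k=3, threshold=1.0):
    """Return the indices of stream samples whose novelty score
    exceeds the threshold, i.e. samples drifting away from the
    nominal profile."""
    return [i for i, x in enumerate(stream)
            if knn_novelty_score(x, reference, k) > threshold]

# Nominal profile (e.g. torque/temperature features of a tapping spindle)
nominal = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.05), (1.05, 0.95)]
# Incoming stream: first two samples nominal, last two drifted
stream = [(1.02, 1.0), (0.98, 1.03), (2.5, 2.4), (2.6, 2.6)]
print(detect_drift(stream, nominal, k=3, threshold=0.5))  # → [2, 3]
```

In an adaptive learning loop of the kind the paper describes, samples confirmed as a new nominal profile would be appended to `reference` so the detector tracks legitimate process changes.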
Seafloor topography inversion from multi-source marine gravity data using multi-channel convolutional neural network
IF 7.5 CAS Region 2, Computer Science, Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-10-30 DOI: 10.1016/j.engappai.2024.109567
Seafloor topography is extremely important for marine scientific surveys and research. Current physical methods have difficulties in integrating multi-source marine gravity data and recovering non-linear features. To overcome this challenge, a multi-channel convolutional neural network (MCCNN) is employed to establish the seafloor topography model. Firstly, the MCCNN model is trained using the input data from the 64 × 64 grid points centered around the control points. The input data includes the differences in position between calculation points and surrounding grid points, gravity anomaly, vertical gravity gradient, east component of deflection of the vertical and north component of deflection of the vertical, as well as the reference terrain information. Then, the data from the 64 × 64 grid points centered around the predicted points is inputted into the trained MCCNN model to obtain the predicted depth at those points. Finally, the predicted depth is utilized to establish the seafloor topography model of the study area. This method is tested in a local area located in the southern part of the Emperor Seamount Chain in the Northwest Pacific (31°N–37°N, 169°E–175°E). The root mean square of the differences between the resultant seafloor topography model and ship-borne bathymetric values at the check points is 88.48 m. This performance is commendable compared to existing models.
{"title":"Seafloor topography inversion from multi-source marine gravity data using multi-channel convolutional neural network","authors":"","doi":"10.1016/j.engappai.2024.109567","DOIUrl":"10.1016/j.engappai.2024.109567","url":null,"abstract":"<div><div>Seafloor topography is extremely important for marine scientific surveys and research. Current physical methods have difficulties in integrating multi-source marine gravity data and recovering non-linear features. To overcome this challenge, a multi-channel convolutional neural network (MCCNN) is employed to establish the seafloor topography model. Firstly, the MCCNN model is trained using the input data from the 64 × 64 grid points centered around the control points. The input data includes the differences in position between calculation points and surrounding grid points, gravity anomaly, vertical gravity gradient, east component of deflection of the vertical and north component of deflection of the vertical, as well as the reference terrain information. Then, the data from the 64 × 64 grid points centered around the predicted points is inputted into the trained MCCNN model to obtain the predicted depth at those points. Finally, the predicted depth is utilized to establish the seafloor topography model of the study area. This method is tested in a local area located in the southern part of the Emperor Seamount Chain in the Northwest Pacific (31°N −37°N, 169°E −175°E). The root mean square of the differences between the resultant seafloor topography model and ship-borne bathymetric values at the check points is 88.48 m. 
This performance is commendable compared to existing models.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
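The input-assembly step described above — cropping a window of grid points around each point of interest and stacking one channel per gravity-derived quantity — can be sketched in standard-library Python. A 3 × 3 window and toy grids stand in for the paper's 64 × 64 patches; the function names and grid contents are illustrative assumptions, not the authors' code.

```python
def extract_patch(grid, row, col, size=3):
    """Crop a size x size window centred on (row, col) from a 2-D grid
    stored as a list of row lists."""
    half = size // 2
    return [r[col - half: col + half + 1]
            for r in grid[row - half: row + half + 1]]

def stack_channels(grids, row, col, size=3):
    """Stack one patch per geophysical grid (gravity anomaly, vertical
    gradient, deflection components, reference terrain, ...) into a
    channels-first input sample for a multi-channel CNN."""
    return [extract_patch(g, row, col, size) for g in grids]

# Two toy 5x5 grids standing in for gravity anomaly and reference terrain
grav = [[r * 10 + c for c in range(5)] for r in range(5)]
terr = [[r + c for c in range(5)] for r in range(5)]
sample = stack_channels([grav, terr], row=2, col=2, size=3)
print(len(sample), len(sample[0]), len(sample[0][0]))  # → 2 3 3
```

Each training sample for the control points, and each prediction sample for the target points, would be assembled this way before being fed to the network.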
Integrated metaheuristic approaches for estimation of fracture porosity derived from fullbore formation micro-imager logs: Reaping the benefits of stand-alone and ensemble machine learning models
IF 7.5 CAS Region 2, Computer Science, Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-10-30 DOI: 10.1016/j.engappai.2024.109545
Fracture porosity is one of the most effective parameters for reservoir productivity and recovery efficiency. This study aims to predict and improve the accuracy of fracture porosity estimation through the application of advanced machine learning (ML) algorithms. A novel approach was introduced for the first time to estimate fracture porosity by reaping the benefits of petrophysical and fullbore formation micro-imager (FMI) data based on employing various stand-alone, ensemble, optimisation and multi-variable linear regression (MVLR) algorithms. This study proposes a ground-breaking two-step committee machine (CM) model. Petrophysical data containing compressional sonic-log travel time, deep resistivity, neutron porosity and bulk density (inputs), along with FMI-derived fracture porosity values (outputs), were employed. Nine stand-alone ML algorithms, including back-propagation neural network, Takagi and Sugeno fuzzy system, adaptive neuro-fuzzy inference system, decision tree, radial basis function, extreme gradient boosting, least-squares boosting, least squares support vector regression and k-nearest neighbours, were trained for initial estimation. To improve the efficacy of stand-alone algorithms, their outputs were combined in CM structures using optimisation algorithms. This integration was applied through five optimisation algorithms, including genetic algorithm, ant colony, particle swarm, covariance matrix adaptation evolution strategy (CMA-ES) and Coyote optimisation algorithm. Considering the lowest error, the CM with CMA-ES showed superior performance. Subsequently, MVLR was employed to improve the CMs further. Employing MVLR to combine the CMs yielded a 57.85% decline in mean squared error and a 4.502% improvement in the correlation coefficient compared to the stand-alone algorithms. The results of the benchmark analysis validated the efficacy of this approach.
{"title":"Integrated metaheuristic approaches for estimation of fracture porosity derived from fullbore formation micro-imager logs: Reaping the benefits of stand-alone and ensemble machine learning models","authors":"","doi":"10.1016/j.engappai.2024.109545","DOIUrl":"10.1016/j.engappai.2024.109545","url":null,"abstract":"<div><div>Fracture porosity is one of the most effective parameters for reservoir productivity and recovery efficiency. This study aims to predict and improve the accuracy of fracture porosity estimation through the application of advanced machine learning (ML) algorithms. A novel approach was introduced for the first time to estimate fracture porosity by reaping the benefits of petrophysical and fullbore formation micro-imager (FMI) data based on employing various stand-alone, ensemble, optimisation and multi-variable linear regression (MVLR) algorithms. This study proposes a ground-breaking two-step committee machine (CM) model. Petrophysical data containing compressional sonic-log travel time, deep resistivity, neutron porosity and bulk density (inputs), along with FMI-derived fracture porosity values (outputs), were employed. Nine stand-alone ML algorithms, including back-propagation neural network, Takagi and Sugeno fuzzy system, adaptive neuro-fuzzy inference system, decision tree, radial basis function, extreme gradient boosting, least-squares boosting, least squares support vector regression and k-nearest neighbours, were trained for initial estimation. To improve the efficacy of stand-alone algorithms, their outputs were combined in CM structures using optimisation algorithms. This integration was applied through five optimisation algorithms, including genetic algorithm, ant colony, particle swarm, covariance matrix adaptation evolution strategy (CMA-ES) and Coyote optimisation algorithm. Considering the lowest error, the CM with CMA-ES showed superior performance. Subsequently, MVLR was employed to improve the CMs further. 
Employing MVLR to combine the CMs yielded a 57.85% decline in mean squared error and a 4.502% improvement in the correlation coefficient compared to the stand-alone algorithms. The results of the benchmark analysis validated the efficacy of this approach.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
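The committee-machine idea described above — combining the outputs of stand-alone predictors with weights chosen by an optimiser — can be illustrated with a toy standard-library Python sketch. Here a brute-force grid search merely stands in for GA/PSO/CMA-ES, the two "models" are invented biased porosity predictors, and a single convex weight replaces the paper's full multi-model CM; none of this is the authors' implementation.

```python
def mse(pred, target):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def fit_committee_weights(model_outputs, target, step=0.01):
    """Search for the convex weights (w, 1 - w) combining two
    stand-alone predictors so the committee output minimises MSE.
    The exhaustive scan stands in for a metaheuristic optimiser."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0:
        combined = [w * a + (1 - w) * b for a, b in zip(*model_outputs)]
        err = mse(combined, target)
        if err < best_err:
            best_w, best_err = w, err
        w = round(w + step, 10)
    return best_w

# One model overshoots fracture porosity, the other undershoots it
target = [0.10, 0.20, 0.30, 0.40]
m1 = [0.12, 0.22, 0.32, 0.42]   # biased +0.02
m2 = [0.06, 0.16, 0.26, 0.36]   # biased -0.04
w = fit_committee_weights((m1, m2), target)
print(w)  # → 0.67 (combined bias 0.02w - 0.04(1-w) ≈ 0)
```

The paper's second MVLR step plays an analogous role one level up, regressing the target on the outputs of the optimised committee machines themselves.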
A graph convolutional neural network model based on fused multi-subgraph as input and fused feature information as output
IF 7.5 CAS Region 2, Computer Science, Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-10-30 DOI: 10.1016/j.engappai.2024.109542
The graph convolution neural network (GCN)-based node classification model tackles the challenge of classifying nodes in graph data through learned feature representations. However, most existing graph neural networks primarily focus on the same type of edges, which might not accurately reflect the intricate real-world graph structure. This paper introduces a novel graph neural network model, MF-GCN, which integrates subgraphs with various edge types as input and combines feature information from each graph convolutional neural network layer to produce the final output. This model learns node feature representations by separately feeding subgraphs with different edge types into the graph convolutional layer. It then computes the weight vectors for fusing various edge type subgraphs based on the learned node features. Additionally, to efficiently extract feature information, the outputs of each graph convolution layer, without an activation function, are weighted and summed to obtain the final node features. This approach resolves the challenges of determining fusion weights and effectively extracting feature information during subgraph fusion. Experimental results show that the proposed model significantly improves performance on all three datasets, highlighting its effectiveness in node representation learning tasks.
{"title":"A graph convolutional neural network model based on fused multi-subgraph as input and fused feature information as output","authors":"","doi":"10.1016/j.engappai.2024.109542","DOIUrl":"10.1016/j.engappai.2024.109542","url":null,"abstract":"<div><div>The graph convolution neural network (GCN)-based node classification model tackles the challenge of classifying nodes in graph data through learned feature representations. However, most existing graph neural networks primarily focus on the same type of edges, which might not accurately reflect the intricate real-world graph structure. This paper introduces a novel graph neural network model, MF-GCN, which integrates subgraphs with various edge types as input and combines feature information from each graph convolutional neural network layer to produce the final output. This model learns node feature representations by separately feeding subgraphs with different edge types into the graph convolutional layer. It then computes the weight vectors for fusing various edge type subgraphs based on the learned node features. Additionally, to efficiently extract feature information, the outputs of each graph convolution layer, without an activation function, are weighted and summed to obtain the final node features. This approach resolves the challenges of determining fusion weights and effectively extracting feature information during subgraph fusion. 
Experimental results show that the proposed model significantly improves performance on all three datasets, highlighting its effectiveness in node representation learning tasks.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
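A single graph-convolution propagation step of the kind such models stack — neighbour aggregation with self-loops followed by a linear transform, with no activation so that per-layer outputs can later be weighted and summed — can be sketched in standard-library Python. The row-normalised (mean) aggregation and the toy path graph below are illustrative assumptions, not the MF-GCN paper's exact propagation rule.

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: mean-aggregate each node's
    features with its neighbours' (self-loops added), then apply
    the linear weight matrix. No activation function."""
    n = len(adj)
    # Add self-loops: A_hat = A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Row-normalised aggregation: (D^-1 A_hat) @ feats
    agg = [[sum(a[i][k] * feats[k][j] for k in range(n)) / deg[i]
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear transform: agg @ weight
    return [[sum(agg[i][k] * weight[k][j] for k in range(len(weight)))
             for j in range(len(weight[0]))] for i in range(n)]

# 3-node path graph 0-1-2, 2-D node features, identity weight matrix
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
eye = [[1.0, 0.0], [0.0, 1.0]]
print(gcn_layer(adj, feats, eye))  # node 1 averages all three nodes
```

In the fused-subgraph setting, one such layer would run per edge-type subgraph (each with its own `adj`), and the per-layer `agg @ weight` outputs would be combined with learned fusion weights.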
Power transformer health index and life span assessment: A comprehensive review of conventional and machine learning based approaches
IF 7.5 CAS Region 2, Computer Science, Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-10-29 DOI: 10.1016/j.engappai.2024.109474
Power transformers play a critical role within the electrical power system, making their health assessment and the prediction of their remaining lifespan paramount for the purpose of ensuring efficient operation and facilitating effective maintenance planning. This paper undertakes a comprehensive examination of existent literature, with a primary focus on both conventional and cutting-edge techniques employed within this domain. The merits and demerits of recent methodologies and techniques are subjected to meticulous scrutiny and explication. Furthermore, this paper expounds upon intelligent fault diagnosis methodologies and delves into the most widely utilized intelligent algorithms for the assessment of transformer conditions. Diverse Artificial Intelligence (AI) approaches, including Artificial Neural Networks (ANN) and Convolutional Neural Network (CNN), Support Vector Machine (SVM), Random Forest (RF), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), are elucidated, offering pragmatic solutions for enhancing the performance of transformer fault diagnosis. The amalgamation of multiple AI methodologies and the exploration of time-series analysis further contribute to the augmentation of diagnostic precision and the early detection of faults in transformers. By furnishing a comprehensive panorama of AI applications in the field of transformer fault diagnosis, this study lays the groundwork for future research endeavors and the progression of this critical area of study.
{"title":"Power transformer health index and life span assessment: A comprehensive review of conventional and machine learning based approaches","authors":"","doi":"10.1016/j.engappai.2024.109474","DOIUrl":"10.1016/j.engappai.2024.109474","url":null,"abstract":"<div><div>Power transformers play a critical role within the electrical power system, making their health assessment and the prediction of their remaining lifespan paramount for the purpose of ensuring efficient operation and facilitating effective maintenance planning. This paper undertakes a comprehensive examination of existent literature, with a primary focus on both conventional and cutting-edge techniques employed within this domain. The merits and demerits of recent methodologies and techniques are subjected to meticulous scrutiny and explication. Furthermore, this paper expounds upon intelligent fault diagnosis methodologies and delves into the most widely utilized intelligent algorithms for the assessment of transformer conditions. Diverse Artificial Intelligence (AI) approaches, including Artificial Neural Networks (ANN) and Convolutional Neural Network (CNN), Support Vector Machine (SVM), Random Forest (RF), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), are elucidated offering pragmatic solutions for enhancing the performance of transformer fault diagnosis. The amalgamation of multiple AI methodologies and the exploration of time-series analysis further contribute to the augmentation of diagnostic precision and the early detection of faults in transformers. 
By furnishing a comprehensive panorama of AI applications in the field of transformer fault diagnosis, this study lays the groundwork for future research endeavors and the progression of this critical area of study.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
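Many of the conventional health-index schemes surveyed in reviews like the one above reduce to a weighted aggregation of scored condition parameters. A minimal standard-library Python sketch follows; the parameter names, scores, and weights are all hypothetical placeholders, not values from the paper.

```python
def health_index(scores, weights):
    """Weighted-average health index in [0, 100]: each condition
    parameter (dissolved-gas analysis, oil quality, furan content,
    load history, ...) is scored 0-100 and weighted by its assumed
    importance to overall transformer condition."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

# Hypothetical parameter scores and weights for one transformer
scores = {"dga": 80, "oil_quality": 60, "furan": 90, "load_history": 70}
weights = {"dga": 4, "oil_quality": 3, "furan": 2, "load_history": 1}
hi = health_index(scores, weights)
print(round(hi, 1))  # → 75.0
```

The ML approaches the review covers (ANN, SVM, RF) typically replace the fixed weights with a model learned from historical condition and failure data.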