Code smells are suboptimal code structures that increase software maintenance costs and are challenging to detect manually. Researchers have explored automatic code smell detection using machine learning (ML) methods, which rely heavily on static code metrics or source code representations. Static code metrics often rely on structural attributes, such as lines of code, cyclomatic complexity, or comment density. However, these metrics do not always reflect true code complexity and provide only quantitative insights without inherently detecting poor coding practices. In contrast, representations such as abstract syntax trees (ASTs) focus on the structural and syntactic elements of code, capturing hierarchical and contextual relationships within the source code. This enables precise identification of code structures such as loops, function calls, and conditionals, which are essential for detecting code smells. This article introduces code smell detection through tree-based abstract representation (COSTAR), a source code representation technique that uses ASTs to uniquely represent each source code instance. COSTAR captures the hierarchical structure of the source code by extracting all paths from the root to individual nodes within the AST. Using a pretrained sentence bidirectional encoder representations from transformers (Sentence-BERT) embedding model, COSTAR generates a vector for each extracted path; averaging these vectors then yields a precise and comprehensive representation of the source code. Extensive experiments were conducted to validate COSTAR's performance using various ML techniques on four benchmark MLCQ code smell datasets: Data Class, God Class (Blob), Feature Envy, and Long Method. Several performance metrics were used to evaluate the models. The experimental results indicate that COSTAR improves code smell detection performance compared to existing methods, with F1-score gains ranging from 0.03 (Long Method) to 0.19 (Feature Envy). Furthermore, a comparison with state-of-the-art methods demonstrated that COSTAR outperforms approaches such as Code2Vec and CuBERT in code smell detection.
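To make the pipeline above concrete, the following is a minimal Python sketch of a COSTAR-style representation, not the authors' implementation: Python's built-in ast module stands in for the Java AST parser used on MLCQ code, the sentence-transformers package supplies a pretrained sentence embedding model, and the snippet/label variables are hypothetical toy data.

    # Minimal COSTAR-style sketch (assumptions noted in the text above).
    import ast

    import numpy as np
    from sentence_transformers import SentenceTransformer   # pretrained sentence embedding model
    from sklearn.ensemble import RandomForestClassifier

    def root_to_node_paths(source):
        """Return every root-to-node path of the AST as a space-separated string of node types."""
        paths = []

        def walk(node, prefix):
            prefix = prefix + [type(node).__name__]
            paths.append(" ".join(prefix))
            for child in ast.iter_child_nodes(node):
                walk(child, prefix)

        walk(ast.parse(source), [])
        return paths

    def costar_vector(source, encoder):
        """Embed each path and mean-pool the path vectors into one fixed-length code vector."""
        return encoder.encode(root_to_node_paths(source)).mean(axis=0)

    # Hypothetical toy data standing in for labeled MLCQ instances (0 = clean, 1 = smelly).
    code_snippets = ["def f(x):\n    return x + 1",
                     "def g(y):\n    y = y * 2\n    print(y)\n    return y"]
    smell_labels = [0, 1]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")        # any pretrained sentence encoder
    X = np.stack([costar_vector(src, encoder) for src in code_snippets])
    clf = RandomForestClassifier().fit(X, smell_labels)

In this sketch any downstream ML classifier can consume the mean-pooled path vectors; the random forest is only one of the techniques the abstract mentions evaluating.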
{"title":"COSTAR: Software Code Smell Detection Through Tree-Based Abstract Representation","authors":"Praveen Singh Thakur;Mahipal Jadeja;Satyendra Singh Chouhan;Santosh Singh Rathore","doi":"10.1109/TR.2025.3648404","DOIUrl":"https://doi.org/10.1109/TR.2025.3648404","url":null,"abstract":"Code smells are suboptimal code structures that increase software maintenance costs and are challenging to detect manually. Researchers have explored automatic code smell detection using machine learning (ML) methods, which rely heavily on static code metrics or source code representation. Static code metrics often rely on structural attributes, such as lines of code, cyclomatic complexity, or comment density. However, these metrics do not always reflect true code complexity and provide only quantitative insights without inherently detecting poor coding practices. In contrast, representations such as abstract syntax trees (ASTs) focus on the structural and syntactic elements of code, capturing hierarchical and contextual relationships within the source code. This enables precise identification of code structures such as loops, function calls, and conditionals, which are essential for detecting code smells. This article introduces code smell detection through tree-based abstract representation (COSTAR), a source code representation technique using ASTs to uniquely represent each source code instance. COSTAR captures the hierarchical structure of the source code by extracting all paths from the root to individual nodes within the AST. By employing a pretrained sentence bidirectional encoder representations from transformers embedding model, COSTAR generates vectors for each extracted path. The subsequent calculation of the mean of these vectors yields a precise and comprehensive source code representation. Extensive experiments were conducted to validate COSTAR's performance using various ML techniques on four benchmark MLCQ code smell datasets: Data Class, God Class (Blob), Feature Envy, and Long Method. Various performance metrics have been employed to evaluate the model's performance. The experimental results indicate that COSTAR enhances the performance of the code smell detection model compared to existing methods. An improvement in the F1-score ranging from 0.03 (Long Method) to 0.19 (Feature Envy) was observed. Furthermore, a comparison of COSTAR with state-of-the-art methods demonstrated that it outperformed approaches such as Code2Vec and CuBERT in code smell detection.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"581-595"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In industrial settings, the health state of a product is often difficult to observe directly. Instead, it is typically inferred from noisy degradation data that are related to the system’s operational condition. However, existing methods commonly neglect parameter uncertainty and lack the ability to perform real-time state estimation. To address these challenges, this article proposes a Bayesian inference framework for accurate online identification of system degradation states. Specifically, a Wiener process model with measurement noise is developed, and prior distributions are introduced to capture parameter uncertainty. In the offline training stage, historical measurement data are utilized to approximate the joint posterior distribution of the latent degradation states and model parameters via variational Bayesian methods. In the online stage, a state-space formulation is adopted to dynamically update the posterior distribution using real-time observations, enabling dynamic estimation of the degradation state. The proposed approach significantly reduces both storage and computational costs. Numerical simulations and real-world case studies demonstrate that the proposed method achieves superior performance in terms of both accuracy and efficiency.
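As a concrete illustration of the online stage described above, here is a minimal Python sketch: a scalar Kalman-style filter for a Wiener degradation process observed with measurement noise. The drift, diffusion, and noise variances are assumed to come from an offline training stage (the article estimates them with variational Bayes, which is not reproduced here), and all numbers are illustrative.

    # Online estimation of a latent Wiener degradation state from noisy observations.
    import numpy as np

    def online_degradation_filter(y, dt, mu, sigma2, tau2, x0=0.0, p0=1.0):
        """Recursively estimate the latent degradation state from the noisy observations y."""
        x_hat, p = x0, p0
        estimates = []
        for yk in y:
            # Predict: x_k = x_{k-1} + mu*dt + process noise with variance sigma2*dt.
            x_pred = x_hat + mu * dt
            p_pred = p + sigma2 * dt
            # Update with the noisy observation y_k = x_k + e_k, Var(e_k) = tau2.
            k_gain = p_pred / (p_pred + tau2)
            x_hat = x_pred + k_gain * (yk - x_pred)
            p = (1.0 - k_gain) * p_pred
            estimates.append((x_hat, p))
        return estimates

    # Simulate a degradation path (illustrative parameters) and recover its latent state online.
    rng = np.random.default_rng(0)
    dt, mu, sigma2, tau2 = 1.0, 0.5, 0.2, 0.3
    x = np.cumsum(mu * dt + np.sqrt(sigma2 * dt) * rng.standard_normal(50))
    y = x + np.sqrt(tau2) * rng.standard_normal(50)
    est = online_degradation_filter(y, dt, mu, sigma2, tau2)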
{"title":"An Online Bayesian Framework for Identifying Latent System Degradation States","authors":"Di Zhu;Ancha Xu;Ziqi Chen;Shuling Ding;Guanqi Fang","doi":"10.1109/TR.2025.3647489","DOIUrl":"https://doi.org/10.1109/TR.2025.3647489","url":null,"abstract":"In industrial settings, the health state of a product is often difficult to observe directly. Instead, it is typically inferred from noisy degradation data that are related to the system’s operational condition. However, existing methods commonly neglect parameter uncertainty and lack the ability to perform real-time state estimation. To address these challenges, this article proposes a Bayesian inference framework for accurate online identification of system degradation states. Specifically, a Wiener process model with measurement noise is developed, and prior distributions are introduced to capture parameter uncertainty. In the offline training stage, historical measurement data are utilized to approximate the joint posterior distribution of the latent degradation states and model parameters via variational Bayesian methods. In the online stage, a state-space formulation is adopted to dynamically update the posterior distribution using real-time observations, enabling dynamic estimation of the degradation state. The proposed approach significantly reduces both storage and computational costs. Numerical simulations and real-world case studies demonstrate that the proposed method achieves superior performance in terms of both accuracy and efficiency.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"542-554"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As one of the most advanced variants in the correntropy family, the multikernel correntropy criterion demonstrates superior accuracy in handling non-Gaussian noise, particularly with multimodal distributions. However, current approaches suffer from key limitations, namely reliance on a single type of sensitive Gaussian kernel and the manual selection of free parameters. To address these issues and further boost robustness, this article introduces the concept of multikernel mixture correntropy (MKMC), along with its key properties. MKMC employs a flexible kernel function composed of a mixture of two Student's t-Cauchy functions with adjustable (nonzero) means. Building on this criterion within multisensor networks, we propose a robust distributed extended Kalman filter based on adaptive multikernel mixture maximum correntropy (AMKMMC-RDEKF). To reduce communication overhead, a consensus averaging strategy is incorporated. Furthermore, an adaptive mechanism is introduced to mitigate the impact of manually tuned free parameters. The computational complexity and convergence of the proposed algorithm are also analyzed. The effectiveness of the proposed algorithm is validated through challenging scenarios involving power system and land vehicle state estimation.
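The Python fragment below is a rough illustration only, not the AMKMMC-RDEKF algorithm: it shows how a mixture of two Cauchy-type kernels with adjustable nonzero centers (a loose stand-in for the Student's t-Cauchy mixture) can downweight outlying innovations in a single Kalman-style measurement update. All parameter values are assumptions.

    # Mixture-kernel weighting of a scalar measurement update (illustrative only).
    import numpy as np

    def mixture_cauchy_kernel(e, alpha=0.5, c1=-0.5, c2=0.5, nu1=1.0, nu2=2.0):
        """Mixture correntropy-style kernel: small errors get weights near 1, outliers near 0."""
        k1 = 1.0 / (1.0 + (e - c1) ** 2 / nu1)
        k2 = 1.0 / (1.0 + (e - c2) ** 2 / nu2)
        return alpha * k1 + (1.0 - alpha) * k2

    def robust_update(x_pred, p_pred, y, r):
        """Kalman-style measurement update whose gain is scaled by the kernel weight."""
        innovation = y - x_pred
        w = mixture_cauchy_kernel(innovation)          # near 1 for small errors, ~0 for outliers
        k_gain = p_pred / (p_pred + r / max(w, 1e-6))  # inflate the noise term when w is small
        return x_pred + k_gain * innovation, (1.0 - k_gain) * p_pred

    # An impulsive (non-Gaussian) measurement barely moves the estimate.
    print(robust_update(x_pred=1.0, p_pred=0.5, y=25.0, r=0.4))   # outlier: small correction
    print(robust_update(x_pred=1.0, p_pred=0.5, y=1.3, r=0.4))    # normal: near-standard update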
{"title":"Robust Distributed Extended Kalman Filter Based on Adaptive Multikernel Mixture Maximum Correntropy for Non-Gaussian Systems","authors":"Duc Viet Nguyen;Haiquan Zhao;Jinhui Hu;Xiaoli Li","doi":"10.1109/TR.2025.3649169","DOIUrl":"https://doi.org/10.1109/TR.2025.3649169","url":null,"abstract":"As one of the most advanced variants in the correntropy family, the multikernel correntropy criterion demonstrates superior accuracy in handling non-Gaussian noise, particularly with multimodal distributions. However, current approaches suffer from key limitations-namely, reliance on a single type of sensitive Gaussian kernel and the manual selection of free parameters. To address these issues and further boost robustness, this article introduces the concept of multikernel mixture correntropy (MKMC), along with its key properties. MKMC employs a flexible kernel function composed of a mixture of two Student's <italic>t</i>-Cauchy functions with adjustable (nonzero) means. Building on this criterion within multisensor networks, we propose a robust distributed extended Kalman filter-AMKMMC-RDEKF based on adaptive multikernel mixture maximum correntropy. To reduce communication overhead, a consensus averaging strategy is incorporated. Furthermore, an adaptive mechanism is introduced to mitigate the impact of manually tuned free parameters. At the same time, the computational complexity and convergence ability of the proposed algorithm are analyzed. The effectiveness of the proposed algorithm is validated through challenging scenarios involving power system and land vehicle state estimation.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"694-708"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, intelligent diagnosis methods for rotating machines have achieved prominent results. Existing intelligent methods rely on two conditions: first, massive labeled data are available during the training process; second, the application-scenario data and the training data come from the same working condition. In some actual industrial scenarios, however, labeled samples are insufficient and working conditions are variable. To address this problem, we propose a novel intelligent fault diagnosis method named the multidynamic domain adaptation network (MDDAN) based on transfer learning, which can diagnose bearing faults with insufficient labeled data under varying working conditions. The core architecture of the proposed MDDAN consists of a feature extractor module and a multiadaptation module, which are designed to learn domain-invariant features from insufficient labeled data. Furthermore, adversarial training is introduced through the domain discriminators in the multiadaptation module, which improves domain adaptation performance. To balance the contributions of the global domain and subdomain discriminators, we add a dynamic adaptation strategy to the domain adaptation module. Finally, Pareto-efficient optimization is introduced to adaptively coordinate multiple losses and metrics, which further improves the stability and domain adaptation ability of MDDAN. The feasibility and effectiveness of MDDAN are verified on three datasets through a variety of scenario-transfer experiments.
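For readers unfamiliar with domain-adversarial training, the PyTorch sketch below shows the standard gradient-reversal plus domain-discriminator building block that networks such as MDDAN extend; the multiadaptation module, dynamic weighting, and Pareto-efficient optimization from the article are not reproduced, and the layer sizes, feature dimension, and class count are assumptions.

    # Standard domain-adversarial building block (gradient reversal + domain discriminator).
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lam * grad_out, None   # reverse gradients flowing into the extractor

    feature_extractor = nn.Sequential(nn.Linear(1024, 256), nn.ReLU())   # toy sizes
    label_classifier = nn.Linear(256, 4)      # assumed number of bearing health classes
    domain_discriminator = nn.Linear(256, 2)  # source vs. target working condition

    def losses(x_src, y_src, x_tgt, lam=1.0):
        """Fault-classification loss on labeled source data plus adversarial domain loss."""
        f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
        cls_loss = nn.functional.cross_entropy(label_classifier(f_src), y_src)
        feats = torch.cat([f_src, f_tgt])
        dom_labels = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()
        dom_loss = nn.functional.cross_entropy(
            domain_discriminator(GradReverse.apply(feats, lam)), dom_labels)
        return cls_loss + dom_loss

    # Toy usage with random vibration segments.
    x_s, y_s = torch.randn(8, 1024), torch.randint(0, 4, (8,))
    x_t = torch.randn(8, 1024)
    losses(x_s, y_s, x_t).backward()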
{"title":"A Novel Multidynamic Domain Adaptation Transfer Learning Method for Fault Diagnosis of Bearings With Insufficient Labeled Data","authors":"Shuzhen Han;Shengke Sun;Zhanshan Zhao;Hua Wang;Jiao Yin;Yitong Li;Pingjuan Niu","doi":"10.1109/TR.2025.3648784","DOIUrl":"https://doi.org/10.1109/TR.2025.3648784","url":null,"abstract":"Recently, intelligent diagnosis methods for rotating machines have achieved prominent results. Existing intelligent methods rely on two conditions: first, massive labeled data is necessary during training process; second, the data of application scenarios and training data is under the same working condition. In some actual industrial scenarios, however, labeled samples are insufficient and working conditions are variable. To address this problem, we propose a novel intelligent fault diagnosis method named multidynamic domain adaptation network (MDDAN) based on transfer learning, which can diagnose bearing fault with insufficient labeled data under varying working conditions. The crucial architecture of the proposed MDDAN is a feature extractor module and a multiadaptation module, which are designed to learn domain-invariant features with insufficient labeled data. Furthermore, the idea of adversarial training is introduced by the domain discriminators part of the multiadaptation module, which can improve the domain adaptation performance. To balance the contributions of global domain and subdomain discriminators, we add a dynamic adaptation strategy to domain adaptation module. Finally, Pareto-efficient optimization is introduced to adaptively coordinate multilosses and metrics that further improves the stability and domain adaptation ability of MDDAN. The feasibility and effectiveness of MDDAN are verified on three datasets through a variety of scenarios transfer experiments.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"650-663"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The reliability of nondestructive evaluation techniques is of utmost importance. The probability of detection (POD) is crucial for evaluating the performance of various NDT techniques, as it quantifies the ability to detect defects based on their type and size. In this study, the model-assisted probability of detection approach was used to evaluate the performance of various features for the determination of the defect sizes in adhesively bonded components. The analysis incorporated ultrasonic and radiographic inspection simulations, which were experimentally validated at our research facility. Unlike conventional approaches, which are limited to maximum amplitude-based evaluation, this approach incorporated various ultrasonic features, including peak-to-peak amplitude, absolute energy, mean value of amplitude in the frequency domain, and absolute time-of-flight. The Rose criterion was used for the estimation of the POD for the X-ray radiography. The key innovation of this study lies in the use of specific signal features—rather than the conventional maximum amplitude—to improve the reliability of defect sizing. To enable this, custom-developed ultrasonic and radiographic feature extraction modules were integrated into the CIVA simulation environment, thereby extending its standard capabilities beyond traditional amplitude-based POD analysis. The resulting a90|95 values, obtained from POD curves, demonstrate that the use of carefully selected signal features significantly enhances defect detection performance compared to conventional amplitude-based evaluation. The integration of custom feature extraction notably improves detection reliability, highlighting the advantage of feature-driven analysis in nondestructive testing.
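As background for how POD curves such as those above are typically produced, here is a minimal Python sketch of the classical signal-response ("â versus a") POD model; it is not the article's CIVA-based workflow, and the feature values, decision threshold, and defect sizes are invented toy data. Only a90 is computed; the 95% confidence bound needed for a90|95 is omitted.

    # Classical signal-response POD model: regress feature response on defect size.
    import numpy as np
    from scipy import stats

    def pod_curve(sizes, feature, threshold, grid):
        """Fit feature = b0 + b1*size + noise and return POD over the size grid."""
        b1, b0, *_ = stats.linregress(sizes, feature)
        resid_sd = np.std(feature - (b0 + b1 * sizes), ddof=2)
        return stats.norm.cdf((b0 + b1 * grid - threshold) / resid_sd)

    # Toy data: an ultrasonic feature (e.g., peak-to-peak amplitude) growing with defect size.
    rng = np.random.default_rng(1)
    sizes = np.linspace(0.5, 5.0, 40)                     # mm
    feature = 0.2 + 0.8 * sizes + rng.normal(0, 0.4, 40)  # arbitrary units
    grid = np.linspace(0.5, 5.0, 200)
    pod = pod_curve(sizes, feature, threshold=1.5, grid=grid)
    a90 = grid[np.argmax(pod >= 0.90)]                    # smallest size with POD >= 90%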
{"title":"Optimization of Nondestructive Evaluation Techniques for Bonded Components Through Model-Assisted POD Analysis","authors":"Gawher Ahmad Bhat;Damira Smagulova;Elena Jasiunienė","doi":"10.1109/TR.2025.3648546","DOIUrl":"https://doi.org/10.1109/TR.2025.3648546","url":null,"abstract":"The reliability of nondestructive evaluation techniques is of utmost importance. The probability of detection (POD) is crucial for evaluating the performance of various NDT techniques, as it quantifies the ability to detect defects based on their type and size. In this study, the model-assisted probability of detection approach was used to evaluate the performance of various features for the determination of the defect sizes in adhesively bonded components. The analysis incorporated ultrasonic and radiographic inspection simulations, which were experimentally validated at our research facility. Unlike conventional approaches, which are limited to maximum amplitude-based evaluation, this approach incorporated various ultrasonic features, including peak-to-peak amplitude, absolute energy, mean value of amplitude in the frequency domain, and absolute time-of-flight. The Rose criterion was used for the estimation of the POD for the X-ray radiography. The key innovation of this study lies in the use of specific signal features—rather than the conventional maximum amplitude—to improve the reliability of defect sizing. To enable this, custom-developed ultrasonic and radiographic feature extraction modules were integrated into the CIVA simulation environment, thereby extending its standard capabilities beyond traditional amplitude-based POD analysis. The resulting a90|95 values, obtained from POD curves, demonstrate that the use of carefully selected signal features significantly enhances defect detection performance compared to conventional amplitude-based evaluation. The integration of custom feature extraction notably improves detection reliability, highlighting the advantage of feature-driven analysis in nondestructive testing.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"555-569"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active learning Kriging is widely used in structural reliability analysis for its computational efficiency and accuracy. While numerous learning functions exist to accelerate Kriging convergence, their performance varies across problems, with no single function universally dominating. In this study, a learning function selection strategy based on the Markov decision process (MDP) is proposed. Specifically, the selection of learning functions is modeled as an MDP, with actions corresponding to several representative learning functions, thereby avoiding reliance on fixed sample selection preferences. An accuracy measure for the failure probability is developed and used as the MDP reward, shifting the focus of sample selection from the state of single samples to overall model improvement. Guided by the Bellman optimality principle, the proposed method selects the learning function that maximizes the expected long-term gain in model accuracy at each iteration, thereby achieving a theoretically optimal selection strategy. Several numerical and engineering examples are adopted to validate the effectiveness of the proposed method. The results show that it effectively overcomes the limitation of blindly selecting learning functions and can even outperform the best individual learning function in the action space.
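The sketch below gives a heavily simplified, bandit-style version of the idea in Python: two candidate learning functions (the classic U function and a simple variance-weighted proxy, not the exact EFF formula) are selected by a value estimate whose reward is how much the failure-probability estimate stabilizes. The limit-state function, candidate pool, and constants are assumptions, and the full MDP/Bellman treatment of the article is not reproduced.

    # Bandit-style selection between two Kriging learning functions (simplified illustration).
    import numpy as np
    from scipy import stats
    from sklearn.gaussian_process import GaussianProcessRegressor

    def g(x):                                   # toy limit state: failure when g(x) < 0
        return 3.0 - x[:, 0] ** 2 - x[:, 1]

    rng = np.random.default_rng(2)
    pool = rng.uniform(-3, 3, (2000, 2))        # Monte Carlo candidate pool

    def u_score(mu, sd):                        # classic U function (negated: higher = better)
        return -np.abs(mu) / (sd + 1e-12)

    def proxy_score(mu, sd):                    # variance-weighted closeness to the limit state
        return sd * stats.norm.pdf(mu / (sd + 1e-12))

    actions, q = [u_score, proxy_score], np.zeros(2)
    X, y = pool[:12].copy(), g(pool[:12])       # initial design of experiments
    prev_pf = None
    for it in range(30):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        mu, sd = gp.predict(pool, return_std=True)
        pf = np.mean(mu < 0)                    # current failure-probability estimate
        a = int(np.argmax(q)) if rng.random() > 0.2 else int(rng.integers(2))
        best = int(np.argmax(actions[a](mu, sd)))
        X, y = np.vstack([X, pool[best]]), np.append(y, g(pool[best:best + 1]))
        if prev_pf is not None:                 # reward: stabilization of the estimate
            q[a] += 0.1 * (-abs(pf - prev_pf) - q[a])
        prev_pf = pf
    print("estimated failure probability:", np.mean(gp.predict(pool) < 0))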
{"title":"An MDP-Driven Learning Function Selection Strategy for Kriging-Based Structural Reliability Analysis","authors":"Hongxiang Yan;Ping Yan;Chengning Zhou","doi":"10.1109/TR.2025.3647741","DOIUrl":"https://doi.org/10.1109/TR.2025.3647741","url":null,"abstract":"Active learning Kriging is widely used in structural reliability analysis for its computational efficiency and accuracy. While numerous learning functions exist to accelerate Kriging convergence, their performance varies across problems, with no single function universally dominating. In this study, a learning function selection strategy based on the Markov decision process (MDP) is proposed. Specifically, the selection of learning functions is modeled as an MDP, with actions corresponding to several representative learning functions, thereby avoiding reliance on a fixed sample selection preferences. An accuracy measure for failure probability is developed and designed as the MDP reward, shifting the focus of sample selection from the state of single samples to overall model improvement. Guided by the Bellman optimality principle, the proposed method selects the learning function that maximizes the expected long-term gain in model accuracy at each iteration, thereby achieving a theoretically optimal selection strategy. Several numerical and engineering examples are adopted to validate the effectiveness of the proposed method. The results show that it effectively overcomes the limitation of blindly selecting learning functions and can even outperform the optimal learning function in the action space.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"624-638"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spallation neutron sources are among the rarest and most advanced research infrastructures in the world, with fewer than five large-scale facilities in operation globally. Neutron choppers, as mission-critical components within such systems, must operate continuously under extreme conditions—including strong radiation, low vacuum, and high rotational inertia. These constraints make conventional fault diagnosis approaches ineffective, as sensors cannot be installed near the fault-prone areas (e.g., bearing housings), but instead must be placed remotely due to radiation shielding. This leads to long signal transmission paths, structural discontinuities, and severely degraded signal-to-noise ratios (SNRs), posing substantial challenges for fault diagnosis and predictive maintenance. To address this unique and high-stakes problem, we propose LPWOC (Learnable Parallel Wavelets with Orthogonality Constraints), a noise-robust deep learning model that learns adaptive wavelet filter banks and thresholding functions directly from vibration data. By incorporating conjugate quadrature filters with orthogonality regularization and fully learnable denoising layers, LPWOC offers enhanced feature diversity, low computational complexity, and exceptional resilience to noise. Experiments on a dedicated neutron chopper testbed—featuring realistic sensor placement and seven bearing health statuses—demonstrate 99.21% accuracy under low-SNR conditions, outperforming five state-of-the-art methods. This work provides a scalable and deployable diagnostic solution for one of the most demanding industrial environments in existence.
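The PyTorch sketch below is a conceptual illustration of the ingredients named above, not the LPWOC architecture: one learnable lowpass filter, its conjugate-quadrature highpass counterpart, a soft orthogonality penalty, and a learnable soft-threshold denoising step. The filter length, initialization, and single-level decomposition are assumptions.

    # Learnable wavelet-style layer with a conjugate-quadrature constraint (illustrative).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnableWaveletLayer(nn.Module):
        def __init__(self, taps=8):
            super().__init__()
            self.h = nn.Parameter(torch.randn(taps) * 0.1)   # learnable lowpass filter
            self.threshold = nn.Parameter(torch.tensor(0.1)) # learnable denoising threshold

        def filters(self):
            sign = torch.tensor([(-1.0) ** n for n in range(self.h.numel())])
            g = sign * self.h.flip(0)                        # conjugate quadrature highpass
            return self.h, g

        def orthogonality_penalty(self):
            # Soft CQF orthogonality: sum_n h[n] h[n+2k] should equal delta_k.
            h = self.h
            pen = (h @ h - 1.0) ** 2
            for k in range(1, h.numel() // 2):
                pen = pen + (h[:-2 * k] @ h[2 * k:]) ** 2
            return pen

        def forward(self, x):                                # x: (batch, 1, length)
            h, g = self.filters()
            low = F.conv1d(x, h.view(1, 1, -1), stride=2)
            high = F.conv1d(x, g.view(1, 1, -1), stride=2)
            high = torch.sign(high) * F.relu(high.abs() - self.threshold)  # soft threshold
            return low, high

    layer = LearnableWaveletLayer()
    low, high = layer(torch.randn(4, 1, 1024))
    loss_reg = layer.orthogonality_penalty()                 # added to the training loss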
{"title":"Learnable Parallel Wavelets With Orthogonality Constraints: A Noise-Robust Deep Learning Architecture for Neutron Chopper Fault Diagnosis","authors":"Liangwei Zhang;Jing Lin;Ping Wang;Qing Zhang;Zhicong Zhang;Xiaohui Yan;Chuan Li","doi":"10.1109/TR.2025.3642971","DOIUrl":"https://doi.org/10.1109/TR.2025.3642971","url":null,"abstract":"Spallation neutron sources are among the rarest and most advanced research infrastructures in the world, with fewer than five large-scale facilities in operation globally. Neutron choppers, as mission-critical components within such systems, must operate continuously under extreme conditions—including strong radiation, low vacuum, and high rotational inertia. These constraints make conventional fault diagnosis approaches ineffective, as sensors cannot be installed near the fault-prone areas (e.g., bearing housings), but instead must be placed remotely due to radiation shielding. This leads to long signal transmission paths, structural discontinuities, and severely degraded signal-to-noise ratios (SNRs), posing substantial challenges for fault diagnosis and predictive maintenance. To address this unique and high-stakes problem, we propose LPWOC (Learnable Parallel Wavelets with Orthogonality Constraints), a noise-robust deep learning model that learns adaptive wavelet filter banks and thresholding functions directly from vibration data. By incorporating conjugate quadrature filters with orthogonality regularization and fully learnable denoising layers, LPWOC offers enhanced feature diversity, low computational complexity, and exceptional resilience to noise. Experiments on a dedicated neutron chopper testbed—featuring realistic sensor placement and seven bearing health statuses—demonstrate 99.21% accuracy under low-SNR conditions, outperforming five state-of-the-art methods. This work provides a scalable and deployable diagnostic solution for one of the most demanding industrial environments in existence.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"529-541"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
State recognition for high-speed train brake pads faces two problems: the limited feature characterization of any single data source and significant domain shifts under variable working conditions. Considering that multisource heterogeneous data can characterize the brake pad state from different physical dimensions, this article proposes a multisource deep adversarial decoupled autoencoder network for online identification of the brake pad state of high-speed trains under variable working conditions. First, a signal characterization system covering the multidimensional state characteristics of the friction interface is constructed by fusing three kinds of heterogeneous data: friction coefficient, tangential acceleration, and noise. Second, a deep adversarial decoupled autoencoder is designed to explicitly decouple domain-invariant and domain-specific features through the synergy of a mutual-information minimization constraint and domain adversarial training. Finally, with the validation set accuracy as the optimization objective, a genetic algorithm is introduced to dynamically allocate the multisource weights. This adaptive weighted fusion strategy significantly enhances the model's generalization capability for unknown rotational speed conditions. The experimental results on 10 cross-speed tasks show that the proposed model achieves an average accuracy of 99.12%, which is 7.1%, 9.36%, and 26.5% higher than the single-source models and 3.58% to 6.36% better than the current leading domain generalization methods.
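The toy Python sketch below illustrates only the weight-allocation step described above: a tiny genetic algorithm searches for fusion weights over three source feature sets, with validation accuracy as the fitness. The synthetic features, labels, and logistic-regression classifier are stand-ins for the article's deep decoupled autoencoder, and all GA settings are assumptions.

    # Genetic-algorithm search for multisource fusion weights (toy illustration).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n, d = 300, 16
    # Synthetic stand-ins for friction-coefficient, acceleration, and noise features.
    sources = [rng.normal(size=(n, d)) for _ in range(3)]
    labels = rng.integers(0, 2, n)
    sources[0][labels == 1] += 0.8               # make source 0 the most informative one

    def fitness(w):
        w = np.abs(w) / np.abs(w).sum()          # normalized, nonnegative fusion weights
        fused = sum(wi * s for wi, s in zip(w, sources))
        xtr, xva, ytr, yva = train_test_split(fused, labels, test_size=0.3, random_state=0)
        return LogisticRegression(max_iter=500).fit(xtr, ytr).score(xva, yva)

    pop = rng.uniform(0.1, 1.0, (12, 3))         # population of candidate weight triplets
    for gen in range(15):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-6:]]   # selection: keep the best half
        mates = parents[rng.integers(0, 6, 6)]
        children = 0.5 * parents + 0.5 * mates   # arithmetic crossover
        children += rng.normal(0, 0.05, children.shape)   # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(w) for w in pop])]
    print("best fusion weights:", np.abs(best) / np.abs(best).sum())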
{"title":"Multisource Deep Adversarial Decoupled Autoencoder Network for State Recognition of High-Speed Train Brake Pads","authors":"Min Zhang;Jiamin Li;Zhuang Kang;Tong Lan;Haohao Ding","doi":"10.1109/TR.2025.3643732","DOIUrl":"https://doi.org/10.1109/TR.2025.3643732","url":null,"abstract":"High-speed train brake pads state recognition faces the problems of single data source feature characterization limitation and significant domain shifts under variable working conditions. Considering that multisource heterogeneous data can characterize the brake pad state from different physical dimensions, this article proposes a multisource deep adversarial decoupled autoencoder network for online identification of brake pad state of high-speed trains under variable working conditions. First, a signal characterization system covering the multidimensional state characteristics of the friction interface is constructed by fusing three kinds of multisource heterogeneous data, including friction coefficient, tangential acceleration, and noise. Second, a deep adversarial decoupled autoencoder is designed to realize the explicit decoupling of domain-invariant and domain-specific features by utilizing the synergistic mechanism of mutual information minimization constraint and domain adversarial. Finally, with the validation set accuracy as the optimization objective, a genetic algorithm is introduced to dynamically allocate multisource weights. This adaptive weighted fusion strategy significantly enhances the model’s generalization capability for unknown rotational speed conditions. The experimental results of 10 cross-speed tasks show that the proposed model achieves an average accuracy of 99.12% . It is 7.1%, 9.36%, and 26.5% higher than the single-source model, and 3.58% to 6.36% better than the current leading domain generalization methods.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"639-649"},"PeriodicalIF":5.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital twin with generative artificial intelligence (AI)-enabled maintenance optimization serves as an essential foundation for the performance of intelligent manufacturing systems (IMS). However, existing models often fail to consider reliability and cost simultaneously. In an IMS, reliability guarantees stable system operation and consistent product quality, while cost control enables enterprises to optimize resource use, enhance productivity, and lower operating costs. Together, these metrics determine the overall effectiveness of the system and the competitiveness of the enterprise. To address this research gap, this study proposes a maintenance optimization method that jointly considers reliability and cost. In particular, a novel reliability assessment method is developed, incorporating both modeled physical failures and functional outputs that account for imperfect quality inspection. Moreover, considering rework and imperfect quality inspection, a cost analysis is performed for various operation modes of the IMS. Further, a novel adaptive multi-objective particle swarm optimization with maintenance priority constraints (AMOPSO-P) method is developed to conduct the IMS control decision-making process, optimizing reliability and cost. Finally, to validate the proposed algorithm, we conduct a simulation-based case study of China United Equipment Group on control decisions for a three-stage, four-station servo valve manufacturing system.
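As a concrete, much-simplified picture of the reliability-versus-cost trade-off that an optimizer like AMOPSO-P navigates, the Python sketch below scores candidate preventive-maintenance intervals on Weibull reliability and a simplified cost rate and keeps the non-dominated (Pareto) set. The Weibull parameters, cost figures, and cost-rate formula are assumptions, not the article's IMS model.

    # Reliability vs. cost-rate trade-off over candidate maintenance intervals (illustrative).
    import numpy as np

    beta, eta = 2.5, 1000.0          # assumed Weibull shape/scale (hours)
    c_prev, c_fail = 1.0, 10.0       # preventive vs. failure-repair cost (relative units)

    T = np.linspace(100, 2000, 200)                 # candidate maintenance intervals
    reliability = np.exp(-(T / eta) ** beta)        # probability of surviving to T
    # Simplified per-interval cost-rate proxy (the proper renewal-reward form would
    # divide by the expected cycle length instead of T).
    cost_rate = (c_prev * reliability + c_fail * (1 - reliability)) / T

    def pareto_mask(objs):
        """Keep points not dominated by any other (maximize reliability, minimize cost rate)."""
        keep = np.ones(len(objs), dtype=bool)
        for i, (r_i, c_i) in enumerate(objs):
            for r_j, c_j in objs:
                if r_j >= r_i and c_j <= c_i and (r_j > r_i or c_j < c_i):
                    keep[i] = False
                    break
        return keep

    front = pareto_mask(list(zip(reliability, cost_rate)))
    candidates = T[front]            # intervals a planner (or a MOPSO) would choose among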
{"title":"Digital Twin-Enabled Smart Operation and Maintenance Framework With Generative AI Design of Intelligent Manufacturing Systems","authors":"Hongyan Dui;Hengbo Wang;Liudong Xing","doi":"10.1109/TR.2025.3646186","DOIUrl":"https://doi.org/10.1109/TR.2025.3646186","url":null,"abstract":"Digital twin with generative artificial intelligence (AI)-enabled maintenance optimization serves as an essential foundation for the performance of intelligent manufacturing systems (IMS). However, existing models often fail to simultaneously consider both reliability and cost. In an IMS, reliability guarantees stable system operation and consistent product quality, while cost control enables enterprises to optimize resource use, enhance productivity, and lower operating costs. Together, these metrics determine the overall effectiveness of the system and the competitiveness of the enterprise. To address the research gap, this study proposes a maintenance optimization method that jointly considers reliability and cost. In particular, a novel reliability assessment method is developed, incorporating both physical failures modeled and functional outputs that account for imperfect quality inspection. Moreover, considering rework and imperfect quality inspection, a cost analysis is performed for various operation modes of IMS. Further, a novel adaptive multi-objective particle swarm optimization with maintenance priority constraints (AMOPSO-P) method is developed to conduct the IMS control decision-making process, optimizing reliability and cost. Finally, to validate the proposed algorithm, we conduct a case study of China United Equipment Group on control decisions for a three-stage, four-station servo valve manufacturing system using simulations.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"504-517"},"PeriodicalIF":5.7,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the continuous development of smart grids, the cyber-physical power system (CPPS) has become the core architecture of modern power systems. However, accurately identifying critical nodes in a CPPS to guard against cascading failures remains a severe challenge. Existing methods fail to effectively characterize the hierarchical interactions and cannot capture the dynamic characteristics of cascading failure propagation online in real time, and therefore resort to offline evaluation approaches. To address this, this article proposes an online identification method for critical nodes in CPPS using a deep reinforcement learning framework, providing a reference for node protection. This method identifies critical nodes from two different perspectives: network topology and node electrical characteristics. First, corresponding feature representations are designed for the different types of nodes. Then, a deep learning framework called CP-DQN, which integrates feature perception and topology perception, is constructed by combining graph attention networks and a dueling deep Q-network, enabling adaptive fusion of node topological and electrical features. Simulation results show that the proposed method exhibits superior performance on the IEEE 39 and IEEE 118 bus systems and is more effective and practical than several existing mainstream methods.
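The PyTorch sketch below illustrates the dueling value/advantage idea over node embeddings that CP-DQN combines with graph attention; here a simple neighbor-averaging layer stands in for the graph attention network, and the node features, graph, and layer sizes are assumptions.

    # Dueling Q-network over node embeddings with a simple neighbor-averaging aggregator.
    import torch
    import torch.nn as nn

    class DuelingNodeQNet(nn.Module):
        def __init__(self, in_dim=6, hid=64):
            super().__init__()
            self.embed = nn.Sequential(nn.Linear(2 * in_dim, hid), nn.ReLU())
            self.value = nn.Linear(hid, 1)       # graph-level state value V(s)
            self.advantage = nn.Linear(hid, 1)   # per-node advantage A(s, node)

        def forward(self, x, adj):
            # Aggregate neighbor features (row-normalized adjacency) and embed each node.
            neigh = adj @ x / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            h = self.embed(torch.cat([x, neigh], dim=1))
            v = self.value(h.mean(dim=0))                    # graph-level value
            a = self.advantage(h).squeeze(-1)                # one advantage per node
            return v + a - a.mean()                          # dueling combination: Q per node

    # Toy usage: 10 nodes with 6 electrical/topological features each.
    x = torch.randn(10, 6)
    adj = (torch.rand(10, 10) > 0.7).float()
    q_values = DuelingNodeQNet()(x, adj)
    next_critical = q_values.argmax().item()     # node the agent would flag next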
{"title":"Deep Reinforcement Learning-Based Approach for Identifying Critical Nodes in Cyber Physical Power Systems","authors":"Yuancheng Li;Hefang Zhang","doi":"10.1109/TR.2025.3646881","DOIUrl":"https://doi.org/10.1109/TR.2025.3646881","url":null,"abstract":"With the continuous development of smart grids, the cyber-physical power system (CPPS) has become the core architecture of modern power systems. However, accurately identifying critical nodes in CPPS to guard against cascading failures remains a severe challenge. Existing methods fail to effectively characterize the hierarchical interactions and cannot capture the dynamic characteristics of cascading failure propagation in real time online, thus resorting to offline evaluation approaches. To address this, this article proposes an online identification method for critical nodes in CPPS using a deep reinforcement learning framework, providing a reference for node protection. This method identifies critical nodes from two different perspectives: network topology and node electrical characteristics. First, corresponding feature representations are designed for different types of nodes. Then, a deep learning framework called CP-DQN, which integrates feature perception and topology perception, is constructed by combining graph attention networks and dueling deep Q-network, enabling adaptive fusion of node topological and electrical features. Simulation results show that the proposed method exhibits superior performance in the IEEE 39 and IEEE 118 bus systems. Compared with several existing mainstream methods, it demonstrates higher superiority and practicality.","PeriodicalId":56305,"journal":{"name":"IEEE Transactions on Reliability","volume":"75 ","pages":"464-477"},"PeriodicalIF":5.7,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}