Data-driven approach for labelling process plant event data
Pub Date: 2022-01-24 | DOI: 10.36001/ijphm.2022.v13i1.3045
Débora C. Corrêa, A. Polpo, Michael Small, Shreyas Srikanth, Kylie Hollins, M. Hodkiewicz
An essential requirement in any data analysis is to have a response variable representing the aim of the analysis. Much academic work is based on laboratory or simulated data, where the experiment is controlled and the ground truth clearly defined. This is seldom the reality for equipment performance in an industrial environment, and it is common to find issues with the response variable in industrial settings. We discuss this matter using a case study where the problem is to detect an asset event (failure) from available data for which no ground truth exists in historical records. Our data frame contains measurements from 14 sensors of a process control system and current measurements from 4 motors on the asset of interest, recorded every minute over a three-year period. In this situation, how to label the event of interest is of fundamental importance. Different labelling strategies will generate different models, with a direct impact on the in-service fault detection efficacy of the resulting model. We discuss a data-driven approach to label a binary response variable (fault/anomaly detection) and compare it to a rule-based approach. Labelling of the time series was performed using dynamic time warping followed by agglomerative hierarchical clustering to group events with similar event dynamics. Both data sets are highly imbalanced, with 1,200,000 non-event data points but only 150 events in the rule-based data set and 64 events in the data-driven data set. We study the performance of the models based on these two different labelling strategies, treating each data set independently. We describe decisions made in window-size selection, managing imbalance, hyper-parameter tuning, and training and test selection, and use two models, logistic regression and random forest, for event detection. We estimate useful models for both data sets. By useful, we mean that we could detect events for the first four months of the test set. However, as the months progressed the performance of both models deteriorated, with an increasing number of false positives, reflecting possible changes in the dynamics of the system. This work raises questions such as "what are we detecting?" and "is there a right way to label?", and presents a data-driven approach to support labelling of historical events in process plant data for event detection in the absence of ground truth.
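The core of the data-driven labelling step, pairwise dynamic time warping distances followed by agglomerative hierarchical clustering, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the window extraction, the average-linkage choice, the cluster count and the synthetic demo data are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_event_windows(event_windows, n_clusters=2):
    """Group candidate event windows by similarity of their dynamics.

    event_windows: list of 1-D arrays, e.g. one sensor's trace around each
    candidate event. Returns a cluster id per window; clusters judged to
    contain genuine failures can then be labelled as the positive class.
    """
    k = len(event_windows)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            dist[i, j] = dist[j, i] = dtw_distance(event_windows[i], event_windows[j])
    Z = linkage(squareform(dist, checks=False), method="average")  # agglomerative clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Illustrative use on synthetic windows with two distinct dynamics
rng = np.random.default_rng(0)
windows = [np.sin(np.linspace(0, 6, 60)) + 0.1 * rng.standard_normal(60) for _ in range(5)]
windows += [np.ones(60) + 0.1 * rng.standard_normal(60) for _ in range(5)]
print(cluster_event_windows(windows, n_clusters=2))   # two groups of five windows each
```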
Multilayer Architecture for Fault Diagnosis of Embedded Systems
Pub Date: 2021-12-16 | DOI: 10.36001/ijphm.2021.v12i2.3067
Daniel Maas, Renan Sebem, André Bittencourt Leal
This work presents a multilayer architecture for fault diagnosis in embedded systems based on formal modeling of Discrete Event Systems (DES). Most works on diagnosis of DES focus on faults of actuators, which are the devices subject to intensive wear in industry. However, embedded systems are commonly subject to cost reduction, which may increase the probability of faults in the electronic hardware. Further, software faults are hard to track and fix, and the common solution is to replace the whole electronic board. We propose a modeling approach that isolates the source of the fault in the model across three layers of the embedded system: software, hardware, and sensors & actuators. The proposed method is applied to a home appliance refrigerator and, after exhaustive practical tests with forced fault occurrences, all faults were diagnosed, precisely identifying the layer and the faulty component. The solution was then incorporated into the product, which is manufactured at industrial scale.
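As a side note on the mechanics of DES-based isolation: a diagnoser tracks which unobservable fault events are consistent with the observed event trace. The toy automaton below is purely illustrative; its states, events and the software/hardware fault partition are assumptions, not the authors' refrigerator model.

```python
# Toy event-based diagnoser: which layer's (unobservable) fault event could
# explain an observed trace? Transitions, event names and the fault partition
# are illustrative assumptions.
TRANSITIONS = {  # state -> {event: next state}; events starting with "f_" are unobservable faults
    "idle":     {"start": "running"},
    "running":  {"f_sw": "sw_fault", "f_hw": "hw_fault", "stop": "idle", "read_ok": "running"},
    "sw_fault": {"read_ok": "sw_fault", "watchdog_reset": "alarm"},
    "hw_fault": {"read_bad": "alarm"},
    "alarm":    {},
}
FAULT_LAYER = {"f_sw": "software", "f_hw": "hardware"}

def fault_closure(state, layers):
    """All (state, fault-layer set) pairs reachable via unobservable fault events only."""
    stack, seen = [(state, layers)], {(state, layers)}
    while stack:
        s, lab = stack.pop()
        for ev, nxt in TRANSITIONS[s].items():
            if ev in FAULT_LAYER:
                cand = (nxt, lab | frozenset([FAULT_LAYER[ev]]))
                if cand not in seen:
                    seen.add(cand)
                    stack.append(cand)
    return seen

def diagnose(observed_events):
    """Return (confirmed, possible) fault layers consistent with the observed trace."""
    hypotheses = fault_closure("idle", frozenset())
    for ev in observed_events:
        nxt_hyps = set()
        for s, lab in hypotheses:
            if ev in TRANSITIONS[s] and ev not in FAULT_LAYER:
                nxt_hyps |= fault_closure(TRANSITIONS[s][ev], lab)
        hypotheses = nxt_hyps
    labels = [lab for _, lab in hypotheses]
    confirmed = set(frozenset.intersection(*labels)) if labels else set()
    possible = set(frozenset().union(*labels)) if labels else set()
    return confirmed, possible

print(diagnose(["start", "read_bad"]))  # a bad reading isolates the hardware layer
print(diagnose(["start", "read_ok"]))   # no layer is confirmed; both remain merely possible
```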
Domain Adaptation for Structural Fault Detection under Model Uncertainty
Pub Date: 2021-11-26 | DOI: 10.36001/ijphm.2021.v12i2.2948
A. Ozdagli, X. Koutsoukos
In the last decade, interest in machine learning (ML) has grown significantly within the structural health monitoring (SHM) community. Traditional supervised ML approaches for detecting faults assume that the training and test data come from similar distributions. However, in real-world applications where an ML model is trained, for example, on numerical simulation data and tested on experimental data, such models often fail to detect the damage. The deterioration in prediction performance is mainly related to the fact that the numerical and experimental data are collected under different conditions and do not share the same underlying features. This paper proposes a domain adaptation approach for ML-based damage detection and localization problems where the classifier has access to labeled training (source) and unlabeled test (target) data, but the source and target domains are statistically different. The proposed domain adaptation method seeks to form a feature space that is capable of representing both source and target domains by implementing a domain-adversarial neural network. This neural network uses an H-divergence criterion to minimize the discrepancy between the source and target domains in a latent feature space. To evaluate the performance, we present two case studies where we design a neural network model for classifying the health condition of a variety of systems. The effectiveness of the domain adaptation is shown by computing the classification accuracy of the unlabeled target data with and without domain adaptation. Furthermore, the performance gain of the domain adaptation over a well-known transfer learning approach called Transfer Component Analysis is also demonstrated. Overall, the results demonstrate that domain adaptation is a valid approach for damage detection applications where access to labeled experimental data is limited.
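The domain-adversarial idea can be sketched as follows: a shared feature extractor feeds a damage-state classifier and a domain discriminator, and a gradient-reversal layer trains the features to fool the discriminator, which serves as a practical proxy for reducing the H-divergence between domains. This is a generic minimal sketch in PyTorch, not the authors' architecture; layer sizes, the loss weighting lam and the random demo tensors are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, n_features, n_classes, lam=1.0):
        super().__init__()
        self.lam = lam
        self.feature = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, 16), nn.ReLU())
        self.label_head = nn.Linear(16, n_classes)   # damage-state classifier
        self.domain_head = nn.Linear(16, 2)          # source-vs-target discriminator

    def forward(self, x):
        z = self.feature(x)
        return self.label_head(z), self.domain_head(GradReverse.apply(z, self.lam))

def train_step(model, opt, xs, ys, xt):
    """One step: labelled source batch (xs, ys) plus an unlabelled target batch xt."""
    ce = nn.CrossEntropyLoss()
    y_s, d_s = model(xs)
    _, d_t = model(xt)
    domain_true = torch.cat([torch.zeros(len(xs), dtype=torch.long),
                             torch.ones(len(xt), dtype=torch.long)])
    loss = ce(y_s, ys) + ce(torch.cat([d_s, d_t]), domain_true)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Illustrative usage with random tensors standing in for source/target feature vectors
model = DANN(n_features=8, n_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs, ys, xt = torch.randn(32, 8), torch.randint(0, 2, (32,)), torch.randn(32, 8)
print(train_step(model, opt, xs, ys, xt))
```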
Diagnostics of actuation system by Hadamard product of integrated motor current residuals applied to electro-mechanical actuators
Pub Date: 2021-11-12 | DOI: 10.36001/ijphm.2019.v10i1.2754
Sreedhar Babu G, Sekhar A.S., Lingamurthy A.
The paper presents a diagnostics methodology that can identify the occurrence of a fault in the actuator or the linkage system of a flight control actuation system driven by Linear Electromechanical Actuators (LEMA). Standard data analyses such as motor current signature analysis (MCSA) are good at identifying incipient faults within the elements of the actuators in situations where the actuators are driving control surfaces. But in back-driven cases, wherein the LEMA is driven back by the control surfaces, faults outside the LEMAs are difficult to detect because the high mechanical advantage of transmission elements such as roller screws, gear trains and linkage arms scales down their effects before they reach the motor. One such event occurred in a ground test, wherein the jet vanes were sheared when back-driven by excessive gas dynamic forces. Neither the motor current nor the LEMA position feedback data gave any clue to the instant at which such shearing occurred. The case study is discussed in detail and a diagnostics solution for such failures is proposed. A new methodology to pinpoint the instant of occurrence is arrived at based on ground static test data from four independent channels. Its applicability is reassured using lab experiments on three samples mimicking the failure. The method's applicability is also extended to extracting events in actual flight by comparing flight telemetry data with the mimicked lab-level (dry run) data. The methodology uses the analysis of LEMA motor current data to arrive at the vital diagnostic information. The LEMA current data cannot be interpreted directly due to its non-stationary nature, arising from variable speed, and its pulsating form, caused by pulse width modulation (PWM) switching, threshold voltages and the closed-loop dynamics of the servo. Hence the motor current is integrated using the cumulative trapezoidal method. This integrated data is spline-curve fitted to arrive at a residuals vector. The Hadamard product is applied to the residuals vector to amplify the information and suppress the noise. Further, normalizing is done to compare data across tests and samples. With this, the necessary diagnostic information was extracted from the static test data. The method is extended to extracting diagnostic information from actual flight using a comparison of test data in the actual environment with mimicked lab-level dry runs. It is also verified for applicability to faults directly driven by actuators in lab-level experiments on three samples.
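The signal-processing chain described, integrate the motor current, remove a smooth spline baseline, take the Hadamard (element-wise) product of the residual vector with itself, and normalise, can be sketched roughly as below. The smoothing factor, the synthetic current trace and the injected disturbance are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import UnivariateSpline

def event_indicator(t, current, smoothing=None):
    """Integrated-current residual indicator.

    Integrate the current (cumulative trapezoid), fit a smoothing spline as a
    baseline, square the residuals (Hadamard product of the residual vector
    with itself) to amplify deviations and suppress noise, then normalise.
    """
    q = cumulative_trapezoid(current, t, initial=0.0)      # integrated current
    baseline = UnivariateSpline(t, q, s=smoothing)(t)      # smooth spline fit (default smoothing)
    r = q - baseline                                       # residual vector
    h = r * r                                              # element-wise (Hadamard) product
    return h / h.max() if h.max() > 0 else h               # normalise to [0, 1]

# Illustrative use: a synthetic current trace with a short overload standing in
# for a back-driven shear event at t = 5 s
t = np.linspace(0.0, 10.0, 2000)
current = 2.0 + 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
current[1000:1100] += 3.0                                  # hypothetical 0.5 s overload
ind = event_indicator(t, current)
print(f"indicator peaks at t = {t[np.argmax(ind)]:.2f} s")  # expected near the injected disturbance
```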
Unscented Kalman Filtering for Prognostics Under Varying Operational and Environmental Conditions
Pub Date: 2021-11-10 | DOI: 10.36001/ijphm.2021.v12i2.2943
Luc Keizers, R. Loendersloot, T. Tinga
Prognostics has gained a lot of research attention over the last decade, not least due to the rise of data-driven prediction models. Hybrid approaches that combine physics-based and data-driven models for better performance are also being developed. However, limited attention is given to prognostics under varying operational and environmental conditions. In fact, varying operational and environmental conditions can significantly influence the remaining useful life of assets. A powerful hybrid tool for prognostics is Bayesian filtering, where a physical degradation model is updated based on real-time data. Although these types of filters are widely studied for prognostics, their application to assets in varying conditions is rarely considered in the literature. In this paper, it is proposed to apply an unscented Kalman filter for prognostics under varying operational conditions. Four scenarios are described, distinguishing between the extent to which real-time and future loads are known and between short-term and long-term prognostics. The method is demonstrated on an artificial crack growth case study with frequently changing stress ranges in two different stress profiles. After this specific case, the generic application of the method is discussed. A positioning diagram is presented, indicating in which situations the proposed filter is useful and feasible. It is demonstrated that incorporation of physical knowledge can lead to highly accurate prognostics due to a degradation model in which uncertainty in model parameters is reduced. It is also demonstrated that in case of limited physical knowledge, data can compensate for missing physics to yield reasonable predictions.
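To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of an unscented Kalman filter tracking a Paris-law crack-growth state under a varying (assumed known) stress range. The material constants, noise levels, load blocks and measurements are invented for illustration, and filterpy is used purely for convenience.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

M_PARIS = 3.0           # assumed Paris exponent (held fixed)
CYCLES_PER_STEP = 1000  # load cycles between two crack measurements

def fx(x, dt, delta_sigma):
    """Propagate the state [crack length a (m), log10(C)] over one block of cycles."""
    a, log_c = x
    dK = delta_sigma * np.sqrt(np.pi * a)                     # stress intensity range (MPa*sqrt(m))
    a_next = a + CYCLES_PER_STEP * (10.0 ** log_c) * dK ** M_PARIS
    return np.array([a_next, log_c])

def hx(x):
    """Only the crack length is measured."""
    return x[:1]

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=1.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=1.0, hx=hx, fx=fx, points=points)
ukf.x = np.array([5e-3, -10.0])           # prior: 5 mm crack, rough log10(C)
ukf.P = np.diag([1e-8, 0.25])
ukf.Q = np.diag([1e-10, 1e-4])            # small process noise, lets C drift
ukf.R = np.array([[1e-8]])                # ~0.1 mm measurement noise on the crack length

# Varying operating conditions: a known stress range (MPa) per block, with
# hypothetical crack-length readings (m) roughly consistent with the model
stress_ranges = [100.0, 120.0, 100.0, 140.0]
measurements = [5.2e-3, 5.55e-3, 5.78e-3, 6.32e-3]
for ds, z in zip(stress_ranges, measurements):
    ukf.predict(delta_sigma=ds)           # extra keyword arguments are passed through to fx
    ukf.update(np.array([z]))
    print(f"a = {ukf.x[0] * 1e3:.2f} mm, log10(C) = {ukf.x[1]:.2f}")
```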
Creation of Publicly Available Data Sets for Prognostics and Diagnostics Addressing Data Scenarios Relevant to Industrial Applications
Pub Date: 2021-11-10 | DOI: 10.36001/ijphm.2021.v12i2.3087
Fabian Mauthe, Simone Hagmeyer, P. Zeiler
For a successful realization of prognostics and health management (PHM), the availability of sufficient run-to-failure data sets is a crucial factor. The sheer number of data points matters less than full coverage of the potential state space. However, full coverage is a major challenge in most industrial applications. Among other things, high investment and operating costs as well as the long service life of many technical systems make it difficult to acquire complete run-to-failure data sets. Consequently, data sets with specific deficiencies are frequently encountered in industrial applications. The development of appropriate methods to address such data scenarios is a fundamental research issue, and the purpose of this paper is to facilitate this research. Accordingly, the paper starts by specifying the value and availability of data in PHM. Subsequently, criteria for characterizing data sets are defined independently of the actual PHM application. The criteria are used to identify typical data scenarios with specific deficiencies that possess significant relevance for industrial applications. Thereafter, the most comprehensive overview to date of publicly accessible data sets suitable for PHM is provided. It shows that not all of the previously identified data scenarios with their specific deficiencies are covered by at least one data set. A program is established to close this gap and facilitate further research. One objective of the program is to create data sets reflecting these data scenarios using a test bench. First, possible applications and their degradation processes to be studied on the test bench are briefly characterized, and the decision to select filtration as the test bench application is justified. Subsequently, the test bench is introduced, including a description of the functional concept, the pneumatic layout and the components involved, as well as the filter media and test dusts employed. Typical run-to-failure trajectories are illustrated. Thereafter, the data set published under the name Preventive to Predictive Maintenance is presented. Additionally, a schedule for future releases of data sets on further industry-relevant data scenarios is sketched.
Brake Health Prediction Using LogitBoost Classifier Through Vibration Signals
Pub Date: 2021-10-29 | DOI: 10.36001/ijphm.2021.v12i2.3017
H. S, K. K, R. Jegadeeshwaran, G. Sakthivel
The brake is one of the crucial elements in an automobile. Any malfunction in the brake system adversely affects the entire vehicle and jeopardizes vehicle and passenger safety. The brake system therefore plays a major role in automobiles, and it is necessary to monitor its functioning. In recent trends, vibration-based condition monitoring techniques are preferred for most condition monitoring systems. In the present study, the performance of various fault diagnosis models is tested for observing brake health. A real vehicle brake system was used for the experiments. A piezoelectric accelerometer was used to acquire vibration signals under various faulty conditions of the brake system as well as the good condition. Statistical parameters were extracted from the vibration signals, and suitable features were identified by studying the effect of the combined features. Various machine learning models are used for the feature classification study, and the classification accuracy of these algorithms is reported and discussed.
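The feature extraction and classification pipeline described can be sketched roughly as follows. Note that scikit-learn has no LogitBoost implementation, so gradient boosting is used here as an analogous stand-in; the feature set, the synthetic vibration windows and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def statistical_features(window):
    """Descriptive statistics of one accelerometer window."""
    return np.array([
        window.mean(), window.std(), skew(window), kurtosis(window),
        np.sqrt(np.mean(window ** 2)),     # RMS
        window.max() - window.min(),       # peak-to-peak
        np.mean(np.abs(window)),           # average rectified value
    ])

# Hypothetical vibration windows: healthy broadband noise vs. an impulsive
# signature standing in for a faulty brake condition
rng = np.random.default_rng(1)
windows, labels = [], []
for _ in range(80):
    windows.append(0.1 * rng.standard_normal(4096)); labels.append(0)       # good condition
for _ in range(80):
    w = 0.1 * rng.standard_normal(4096)
    w[rng.integers(0, 4096, size=25)] += rng.choice([-1.5, 1.5], size=25)   # impulsive spikes
    windows.append(w); labels.append(1)                                     # faulty condition

X = np.vstack([statistical_features(w) for w in windows])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)   # boosting classifier (LogitBoost stand-in)
print("test accuracy:", clf.score(X_te, y_te))
```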
Detection of Rolling-Element Bearing Faults in Non-stationary Quasi-Parallel Machinery Using Residual Analysis Augmented by Neural Networks
Pub Date: 2021-09-20 | DOI: 10.36001/ijphm.2021.v12i2.2915
Dustin Helm, M. Timusk
This work proposes a methodology for the detection of rolling-element bearing faults in quasi-parallel machinery. In the context of this work, parallel machinery is considered to be any group of identical components of a mechanical system that are linked to operate on the same duty cycle. Quasi-parallel machinery further covers components that are not mechanically identical but whose operating conditions are correlated and that operate in the same environmental conditions. Furthermore, a new fault detection architecture is proposed wherein a feed-forward neural network (FFNN) is utilized to identify the relationship between signals. The proposed technique is based on the analysis of a calculated residual between feature vectors from two separate components. This technique is designed to reduce the effects of changes in the machine's operating state on the condition monitoring system. When a fault detection system is monitoring multiple mechanically linked components in a larger system, signals and information gleaned from the system can be used to reduce influences from factors that are not related to condition. The FFNN is used to identify the relationship between the feature vectors from two quasi-parallel components and eliminate the difference when no fault is present. The proposed method is tested on vibration data from two gearboxes that are connected in series. The gearboxes contain bearings operating at different speeds and gear mesh frequencies. In these conditions, a variety of rolling-element bearing faults are detected. The results indicate that an improvement in fault detection accuracy can be achieved by using the additional information available from the quasi-parallel machine. The proposed method is directly compared to a typical auto-associative neural network (AANN) novelty detection scheme.
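The residual scheme can be illustrated with a small regression sketch: a network learns, from healthy data, to predict one component's feature vector from the other's; a large residual then flags a condition change that the shared duty cycle cannot explain. The feature construction, network size and threshold rule below are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical healthy-operation features from two quasi-parallel gearboxes:
# a shared duty cycle drives both, so their feature vectors are correlated.
rng = np.random.default_rng(2)
load = rng.uniform(0.2, 1.0, size=(500, 1))                       # shared operating condition
feat_a = np.hstack([load, load ** 2]) + 0.02 * rng.standard_normal((500, 2))
feat_b = np.hstack([0.8 * load, 1.5 * load ** 2]) + 0.02 * rng.standard_normal((500, 2))

# Learn the healthy relationship from component A's features to component B's
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(feat_a, feat_b)

# Residual norm on healthy data sets a simple detection threshold
residual = np.linalg.norm(feat_b - net.predict(feat_a), axis=1)
threshold = residual.mean() + 3.0 * residual.std()

# A new observation in which component B carries a (simulated) bearing fault signature
new_a = np.array([[0.6, 0.36]])
new_b = np.array([[0.48 + 0.3, 0.54]])          # the healthy value would be about [0.48, 0.54]
flag = np.linalg.norm(new_b - net.predict(new_a), axis=1) > threshold
print("fault flagged:", bool(flag[0]))
```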
Fresh new look for system-level prognostics
Pub Date: 2021-09-10 | DOI: 10.36001/ijphm.2021.v12i2.2777
Ferhat Tamssaouet, K. Nguyen, K. Medjaher, M. Orchard
Model-based prognostic approaches use first-principle or regression models to estimate and predict the system's health state in order to determine the remaining useful life (RUL). Then, in order to handle the uncertainty of the prediction results, a Bayesian framework is usually used, in which the prior estimates are updated by in-field measurements without changing the model parameters. Nevertheless, in the case of system-level prognostics, merely updating the prior estimates based on a predetermined model is no longer sufficient. This is due to the mutual interactions between components, which increase the system modeling uncertainties and may lead to an inaccurate prediction of the system RUL (SRUL). Therefore, this paper proposes a new methodology for online joint uncertainty quantification and model estimation based on particle filtering (PF) and gradient descent (GD). In detail, the inoperability input-output model (IIM) is used to characterize system degradation, considering interactions between components and effects of the mission profile; the inoperability of the system components is then estimated in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, GD is used to correct and adapt the IIM parameters. To illustrate the effectiveness of the proposed methodology and its suitability for an online implementation, the Tennessee Eastman Process is investigated as a case study.
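The interplay between the particle filter and the gradient-descent model correction can be sketched in a toy setting. The two-component IIM-like dynamics, the noise levels, the learning rate and the discrepancy test below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500                                       # number of particles
c = np.array([0.01, 0.02])                    # external degradation input
A = np.array([[0.90, 0.05],                   # initial (imperfect) coupling matrix
              [0.10, 0.85]])
A_true = np.array([[0.90, 0.15],              # coupling used to simulate the "real" system
                   [0.10, 0.85]])

def step(q, coupling):
    """IIM-like inoperability dynamics: q_{k+1} = A q_k + c, clipped to [0, 1]."""
    return np.clip(q @ coupling.T + c, 0.0, 1.0)

particles = np.clip(0.05 + 0.02 * rng.standard_normal((N, 2)), 0.0, 1.0)
q_true = np.array([0.05, 0.05])
meas_std, lr = 0.02, 0.5                      # measurement noise and assumed learning rate
prev_post = particles.mean(axis=0)

for _ in range(30):
    q_true = step(q_true, A_true)
    z = q_true + meas_std * rng.standard_normal(2)            # noisy inoperability measurement

    # bootstrap particle filter: predict, weight against the measurement, resample
    particles = step(particles, A) + 0.01 * rng.standard_normal((N, 2))
    w = np.exp(-0.5 * np.sum((particles - z) ** 2, axis=1) / meas_std ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]
    post = particles.mean(axis=0)

    # gradient-descent correction of the coupling matrix when the model's
    # one-step-ahead prediction disagrees with the new measurement
    err = step(prev_post, A) - z
    if np.linalg.norm(err) > meas_std:
        A -= lr * np.outer(err, prev_post)    # gradient of 0.5 * ||A q + c - z||^2 w.r.t. A
    prev_post = post

print(np.round(A, 3))   # A[0, 1] should have drifted from 0.05 toward the 0.15 used in simulation
```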
Use of Nonlinear Optics for Assessment of Cable Polymer Aging
Pub Date: 2021-09-05 | DOI: 10.36001/ijphm.2021.v12i2.2966
Kaylee N. Rellaford, Dallin L Smith, Alex Farnsworth, Shane M. Drake, H. Lee, J. Patterson
Polymer jackets play an important protective role in distribution cabling by providing structure and resistance to moisture, heat, and exposure to harmful chemicals. Current methods of structural assessment, such as elongation at break (E-at-B), are inherently destructive. While other non-destructive methods such as indenter evaluation are available, they are not suitable for in-service use. We propose that second harmonic generation (SHG) could provide a non-destructive means of characterizing the aging of chlorosulfonated polyethylene (CSPE) cable jackets. SHG was used to study cables previously aged and characterized by the Electric Power Research Institute (EPRI). Comparison of the SHG results with indenter modulus tests suggests that SHG can be used to qualitatively differentiate between minimally and significantly aged CSPE cable jackets. The results of this proof-of-concept study point to additional work that could be done to better understand the mechanisms of CSPE cable jacket aging and how SHG could be used to monitor the aging process.