Reliability evaluation of systems with degradation and random shocks
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925817
Yu Liu, Hongzhong Huang, H. Pham
This paper proposes a model to evaluate the reliability of multi-component degradation systems subject to two competing failure causes: an internal degradation process and damage from external random shocks. The internal degradation is expressed as a random process with respect to working time, and a geometric process is employed to describe the cumulative damage caused by external random shocks. In the proposed model, the system is assumed to fail when either the internal degradation or the cumulative shock damage exceeds its random life threshold. The reliability expression is derived for the case where the random life threshold and the degradation process follow a Weibull distribution. A case study of a series-parallel system illustrates the proposed model, and a numerical algorithm based on a normal approximation is provided to simplify the calculation and assess the system reliability. Finally, a Monte Carlo simulation is used to verify the model and the algorithm.
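For readers who want to experiment with the competing-failure idea, the sketch below simulates a single component whose internal degradation and cumulative shock damage are each compared against a Weibull-distributed random life threshold. All distributional choices and parameter values are illustrative assumptions, not the ones derived in the paper.

```python
# Minimal Monte Carlo sketch of competing failure causes: a component fails when
# either its internal degradation or its cumulative shock damage exceeds a
# Weibull-distributed random threshold. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def survives(t, n_sims=100_000):
    """Estimate P(component survives to working time t)."""
    # Internal degradation: linear path with a random (gamma) rate -- an assumption.
    deg = rng.gamma(shape=2.0, scale=0.5, size=n_sims) * t
    # Shock damage: Poisson shock arrivals; shock magnitudes follow a geometric
    # process with ratio r < 1, so later shocks do more damage
    # (one base draw per history, a simplification of the i.i.d. definition).
    n_shocks = rng.poisson(lam=0.3 * t, size=n_sims)
    r = 0.9
    base = rng.exponential(scale=1.0, size=n_sims)
    cum_shock = base * (1.0 - (1.0 / r) ** n_shocks) / (1.0 - 1.0 / r)
    # Random life thresholds, Weibull-distributed as in the paper.
    thr_deg = 10.0 * rng.weibull(3.0, size=n_sims)
    thr_shock = 8.0 * rng.weibull(2.5, size=n_sims)
    alive = (deg < thr_deg) & (cum_shock < thr_shock)
    return alive.mean()

print([round(survives(t), 3) for t in (1, 5, 10, 20)])
```

A full system-level estimate would combine such component survival probabilities through the series-parallel structure, which is where the paper's normal-approximation algorithm comes in.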
{"title":"Reliability evaluation of systems with degradation and random shocks","authors":"Yu Liu, Hongzhong Huang, H. Pham","doi":"10.1109/RAMS.2008.4925817","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925817","url":null,"abstract":"This paper introduces a proposed model to evaluate the reliability of multi-component degradation systems suffering two kinds of competing failure causes: internal degradation process and damage from external random shocks. The internal degradation is expressed as a random process with respect to working time, and a geometric process is employed to describe cumulative damage caused by external random shocks. In our proposed model, the system is assumed to be failed when internal degradation or cumulative damage from random shocks exceed random life thresholds. The reliability expression is derived when the random life threshold and degradation process are considered to follow a Weibull distribution. A studied case of series-parallel system is presented to illustrate the proposed model, and a numerical algorithm is provided to simplify the calculating process based on normal approximation and assess the system reliability. Finally, Monte Carlo simulation method is employed to verify the model and algorithms.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"139 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128957586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electronics in harsh environments - product verification and validation
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925839
K. Stentoft, M. L. Petersen
The demand for robust electronics is continuously increasing, as electronics are used in almost all products and deployed everywhere. Customers expect industrial products to operate for at least five years without failure, regardless of location. In order to design products that are robust against harsh environments, the specific conditions at the customer sites must be known and understood. Subsequently, the requirements for the entire product and for the individual parts must be specified properly. The final product shall be able to pass robustness tests, such as exposure to aggressive gases, humidity, and contamination. Requirements for the suppliers of individual parts and components shall include cleanliness and lifetime data in harsh environments. Danfoss has developed a simple three-step 'aggressive³' test procedure for new products, which involves exposure to salt mist, aggressive gases, and cyclic humidity. These tests show good conformity with real-life scenarios, but exact acceleration factors relative to the customer environment cannot be established, since too many factors are unknown. However, it is possible to run life tests at various stress levels and thereby determine acceleration factors for a single stressor or a few stressors. Other 'must' tests are dust tests, condensing-humidity tests, and temperature cycling. It is also very important to analyze market feedback and thereby gain more knowledge about the customers and the products, in order to make ongoing improvements.
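For a single thermal stressor, one widely used way to turn such a life test into an acceleration factor is the Arrhenius relation below; it is given here only as a generic illustration with an assumed activation energy, not as the model Danfoss applies.

\[
AF = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\mathrm{use}}}-\frac{1}{T_{\mathrm{test}}}\right)\right]
\]

With an assumed activation energy of 0.7 eV, Boltzmann's constant 8.617e-5 eV/K, a use temperature of 313 K (40 °C), and a test temperature of 358 K (85 °C), this gives AF ≈ exp(3.26) ≈ 26, i.e. one test hour at the elevated temperature corresponds to roughly 26 hours at the use condition for that stressor alone.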
{"title":"Electronics in harsh environments - product verification and validation","authors":"K. Stentoft, M. L. Petersen","doi":"10.1109/RAMS.2008.4925839","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925839","url":null,"abstract":"The demand for robust electronics is continuously increasing, as electronics is used in almost any products and placed everywhere. The customer expects industrial products to operate at least 5 years without failures, no matter the location. In order to be able to design products which are robust against harsh environments, the specific conditions at the customers must be known and understood. Following, the requirements for the entire product and the single parts must be specified properly. The final product shall be able to pass robustness tests, such as aggressive gases, humidity and contamination. Requirements to the suppliers of single parts and components shall include cleanliness and lifetime data in harsh environments. Danfoss has developed a simple three step 'aggressive sub 3' test procedure for new products, which involves exposure in salt mist, aggressive gasses and cyclic humidity. These tests show good conformity with real life scenarios, but exact acceleration factors relative to the customer environment can not be established, since too many factors are unknown. However it is possible to make life tests with various stress levels and thereby determine acceleration factors for single or few stressors. Other 'must' tests are dust tests, condensing humidity tests and temperature cycling. It is also very important to analyze market feed-backs and thereby get more knowledge about the customers and the products, to be able to make ongoing improvements.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121644041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risk analysis of phased-mission systems with multiple failure modes
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925762
T. Kohda, S. Matsumoto, M. Nakagawa
General systems such as production plants and aircraft can be regarded as phased-mission systems, which perform several different functions depending on their operational stage. This paper proposes a novel, simple, and practical risk analysis method for phased-mission systems with multiple failure modes. First, based on the physical definition of a system accident, the accident occurrence conditions at each phase are obtained in terms of component state conditions at the start and end of the phase. By evaluating these component state conditions, the system accident occurrence probability can be easily obtained. An illustrative example of a batch process in a chemical reactor shows the merits and details of the proposed method.
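A toy sketch of the evaluation step is shown below, with invented phases, components, and probabilities rather than the authors' chemical-reactor example: each component is given a distribution over the phase in which it fails, and the accident probability is accumulated at the first phase whose accident condition is met.

```python
# Toy phased-mission evaluation: accident conditions per phase are written on
# component states (failed by end of phase), and the mission accident
# probability is obtained by enumerating component failure phases.
from itertools import product

# Cumulative P(component has failed by the END of each phase) -- invented numbers.
p_failed_by_end = {
    "valve":  [0.01, 0.03, 0.06],
    "sensor": [0.02, 0.05, 0.05],
}

# Accident condition per phase, on end-of-phase component states.
accident_condition = [
    lambda s: s["valve"] and s["sensor"],
    lambda s: s["valve"] or s["sensor"],
    lambda s: s["valve"] or s["sensor"],
]

def phase_failure_pmf(cum):
    """Turn cumulative failure probabilities into a pmf over the failure phase
    (index len(cum) means 'survives the whole mission')."""
    pmf, prev = [], 0.0
    for c in cum:
        pmf.append(c - prev)
        prev = c
    pmf.append(1.0 - prev)
    return pmf

names = list(p_failed_by_end)
pmfs = [phase_failure_pmf(p_failed_by_end[n]) for n in names]
n_phases = len(accident_condition)

p_accident = 0.0
for combo in product(*[range(n_phases + 1) for _ in names]):
    prob = 1.0
    for pmf, k in zip(pmfs, combo):
        prob *= pmf[k]            # independence across components assumed
    for ph in range(n_phases):
        state = {n: k <= ph for n, k in zip(names, combo)}  # failed by end of phase?
        if accident_condition[ph](state):
            p_accident += prob    # count the accident at its first occurrence
            break

print(f"mission accident probability ~ {p_accident:.4f}")
```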
{"title":"Risk analysis of phased-mission systems with multiple failure modes","authors":"T. Kohda, S. Matsumoto, M. Nakagawa","doi":"10.1109/RAMS.2008.4925762","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925762","url":null,"abstract":"General systems such as production plants and aircrafts can be regarded as a phased-mission system, which perform several different functions depending on their operation stage. This paper proposes a novel, simple and practical risk analysis method of phased mission systems with multiple failure modes. Firstly, based on the physical definition of a system accident, system accident occurrence conditions at each phase are obtained in terms of component state conditions at start/end of a phase. Evaluating component state conditions, the system accident occurrence probability can be easily evaluated. An illustrative example of a batch process in the chemical reactor shows the merits and details of the proposed method.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123041442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advanced human factors Process Failure Modes and Effects analysis
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925823
Mei Rong, Ting Zhao, Yang Yu
Process Failure Mode and Effects Analysis (PFMEA) is a technique to identify, analyze, and evaluate the potential failure modes of a production process. Its purpose is to suggest control measures that effectively reduce the risk of the production process. NASA has applied the PFMEA technique to human processes or tasks; this type of analysis is referred to as human factors process FMEA (HF-PFMEA). HF-PFMEA breaks a task process down into discrete steps so that the actions associated with each step can be specifically analyzed for potential failure. For each action performed, all potential human errors are identified. Each error is then evaluated to identify positive and negative contributing factors, barriers, and controls. The aim of this work is to provide risk-reduction measures that minimize the risk of the human process. Although HF-PFMEA has produced significant results in the analysis of human factors, the technique itself can still be improved. For example, it offers no generic method for decomposing the task, and it is difficult to identify human operational errors comprehensively. This paper proposes solutions to these shortcomings of the conventional approach. Advanced HF-PFMEA integrates several concepts that distinguish it from the conventional method: (1) a method to break down the task process according to its sequence and logic, and (2) the use of hazard and operability study (HAZOP) to analyze human operational errors, which are identified by combining guidewords with process parameters.
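A minimal illustration of point (2) follows, assuming generic guidewords, task steps, and parameters rather than the specific ones proposed in the paper.

```python
# Enumerate candidate human operational errors by applying HAZOP-style
# guidewords to the parameters of each task step. The resulting list feeds
# the HF-PFMEA worksheet for screening and evaluation.
from itertools import product

guidewords = ["no/not", "more", "less", "reverse", "other than", "too early", "too late"]

task_steps = [
    {"step": "open inlet valve", "parameters": ["valve position", "timing"]},
    {"step": "set furnace temperature", "parameters": ["setpoint value", "timing"]},
]

candidate_errors = [
    f"{step['step']}: '{gw}' applied to {param}"
    for step in task_steps
    for param, gw in product(step["parameters"], guidewords)
]

for err in candidate_errors[:5]:
    print(err)  # each line is a candidate error to screen in the worksheet
```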
{"title":"Advanced human factors Process Failure Modes and Effects analysis","authors":"Mei Rong, Ting Zhao, Yang Yu","doi":"10.1109/RAMS.2008.4925823","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925823","url":null,"abstract":"Process Failure Mode and Effects analysis (PFMEA) is a technique to identify, analyze and evaluate the potential failure modes of a production process. The purpose is to suggest control measures to reduce the risk of the production process effectively. NASA has used the PFMEA technique to analyze human processes or tasks. This type of analysis can be referred to as a \"human factors process FMEA, HF-FMEA\". HF-FMEA breaks down a task process into discrete steps so the actions associated with a process step can be specifically analyzed for potential failure. For each action performed, all potential human errors are identified. Each error is then evaluated to identify positive and negative contributing factors, barriers, and controls. The aim of these works is to give risk reduction measures to minimize the risk of human process. Although HF-PFMEA has achieved significant achievements in the analysis of human factors, the technique itself is still to be improved. For example, the technique has no generic method to decompose the task, and it is difficult to identify human operational errors comprehensively. This paper proposes some solutions to the shortcomings of the conventional approach. Advanced HF-PFMEA integrates several important concepts that distinguish it from conventional HF-FMEA, including: (1) suggest a method to break down task process according to the sequence and logicality of task process, (2) use hazard and operability study (HAZOP) to analyze human operational errors. The operational errors can be identified by determining guidewords and process parameters.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132414073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spare parts management considering new sales
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925847
H. Liao, Peng Wang, T. Jin, S. Repaka
Controlling the spare parts inventory of a product to meet its maintenance demand is difficult, and the problem becomes more challenging when the installed base of the product changes over time. In this situation, the inventory level needs to be adjusted to follow the resulting non-stationary maintenance demand. This challenge typically arises when a manufacturer starts selling a new product and agrees to provide spare parts for its maintenance. In this paper, a special case involving a new non-repairable product with a single spare pool is considered. It is assumed that the rate of new sales of the product is constant and that the product's failure time follows a Weibull distribution. The resulting maintenance demand is formulated mathematically and evaluated through simulation. Based on this demand, a dynamic (Q, r) (lot-size/reorder-point) restocking policy is formulated and solved using a multi-resolution approach. Finally, a numerical example with the objective of minimizing the inventory cost under a service-level constraint demonstrates the proposed methodology in practical use.
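The discrete-time simulation below sketches the setting: units are sold at a constant rate, each non-repairable unit fails once at a Weibull time after installation, and the resulting demand is served under a (Q, r) policy with a fixed lead time. The parameter values and bookkeeping details are illustrative assumptions, not the paper's model or its multi-resolution solution method.

```python
# Simulate non-stationary spare-part demand from a growing installed base and
# serve it with a (Q, r) reorder policy.
import numpy as np

rng = np.random.default_rng(0)

T = 104                     # weeks simulated
sales_per_week = 5          # constant rate of new installations
shape, scale = 1.8, 80.0    # Weibull failure-time parameters (weeks)
Q, r = 20, 8                # order quantity and reorder point
lead_time = 4               # weeks between placing and receiving an order

on_hand, backorders = 15, 0
pipeline = []                         # outstanding orders as (arrival_week, quantity)
failure_weeks = np.array([], dtype=int)

for week in range(T):
    # receive any orders due this week
    arrived = sum(q for t, q in pipeline if t == week)
    pipeline = [(t, q) for t, q in pipeline if t > week]
    on_hand += arrived

    # new installations this week schedule their (single) future failure
    new_failures = week + np.ceil(scale * rng.weibull(shape, sales_per_week))
    failure_weeks = np.concatenate([failure_weeks, new_failures.astype(int)])

    # this week's spare-part demand: units failing now plus carried backorders
    demand = int(np.sum(failure_weeks == week)) + backorders
    served = min(demand, on_hand)
    on_hand -= served
    backorders = demand - served

    # (Q, r) review: reorder when the inventory position drops to r or below
    position = on_hand - backorders + sum(q for _, q in pipeline)
    if position <= r:
        pipeline.append((week + lead_time, Q))

print("ending on-hand:", on_hand, "ending backorders:", backorders)
```

Running such a simulation repeatedly for different (Q, r) pairs gives a crude way to see the cost/service-level trade-off that the paper's optimization addresses.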
{"title":"Spare parts management considering new sales","authors":"H. Liao, Peng Wang, T. Jin, S. Repaka","doi":"10.1109/RAMS.2008.4925847","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925847","url":null,"abstract":"It is difficult to control spare parts inventory of a product to meet certain maintenance demand. The problem becomes more challenging when the installed base of the product changes over time. Under this situation, the inventory value needs to be adjusted according to the resulting non-stationary maintenance demand. This challenge is usually encountered when a manufacturer starts selling a new product and agrees to provide spare parts for maintenance. In this paper, a special case involving a new non-repairable product with a single spare pool is considered. It is assumed that the rate of new sales of the product is constant, and the product's failure time follows the Weibull distribution. The mathematical model for the resulting maintenance demand is formulated and calculated through simulation. Based on the maintenance demand, a dynamic (Q, r) - (lotsize/reorder-point) restocking policy is formulated and solved using a multi-resolution approach. Finally, a numerical example with the objective of minimizing the inventory cost under a service level constraint is provided to demonstrate the proposed methodology in practical use.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126854261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid methodology and software platform for probabilistic risk assessment
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925831
K. Groth, Dongfeng Zhu, A. Mosleh
This paper introduces the software implementation of a hybrid methodology for probabilistic risk assessment (PRA) of complex systems. The software, called IRIS (Integrated Risk Information System), combines a user-friendly graphical interface with a powerful computational engine. The framework includes a multi-layered modeling approach, combining Event Sequence Diagrams, Fault Trees, and Bayesian Belief Networks in a hybrid causal logic (HCL) model. This allows the most appropriate modeling technique to be applied in each domain of the system. At its core, IRIS brings the related perspectives of system safety, hazard analysis, and risk analysis into a unifying framework.
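The snippet below is a bare-bones numerical illustration of the hybrid-causal-logic idea: a soft organizational factor modeled with a tiny Bayesian network supplies the probability of a fault-tree basic event, and the fault-tree top event then conditions one branch of an event sequence. The structure and numbers are invented for illustration and say nothing about the IRIS implementation itself.

```python
# Tiny BN: P(training inadequate) and P(procedure error | training quality).
p_training_bad = 0.2
p_proc_error_given = {True: 0.10, False: 0.02}
p_proc_error = sum(
    p_proc_error_given[bad] * (p_training_bad if bad else 1 - p_training_bad)
    for bad in (True, False)
)

# Fault tree: TOP = procedure_error OR (pump_fail AND valve_fail), independence assumed.
p_pump_fail, p_valve_fail = 1e-3, 5e-3
p_and = p_pump_fail * p_valve_fail
p_top = 1 - (1 - p_proc_error) * (1 - p_and)

# Event sequence: initiating event -> protection fails (with p_top) -> accident.
p_initiator = 0.05
p_accident = p_initiator * p_top
print(f"P(procedure error)={p_proc_error:.4f}  P(top)={p_top:.4f}  P(accident)={p_accident:.5f}")
```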
{"title":"Hybrid methodology and software platform for probabilistic risk assessment","authors":"K. Groth, Dongfeng Zhu, A. Mosleh","doi":"10.1109/RAMS.2008.4925831","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925831","url":null,"abstract":"This paper introduces the software implementation of a hybrid methodology for probabilistic risk assessment (PRA) of complex systems. The software, called IRIS (Integrated Risk Information System) combines a user-friendly graphical interface with a powerful computational engine. The framework includes a multi-layered modeling approach, combining Event Sequence Diagrams, Fault Trees, and Bayesian Belief Networks in a hybrid causal logic (HCL) model. This allows the most appropriate modeling techniques to be applied in the different domains of the system. At its core IRIS brings related perspectives of system safety, hazard analysis, and risk analysis into a unifying framework.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126316016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Threat effects analysis: Applying FMEA to model computer system threats
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925840
J. Bowles, W. Hanczaryk
As the 21st century progresses, computer systems have become a target for a new type of criminal who attacks software with malicious intent. Failure Modes and Effects Analysis, which is normally used to improve system reliability by identifying and mitigating the effects of potential system failures, provides a basic framework that can be applied to counter the threats a computer system will encounter in its operational environment. The process consists of: 1) becoming familiar with the system and system components; 2) developing a threat model by identifying external dependencies and security assumptions; 3) identifying and classifying the types of threats to the system; 4) determining the effects of the threat; and 5) making changes to counter the potential threats. This approach ensures that the assessment of the threat will be done in a systematic and meticulous manner that is more likely to result in a secure and reliable system.
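As a rough illustration of how the worksheet behind such an analysis might be organized, the sketch below records one threat per row and ranks rows for mitigation. The severity/likelihood/detectability product is borrowed from classical FMEA risk-priority-number practice as an illustration; the paper does not prescribe a particular scoring scheme, and the components and threats shown are hypothetical.

```python
# A worksheet row for threat-effects analysis, ranked by an RPN-style score.
from dataclasses import dataclass, field

@dataclass
class ThreatRecord:
    component: str
    threat: str               # e.g. spoofing, tampering, denial of service
    effect: str
    severity: int             # 1 (negligible) .. 10 (catastrophic)
    likelihood: int           # 1 (rare) .. 10 (almost certain)
    detectability: int        # 1 (always detected) .. 10 (undetectable)
    countermeasures: list[str] = field(default_factory=list)

    @property
    def risk_priority(self) -> int:
        return self.severity * self.likelihood * self.detectability

rows = [
    ThreatRecord("login service", "credential stuffing", "account takeover", 8, 6, 4,
                 ["rate limiting", "MFA"]),
    ThreatRecord("config file", "tampering", "privilege escalation", 9, 3, 5,
                 ["integrity checks"]),
]
for row in sorted(rows, key=lambda r: r.risk_priority, reverse=True):
    print(row.component, row.threat, row.risk_priority)
```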
{"title":"Threat effects analysis: Applying FMEA to model computer system threats","authors":"J. Bowles, W. Hanczaryk","doi":"10.1109/RAMS.2008.4925840","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925840","url":null,"abstract":"As the 21st century progresses, computer systems have become a target for a new type of criminal who attacks software with malicious intent. Failure Modes and Effects Analysis, which is normally used to improve system reliability by identifying and mitigating the effects of potential system failures, provides a basic framework that can be applied to counter the threats a computer system will encounter in its operational environment. The process consists of: 1) becoming familiar with the system and system components; 2) developing a threat model by identifying external dependencies and security assumptions; 3) identifying and classifying the types of threats to the system; 4) determining the effects of the threat; and 5) making changes to counter the potential threats. This approach ensures that the assessment of the threat will be done in a systematic and meticulous manner that is more likely to result in a secure and reliable system.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126370746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changes in items' failure pattern during maintenance: An investigation of the perfect repair assumption
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925832
J. M. Block, Peter Söderholm, T. Tyrberg
It is clear that the simple 'perfect repair' assumption is not immediately applicable to any of the studied types of hardware, i.e., the cooling turbine, high-voltage generator, hydraulic accumulator, and radar transmitter. Hence, the assumption of 'perfect repair' needs to be validated for each specific type of item. Assumptions based on the type of physical hardware (e.g., mechanical item or avionics item) are not always trustworthy. Strangely enough, the 'perfect repair' assumption fits best for the cooling turbine, which is a highly stressed mechanical item, while the fit is much poorer for the radar transmitter, which is an avionics item, and for the hydraulic accumulator. For the radar transmitter the trend appears very scattered. For items with a large number of failures early in their life cycle, repair is 'better than perfect', i.e., the items become more reliable after repair, presumably through the elimination of less reliable subcomponents. However, this effect is not seen for items with few failures early in their life cycle. For these items, 'perfect repair' initially seems to be a valid model; however, in many cases repair becomes 'less than perfect' later in the life cycle. For the hydraulic accumulator this trend is even more accentuated, and individual items seem to fall into two distinct subpopulations with opposite reliability trends.
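The paper does not state which trend test the authors applied; the Laplace test sketched below is one standard way to probe the homogeneous Poisson process (i.e. 'perfect repair') assumption for a single repairable item, and is included here only as a generic illustration with hypothetical failure times.

```python
# Laplace trend test for a time-truncated observation window (0, T].
import math

def laplace_trend_statistic(failure_times, observation_end):
    """Approximately standard normal under a homogeneous Poisson process;
    clearly positive values suggest deterioration (repair 'less than perfect'),
    clearly negative values suggest improvement (e.g. early weeding-out)."""
    n = len(failure_times)
    if n == 0:
        raise ValueError("need at least one failure time")
    return (sum(failure_times) - n * observation_end / 2) / (
        observation_end * math.sqrt(n / 12)
    )

# Hypothetical operating hours at which one unit failed, observed over 4000 h.
times = [250, 610, 1180, 1900, 2600, 3100, 3550, 3900]
u = laplace_trend_statistic(times, 4000)
print(f"Laplace U = {u:.2f}  (|U| > 1.96 rejects 'no trend' at roughly 5%)")
```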
{"title":"Changes in items' failure pattern during maintenance: An investigation of the perfect repair assumption","authors":"J. M. Block, Peter Söderholm, T. Tyrberg","doi":"10.1109/RAMS.2008.4925832","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925832","url":null,"abstract":"It is clear that the simple dasiaperfect repairpsila assumption is not immediately applicable to any of the studied types of hardware, i.e. cooling turbine., high-voltage generator, hydraulic accumulator, and radar transmitter. Hence, the assumption of 'perfect repair' needs to be validated for each specific type of item. Assumptions based on the type of physical hardware (e.g. mechanical item or avionics item) are not always trustworthy. Strangely enough the 'perfect repair' assumption fits best for the cooling turbine, which is a highly stressed mechanical item, while the fit is much poorer for the radar transmitter, which is an avionics item and for the hydraulic accumulator. For the radar transmitter the trend seems to be very scattered. For items with a large number of failures early in their life-cycle, repair is 'better than perfect', i.e. the items become more reliable after repair, presumably by elimination of less reliable subcomponents. However, this effect is not seen for items with few failures early in their life-cycle. For these items 'perfect repair' initially seems to be a valid model. However, in many casesrepair becomes 'less than perfect' later in the life-cycle. For the hydraulic accumulator this trend is even more accentuated and individual items seem to fall into two distinct subpopulations with opposite reliability trends.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115859714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harvesting reliability data from the internet
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925816
H. Dussault, P.S. Zarubin, S. Morris, D. Nicholls
This paper describes the initial design, development, and testing of a tool that harvests reliability data from multiple internet resources. An evaluation corpus of 1544 URLs is used to characterize typical reliability data collection content and challenges, and to provide a basis for evaluating the harvesting tool's performance and capability growth. Early results show that the ability to handle portable document format (PDF) documents, to correctly parse web pages (including significant punctuation marks and number formatting), and to extract data from tables is important in reliability data collection. The results to date show that reliability data is available on the internet and that automated tools can begin to discover and harvest that information. However, much work remains before valid component reliability data can be reliably discovered, extracted, clustered, and presented to users.
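The snippet below is a toy illustration of one harvesting sub-problem mentioned above: pulling candidate failure-rate figures out of an HTML table. The page content, and the choice of BeautifulSoup plus a regular expression for number formats, are illustrative assumptions and not a description of the tool the authors built.

```python
# Extract numeric cells (e.g. failure rates in FIT) from HTML tables.
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<table>
  <tr><th>Component</th><th>Failure rate (FIT)</th></tr>
  <tr><td>Ceramic capacitor</td><td>0.5</td></tr>
  <tr><td>Power MOSFET</td><td>12</td></tr>
</table>
"""

NUMBER = re.compile(r"^\d+(?:[.,]\d+)?$")   # handles '0.5' and European '0,5'

records = []
for table in BeautifulSoup(html, "html.parser").find_all("table"):
    rows = table.find_all("tr")
    header = [c.get_text(strip=True) for c in rows[0].find_all(["th", "td"])]
    for row in rows[1:]:
        cells = [c.get_text(strip=True) for c in row.find_all(["td", "th"])]
        numeric = {h: v for h, v in zip(header, cells) if NUMBER.match(v)}
        if numeric:
            records.append({"item": cells[0], **numeric})

print(records)
```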
{"title":"Harvesting reliability data from the internet","authors":"H. Dussault, P.S. Zarubin, S. Morris, D. Nicholls","doi":"10.1109/RAMS.2008.4925816","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925816","url":null,"abstract":"This paper describes the initial design, development and testing of a tool that harvests reliability data from multiple internet resources. An evaluation corpus of 1544 URLs is used to assess typical reliability data collection content and challenges and to provide a basis for evaluating data harvesting tool performance and capability growth. Early results show that the ability to handle portable document format (PDF) documents, correctly parse web pages, including significant punctuation marks and number formatting, and to extract data from tables are important in reliability data collection. The results to date show that reliability data is available on the internet, and that automated tools can begin to discover and harvest that information. However, there is much work to do to be able to reliably discover, extract, cluster, and present valid component reliability to users.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122146087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertain failure thresholds in cumulative damage models
Pub Date: 2008-01-28, DOI: 10.1109/RAMS.2008.4925818
A. Usynin, J. Hines, A. Urmanov
This paper investigates variability in degradation-based reliability models and how it affects the remaining-useful-life prognoses made by those models. In particular, uncertain failure thresholds in cumulative damage models are of primary interest. Many degradation-based reliability approaches use a predefined deterministic value for the failure threshold. In real-world cases, however, the designer may not know the precise critical degradation level. In such situations it is appropriate to define the critical degradation level as a range of values with certain probabilities of being critical. If no prior information is available regarding the failure threshold, the critical value has to be estimated from experimental reliability data that are subject to uncertainty due to imperfect measurements and random deviations in the reliability properties of the tested components. In these circumstances, it is desirable to model the critical threshold as a random variable; otherwise the model is oversimplified, since it neglects the failure-threshold uncertainty, whose influence on the reliability prediction can be significant. This paper presents an uncertainty analysis of how variability in the failure threshold affects the reliability prediction in cumulative damage models. Three types of cumulative damage models are investigated: a Markov chain-based model, a linear path degradation model, and a Wiener process with drift. Closed-form equations quantifying how the threshold uncertainty propagates into the model prediction are given. A numerical example illustrates how the critical-threshold uncertainty reshapes the predicted time-to-failure distribution, supporting the need to consider this uncertainty in accurate reliability computations.
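The short numerical sketch below illustrates the central point for one of the three model classes, a Wiener degradation process with drift: letting the failure threshold be random visibly widens the predicted time-to-failure distribution compared with a fixed threshold. Parameter values are illustrative only, and this Monte Carlo comparison stands in for, rather than reproduces, the paper's closed-form results.

```python
# Compare time-to-failure spread for a Wiener-with-drift degradation path
# under a fixed versus a random failure threshold.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.3            # drift and diffusion of the degradation path
dt, t_max, n_paths = 0.1, 60.0, 5_000
steps = int(t_max / dt)

# Simulate degradation paths W(t) = mu*t + sigma*B(t) on a grid.
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, steps))
paths = np.cumsum(increments, axis=1)

def time_to_failure(thresholds):
    """First grid time at which each path crosses its threshold (nan if never)."""
    crossed = paths >= thresholds[:, None]
    first = (crossed.argmax(axis=1) + 1) * dt
    first[~crossed.any(axis=1)] = np.nan
    return first

ttf_fixed = time_to_failure(np.full(n_paths, 10.0))
ttf_random = time_to_failure(rng.normal(loc=10.0, scale=2.0, size=n_paths))

for name, ttf in (("fixed threshold", ttf_fixed), ("random threshold", ttf_random)):
    print(f"{name}: mean={np.nanmean(ttf):.1f}, std={np.nanstd(ttf):.1f}")
```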
{"title":"Uncertain failure thresholds in cumulative damage models","authors":"A. Usynin, J. Hines, A. Urmanov","doi":"10.1109/RAMS.2008.4925818","DOIUrl":"https://doi.org/10.1109/RAMS.2008.4925818","url":null,"abstract":"This paper investigates the issues related to variability in degradation-based reliability models and how the variability affects the remaining useful life prognosis being made by those models. Particularly, uncertain failure thresholds in cumulative damage models are of primary interest in this study. Many degradation-based reliability approaches make use of a predefined deterministic value of the failure threshold. However, in real-world cases, the designer may not be aware of the precise critical degradation level. In such situations it is suitable to define the critical degradation level as a range of values having certain probabilities of being critical. If no prior information is available regarding the failure threshold; the critical value has to be estimated from experimental reliability data that are subject to uncertainty due to imperfect measurements and random deviations in reliability properties of the tested components. In these circumstances, it is desirable to model the critical threshold as a random variable. Otherwise, the model can be oversimplified since it neglects the failure threshold uncertainty, whose influence onto the reliability prediction can be significant. This paper presents uncertainty analysis regarding how variability in the failure threshold affects the reliability prediction in conjunction with cumulative damage models. Three types of cumulative damage models are investigated; these are a Markov chain-based model, a linear path degradation model, and a Wiener process with drift. Closed-form equations quantifying the threshold uncertainty propagation into the model prediction are given. A numerical example is presented to illustrate how the critical threshold uncertainty reshapes the predicted time-to-failure distribution, supporting the need for considering the critical threshold uncertainty in accurate reliability computations.","PeriodicalId":143940,"journal":{"name":"2008 Annual Reliability and Maintainability Symposium","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123720166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}