Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889799
Z. Li, Jian Guo, N. Xiao, Wei Huang
Prior information and its elicitation are prerequisites for Bayesian reliability inference. Multiple sources of priors, such as probability distributions fitted to historical data and expert judgment, are often available when estimating the reliability of complex systems. This paper investigates the integration of multiple priors in Bayesian reliability analysis. Specifically, methods for integrating multiple priors based on Bayesian melding are studied, and their performance under different prior-integration algorithms, such as arithmetic and geometric averaging, is examined. The impacts of prior misspecification and of the pooling-parameter selection for the integration algorithms are also studied. In numerical examples, simulation methods are applied for posterior reliability inference under the proposed prior-integration methods, and the performance of the two methods is compared.
{"title":"Multiple priors integration for reliability estimation using the Bayesian melding method","authors":"Z. Li, Jian Guo, N. Xiao, Wei Huang","doi":"10.1109/RAM.2017.7889799","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889799","url":null,"abstract":"Prior information and elicitation are the prerequisite in Bayesian reliability inference. Multiple sources for priors such as probability fitting based on historical data and expert judgment are often available when estimating the reliability of complex systems. This paper investigates the integration of multiple priors in Bayesian reliability analysis. Specifically, methods for multiple priors' integration based on Bayesian Melding are investigated. The performance of the studied methods with different prior information integration algorithms such as the arithmetic and geometric averaging is investigated. The impacts of the prior misspecification and the pooling parameter selection for prior integration algorithms are also studied. In numerical examples, simulation methods are applied for posterior reliability inference under the proposed prior integration methods and the performance of the two methods are compared.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1973 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132542150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
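The arithmetic and geometric pooling rules the abstract mentions can be sketched numerically on a discretized parameter grid. This is an illustrative sketch, not the paper's implementation; the Beta priors and the equal weights are assumptions chosen for the example.

```python
import math

def beta_pdf(grid, a, b):
    # Beta(a, b) density evaluated on a grid of points in (0, 1).
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return [c * g ** (a - 1) * (1 - g) ** (b - 1) for g in grid]

def linear_pool(priors, weights):
    # Arithmetic (linear) pool: weighted average of the prior densities.
    return [sum(w * p[i] for w, p in zip(weights, priors))
            for i in range(len(priors[0]))]

def log_pool(priors, weights, dx):
    # Geometric (logarithmic) pool: weighted product of the densities,
    # renormalized so the result integrates to one on the grid.
    pooled = [math.exp(sum(w * math.log(max(p[i], 1e-300))
                           for w, p in zip(weights, priors)))
              for i in range(len(priors[0]))]
    z = sum(pooled) * dx
    return [v / z for v in pooled]

# Two hypothetical priors on a reliability parameter: one fitted to
# historical data, one elicited from an expert (both assumed Beta here).
dx = 0.001
grid = [i * dx for i in range(1, 1000)]
data_prior = beta_pdf(grid, 8.0, 2.0)
expert_prior = beta_pdf(grid, 5.0, 5.0)
arith = linear_pool([data_prior, expert_prior], [0.5, 0.5])
geom = log_pool([data_prior, expert_prior], [0.5, 0.5], dx)
```

The weight (0.5 here) is the pooling parameter whose selection the paper studies; the geometric pool typically yields a narrower density than the arithmetic one when the component priors disagree.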
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889731
M. Bonato, Philippe Goge
The drastic reduction in pollutant emissions required by recent international regulations demands that engine fuel consumption be optimized more than ever. Efficient engine cooling components, together with an improved thermal-management strategy, play an important role in increasing engine performance, resulting in reduced fuel consumption and decreased pollutant emissions. At the same time, new market trends impose longer warranty commitments, challenging the ability of automotive suppliers to design products that combine high performance with extended reliability. In this framework, the reliability of engine cooling modules is a genuine economic and technical concern that must be validated with a rigorous methodology. To avoid over-reliance on standard, generic specifications (which cause over- or under-testing of the component during the development phase), this paper proposes the so-called “test tailoring approach.” The goal is to validate the mechanical endurance of our products with accelerated durability tests that are as representative as possible of the environmental stresses the components are expected to experience during their in-service life. The method generates customized accelerated bench tests based on real measurements taken on the vehicle during field tests. Safety coefficients and Weibull analysis of destructive tests ensure that the reliability targets are reached. These tailored specifications are used to validate the mechanical endurance of the engine cooling module under vibration, pressure-pulsation, and thermal-shock stress loadings. This paper presents how this holistic philosophy has been used to validate the design of a new generation of heat exchangers (a CO2 gas cooler and evaporator).
{"title":"Test tailoring approach for reliability assessment of automotive heat exchangers","authors":"M. Bonato, Philippe Goge","doi":"10.1109/RAM.2017.7889731","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889731","url":null,"abstract":"The drastic reduction in pollutants emission that has followed recent international regulations imposes that engine fuel consumption be optimized as ever. Efficient engine cooling components together with an improved thermal management strategy play an important role in increasing engine performance, resulting in reduced fuel consumption and decreased pollution emissions. At the same time, new market trends are imposing longer warranty commitments, therefore challenging the capability of automotive suppliers to design products capable of high performance and extended reliability. In this framework, the reliability of engine cooling modules is a real economical and technical topic which has to be validated according to a rigorous methodology. By avoiding the over-use of standard and generic specifications (which cause over or under testing of the component during development phase), this paper proposes the so-called “test tailoring approach.” The goal is to validate the mechanical endurance of our products according to accelerated durability tests that are the most representative as possible to the environmental stresses that the components expect to see during their in-service life. This method permits the generation of customized accelerated bench tests, based on real measurements taken on the vehicle during field tests. The use of safety coefficients and Weibull analysis of destructive tests allows ensuring that the reliability targets are reached. These tailored specifications are employed to validate the mechanical endurance of the engine cooling module undergoing vibration, pressure pulsation and thermal shock stress loadings. 
This paper presents how this holistic philosophy has been used to validate the design of a new generation of heat exchangers (CO2 gas cooler and evaporator).","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133637672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
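The Weibull analysis of destructive test results can be sketched with a standard median-rank-regression fit. This is a generic illustration (Bernard's approximation, two-parameter Weibull), not the authors' specific procedure or safety coefficients.

```python
import math

def weibull_mrr(failure_times):
    """Fit a 2-parameter Weibull (shape beta, scale eta) by median-rank
    regression, using Bernard's approximation for the median ranks."""
    t = sorted(failure_times)
    n = len(t)
    ranks = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    # Linearized Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).
    x = [math.log(ti) for ti in t]
    y = [math.log(-math.log(1.0 - f)) for f in ranks]
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    eta = math.exp(mx - my / beta)
    return beta, eta

def b_life(beta, eta, p=0.10):
    # Age by which a fraction p of units has failed (B10 life for p = 0.10).
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)
```

Comparing a B-life (after applying a safety coefficient) against the warranty requirement is one common way to confirm that a reliability target is met.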
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889739
Y. Shaban, S. Yacout, M. Aly
This paper presents a novel approach for incorporating condition information, based on historical data, into the development of reliability curves. The approach uses a variation of the Kaplan-Meier (KM) estimator together with degradation-based estimators of survival patterns. From a statistical perspective, using the KM estimator to create a reliability curve for a specific type of equipment yields a general curve that does not account for the instantaneous condition of each individual piece of equipment. The proposed degradation-based estimator updates the KM estimator to capture the actual condition of the equipment based on detected patterns, which identify interactions between condition indicators. The degradation-based reliability curves are obtained by a new methodology called Logical Analysis of Survival Data (LASD). LASD identifies interactions between condition indicators without any prior hypotheses, generating patterns with machine learning and pattern recognition techniques. From this set of patterns, survival curves that can predict the reliability of any device at any time, based on its actual condition, are developed. To evaluate the LASD approach, it was applied to experimental results representing cutting-tool degradation during turning of TiMMCs under condition monitoring. Compared with the traditional Kaplan-Meier reliability curve, LASD improves the reliability prediction.
{"title":"Condition-based reliability prediction based on logical analysis of survival data","authors":"Y. Shaban, S. Yacout, M. Aly","doi":"10.1109/RAM.2017.7889739","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889739","url":null,"abstract":"This paper presents a novel approach for incorporating condition information based on historical data into the development of reliability curves. The approach uses a variation of Kaplan-Meier (KM) estimator and degradation-based estimators of survival patterns. From a statistical perspective, the use of KM estimator to create a reliability curve of a specific type of equipment, results in a general curve that does not take into consideration the instantaneous condition of each individual equipment. The proposed degradation-based estimator updates the KM estimator in order to capture the actual condition of equipment based on the detected patterns. These patterns identify interactions between condition indicators. The degradation-based reliability curves are obtained by a new methodology called ‘Logical Analysis of Survival Data (LASD). LASD identifies interactions between condition indicators without any prior hypotheses. It generates patterns based on machine learning and pattern recognition technique. Using these set of patterns, survival curves, which can predict the reliability of any device at any time based on its actual condition, are developed. To evaluate the LASD approach, it was applied to experimental results that represent cutting tool degradation during turning TiMMCs with condition monitoring. 
The performance of the LASD when compared to the traditional Kaplan-Meier based reliability curve improves the reliability prediction.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132376413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
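A minimal Kaplan-Meier estimator, the baseline that the LASD curves update, can be sketched as follows. This is the generic estimator only; the LASD pattern-generation step itself is not reproduced here.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times:  observed times (failure or censoring)
    events: 1 if the observation is a failure, 0 if right-censored
    Returns (time, survival probability) at each distinct failure time."""
    pairs = sorted(zip(times, events))
    surv, curve = 1.0, []
    for t in sorted({tt for tt, _ in pairs}):
        at_risk = sum(1 for tt, _ in pairs if tt >= t)   # at risk just before t
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve

# Small example: failures at times 1, 2, 4; censorings at 3 and 5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Each censored unit leaves the risk set without contributing a failure, which is exactly the information a fixed reliability curve retains and a condition-based update can refine.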
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889714
George Yee
The growth of the Internet has been accompanied by the growth of cloud services, leading to the need to protect the privacy of cloud service users. Cloud service providers differ in their capability to protect privacy. It is important to consider this capability as a major factor in the selection of a provider, thus minimizing the privacy risks associated with the selection. This work derives estimates of this capability, based on the number of vulnerabilities to attack, in comparison with the number of vulnerabilities that have been secured. It is proposed that cloud service providers calculate and publish these estimates, so that users can incorporate them as part of the process of selecting a provider (this would in turn encourage cloud service providers to pay more attention to privacy). Cloud service providers could also benefit from such estimates by using them to adjust their security measures for protecting privacy, until certain target capability levels of privacy protection are reached. An example of calculating and applying the estimates is included.
{"title":"Selecting a cloud service provider to minimize privacy risks","authors":"George Yee","doi":"10.1109/RAM.2017.7889714","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889714","url":null,"abstract":"The growth of the Internet has been accompanied by the growth of cloud services, leading to the need to protect the privacy of cloud service users. Cloud service providers differ in their capability to protect privacy. It is important to consider this capability as a major factor in the selection of a provider, thus minimizing the privacy risks associated with the selection. This work derives estimates of this capability, based on the number of vulnerabilities to attack, in comparison with the number of vulnerabilities that have been secured. It is proposed that cloud service providers calculate and publish these estimates, so that users can incorporate them as part of the process of selecting a provider (this would in turn encourage cloud service providers to pay more attention to privacy). Cloud service providers could also benefit from such estimates by using them to adjust their security measures for protecting privacy, until certain target capability levels of privacy protection are reached. An example of calculating and applying the estimates is included.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116019222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
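One way to read the proposed estimate is as the fraction of known vulnerabilities a provider has secured. The sketch below uses that reading; the function names, the exact formula, and the numbers are assumptions for illustration, not the paper's estimator.

```python
def privacy_capability(secured, unsecured):
    """Illustrative capability estimate: fraction of a provider's known
    privacy vulnerabilities that have been secured (assumed formula)."""
    total = secured + unsecured
    return secured / total if total else 0.0

def select_provider(providers):
    # providers: {name: (secured_count, unsecured_count)};
    # choose the provider with the highest published capability estimate.
    return max(providers, key=lambda p: privacy_capability(*providers[p]))

# Hypothetical published counts for two candidate providers.
candidates = {"ProviderA": (8, 2), "ProviderB": (5, 5)}
best = select_provider(candidates)
```

A provider could likewise track this ratio over time and tighten security measures until a target capability level is reached, as the abstract suggests.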
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889783
Lei Chen, D. Zhong, J. Jiao, T. Zhao
Modern safety-critical systems are more complex than ever, and this continuous increase in complexity makes ensuring their safety increasingly difficult. The ability to perform an effective and robust safety analysis of modern safety-critical systems therefore plays an ever more crucial role. Traditional safety analysis models based on event chains, which hold that accidents are caused by chains of directly related failure events, oversimplify causality and the accident process; they also exclude many of the systemic factors in accidents and the indirect or nonlinear interactions among events. The System-Theoretic Accident Model and Processes (STAMP) is an accident causality model based on systems theory, used for complex systems, especially complex socio-technical systems. In STAMP, safety is regarded as an emergent property of a system arising from component interactions, and as a control problem: enforcing safety constraints on component behaviors and interactions. STAMP-based analysis rests on three basic constructs: safety constraints, hierarchical safety control structures, and process models. As system complexity rises, STAMP plays an increasingly significant role in the development of systemic accident theory. However, STAMP-based safety analysis is usually performed manually, which is costly and inefficient. To improve analysis efficiency and reduce cost, this paper proposes a formal approach that integrates model checking with STAMP to automatically search for the potential paths that could lead to hazards. Through model checking, system behaviors are simulated and counterexamples violating the safety constraints and requirements are generated, which can be used to improve the system design. The approach is illustrated through a case study of a typical air-accident analysis, which verifies its validity. The process and results show that the safety-engineering workload is reduced and the analysis efficiency is improved.
{"title":"Improving accident causality analysis based on STAMP through integrating model checking","authors":"Lei Chen, D. Zhong, J. Jiao, T. Zhao","doi":"10.1109/RAM.2017.7889783","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889783","url":null,"abstract":"Modern safety-critical systems are becoming increasingly more complex than ever before. Continuous complexity increase renders ensuring the safety of such systems increasingly difficult. So, the ability to perform an effective and robust safety analysis on modern safety-critical system plays a more and more crucial role. Traditional safety analysis models based on event chains which consider that accidents are caused by chains of directly related failure events oversimplify causality and the accident process. Also, they exclude many of the systemic factors in accidents and indirect or nonlinear interactions among events. System-Theoretic Accident Modeling and Process(STAMP) accident model is an accident causality model based on system theory used for complex system, especially complex socio-technical system. Safety in STAMP is regarded as an emergent property of system caused by components interactions and a problem of control which means enforcing safety constrains on components behaviors and interactions. In the STAMP based analysis, three basic constructs underlying the analysis process are highlighted: safety constraints, hierarchical safety control structures and process model. With a rise of system complexity, STAMP is playing an increasingly significant role in the development of systemic accident theory. However, STAMP-based safety analysis is usually completed manually, which seems to be with high cost and low efficiency. To raise analysis efficiency, reduce its cost, this paper proposes a formal approach which integrated a model checking with STAMP to automatically search the potential paths that could lead to hazards. 
By use of model checking, behaviors of the system are simulated and counter example(s) violating the safety constraints and requirements could be raised, to improve the system design. The application of the proposed approach is illustrated through a case study of a typical air accident analysis to verify the validity of the approach. The process and result gained by the improvement have shown us that the safety engineering workload has been reduced and the analysis efficiency has been raised.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121073168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
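The core of the proposed automation, searching system behaviors for a path that violates a safety constraint and returning it as a counterexample, can be sketched as an explicit-state reachability check. The toy aircraft-mode model and its state names below are invented for illustration; a real model checker (e.g. NuSMV or SPIN) would operate on a far richer STAMP control-structure model.

```python
from collections import deque

def find_counterexample(initial, transitions, violates):
    """Breadth-first search over the state space; returns the shortest
    path to a state violating the safety constraint, or None."""
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if violates(state):
            return path                     # counterexample trace
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Hypothetical toy model: states are (controller_process_model, actual_mode).
# The hazard is the controller still believing "level" at ground contact,
# i.e. an unsafe control action caused by a stale process model.
transitions = {
    ("level", "level"):     [("level", "descent"), ("descent", "descent")],
    ("level", "descent"):   [("level", "ground")],   # belief never updated
    ("descent", "descent"): [("level", "level")],
    ("level", "ground"):    [],
}
hazard = lambda s: s == ("level", "ground")
trace = find_counterexample(("level", "level"), transitions, hazard)
```

The returned trace plays the role of the counterexample the abstract describes: a concrete behavior sequence the analyst can inspect to strengthen the safety constraints or the control structure.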
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889660
Nidhal Mahmud
Markov Chains (MCs) are very powerful for capturing the dynamic aspects of systems and for evaluating safety measures. However, such models suffer from the state-space explosion problem, which often makes their solution intractable if not impossible. In this paper, a new approach to computing an optimal description of a system MC is presented. The approach is based on an algebraic representation of a Markov chain in a standard sum-of-products canonical form, which can then be reduced by symbolic calculus: the sequences are captured using only the Boolean logic operator AND (symbol ‘.’) and the Priority-OR temporal logic operator (POR, symbol ‘|’). POR represents a priority situation in which one event must occur first and other events may or may not occur subsequently. This approach preserves the advantage of using the powerful Boolean methods in the reduction process, here extended with temporal-logic calculus. By solving the reduced MC, exact measures of interest for the larger MC can be computed. However, since the complete MC must be constructed before it can be reduced, the approach is practical only via composition. That is, for large systems, a smaller system MC can be produced directly from compositional reduced MCs that are local to the system's constituents.
{"title":"A compositional symbolic calculus approach to producing reduced Markov chains","authors":"Nidhal Mahmud","doi":"10.1109/RAM.2017.7889660","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889660","url":null,"abstract":"Markov Chains (MCs) are very powerful in capturing the dynamic aspects of systems and in the evaluation of safety measures. However, such models suffer from the state space explosion problem, which often makes their solutions intractable if not impossible. In this paper, a new approach to computing an optimal description of a system MC is presented. The approach is based on an algebraic representation of a Markov chain in a standard sum-of-product canonical form which can then be reduced by symbolic calculus — the sequences are captured by using only the Boolean logic operator AND (symbol ‘.’) and the Priority-OR temporal logic operator (POR, symbol ‘|’). POR is used to represent a priority situation where one event must occur first and other events may or may not occur subsequently. This approach preserves the advantage of using the powerful Boolean methods in the reduction process which is rather extended with temporal logic calculus. By solving the reduced MC, exact measures of interest for the larger MC can be computed. However, since the complete MC needs to be constructed beforehand in order to be reduced afterwards, the approach is practical only via composition. 
That is, for large systems, a smaller system MC can be produced directly from compositional reduced MCs that are local to the system constituents.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116836815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
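The Boolean half of such a reduction, before the POR temporal operator enters, rests on the absorption law X + X·Y = X over a sum-of-products form. A minimal sketch of that step only (the temporal-logic calculus itself is not reproduced):

```python
def minimize_sop(terms):
    """Apply the absorption law X + X.Y = X to a sum-of-products form.
    Each term is a set of events ANDed together; any term that is a
    strict superset of another term is absorbed (dropped)."""
    unique = sorted({frozenset(t) for t in terms}, key=len)
    kept = []
    for term in unique:
        # A shorter kept term that is a subset of this one absorbs it.
        if not any(k <= term for k in kept):
            kept.append(term)
    return kept

# A.B + A + B.C  reduces to  A + B.C
reduced = minimize_sop([{"A", "B"}, {"A"}, {"B", "C"}])
```

Processing terms in order of increasing length guarantees any potential absorber is already in `kept` when a longer term is examined, so one pass suffices.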
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889687
Bentolhoda Jafary, V. Nagaraju, L. Fiondella
A primary goal of maintenance is to minimize the consequences of component and system failures. Two subcategories of maintenance actions are Preventive Maintenance (PM), performed at predetermined time intervals prior to failure, and Emergency Repair (ER), performed upon failure, where the cost and downtime of emergency repair are significantly greater than those of preventive maintenance. Most maintenance models developed over the past several decades assume that component failures are statistically independent. This assumption simplifies calculations, but it is dangerous for safety-critical systems that must be maintained, because correlated failures can lower the mean time to failure, increasing the probability that emergency repair will be required. This paper presents a simple method with an explicit correlation parameter to characterize the impact of correlated component failures on the optimal preventive maintenance interval of a system with arbitrary structure. The method is applied to two maintenance policies: age replacement to minimize cost and age replacement to maximize availability. Examples illustrate that the approach identifies optimal maintenance intervals under these policies, minimizing cost per unit time or maximizing stationary availability, despite correlated failures.
{"title":"Impact of correlated component failure on age replacement maintenance policies","authors":"Bentolhoda Jafary, V. Nagaraju, L. Fiondella","doi":"10.1109/RAM.2017.7889687","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889687","url":null,"abstract":"A primary goal of maintenance is to minimize the consequences of component and system failures. Two subcategories of maintenance actions include: Preventive Maintenance (PM) at predetermined time intervals prior to failure and Emergency Repair (ER) upon failure, where the cost and downtime of emergency repair is significantly greater than preventive maintenance. Most maintenance models developed over the past several decades assume component failures are statistically independent. This assumption simplifies calculations, but is dangerous for safety critical systems that must be maintained because correlated failures can lower the mean time to failure, increasing the probability that emergency repair will be required. This paper presents a simple method with an explicit correlation parameter to characterize the impact of correlated component failures on the optimal preventive maintenance interval of a system with arbitrary structure. This method is applied to two maintenance policies, including: age replacement to minimize cost and age replacement to maximize availability. 
Examples illustrate that our approach identifies optimal maintenance strategies for these policies such as cost per unit time and stationary availability despite correlated failures.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116726203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
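The classic independent-failure version of the age-replacement cost model, which the paper extends with a correlation parameter, can be sketched numerically. The Weibull lifetime and the cost figures below are assumptions for illustration; the correlation adjustment itself is the paper's contribution and is not reproduced.

```python
import math

def cost_rate(T, beta, eta, c_pm, c_er, n=2000):
    """Long-run cost per unit time of replacing at age T (preventive,
    cost c_pm) or at failure (emergency repair, cost c_er), for an
    independent Weibull(beta, eta) lifetime. The expected cycle length
    integral of R(t) over [0, T] uses the trapezoidal rule."""
    R = lambda t: math.exp(-((t / eta) ** beta))
    h = T / n
    mean_cycle = h * (0.5 * (R(0.0) + R(T)) + sum(R(i * h) for i in range(1, n)))
    return (c_pm * R(T) + c_er * (1.0 - R(T))) / mean_cycle

def optimal_age(beta, eta, c_pm, c_er, grid):
    # Grid search for the replacement age minimizing the cost rate.
    return min(grid, key=lambda T: cost_rate(T, beta, eta, c_pm, c_er))

grid = [10.0 * k for k in range(1, 31)]
T_star = optimal_age(2.0, 100.0, 1.0, 10.0, grid)   # wear-out case (beta > 1)
```

With an exponential lifetime (beta = 1) the cost rate only decreases in T, so no finite preventive interval is optimal; correlated failures effectively shorten the lifetime, shifting T_star in the way the paper quantifies.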
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889691
Haitao Zhang, Mingxiao Jiang
Implantable medical devices must have sufficient fatigue durability to last for many years of implantation, yet the current fatigue reliability characterization/demonstration method has several drawbacks. In this paper, by combining feasibility testing, finite element analysis (FEA), a material-level probabilistic stress-life (P-S-N) curve, and use-condition-based fatigue loading, we develop an integrated approach to predicting implantable medical device fatigue reliability.
{"title":"An integrated approach for implantable medical devices fatigue reliability prediction","authors":"Haitao Zhang, Mingxiao Jiang","doi":"10.1109/RAM.2017.7889691","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889691","url":null,"abstract":"For implantable medical devices that must have sufficient fatigue durability to last for many years of implantation, the current fatigue reliability characterization/demonstration method has several drawbacks. In this paper, by combining the feasibility testing, FEA analysis, material level probabilistic stress-life (P-S-N) curve, use condition based fatigue loading, we developed an integrated approach to predict implantable medical device fatigue reliability.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131099270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889750
W. Nelson
In sudden-death life testing, a group of m specimens is put on a test machine together and run until the d-th failure. The total cost of such a test is a function of m, the expected test length, and the number G of such groups tested. For a Weibull model, new results given here optimize m and G to minimize the variance of the estimate of a specified low fractile, subject to a specified total test cost. Sudden-death testing reduces test time, and the resulting data from the lower tail of the distribution yield an estimate with less model error. The results are illustrated with a client application.
{"title":"Cost optimal sudden-death life testing","authors":"W. Nelson","doi":"10.1109/RAM.2017.7889750","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889750","url":null,"abstract":"In sudden-death life testing, a group of m specimens is put on a test machine together and run until the d-th failure. The total cost of such a test is a function of m, of the expected test length, and the number G of such groups tested. For a Weibull model, new results here optimize m and G to minimize the variance of the estimate of a specified low fractile subject to a specified total test cost. Sudden-death testing reduces test time, and the resulting data from the lower tail of the distribution yield an estimate with less model error. The results are illustrated with a client application.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130731157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
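One building block of the optimization: in the common d = 1 case, the first failure in a group of m i.i.d. Weibull specimens is itself Weibull with the same shape, which gives the expected test length in closed form. A sketch under that assumption (parameter values are hypothetical; the paper's variance-optimal choice of m and G is not reproduced):

```python
import math

def group_min_scale(eta, beta, m):
    """Scale parameter of the first-failure time when m i.i.d.
    Weibull(beta, eta) specimens run together: the minimum of m such
    lifetimes is Weibull(beta, eta * m**(-1/beta))."""
    return eta * m ** (-1.0 / beta)

def expected_first_failure(eta, beta, m):
    # Mean of a Weibull(beta, scale) lifetime: scale * Gamma(1 + 1/beta).
    return group_min_scale(eta, beta, m) * math.gamma(1.0 + 1.0 / beta)

def total_expected_test_time(eta, beta, m, G):
    # G groups run sequentially on one machine, each to its first failure.
    return G * expected_first_failure(eta, beta, m)
```

Larger groups shorten each run (scale shrinks as m**(-1/beta)) but consume more specimens, which is exactly the cost trade-off the optimization of m and G resolves.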
Pub Date: 1900-01-01 | DOI: 10.1109/RAM.2017.7889701
Zhicheng Zhu, Yisha Xiang, Suzan Alaswad, C. R. Cassady
Condition-based maintenance (CBM) has been studied extensively. However, the majority of existing CBM research considers either a periodic inspection schedule or a fixed preventive maintenance threshold. While policies with periodic inspections and/or fixed maintenance thresholds are easy to implement in practice, they may incur more inspections than necessary and induce more failures. In this paper, we develop a sequential CBM policy for systems subject to stochastic degradation. The aim of the proposed policy is to prevent or delay failures and to perform maintenance activities just in time. Unlike conventional preventive maintenance, which often fixes the inspection interval and the preventive maintenance threshold, both the next inspection time and the corresponding maintenance threshold are here determined dynamically from the current state of the system. The proposed sequential predictive maintenance policy is particularly important and applicable for general non-homogeneous degradation processes. The model enables optimal scheduling of inspection and preventive maintenance decisions to minimize the long-run maintenance cost rate, including inspection, preventive, and corrective maintenance costs. The performance of the proposed policy is evaluated using a simulation-based optimization approach: the frequency of system failures and the total maintenance cost rate are computed and compared with a benchmark policy, a periodic inspection/replacement policy. Our results show that there can be potential savings from the proposed predictive maintenance policy.
{"title":"A sequential inspection and replacement policy for degradation-based systems","authors":"Zhicheng Zhu, Yisha Xiang, Suzan Alaswad, C. R. Cassady","doi":"10.1109/RAM.2017.7889701","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889701","url":null,"abstract":"Condition-based maintenance (CBM) has been extensively studied. However, the majority of existing CBM research either consider a periodic inspection schedule or a fixed preventive maintenance threshold. While policies with periodic inspections and/or fixed maintenance threshold are easy to implement in practice, they may incur more-than-necessary inspections and induce more failures. In this paper, we develop a sequential CBM policy for systems subject to stochastic degradation. The aim of the proposed policy is to prevent or delay failures and perform maintenance activities just in time. Unlike conventional preventive maintenance that often fixes the inspection interval and the preventive maintenance threshold, both the next inspection time and the corresponding maintenance threshold in this paper are dynamically determined based on the current state of the system. The proposed sequential predictive maintenance policy is particularly important and applicable for general non-homogeneous degradation processes. The proposed model enables optimal scheduling of inspection and preventive maintenance decisions, in order to minimize the long-run maintenance cost rate including inspection, preventive and corrective maintenance costs. The performance of the proposed predictive maintenance policy is evaluated using a simulation-based optimization approach. Frequency of system failures and total maintenance cost rates are computed and compared with a bench mark maintenance policy, a periodic inspection/replacement policy. 
Our results show that there can be potential savings from the proposed predictive maintenance policy.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126759067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
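The benchmark periodic inspection/replacement policy can be evaluated by simulation along the lines the abstract describes. Everything concrete below (exponential degradation increments, the cost figures, self-announcing failures) is an assumption for the sketch, not the paper's model; the sequential policy would additionally recompute the next inspection time and threshold from the observed degradation level.

```python
import random

def simulate_policy(tau, pm_level, fail_level=10.0, horizon=10_000,
                    c_insp=1.0, c_pm=50.0, c_cm=500.0, seed=1):
    """Monte Carlo cost rate of a periodic inspection/replacement policy:
    inspect every tau time units; replace preventively if the degradation
    level is at or above pm_level; failures (level >= fail_level) are
    assumed self-announcing and repaired correctively at once.
    Degradation grows by Exp(1)-distributed increments each time unit."""
    rng = random.Random(seed)
    t, level, cost = 0, 0.0, 0.0
    while t < horizon:
        t += 1
        level += rng.expovariate(1.0)
        if level >= fail_level:            # failure: corrective maintenance
            cost += c_cm
            level = 0.0
        elif t % tau == 0:                 # scheduled inspection epoch
            cost += c_insp
            if level >= pm_level:          # preventive replacement threshold
                cost += c_pm
                level = 0.0
    return cost / horizon
```

Sweeping `tau` and `pm_level` over a grid reproduces the benchmark side of the paper's comparison; the sequential policy is evaluated the same way, with the inspection decision recomputed inside the loop.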