
Latest publications from the 2017 Annual Reliability and Maintainability Symposium (RAMS)

Optimal checkpointing of fault tolerant systems subject to correlated failure
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889756
Bentolhoda Jafary, L. Fiondella
Checkpointing is a technique for backing up work at periodic intervals so that, if computation fails, it is not necessary to restart from the beginning; computation can instead resume from the latest checkpoint. Performing checkpointing operations requires time. It is therefore necessary to consider the tradeoff between the time spent performing checkpointing operations and the time saved when computation resumes from a checkpoint. This paper presents a method to model the impact of correlated failures on a system that performs checkpointing. We map the checkpointing process to a state-space model and superimpose a correlated life distribution. Examples illustrate that the model identifies the optimal number of checkpoints despite the negative impact of correlation on system reliability.
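To make the tradeoff concrete, here is a minimal sketch using the classic renewal argument for checkpointing under exponential, uncorrelated failures: splitting a job of length T into n segments adds n checkpoint overheads but bounds the work lost per failure. The paper's state-space model with a superimposed correlated life distribution is more general; all parameter values below are hypothetical.

```python
import math

def expected_completion_time(T, n, c, lam):
    """Expected time to finish a job of length T split into n checkpointed
    segments, with per-checkpoint overhead c and exponential failure rate
    lam. Each segment of work s = T/n + c costs (exp(lam*s) - 1)/lam on
    average when every failure restarts the current segment."""
    s = T / n + c
    return n * (math.exp(lam * s) - 1.0) / lam

# Scan candidate checkpoint counts and pick the one minimizing expected time.
T, c, lam = 100.0, 0.5, 0.02
n_opt = min(range(1, 51), key=lambda n: expected_completion_time(T, n, c, lam))
print(n_opt, expected_completion_time(T, n_opt, c, lam))
```

The scan balances checkpoint overhead (large n) against recomputation after failures (small n); correlated failures, which the paper models explicitly, would change this picture.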
Citations: 1
Geometric failure rate reduction model for the analysis of repairable systems
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889754
A. Syamsundar, D. E. Vijay Kumar
A failed component or system brought back to its functioning state after repair exhibits a different failure intensity than before the failure. This happens because the system in which the component operates deteriorates with age, or because the repaired component or system is aged compared to a new one. These factors affect the failure intensity of the component or system. To model the failure behaviour of such a component or system, a simple model, termed the geometric failure rate reduction model by Finkelstein, is proposed. This model effectively captures the changed failure behaviour under the above circumstances. The model and its inference are described, and its application to a repairable system is demonstrated.
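One common reading of such a model is that each repair scales the failure intensity by a constant factor, so expected working periods shrink geometrically as the system ages through repairs. The sketch below simulates that reading with exponential working periods; Finkelstein's exact formulation and the paper's inference procedure may differ, and the parameters are invented.

```python
import random

def simulate_geometric_intensity(lam1, rho, n_failures, seed=1):
    """Successive working periods of a repairable system where the k-th
    period is exponential with rate lam1 * rho**(k-1). With rho > 1 each
    repair leaves the system with a geometrically higher failure rate."""
    rng = random.Random(seed)
    return [rng.expovariate(lam1 * rho**k) for k in range(n_failures)]

print(simulate_geometric_intensity(lam1=0.01, rho=1.15, n_failures=6))
```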
Citations: 3
Uniform rotorcraft guidelines for quantitative risk assessment
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889771
J. Hewitt, Gary D. Braman
Quantitative Risk Assessment (QRA) is an effective element of the Safety Risk Management process that augments the qualitative methods used in the past. QRA is based on Life Data Analysis [1] and can accurately predict future risk by analyzing the risk without corrective action (uncorrected risk) and then analyzing the risk with specific mitigating actions implemented (corrected risk). Before 2013, there was no uniformity among QRA limits or guidelines in the rotorcraft industry. A benchmarking activity was initiated to document the basis of established QRA risk guidelines in use or recommended by helicopter and engine manufacturers, regulatory authorities, and academia. Benchmarking is a focused process that enables changes leading to improvements in products, processes, and services. The anticipated outcome is a comparison of the performance levels and best practices of numerous organizations in managing processes and procedures, which can improve the standard of operational excellence. The benchmarking process resulted in the significant milestone of industry/government agreement on uniform definitions and risk guidelines (numerical parameters) for all rotorcraft and for engines installed on multi-engine rotorcraft. The new guidelines were adopted by consensus of the group and have been promulgated through incorporation into the FAA Rotorcraft Risk Analysis Handbook for application in the rotary-wing aircraft industry. This work can serve as a model for other safety-critical fields where standardized quantitative risk definitions and guidelines are needed.
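As a toy illustration of the uncorrected-versus-corrected comparison, the sketch below projects expected fleet failures over an exposure window from a Weibull life distribution, then recomputes with a mitigating action that truncates each unit's exposure. The fleet ages, Weibull parameters, and the retrofit-at-100-hours mitigation are all hypothetical; this is not the FAA handbook procedure itself.

```python
import math

def cond_fail_prob(age, window, beta, eta):
    """P(fail within `window` | survived to `age`) for a Weibull(beta, eta) life."""
    surv = lambda x: math.exp(-((x / eta) ** beta))
    return 1.0 - surv(age + window) / surv(age)

ages = [120.0, 450.0, 800.0, 1500.0]     # unit ages in flight hours (invented)
beta, eta = 3.0, 4000.0                  # illustrative life parameters
window = 500.0                           # risk-exposure window, hours

uncorrected = sum(cond_fail_prob(a, window, beta, eta) for a in ages)
# Corrected risk: a retrofit at 100 h limits each unit's remaining exposure.
corrected = sum(cond_fail_prob(a, min(window, 100.0), beta, eta) for a in ages)
print(round(uncorrected, 4), round(corrected, 4))
```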
Citations: 0
FMECA-based analyses: A SMART foundation
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889723
T. Dukes, Blair M. Schmidt, Yangyang Yu
The paper focuses on current engineering practice in SMART (System Safety, Maintainability, Availability, Reliability, and Testability) engineering, using Failure Modes, Effects and Criticality Analysis (FMECA) as the fundamental SMART knowledge base. The paper demonstrates General Atomics Aeronautical Systems' adaptation to the Department of Defense's (DoD's) new Reliability and Maintainability (RAM) engineering trends, such as quantitative hazard analysis, Reliability-Centered Maintenance (RCM) analysis, and fault coverage analysis, using the traditional RAM tool, FMECA.
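As a reminder of the quantitative core a FMECA provides, the sketch below computes failure-mode criticality numbers in the familiar MIL-STD-1629A style, Cm = beta * alpha * lambda_p * t. The mode data are invented; the paper's SMART analyses build considerably more (hazard, RCM, and fault-coverage views) on top of this base.

```python
# Each tuple: (failure mode, beta = conditional probability of the end effect,
# alpha = mode ratio, lambda_p = part failure rate per 1e6 h, t = operating hours)
modes = [
    ("actuator jams",   0.50, 0.30, 12.0, 1000.0),
    ("sensor drifts",   0.10, 0.55, 12.0, 1000.0),
    ("connector opens", 1.00, 0.15, 12.0, 1000.0),
]

for name, beta, alpha, lam_per_1e6h, t in modes:
    cm = beta * alpha * (lam_per_1e6h * 1e-6) * t   # failure-mode criticality
    print(f"{name}: Cm = {cm:.2e}")
```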
Citations: 5
Value driven tradespace exploration: A new approach to optimize reliability specification and allocation
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889655
C. Jackson, Sana U. Qasisar, M. Ryan
The Value Driven Tradespace Exploration (VDTSE) framework developed in this paper is a new and sophisticated approach to optimizing reliability throughout design, and in so doing it dynamically apportions reliability goals to all system elements. The VDTSE framework is an extension of existing approaches that have been successfully used to optimize many design dimensions. This allows (for example) reliability, cost, and other design characteristics (such as weight, volume, and speed) to be automatically and continually optimized throughout the design process. This represents a substantial improvement on 'conventional' approaches to reliability goal setting, which involve 'fixed' reliability requirements reflecting a single scenario of 'satisfactory' performance. Such requirements result in a short-sighted 'binary' approach to reliability value: the system is 'satisfactory' if it exceeds the requirement. No value is placed on exceeding requirements, nor is value assigned to systems that do not meet strict requirements but may offer more 'business' value through other benefits such as reduced cost, weight, or volume. For 'conventional' approaches involving specified reliability requirements to result in optimal systems, the customer needs to have exhaustively analyzed all plausible design configurations, accurately modeled all trends in emerging technology, and put forth a demonstrable requirement that aligns with their analysis. In short, the customer needs to enact the design process before the producer does, a process that is impractical and inefficient. The VDTSE framework outlined herein avoids all of these issues. It uses component design characteristics (including cost and reliability) to establish a tradespace of potential system design solutions. By defining 'value' as a function of design characteristics, a Pareto frontier can be identified that contains the set of all locally optimized design solutions. VDTSE involves an algorithm that rapidly identifies the Pareto frontier from a large number of candidate designs. Finally, this allows the optimum design (in terms of organizational value) to be determined, with reliability goals apportioned to individual components and sub-systems.
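The frontier-identification step is easy to sketch for a two-objective tradespace (minimize cost, maximize value): sort candidates by cost and keep each design that improves on the best value seen so far, an O(n log n) sweep. The paper's algorithm and its value function are richer; the candidate designs below are hypothetical.

```python
def pareto_frontier(designs):
    """Return the non-dominated designs, each a dict with a 'cost' to
    minimize and a 'value' to maximize, sorted by increasing cost."""
    frontier, best_value = [], float("-inf")
    for d in sorted(designs, key=lambda d: d["cost"]):
        if d["value"] > best_value:        # strictly better value at higher cost
            frontier.append(d)
            best_value = d["value"]
    return frontier

candidates = [
    {"name": "A", "cost": 10.0, "value": 0.90},
    {"name": "B", "cost": 12.0, "value": 0.95},
    {"name": "C", "cost": 11.0, "value": 0.88},   # dominated by A
    {"name": "D", "cost": 15.0, "value": 0.97},
]
print([d["name"] for d in pareto_frontier(candidates)])   # ['A', 'B', 'D']
```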
Citations: 1
Damage monitoring and prognostics in composites via dynamic Bayesian networks
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889668
E. Rabiei, E. Droguett, M. Modarres
This study presents a new structural health monitoring framework for complex degradation processes such as the degradation of composites under fatigue loading. Since early detection and measurement of an observable damage marker in composites is very difficult, the proposed framework is based on identifying and then monitoring "indirect damage indicators". A dynamic Bayesian network is used to integrate relevant damage models with any available monitoring data as well as other influential parameters. Because the damage evolution process in composites is not fully understood, a technique combining extended Particle Filtering and Support Vector Regression is implemented to simultaneously estimate the damage model parameters and the damage states in the presence of multiple measurements. The method is then applied to predict the time to failure of the component.
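A minimal sketch of joint state/parameter tracking with a particle filter is shown below: each particle carries a damage level d and an unknown growth parameter c for a toy model d_{k+1} = d_k (1 + c), and weights come from a noisy indirect indicator assumed to read the damage level directly. The paper's extended particle filter couples this with a Support Vector Regression surrogate, omitted here; every model choice and reading below is an assumption.

```python
import math
import random

def pf_step(particles, z, sigma_z, rng):
    """One predict-update-resample cycle over (damage, growth-rate) particles."""
    # Predict: grow damage; jitter the static parameter (artificial dynamics).
    moved = [(d * (1.0 + c), max(c + rng.gauss(0.0, 1e-4), 0.0))
             for d, c in particles]
    # Update: Gaussian likelihood of the indirect indicator z given damage d.
    w = [math.exp(-0.5 * ((z - d) / sigma_z) ** 2) for d, _ in moved]
    if sum(w) == 0.0:
        w = [1.0] * len(moved)               # guard against weight underflow
    # Resample with replacement proportionally to weight.
    return rng.choices(moved, weights=w, k=len(moved))

rng = random.Random(0)
particles = [(0.10 + 0.05 * rng.random(), 0.01 + 0.04 * rng.random())
             for _ in range(500)]
for z in [0.106, 0.113, 0.121, 0.130]:       # hypothetical indicator readings
    particles = pf_step(particles, z, sigma_z=0.01, rng=rng)

d_hat = sum(d for d, _ in particles) / len(particles)
c_hat = sum(c for _, c in particles) / len(particles)
print(round(d_hat, 4), round(c_hat, 4))      # damage state and growth estimates
```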
Citations: 11
Maintenance planning and spare parts provisioning for a small unmanned aerial system
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889755
D. McLellan, K. Schneider
This paper details a maintenance model for the Silver Fox Small Unmanned Aerial System. The system comprises a number of mission-critical, modular components as well as a number of mission-enhancing components. A typical system is deployed for 180 days and consists of one aircraft, a spares kit, and a hangar queen aircraft. A discrete event simulation model is developed to evaluate the effects of maintenance planning decisions and spare parts provisioning on aircraft availability. The model explores using both parts from the spares kit and cannibalized parts from the hangar queen in order to keep aircraft availability high. When a part fails, it is sent back to the United States for refurbishment, which takes a certain amount of time. We investigate the effects of spare parts provisioning, allowable maintenance time, and refurbishment lead time on several metrics. We show that changes in these policies have a significant impact on aircraft availability and capability.
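A stripped-down version of such a simulation for a single mission-critical module is sketched below: failures arrive at random, spares are consumed first, the hangar queen is cannibalized when the kit is empty, and failed units come back after a refurbishment lead time. The structure and every parameter value are invented simplifications of the paper's model.

```python
import heapq
import random

def simulate(days=180.0, mtbf=25.0, n_spares=2, repair_days=0.5,
             refurb_days=30.0, queen_parts=1, seed=7):
    """Return the fraction of the deployment one aircraft is mission-capable."""
    rng = random.Random(seed)
    t, up_since, uptime = 0.0, 0.0, 0.0
    spares, queen = n_spares, queen_parts
    returns = []                                   # heap of refurb return times
    while True:
        t += rng.expovariate(1.0 / mtbf)           # next failure while flying
        if t >= days:
            uptime += days - up_since
            break
        uptime += t - up_since
        while returns and returns[0] <= t:         # receive refurbished stock
            heapq.heappop(returns)
            spares += 1
        heapq.heappush(returns, t + refurb_days)   # ship failed unit to depot
        if spares > 0:
            spares -= 1
            downtime = repair_days
        elif queen > 0:
            queen -= 1
            downtime = 2 * repair_days             # remove from queen + install
        else:
            back = heapq.heappop(returns)          # wait for earliest return
            downtime = max(back - t, 0.0) + repair_days
        t += downtime
        up_since = t
        if t >= days:
            break
    return uptime / days

print(simulate())
```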
Citations: 0
Statistical guidance for setting product specification limits
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889664
L. Hund, Daniel L. Campbell, Justin T. Newcomer
This document outlines a data-driven probabilistic approach to setting product acceptance testing limits. Product Specification (PS) limits are testing requirements for assuring that the product meets the product requirements. After identifying key manufacturing and performance parameters for acceptance testing, PS limits should be specified for these parameters, with the limits selected to assure that the unit will have a very high likelihood of meeting product requirements (barring any quality defects that would not be detected in acceptance testing). Because the settings in which the product requirements must be met are typically broader than the production acceptance testing space, PS limits should account for the difference between the acceptance testing setting and the worst-case setting. We propose an approach to setting PS limits that is based on demonstrating margin to the product requirement in the worst-case setting in which the requirement must be met. PS limits are then determined by considering the overall margin and uncertainty associated with a component requirement and balancing this margin and uncertainty between the designer and producer. Specifically, after identifying parameters critical to component performance, we propose setting PS limits using a three-step procedure:

1. Specify the acceptance testing and worst-case use settings, the performance characteristic distributions in these two settings, and the mapping between these distributions.
2. Determine the PS limit in the worst-case use setting by considering margin to the requirement and additional (epistemic) uncertainties. This step controls designer risk, namely the risk of producing product that violates requirements.
3. Define the PS limit for product acceptance testing by transforming the PS limit from the worst-case setting to the acceptance testing setting using the mapping between these distributions.

Following this step, the producer risk is quantified by estimating the product scrap rate based on the projected acceptance testing distribution. The approach proposed here provides a framework for documenting the procedure and assumptions used to determine PS limits. This transparency will help inform what actions should occur when a unit violates a PS limit and how limits should change over time.
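The three-step procedure can be sketched numerically under the strong simplifying assumption of normal performance distributions in both settings, with the acceptance-test limit obtained by quantile mapping from the worst-case setting. All distributions, the requirement, and the margin below are hypothetical.

```python
from statistics import NormalDist

# Step 1: characteristic distributions in the two settings and their mapping
# (here, quantile correspondence between two assumed normals).
accept = NormalDist(mu=10.0, sigma=0.50)   # behavior in acceptance testing
worst = NormalDist(mu=9.2, sigma=0.65)     # projected worst-case-use behavior
requirement = 8.0                          # product must exceed this value

# Step 2: PS limit in the worst-case setting = requirement + margin covering
# epistemic uncertainty (a flat additive margin stands in for that analysis).
ps_worst = requirement + 0.6

# Step 3: transform the limit to the acceptance-test setting via the mapping.
q = worst.cdf(ps_worst)
ps_accept = accept.inv_cdf(q)

# Producer risk: projected scrap rate against the acceptance-test limit.
scrap_rate = accept.cdf(ps_accept)         # equals q under pure quantile mapping
print(round(ps_accept, 3), round(scrap_rate, 4))
```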
Citations: 2
Mathematical models and software reliability: can different mathematics fit all phases of SW lifecycle?
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889761
M. Krasich
Software reliability predictions and data analyses are mostly based on the number of faults; reliability growth is therefore achieved through fault mitigation. The usual result is the final failure frequency of the delivered, mature software. An early reliability prediction is needed at the beginning of the development phase to estimate the reliability of the software and its effect on the product it is a part of. Since a discrete number of faults is expected to be observed and mitigated, the non-homogeneous Poisson process is the preferred mathematical tool. Where no reliability growth is achieved during development, the same mathematics simply yields parameters indicating no reliability change or, in the worst case, reliability degradation (a growth parameter equal to or greater than one). The Krasich-Peterson model (patent pending) and Musa's original model, when used for early predictions, are very similar, except that the first assumes power-law fitting of the mitigated faults while the latter assumes a constant rate of failure mitigation. Since early reliability predictions rest on assumptions about function parameters derived from the quality of the software inspection, testing, and improvement process, as well as from the software's size, complexity, and use profile, the only way to validate those assumptions and the parameters derived from them is to apply the same mathematics to the reliability estimation of the software across its lifecycle. Regardless of which mathematical model is applied, for continuity, and for meaningful conclusions and decisions regarding software reliability as well as the future use of such information on other projects, one type of counting (discrete) distribution should be applied within the same organization throughout the software lifecycle. An additional benefit of such consistency is the ability to compare not only software development and use phases but also different software development, quality, and test practices.
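For the power-law NHPP case mentioned above, the Crow-AMSAA maximum-likelihood estimates for time-terminated data have a closed form, shown below on invented fault-discovery times. A growth parameter beta below one indicates reliability growth; a value at or above one indicates stagnation or degradation, matching the abstract's reading.

```python
import math

def crow_amsaa_mle(failure_times, T):
    """Power-law NHPP (Crow-AMSAA) MLE for time-terminated data with mean
    value function m(t) = lam * t**beta, observed over (0, T]."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T**beta
    return lam, beta

faults = [12.0, 40.0, 95.0, 180.0, 300.0, 410.0, 600.0, 900.0]  # invented, hours
lam, beta = crow_amsaa_mle(faults, T=1000.0)
intensity_now = lam * beta * 1000.0 ** (beta - 1.0)   # failure intensity at T
print(round(beta, 3), round(intensity_now, 5))        # beta < 1 implies growth
```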
Citations: 2
Improved method for ALT plan optimization
Pub Date: 2017 · DOI: 10.1109/RAM.2017.7889730
P. Arrowsmith
Candidate test plans with three stress levels (L, M, H) were identified using the probability of zero failures at one or more stress levels, Pr{ZFP1}, as a target parameter. For a given sample size n, the allocations nL and nM are input variables. The optimization involves finding a minimum for the lower stress level, on the premise that plans with a wider spread of stress levels have smaller error in the time-to-failure (TTF) extrapolated to the use stress condition. The only constraint is equal spacing of the stress levels in terms of the standardized stress (ξ). The proposed method does not require computation of the large-sample approximate variance (Avar). The optimization can be conveniently done using a spreadsheet and is quite flexible, enabling different censoring times to be used for each stress level; it can be readily extended to four or more stress levels. Monte Carlo simulation of the candidate test plans was used to verify the assumption that, for a given allocation, the variance of the extrapolated TTF is proportional to the lower stress ξL. The optimized test plans and the variance of the estimated time to 10% failure are similar to those previously published using the same planning values. Although the optimization method identifies acceptable candidate test plans, there may be other allocations (with slightly higher ξL) that give lower variance of the estimated TTF. However, the difference is typically within the resolution of the stress factor (e.g., ∆T < 1 °C) and the uncertainty of the estimated parameter. Monte Carlo simulation can be used to fine-tune candidate test plans found by the optimization method.
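Taking Pr{ZFP1} literally as the probability that at least one stress level yields zero failures by its censoring time, it can be computed directly once a life-stress model is assumed. The sketch below assumes independent Weibull lifetimes whose scale shrinks log-linearly in the standardized stress ξ; the acceleration model and all planning values are assumptions, not the paper's.

```python
import math

def pr_zfp1(plan, beta, eta_use, accel, tau):
    """Pr{ZFP1} for a plan given as (xi, n) allocations: one minus the
    probability that every level sees at least one failure by censor time
    tau. Weibull(beta) life with scale eta(xi) = eta_use * accel**(-xi)."""
    p_no_zero_anywhere = 1.0
    for xi, n in plan:
        eta = eta_use * accel ** (-xi)
        r = math.exp(-((tau / eta) ** beta))   # one unit survives censoring
        p_no_zero_anywhere *= 1.0 - r ** n     # this level sees >= 1 failure
    return 1.0 - p_no_zero_anywhere

# Equally spaced standardized stresses (spacing 0.3) with allocations nL, nM, nH.
plan = [(0.4, 12), (0.7, 8), (1.0, 6)]
print(round(pr_zfp1(plan, beta=2.0, eta_use=5000.0, accel=50.0, tau=1000.0), 5))
```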
Citations: 0