An introduction to the Bernoulli CUSUM
DOI: 10.1109/RAM.2017.7889762
S. Crowder
The Bernoulli CUSUM (BC) provides a moving window of process performance and is the quickest control chart for detecting small increases in the fraction defective. The Bernoulli CUSUM designs presented here require 2, 3, or 4 failures in a moving window to produce a signal. The run length distribution provides insight into the properties of the BC beyond the average or median run length. A retrospective analysis of electronic component pass/fail data using the BC suggested that a problem may have been present during previous production. Subsequent production used the BC for real-time process performance feedback.
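As a sketch of the underlying monitoring scheme (the in-control and out-of-control fractions defective p0 and p1 and the decision threshold h below are illustrative choices, not the designs from the paper), a minimal Bernoulli CUSUM in Python:

```python
import math

def bernoulli_cusum(observations, p0, p1, h):
    """Tabular CUSUM for Bernoulli (pass/fail) data.

    observations : iterable of 0/1 outcomes (1 = defective unit)
    p0 : in-control fraction defective
    p1 : out-of-control fraction defective to detect quickly (p1 > p0)
    h  : decision threshold; a signal is raised when the statistic reaches h

    Returns the 1-based index of the first signal, or None if no signal.
    """
    # Log-likelihood-ratio increments for a failing and a passing unit.
    w1 = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))  # increment for a failure
    w0 = math.log((1 - p1) / (1 - p0))                 # small negative increment for a pass
    s = 0.0
    for i, x in enumerate(observations, start=1):
        s = max(0.0, s + (w1 if x else w0))
        if s >= h:
            return i
    return None

# Example: in-control p0 = 1%, designed to detect a shift to p1 = 5%.
signal_at = bernoulli_cusum([0] * 40 + [1, 0, 0, 1, 0, 1], p0=0.01, p1=0.05, h=2.0)
```

Each passing unit drifts the statistic slightly downward and each failure pushes it up sharply, so a small cluster of failures within a short window is what produces the signal, consistent with the 2-to-4-failure designs described above.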
{"title":"An introduction to the Bernoulli CUSUM","authors":"S. Crowder","doi":"10.1109/RAM.2017.7889762","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889762","url":null,"abstract":"The Bernoulli CUSUM (BC) provides a moving window of process performance and is the quickest control chart to detect small increases in fraction defective. The Bernoulli CUSUM designs presented here require 2, 3, or 4 failures in a moving window to produce a signal. The run length distribution provides insight into the properties of the BC beyond the Average or Median Run length. A retrospective analysis of electronic component pass/fail data using the BC suggested that a problem may have been present during previous production. Subsequent production used the BC for real time process performance feedback.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123344651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Setting up and analyzing a two stress accelerated test
DOI: 10.1109/RAM.2017.7889778
G. Cohen, J. McLinn
Planning a viable two-stress accelerated test can be a good challenge. Determining the stresses is just the start of the reliability challenge. It starts with understanding any relevant history and the associated failure modes. The root cause(s) of the main field failures would represent a good set of stresses. Some customers may experience as many as five simultaneous operating stresses in the field, yet only two or three might be determined to be major. A common stress combination is temperature and vibration, yet sometimes a better combination might be temperature and humidity. Mobile systems might need mechanical loads combined with temperature extremes to best represent the field. Systems exposed to sea air might use salt air combined with temperature. Functional test parameters with degradation measures may be required to obtain meaningful test results for long-lived products. Difficulty in handling the stress combination, combined with the possibility of non-linear behavior, may result in changes to an accelerated test. Add intermittent or soft system failures to this mix and the reliability challenges increase. Analysis of data from a small number of samples (i.e., two or three) is also more difficult. Sample sizes should be selected to represent as wide a range of variability as possible. Often the number of available samples is smaller than desired, which complicates test planning and results. Degradation measures become indispensable when zero failures occur in test. Data collection times during the test may also impact the analysis, especially when looking for non-linear behavior. Test data collection points are often set for convenience of reading rather than to yield the best spread of information for analysis. This paper presents several detailed examples covering the best methods for selecting stresses for accelerated testing and implementing the test. These examples show practical sample sizes and test time collection points. Data handling issues that clarify results even when measurement noise is present will be discussed, followed by a short discussion of the analysis. Planning for the myriad of possible results should help prevent unexpected events that damage the ability to understand the results.
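For the temperature-and-humidity combination mentioned above, one common way to relate test conditions to field conditions is Peck's temperature-humidity acceleration model. The sketch below is a generic illustration only; the activation energy, humidity exponent, and the 85 C / 85% RH example are assumed values, not parameters from the paper:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def peck_acceleration_factor(t_use_c, rh_use, t_test_c, rh_test,
                             ea_ev=0.7, n=2.66):
    """Acceleration factor for combined temperature/humidity stress using
    Peck's model: AF = (RH_test/RH_use)^n * exp(Ea/k * (1/T_use - 1/T_test)).

    Temperatures in degrees C, relative humidity in %. The activation energy
    (ea_ev) and humidity exponent (n) are placeholders and must be chosen for
    the actual failure mode under study.
    """
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    humidity_term = (rh_test / rh_use) ** n
    thermal_term = math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))
    return humidity_term * thermal_term

# Example: 85 C / 85% RH chamber conditions vs. a 40 C / 60% RH use environment.
af = peck_acceleration_factor(t_use_c=40, rh_use=60, t_test_c=85, rh_test=85)
```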
{"title":"Setting up and analyzing a two stress accelerated test","authors":"G. Cohen, J. McLinn","doi":"10.1109/RAM.2017.7889778","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889778","url":null,"abstract":"Planning a viable two-stress Accelerated Test can be a good challenge. Determining the stresses is just the start of the reliability challenge. It starts with understanding any relevant history and the associated failure modes. The root cause(s) of the main field failures would represent a good set of stresses. Some customers may experience as many as five simultaneous operating stresses in the field, yet only two or three might be determined to be major. A common stress combination is temperature and vibration, yet sometimes a better combination might be temperature and humidity. Mobile systems might need mechanical loads combined with temperature extremes to best represent the field. Systems exposed to sea air might use salt air combined with temperature. Functional test parameters with degradation measures may be required to obtain meaningful test results of long lived products. Difficulty in handling the stress combination combined with the possibility of non-linear behavior may result in changes to an accelerated test. Add intermittent or soft system failures to this mix and reliability challenges increase. Data analysis of a small number (i.e. two or three samples) increases difficulty of analysis. Sample size selection should represent as wide a variability as possible. Often available samples is smaller than desired and this complicates test planning and results. Degradation measures become indispensable when zero failures occur in test. Data collection times during test may also impact the analysis especially when looking for non-linear behavior. Test data collection points are set for convenience of reading and not to yield the best spread of information for analysis. This paper will present several detailed examples, covering the best methods for selecting stresses for accelerated testing and implementing the test. These examples show practical sample size and test time collection points. Data handling issues to clarify results even when noise in measurements is present will be discusses. Lastly, a short discussion of analysis. Planning for a myriad of possible results should help prevent unexpected events that damage ability to understand the results.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126516909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability aspects of mega-constellation satellites and their impact on the space debris environment
DOI: 10.1109/RAM.2017.7889671
Antonio Harrison Sánchez, T. Soares, Andrew Wolahan
Mega-constellations are one of the emerging challenges in the satellite communication business. Several concepts for large networks of inexpensive low Earth orbiting satellites have been proposed in response to the ever-increasing demand for low-cost broadband capacity, particularly in developing countries with limited access to terrestrial networks. In this context, mega-constellation satellite reliability is identified as a key aspect in view of the potentially catastrophic impact on the space debris environment if satellites fail to deorbit, given the large number of satellites involved. However, predicting reliability without a detailed design is challenging. First, bottom-up analyses using handbook-based methods are not possible, and the current reliability prediction process for space applications has many inadequacies and limitations. Second, the available field data for low Earth orbit (LEO) satellites may not be representative, given the revolutionary design, manufacturing, and testing approach proposed by the mega-constellation satellite suppliers. All of this leads to a large uncertainty in the predicted reliability of mega-constellation satellites, with a consequent risk to the space environment. To address the situation, the authors identify a number of potential solutions to mitigate the risk, including design measures, operational procedures, and improvements to the reliability assessment process.
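To see why deorbit reliability dominates at constellation scale, a back-of-the-envelope calculation helps; the constellation size and reliability figures below are illustrative assumptions, not numbers from the paper:

```python
def expected_derelicts(n_satellites, p_deorbit_success):
    """Expected number of satellites left in orbit after end of life,
    assuming independent, identical post-mission disposal reliability.
    Illustrative only; the figures used below are not from the paper."""
    return n_satellites * (1.0 - p_deorbit_success)

# Even 95% disposal reliability leaves ~200 derelicts in a 4,000-satellite constellation.
print(expected_derelicts(4_000, 0.95))   # 200.0
print(expected_derelicts(4_000, 0.99))   #  40.0
```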
{"title":"Reliability aspects of mega-constellation satellites and their impact on the space debris environment","authors":"Antonio Harrison Sánchez, T. Soares, Andrew Wolahan","doi":"10.1109/RAM.2017.7889671","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889671","url":null,"abstract":"Mega-constellations are one of the emerging challenges in the satellite communication business. Several concepts of large networks of inexpensive low Earth orbiting satellites have been proposed in response to the ever increasing demand for low cost broadband capacity, particularly in developing countries where there is limited access to terrestrial networks. In this context, mega-constellation satellite reliability is identified as a key aspect in-view of the potential catastrophic impact on the space debris environment if satellites fail to deorbit given the large number of satellites involved. However, predicting reliability without having a detailed design is a challenging task as bottom up analyses using handbook based methods are not possible. Moreover, there are many inadequacies and limitations with the current reliability prediction process for space applications. Secondly, the available field data regarding low Earth orbit (LEO) satellites may not be representative due to the revolutionary design, manufacturing, and testing approach proposed by the mega-constellation satellite suppliers. Finally, all this leads to a large uncertainty in the predicted reliability of mega-constellation satellites with a consequential risk to the space environment. In order to address the situation, the authors have identified a number of potential solutions to mitigate the risk including design measures, operational procedures, and improvements to the reliability assessment process.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126407691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerated testing for the selection of the most reliable microvia design
DOI: 10.1109/RAM.2017.7889742
M. Krasich, A. Puzella, Philip M. Henault
The small size and high density of microvias make them susceptible to cracks resulting from the stresses that produce mechanical and thermal fatigue in their fragile structure. Because microvias support critical system functions regardless of industry, their high reliability is important. With this industry-wide concern in mind, the design team for the transmitter/receiver (T/R) modules developed a test to select the most reliable design, one that would satisfy the very stringent reliability requirements of this complex system. The accelerated thermal cycling and thermal dwell test evaluated and compared variations of a group of legacy microvia design shapes against the proposed design solutions, and helped select the most desirable design solution. Out of 24 new design variations, each using four different materials, the best reliability was found in design D3A-B, for which the reliability results did not depend on the material used. The majority of the other new designs achieved high reliability only with Material #4, a fact that places a certain limitation on the design requirements. The test was highly accelerated and automated, and the effort provided clearly distinguishable results for the prevention of mission-critical failures.
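The paper does not state its acceleration model, but a common starting point for relating accelerated thermal cycles to field cycles in interconnect fatigue work is the Coffin-Manson relation. The exponent and temperature ranges below are generic assumptions for illustration only, not values from this study:

```python
def coffin_manson_af(delta_t_test, delta_t_use, exponent=2.0):
    """Thermal-cycling acceleration factor from the Coffin-Manson relation:
    AF = (dT_test / dT_use) ** m.  The exponent m depends on the interconnect
    material and failure mechanism; 2.0 is a placeholder, not a reported value."""
    return (delta_t_test / delta_t_use) ** exponent

# Example: -55..+125 C test cycles (dT = 180 C) vs. 0..+60 C field cycles (dT = 60 C).
af = coffin_manson_af(delta_t_test=180, delta_t_use=60)   # 9.0
```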
{"title":"Accelerated testing for the selection of the most reliable microvia design","authors":"M. Krasich, A. Puzella, Philip M. Henault","doi":"10.1109/RAM.2017.7889742","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889742","url":null,"abstract":"The small size and high density of microvias makes them susceptible to developing cracks resultant from the stresses that produce mechanical and thermal fatigue on fragile structure. Their use in support of the critical functions of systems regardless of the industry type brings up the importance of their high reliability. Having in mind the overall industry concern, the design team of the transmitter/receiver (T/R) modules has developed the test for selection of the most reliable design which would satisfy the very stringent reliability requirements of this complex system. The accelerated thermal cycling and thermal dwell test has evaluated and compared the variations of a group of legacy deign microvia shapes to the proposed design solutions and was able to aid in selection of the most desirable design solution. Out of 24 new design variations each using four different materials, the best reliability was found in the design D3A-B where the reliability results did not depend on the material used. Majority of other new designs for high reliability results used Material #4, the fact that poses a certain limitation on the design requirements. The test was highly accelerated and automated, and the effort has provided well identified distinctive results for prevention of mission critical failures.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121818807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A holistic approach to manage risks in NPD process
DOI: 10.1109/RAM.2017.7889796
A. Chauhan, O. Yadav, G. Soni, R. Jain
New product development (NPD) is an expensive and risky endeavor for any organization owing to high product failure rates. A holistic risk management (RM) model may play a significant role in identifying, analyzing, and mitigating the risks involved in the NPD process, enabling a reduction in the product failure rate. This paper aims to develop an integrated approach to assess and effectively manage risks in the NPD process. A holistic, quantitative, three-stage methodology is proposed for assessing the risk factors prevalent in the NPD process and managing them overall. The risk identification stage involves a thorough exploration of the risk elements acting in the various phases of the NPD process and the use of factor analysis tools that yield a pool of key risk factors. Interpretive structural modeling is applied to the risk factors in each functional risk domain, and an NPD risk taxonomy is established. The risk assessment step quantifies the criticality of the identified risk factors for prioritization. The risk degree of each factor is evaluated from its probabilistic likelihood of occurrence and its severity, to measure its criticality. The authors suggest the use of fuzzy theory to reduce subjectivity and vagueness in the assessment process when calculating the risk degree of the factors. A technique is developed to capture the riskiness of the entire NPD process in a single 'risk score' value. This approach may be used to develop a 'Product Development Risk Reference Model' as a comprehensive guideline for assessing the risk factors occurring in an organization's NPD process. The approach yields a single numerical value that can be easily comprehended by product developers, helping them assess their NPD initiatives and make 'Go-Kill' decisions in accordance with the riskiness of their project and the risk profile of the organization. Further studies will be directed toward analyzing how the risk situation changes over time due to changing market scenarios and the related risk factors.
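A minimal sketch of how a fuzzy likelihood-and-severity assessment might roll up into a single risk score follows; the triangular membership functions, rating scale, weights, and example factors are all illustrative assumptions and do not reproduce the paper's reference model:

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    """Triangular fuzzy number (low, mode, high) on a 0-10 rating scale."""
    low: float
    mode: float
    high: float

    def times(self, other):
        # Approximate product of two positive triangular fuzzy numbers.
        return TriangularFuzzy(self.low * other.low,
                               self.mode * other.mode,
                               self.high * other.high)

    def defuzzify(self):
        # Centroid defuzzification.
        return (self.low + self.mode + self.high) / 3.0

def npd_risk_score(factors):
    """Aggregate (likelihood, severity, weight) triples for identified risk
    factors into a single weighted risk score."""
    total_weight = sum(w for _, _, w in factors)
    score = sum(l.times(s).defuzzify() * w for l, s, w in factors)
    return score / total_weight

# Two hypothetical risk factors: technology maturity and market timing.
score = npd_risk_score([
    (TriangularFuzzy(3, 5, 7), TriangularFuzzy(6, 8, 9), 0.6),
    (TriangularFuzzy(2, 4, 6), TriangularFuzzy(4, 5, 7), 0.4),
])
```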
{"title":"A holistic approach to manage risks in NPD process","authors":"A. Chauhan, O. Yadav, G. Soni, R. Jain","doi":"10.1109/RAM.2017.7889796","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889796","url":null,"abstract":"New product development (NPD) is expensive as well as risky endeavor for any organization owing to high product failure rates. A holistic risk management (RM) model may play a significant role in identifying, analyzing and mitigating the risks involved in NPD process, enabling a reduction in product failure rate. This paper aims at developing an integrated approach to assess and effectively manage risks in NPD process. A holistic methodology based on quantitative tools is proposed for assessment of the risk factors prevalent in the NPD process. The proposed approach is a three-stage holistic methodology for overall management of risks underlying the NPD process. The risk identification stage involves thorough exploration of risk elements acting in various phases of NPD process; and usage of factor analysis tools which render a pool of key risk factors. Interpretive structural modeling is applied on the risk factors in each functional risk domain and NPD risk taxonomy is established. The risk assessment step involves quantification of the criticality of the identified risk factors for prioritization. The evaluation of risk degree of the factors is based on probabilistic likelihood of occurrence and severity of the risk factors, to measure the criticality of the risk factors. The authors suggest usage of fuzzy theory to reduce subjectivity and vagueness in the assessment process for calculating the risk degree of the factors. A technique is developed to capture the riskiness of the entire NPD process into a single ‘risk score’ value. This approach may be used to develop ‘Product Development Risk Reference Model’ as a comprehensive guideline for assessing risk factors occurring in the NPD process in an organization. The approach leads to calculation of a single numerical value which could be easily comprehended by the product developers and help them in assessing their NPD initiatives and take ‘Go-Kill’ decisions in accordance with the prevalent riskiness in their project as per the risk profile of the organization. Further studies would be directed towards analyzing the change in risk situation over a period of time due to the changing market scenarios and the related risk factors.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114233433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A CubeSat-payload radiation-reliability assurance case using goal structuring notation
DOI: 10.1109/RAM.2017.7889672
R. Austin, N. Mahadevan, B. Sierawski, G. Karsai, A. Witulski, John W. Evans
CubeSats have become an attractive platform for universities, industry, and government space missions because they are cheaper and quicker to develop than full-scale satellites. One way CubeSats keep costs low is by using commercial off-the-shelf (COTS) parts instead of space-qualified parts. Space-qualified parts are often costlier, larger, and consume more power than their commercial counterparts, precluding their use within the CubeSat form factor. Given typical power budgets, monetary budgets, and timelines for CubeSat missions, conventional radiation hardness assurance, such as the use of space-qualified parts and radiation testing campaigns for COTS parts, is not practical. Instead, a system-level approach to radiation effects mitigation is needed. In this paper, an assurance case for a system-level approach to mitigating radiation effects in a CubeSat science experiment is expressed using Goal Structuring Notation (GSN), a graphical argument standard. The case specifically looks at three main mitigation strategies for the radiation environment: total ionizing dose (TID) screening of parts, detection of and recovery from single-event latch-up (SEL), and detection of and recovery from single-event functional interrupts (SEFIs). The graphical assurance case presented makes a qualitative argument for the radiation reliability of the CubeSat experiment using part- and system-level mitigation strategies.
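For readers unfamiliar with GSN, the sketch below shows one way such an argument can be represented as a simple tree of goal, strategy, and solution nodes. The node statements are a hypothetical fragment loosely inspired by the three strategies above, not the paper's actual assurance case:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GsnNode:
    """Minimal GSN element: a goal, strategy, or solution (evidence) node."""
    kind: str        # "Goal", "Strategy", or "Solution"
    statement: str
    children: List["GsnNode"] = field(default_factory=list)

# Hypothetical fragment of a radiation-assurance argument.
argument = GsnNode("Goal", "Payload operates acceptably in the mission radiation environment", [
    GsnNode("Strategy", "Argue over each radiation effect class", [
        GsnNode("Goal", "TID does not degrade parts below the mission dose",
                [GsnNode("Solution", "TID screening results for COTS parts")]),
        GsnNode("Goal", "SEL is detected and recovered",
                [GsnNode("Solution", "Overcurrent-detection and power-cycle test report")]),
        GsnNode("Goal", "SEFI is detected and recovered",
                [GsnNode("Solution", "Watchdog reset and memory-scrubbing verification")]),
    ]),
])
```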
{"title":"A CubeSat-payload radiation-reliability assurance case using goal structuring notation","authors":"R. Austin, N. Mahadevan, B. Sierawski, G. Karsai, A. Witulski, John W. Evans","doi":"10.1109/RAM.2017.7889672","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889672","url":null,"abstract":"CubeSats have become an attractive platform for universities, industry, and government space missions because they are cheaper and quicker to develop than full-scale satellites. One way CubeSats keep costs low is by using commercial off-the-shelf parts (COTS) instead of space-qualified parts. Space-qualified parts are often costlier, larger, and consume more power than their commercial counterparts precluding their use within the CubeSat form-factor. Given typical power budgets, monetary budgets, and timelines for CubeSat missions, conventional radiation hardness assurance, like the use of space-qualified parts and radiation testing campaigns of COTS parts, is not practical. Instead, a system-level approach to radiation effects mitigation is needed. In this paper an assurance case for a system-level approach to mitigate radiation effects of a CubeSat science experiment is expressed using Goal Structuring Notation (GSN), a graphical argument standard. The case specifically looks at three main mitigation strategies for the radiation environment: total ionizing dose (TID) screening of parts, detection and recovery from single-event latch-ups (SEL) and single-event functional interrupts (SEFI). The graphical assurance case presented makes a qualitative argument for the radiation reliability of the CubeSat experiment using part and system-level mitigation strategies.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122733960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A change in process and culture: Implementing quality, reliability and safety in early development
DOI: 10.1109/RAMS.2016.7448053
J. L. Cook
The Army and the Armament Research, Development and Engineering Center (ARDEC) continue to strive to meet cost reduction targets associated with sequestration and the economic downturn in general. Likewise, the costs and deficiencies associated with quality and reliability underperformance continue to be an area ripe for improvement. In response, specific and detailed process improvements have been undertaken, and continue to be instituted, specifically to realize the cost and risk reduction benefits of applying best practices in the quality, reliability, and safety disciplines.
{"title":"A change in process and culture: Implementing quality, reliability and safety in early development","authors":"J. L. Cook","doi":"10.1109/RAMS.2016.7448053","DOIUrl":"https://doi.org/10.1109/RAMS.2016.7448053","url":null,"abstract":"The Army and the Armament Research, Development and Engineering Center (ARDEC) continues to strive to meet cost reduction targets associated with sequestration and the economic turn down, in general. Likewise, the costs and deficiencies associated with quality and reliability underperformance continue to be an area ripe for improvement. In response, specific and detailed process improvements have been undertaken and continue to be instituted specifically to realize the cost and risk reduction benefits of best practice application in the areas of quality, reliability and safety disciplines.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125387961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating degradation model with stress strength interference model to estimate reliability in design phase
DOI: 10.1109/RAM.2017.7889747
Srikanth Nandipati, Amith Nag Nichenametla, Abhay Laxmanrao Waghmare
When working on new product developments (NPD), it is common practice to use the stress-strength interference model to estimate the reliability of a product during the early design stage. However, it is very important to take into account the possible degradation in the strength of the material once it is put into operation. This paper makes use of test data available for a specific product that was sourced from two different suppliers. A degradation model was built on a small-scale replicate (called a coupon) for which test data were available. This provided an understanding of the level of degradation over time, which was then superimposed on the stress-strength interference model to evaluate reliability over time. Such an estimate gives designers advance insight into the potential to achieve the allocated component-level reliability target derived from the system-level reliability allocation.
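A minimal sketch of the idea, assuming normally distributed stress and strength and a linear loss of mean strength (as might be fitted from coupon data); the distributional and degradation-form choices and all numbers are assumptions for illustration, not the models fitted in the paper:

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability_over_time(mu_stress, sd_stress, mu_strength0, sd_strength,
                          degradation_rate, t):
    """Stress-strength interference with a degrading strength mean.

    Assumes stress and strength are both normal and that the mean strength
    falls linearly with time:
        mu_strength(t) = mu_strength0 - degradation_rate * t
        R(t) = P(Strength > Stress)
             = Phi((mu_strength(t) - mu_stress) / sqrt(sd_strength^2 + sd_stress^2))
    """
    mu_strength_t = mu_strength0 - degradation_rate * t
    z = (mu_strength_t - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(z)

# Example: the strength margin erodes from R ~ 0.9999 at t = 0 to ~ 0.94 at t = 10.
print(reliability_over_time(100, 10, 150, 8, degradation_rate=3.0, t=0))
print(reliability_over_time(100, 10, 150, 8, degradation_rate=3.0, t=10))
```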
{"title":"Integrating degradation model with stress strength interference model to estimate reliability in design phase","authors":"Srikanth Nandipati, Amith Nag Nichenametla, Abhay Laxmanrao Waghmare","doi":"10.1109/RAM.2017.7889747","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889747","url":null,"abstract":"While working on New Product Developments (NPD), it is a common practice to make use of stress strength interference model to estimate the reliability of a product during the early design stage. However, it is very important to take into account the possible degradation in the strength of the material as it is put into operation. This paper is an attempt to make use of test data available for a specific product that was sourced from two different suppliers. Degradation model was built on the small scale replicate (called as coupon) for which the test data was available. This helped to understand the level of degradation over the time period which was then superimposed on the Stress Strength interference model to evaluate the reliability over time. Further, such estimation provides designers forehand information on potential to achieve the allocated component level reliability target derived from system level reliability allocation.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126174774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TOPSIS to optimize performance, reliability, and life cycle costs during analysis of alternatives
DOI: 10.1109/RAM.2017.7889769
Wesley Gunnar White, V. Chandrasekar
This paper presents a decision analysis technique for conducting Analysis of Alternatives (AoA) at the end of the Materiel Solution Analysis (MSA) phase of the United States (US) Department of Defense (DOD) acquisition process. This modified fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is designed to provide Decision Makers (DM) with a tool to evaluate concept alternatives against performance criteria, reliability and Life Cycle Costs (LCC).
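As background, the core (crisp, non-fuzzy) TOPSIS ranking can be sketched as follows. The alternatives, criterion weights, and scores are made up for illustration, and the paper's modified fuzzy variant is not reproduced here:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Classic (crisp) TOPSIS ranking of alternatives.

    decision_matrix : (n_alternatives, n_criteria) array of scores
    weights         : criterion weights, summing to 1
    benefit         : True for criteria to maximize (e.g. performance,
                      reliability), False for criteria to minimize (e.g. LCC)

    Returns closeness coefficients in [0, 1]; higher is better.
    """
    x = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    norm = x / np.linalg.norm(x, axis=0)           # vector-normalize each criterion
    v = norm * w                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)     # distance to the ideal solution
    d_minus = np.linalg.norm(v - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)

# Three hypothetical concepts scored on performance, reliability, and LCC ($M).
scores = topsis([[0.8, 0.95, 12.0],
                 [0.9, 0.90, 15.0],
                 [0.7, 0.99, 10.0]],
                weights=[0.4, 0.4, 0.2],
                benefit=[True, True, False])
```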
{"title":"TOPSIS to optimize performance, reliability, and life cycle costs during analysis of alternatives","authors":"Wesley Gunnar White, V. Chandrasekar","doi":"10.1109/RAM.2017.7889769","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889769","url":null,"abstract":"This paper presents a decision analysis technique for conducting Analysis of Alternatives (AoA) at the end of the Materiel Solution Analysis (MSA) phase of the United States (US) Department of Defense (DOD) acquisition process. This modified fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is designed to provide Decision Makers (DM) with a tool to evaluate concept alternatives against performance criteria, reliability and Life Cycle Costs (LCC).","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"328 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124623349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of probability distribution choice on likelihood estimates in risk analysis
DOI: 10.1109/RAM.2017.7889715
Zhaofeng Huang, J. Zwolski
In real-life risk assessment, a risk event with a likelihood of 1/100 can easily but mistakenly be estimated to have a likelihood of 1/1,000, 1/10,000, or even smaller because of an inadequate probability distribution choice. In contrast to such underestimation, overestimation can also occur. This paper establishes a systematic and general way of evaluating these underestimation and overestimation situations. The paper applies the method to several commonly used probability distributions, namely the Normal, Weibull, Lognormal, and Gumbel distributions, and draws some general conclusions and quantitative trends regarding the possibility of overestimation or underestimation. The paper also provides general advice for selecting a probability distribution when the sample size is small or the risk assessment needs to extrapolate likelihood estimates into a tail region beyond the available experience. With the method and quantitative trending data presented, the paper helps enhance the validity of risk likelihood estimates, leading to better risk assessment.
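To make the effect concrete, the sketch below fits the four candidate distributions named above to one and the same small simulated sample and compares the fitted tail probabilities. The data, threshold, and use of scipy's maximum-likelihood fitting are assumptions for illustration, not the paper's method:

```python
import numpy as np
from scipy import stats

def tail_probabilities(sample, threshold):
    """Fit several candidate distributions to the same sample and compare the
    estimated probability of exceeding a far-tail threshold, showing how the
    distribution choice alone can swing a likelihood estimate by orders of
    magnitude."""
    fits = {
        "normal":    stats.norm.fit(sample),
        "lognormal": stats.lognorm.fit(sample, floc=0),
        "weibull":   stats.weibull_min.fit(sample, floc=0),
        "gumbel":    stats.gumbel_r.fit(sample),
    }
    dists = {"normal": stats.norm, "lognormal": stats.lognorm,
             "weibull": stats.weibull_min, "gumbel": stats.gumbel_r}
    return {name: dists[name].sf(threshold, *params) for name, params in fits.items()}

# Small simulated sample of a load/stress variable; compare P(X > 3 * mean).
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=30)
print(tail_probabilities(sample, threshold=3.0 * sample.mean()))
```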
{"title":"Effects of probability distribution choice on likelihood estimates in risk analysis","authors":"Zhaofeng Huang, J. Zwolski","doi":"10.1109/RAM.2017.7889715","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889715","url":null,"abstract":"In real life risk assessment, a risk event with a likelihood of 1/100 can be easily but mistakenly estimated to have likelihood of 1/1,000, 1/10,000 or even smaller due to an inadequate probability distribution choice. Contrasting to the underestimating, an overestimating can also occur. This paper establishes a systematic and general way of evaluating these underestimating or overestimating situations. The paper applies the method to several commonly used probability distributions, namely Normal, Weibull, Log Normal, and Gumbel distributions, and draws some general conclusions and quantitative trends of overestimating or underestimating possibilities. The paper also provides some general advice for selecting a probability distribution when the sample size of data is small or the risk assessment needs to extrapolate the likelihood estimates to a tail end with no experience. With the method and quantitative trending data presented, the paper will help enhance the validity of risk likelihood estimates leading to a better risk assessment.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125075292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}