Condition based maintenance of machine tools: Vibration monitoring of spindle units
DOI: 10.1109/RAM.2017.7889683
A. Rastegari, A. Archenti, Mohammadsadegh Mobin
Machining systems (i.e., machine tools, cutting processes and their interaction) cannot produce accurate parts if performance degradation due to wear in their subsystems (e.g., feed-drive systems and spindle units) is not identified, monitored and controlled. Appropriate maintenance actions delay deterioration and minimize or avoid machining-system stoppage time, which would otherwise lead to lower productivity and higher production cost. Moreover, measuring and monitoring machine tool condition has become increasingly important due to the introduction of agile production, increased accuracy requirements for products and customers' requirements for quality assurance. Condition Based Maintenance (CBM) practices, such as vibration monitoring of machine tool spindle units, are therefore becoming an attractive, but still challenging, method for companies operating high-value machines and components. CBM is used to plan maintenance actions based on the condition of the machines, to prevent failures by solving problems in advance, and to control the accuracy of the machining operations. By increasing knowledge in this area, companies can save money through fewer acute breakdowns, lower inventory costs, shorter repair times, and more robust manufacturing processes, leading to more predictable manufacturing. Hence, the CBM of machine tools ensures the basic conditions for delivering the right capability of the right machine at the right time. One of the most common problems of rotating equipment such as spindles is bearing condition (due to wear of the bearings). Failure of the bearings can cause major damage in a spindle. Vibration analysis can diagnose bearing failures by measuring the overall vibration of a spindle or, more precisely, by frequency analysis. Several factors should be taken into consideration when performing vibration monitoring on a machine tool's spindle, including the sensor type and sensitivity, the number of sensors to be installed on the spindle in different directions, the positioning of the vibration accelerometers, the frequency range to be measured, the resonance frequency, the spindle rotational speed during the measurements, …
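To make the frequency-analysis step concrete, here is a minimal sketch (not from the paper; the bearing geometry, sampling rate and random stand-in signal are invented) that computes the classical outer- and inner-race defect frequencies and checks an accelerometer spectrum for peaks near them:

```python
# Hypothetical sketch: screening an accelerometer signal for bearing defect
# frequencies. Geometry values and sampling parameters are illustrative only.
import numpy as np

def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_angle=0.0):
    """Classical kinematic defect frequencies for a rolling-element bearing."""
    ratio = (ball_d / pitch_d) * np.cos(contact_angle)
    bpfo = 0.5 * n_balls * shaft_hz * (1 - ratio)   # outer-race defect frequency
    bpfi = 0.5 * n_balls * shaft_hz * (1 + ratio)   # inner-race defect frequency
    return bpfo, bpfi

def peak_amplitude_near(signal, fs, target_hz, tol_hz=2.0):
    """Largest FFT amplitude within +/- tol_hz of a target frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs > target_hz - tol_hz) & (freqs < target_hz + tol_hz)
    return spectrum[band].max() if band.any() else 0.0

# Example: 12,000 rpm spindle (200 Hz), 9-ball bearing, 25.6 kHz sampling.
fs, shaft_hz = 25600, 200.0
t = np.arange(0, 1.0, 1.0 / fs)
vibration = 0.01 * np.random.randn(t.size)          # stand-in for measured data
bpfo, bpfi = bearing_defect_frequencies(shaft_hz, n_balls=9, ball_d=7.9, pitch_d=38.5)
for name, f in (("BPFO", bpfo), ("BPFI", bpfi)):
    print(f"{name} = {f:.1f} Hz, peak amplitude = {peak_amplitude_near(vibration, fs, f):.4g}")
```

In practice the targets would be compared against a baseline spectrum from a healthy spindle, and envelope (demodulation) analysis is often applied first to make the defect harmonics stand out.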
{"title":"Condition based maintenance of machine tools: Vibration monitoring of spindle units","authors":"A. Rastegari, A. Archenti, Mohammadsadegh Mobin","doi":"10.1109/RAM.2017.7889683","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889683","url":null,"abstract":"Machining systems (i.e., machine tools, cutting processes and their interaction) cannot produce accurate parts if performance degradation due to wear in their subsystems (e.g., feed-drive systems and spindle units) is not identified, monitored and controlled. Appropriate maintenance actions delay the possible deterioration and minimize/avoids the machining system stoppage time that leads to lower productivity and higher production cost. Moreover, measuring and monitoring machine tool condition has become increasingly important due to the introduction of agile production, increased accuracy requirements for products and customers' requirements for quality assurance. Condition Based Maintenance (CBM) practices, such as vibration monitoring of machine tool spindle units, are therefore becoming a very attractive, but still challenging, method for companies operating high-value machines and components. CBM is being used to plan for maintenance action based on the condition of the machines and to prevent failures by solving the problems in advance as well as controlling the accuracy of the machining operations. By increasing the knowledge in this area, companies can save money through fewer acute breakdowns, reduction in inventory cost, reduction in repair times, and an increase in the robustness of the manufacturing processes leading to more predictable manufacturing. Hence, the CBM of machine tools ensures the basic conditions to deliver the right ability or capability of the right machine at the right time. One of the most common problems of rotating equipment such as spindles is the bearing condition (due to wear of the bearings). Failure of the bearings can cause major damage in a spindle. Vibration analysis is able to diagnose bearing failures by measuring the overall vibration of a spindle or, more precisely, by frequency analysis. Several factors should be taken into consideration to perform vibration monitoring on a machine tool's spindle. Some of these factors are as follows: the sensor type/sensitivity, number of sensors to be installed on the spindle in different directions, positioning of the vibration accelerometers, frequency range to be measured, resonance frequency, spindle rotational speed during the measurements,","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123760301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A scenario-based FMEA method and its evaluation in a railway context
DOI: 10.1109/RAM.2017.7889724
Melissa Issad, L. Kloul, A. Rauzy
Safety analysis of railway CBTC (communications-based train control) systems aims at finding and validating failure scenarios. In this article we present a scenario-based FMEA method based on ScOLA, a scenario-oriented modeling language dedicated to the analysis and formalization of complex systems. The specifications of such systems are usually spread across documents of thousands of pages written in natural language. These documents are the basis for the safety analysis and validation activities. We therefore propose the scenario-based FMEA method to perform safety analysis more efficiently than the paper-based analysis. The method derives and evaluates failure scenarios from functional ones. The article presents the method and its application to a railway system.
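As a rough illustration of the idea only (the abstract does not show ScOLA itself, so the data structures, function names and failure-mode library below are all hypothetical), each step of a functional scenario can be expanded into candidate failure scenarios, i.e. FMEA rows:

```python
# Illustrative sketch, not ScOLA: derive FMEA rows from a functional scenario
# by pairing each functional step with a per-function failure-mode library.
from dataclasses import dataclass

@dataclass
class Step:
    actor: str
    function: str

# Hypothetical failure-mode library keyed by function name.
FAILURE_MODES = {
    "send_movement_authority": ["message lost", "message corrupted", "message late"],
    "apply_brakes": ["no actuation", "partial actuation"],
}

def scenario_fmea(scenario):
    """Expand each functional step into candidate failure scenarios (FMEA rows)."""
    rows = []
    for step in scenario:
        for mode in FAILURE_MODES.get(step.function, ["loss of function"]):
            rows.append({"actor": step.actor, "function": step.function,
                         "failure_mode": mode,
                         "effect": f"step '{step.function}' fails: {mode}"})
    return rows

nominal = [Step("zone_controller", "send_movement_authority"),
           Step("onboard_unit", "apply_brakes")]
for row in scenario_fmea(nominal):
    print(row)
```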
{"title":"A scenario-based FMEA method and its evaluation in a railway context","authors":"Melissa Issad, L. Kloul, A. Rauzy","doi":"10.1109/RAM.2017.7889724","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889724","url":null,"abstract":"Safety analysis of railway CBTC systems aims at finding and validating failure scenarios. In this article we present a scenario-based FMEA method based on ScOLA, a scenario oriented modeling language dedicated to the analysis and formalization of complex systems. The specifications of such systems are usually spread in documents of thousands of pages written in a natural language. These documents are the basis for the safety analysis and validations activities. Therefore, we propose the scenario-based FMEA method to perform safety analysis that is more efficient than the paper-based analysis. The method retrieves and evaluates failure scenarios using functional ones. The article aims at presenting the method and its application on a railway system.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114955992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring solution methods for fault trees constrained by location
DOI: 10.1109/RAM.2017.7889720
Jeff Hanes, R. P. Wiegand
Fault Tree Analysis (FTA) is used extensively to evaluate the logical dependency of a system on its constituent components. Fault trees (FTs) can be used to identify and correct weaknesses in a design before a system goes to production. Effective methods have been developed over the course of several decades for finding minimal cut sets (MCS). Cut sets identify combinations of component failures that cause the system to fail. Other methods focus on probabilistic risk assessment, in which component failure probabilities are evaluated to determine which failure events are most probable under normal operating conditions. However, traditional FTs do not contain information about the physical location of the components that make up the system. Thus, they cannot identify vulnerabilities induced by the proximity relationships of those components. Components that are sufficiently close to each other could be defeated by a single event with a large enough radius of effect. Events such as the Deepwater Horizon explosion and subsequent oil spill demonstrate the potentially devastating risk posed by such vulnerabilities. Adding positional information to the logical information contained in the FT can capture proximity relationships that constitute vulnerabilities in the overall system but are not contained in the logical structure alone; existing FTA methods cannot address these concerns. Making use of the positional information would require extensions to existing solution methods or possibly new methods altogether. In practice, fault trees can grow very large, exceeding one thousand components for a large system, which causes a combinatorial explosion in the number of possible solutions. Traditional methods cope with this problem by limiting the number of solutions; generally this is an acceptable limitation, since those methods will find the most likely events capable of defeating the fault tree. However, adding more information to the tree and searching for different criteria (such as conditional probabilities) can render that trade-off invalid and motivates the search for alternate means of finding vulnerabilities in the system. Candidate methods for this type of problem should be able to find “hot spots” in the physical space of very large real-world systems where a destructive event would damage multiple components and cause the overall system to fail. In the present research, a test set of medium to large fault tree systems was generated using Lindenmayer systems. These systems vary in size from tens of components to over a thousand, and vary in complexity as measured by the proportion of operator types and the size of minimal cut sets. Two solution approaches that use graph clustering to integrate positional information with FT solutions were explored in this research as an initial attempt to solve spatially constrained fault trees. These methods were applied to the set of test fault trees to evaluate their performance in finding solutions to this type of problem.
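A minimal sketch of the spatial screening this implies, under assumptions: coordinates and cut sets are invented, and the "hot spot" test uses half the maximum pairwise distance as a cheap proxy rather than an exact minimum enclosing circle:

```python
# Sketch: given minimal cut sets and component coordinates, flag any cut set
# whose components all fit within an event's radius of effect, i.e. a single
# destructive event could defeat the whole system. Data is made up.
import itertools
import math

positions = {"pump_a": (0.0, 0.0), "pump_b": (1.0, 0.5), "valve": (9.0, 9.0)}
min_cut_sets = [{"pump_a", "pump_b"}, {"valve"}]

def spread(components):
    """Half the maximum pairwise distance: a crude proxy for the radius of a
    single event capable of taking out every component in the set."""
    pts = [positions[c] for c in components]
    if len(pts) < 2:
        return 0.0
    return max(math.dist(p, q) for p, q in itertools.combinations(pts, 2)) / 2

event_radius = 1.0
hot_spots = [mcs for mcs in min_cut_sets if spread(mcs) <= event_radius]
print("spatially vulnerable cut sets:", hot_spots)
```

The combinatorial pressure the abstract describes comes from applying a check like this across every minimal cut set of a thousand-component tree, which is what motivates clustering the physical space first.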
{"title":"Exploring solution methods for fault trees constrained by location","authors":"Jeff Hanes, R. P. Wiegand","doi":"10.1109/RAM.2017.7889720","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889720","url":null,"abstract":"Fault Tree Analysis (FTA) is used extensively to evaluate the logical dependency of a system on its constituent components. Fault trees (FTs) can be used to identify and correct weaknesses in a design before a system goes to production. Effective methods have been developed over the course of several decades for finding minimal cut sets (MCS). Cut sets identify combinations of component failures that cause the system to fail. Other methods focus on probability risk assessment, in which component failure probabilities are evaluated to determine which failure events are most probable under normal operating conditions. However, traditional FTs do not contain information about the physical location of the components that make up the system. Thus, they cannot identify vulnerabilities induced by the proximity relationships of those components. Components that are sufficiently close to each other could be defeated by a single event with a large enough radius of effect. Events such as the Deepwater Horizon explosion and subsequent oil spill demonstrate the potentially devastating risk posed by such vulnerabilities. Adding positional information to the logical information contained in the FT can capture proximity relationships that constitute vulnerabilities in the overall system but are not contained in the logical structure alone. Thus, existing FTA methods cannot address these concerns. Making use of the positional information would require extensions to existing solution methods or possibly new methods altogether. In practice, fault trees can grow very large, exceeding one thousand components for a large system, which causes a combinatorial explosion in the number of possible solutions. Traditional methods cope with this problem by limiting the number of solutions; generally this is an acceptable limitation since those methods will find the most likely events capable of defeating the fault tree. However, adding more information to the tree and searching for different criteria (such as conditional probabilities) can render that trade invalid and motivates the search for alternate means to find vulnerabilities in the system. Candidate methods for this type of problem should be able to find “hot spots” in the physical space of very large real world systems where a destructive event would damage multiple components and cause the overall system to fail. In the present research, a test set of medium to large fault tree systems was generated using Lindenmayer systems. These systems vary in size from tens of components to over a thousand and vary in terms of complexity as measured by the proportion of operator types and size of minimal cut sets. Two solution approaches were explored in this research that use graph clustering to integrate positional information with FT solutions as an initial attempt to solve spatially constrained fault trees. 
These methods were applied to the set of test fault trees to evaluate their performance in finding solutions to this t","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122651227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time maintenance prioritization with learning capability
DOI: 10.1109/RAM.2017.7889711
Meng-Lai Yin, Andrew J. Chan
This paper presents a radical approach for real-time maintenance prioritization whose main idea is drawn from neuroscience studies. In this approach, maintenance prioritization is the product of a learning process. Failures and maintenance experiences are learned from and applied through “habituation” and “gist generation”. During real-time operations, the knowledge is retrieved when maintenance prioritization is demanded. The brain's “dual-process” model is applied as the basic framework for conducting maintenance prioritization. The central processing unit, i.e., the “slow brain”, conducts high-fidelity analyses and prioritizes equipment according to their “criticality”. The distributed processing units, i.e., the “fast brain”, provide efficient reactions in real time. These two processes work in parallel to ensure the performance of the real-time maintenance prioritization. A prototyping tool has been developed to demonstrate the concepts.
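One way to picture the dual-process split (purely illustrative; the class, the habituation-style decay weighting and the scoring rule below are invented, not the paper's prototype):

```python
# Toy sketch of a dual-process prioritizer: a "slow brain" pass recomputes
# criticality from failure history and caches it; a "fast brain" pass ranks
# pending requests in real time from the cached values.
import heapq

class DualProcessPrioritizer:
    def __init__(self):
        self.cached_criticality = {}          # written by the slow process

    def slow_update(self, equipment, failure_history):
        """High-fidelity pass ('slow brain'): learn criticality from history."""
        for eq in equipment:
            failures = failure_history.get(eq, [])
            # Habituation stand-in: repeated events contribute decaying weight.
            self.cached_criticality[eq] = sum(1.0 / (i + 1) for i, _ in enumerate(failures))

    def fast_prioritize(self, pending_requests, k=3):
        """Real-time pass ('fast brain'): rank requests from the cached gist."""
        return heapq.nlargest(k, pending_requests,
                              key=lambda eq: self.cached_criticality.get(eq, 0.0))

p = DualProcessPrioritizer()
p.slow_update(["radar", "pump", "hvac"], {"radar": ["f1", "f2"], "pump": ["f1"]})
print(p.fast_prioritize(["hvac", "pump", "radar"]))
```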
{"title":"Real-time maintenance prioritization with learning capability","authors":"Meng-Lai Yin, Andrew J. Chan","doi":"10.1109/RAM.2017.7889711","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889711","url":null,"abstract":"This paper presents a radical approach for real-time maintenance prioritization where the main idea is drawn from neuroscience studies. In this approach, maintenance prioritization is a product of a learning process. Failures and maintenance experiences are learned from and applied through “habituation” and “gist generation”. During real-time operations, the knowledge is retrieved when maintenance prioritization is demanded. The brain's “dual-process” model is applied as the basic framework for conducting maintenance prioritization. The central processing unit, e.g., the “slow brain”, conducts high-fidelity analyses and prioritizes equipment according to their “criticality”. The distributed processing units, e.g., the “fast brain”, provide efficient reactions in real time. These two processes work in parallel to ensure the performance of the real-time maintenance prioritization. A prototyping tool has been developed to demonstrate the concepts.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122894377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability test design of a membrane air-water heat exchanger
DOI: 10.1109/RAM.2017.7889735
J. Pulido
In today's global environment, accelerated life testing (ALT) is becoming a competitive advantage when the time spent from the conceptual stage to final product development needs to be minimized (project costs and development time) in order to be successful. Using ALT techniques for mechanical and structural applications poses strong challenges in defining not only the loading but also the fatigue life that represents actual field performance. Such common problems, as well as some helpful strategies using ALT, are presented for faster planning of accelerated life tests. Examples from the refrigeration industry are used to demonstrate the utility of this strategy. In conclusion, the test and analysis were used effectively to increase the degree of reliability improvement and to reduce the total number of test hours, resulting in a shorter design cycle.
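A worked example of the kind of test-time compression involved, using the common inverse power law for fatigue; the stress ratio and exponent are illustrative, not the paper's values (the exponent n is material- and failure-mode-specific):

```python
# Worked ALT example, not from the paper: an inverse-power-law acceleration
# factor translates a field-life target into an accelerated test duration.
def acceleration_factor(stress_test, stress_use, n):
    """Inverse power law: AF = (S_test / S_use) ** n."""
    return (stress_test / stress_use) ** n

field_life_hours = 40000.0      # target life at use-level stress (assumed)
af = acceleration_factor(stress_test=1.5, stress_use=1.0, n=4.0)
print(f"AF = {af:.2f}, required test time = {field_life_hours / af:,.0f} h")
# With a 1.5x stress overload and n = 4, AF ~ 5.06, so ~7,900 test hours
# stand in for 40,000 field hours.
```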
{"title":"Reliability test design of a membrane air-water heat exchanger","authors":"J. Pulido","doi":"10.1109/RAM.2017.7889735","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889735","url":null,"abstract":"In today's global environment, accelerated life testing is becoming a competitive advantage when time spent from conceptual stage to the final product development needs to be minimized (project costs and development time) in order to be successful. Using accelerated life testing techniques for mechanical and structural applications have strong challenges when defining the loading but also the fatigue life to represent actual field performance. Such common problems as well as some helpful strategies using accelerated life testing are presented for faster planning of accelerated life testing (ALT). Examples from the refrigeration industry are used to demonstrate the utility of this strategy. In conclusion the test and analysis were effectively used to increase the degree of reliability improvements and to reduce the total number of test hours resulting in a shorter design cycle.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114451807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big R, easy M: If you do effective modeling and analysis
DOI: 10.1109/RAM.2017.7889768
Clyde W. Denison, Matthew Burns
As companies strive to achieve profitable growth, they are recognizing the importance of system design requirements in which Reliability, Maintainability, Testability and Supportability (RM&S) are “designed in” during early design development to support the product's final development, production, operation and sustainment. To meet this end, the integration of RM&S into Systems Engineering requirements must begin with a balanced, structured, and disciplined Integrated Product Team (IPT), proven processes, and an enterprise-wide integrated engineering development, modeling, and analysis environment. A major prerequisite to determining system reliability, maintainability, and supportability requirements is a good understanding of the overall environment, i.e., the physical environment where the system will be deployed and operated and the culture (military / commercial / industrial / residential) of the operating agency; this is where the development, modeling and analysis environment becomes crucial. The objective is to design, develop and support quality products that satisfy user needs with measurable improvements to mission capability, operational availability and life cycle cost. This requires RM&S practitioners who are involved in a program early enough to influence the design and who are supported in efforts to develop and manage design-to allocations of goals that are identified and correlated with customer operational needs. Analysis focused on early design trades, lessons learned, and operational mission environment testing, with a “Test, Analyze and Fix” (TAAF) philosophy, is at the heart of any innovative RM&S program.
{"title":"Big R, easy M: If you do effective modeling and analysis","authors":"Clyde W. Denison, Matthew Burns","doi":"10.1109/RAM.2017.7889768","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889768","url":null,"abstract":"As companies are striving to achieve profitable growth; they are recognizing the importance of system design requirements, where Reliability, Maintainability, Testability and Supportability (RM&S) are “Designed-In” during early design development, that support the product's final development, production, operation and sustainment. To meet this end, the integration of RM&S into Systems Engineering requirements must begin with a balanced, structured, and disciplined Integrated Product Team (IPT), proven processes, and an enterprise-wide integrated engineering development, modeling, and analysis environment. A major prerequisite to determining system reliability, maintainability, and supportability requirements is possessing a good understanding of the overall environment; i.e., the physical environment where the system will be deployed / operated and the culture (military / commercial / industrial / residential) of the operating agency, and this is where the development, modeling and analysis environment becomes crucial. The objective is to design, develop and support quality products that satisfy the user needs with measurable improvements to mission capability, operational availability and life cycle cost. This all requires RM&S practitioners who are involved in a program early enough to influence the design and who are supported in efforts to develop and manage design-to allocations of goals that are identified and correlated with customer operational needs. Analysis focused on early design trades, lessons learned, and operational mission environment testing, with “Test, Analyze and Fix” (TAAF) philosophy is at the heart of any innovative RM&S Program.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117319701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault detection and diagnosis in the Tennessee Eastman Process using interpretable knowledge discovery
DOI: 10.1109/RAM.2017.7889650
A. Ragab, M. El-Koujok, M. Amazouz, S. Yacout
This paper proposes an interpretable knowledge discovery approach to detect and diagnose faults in chemical processes. The approach is demonstrated using simulated data from the Tennessee Eastman Process (TEP), a challenging benchmark problem. The TEP is a plant-wide industrial process that is commonly used to study and evaluate a variety of topics, including the design of process monitoring and control techniques. The proposed approach is called Logical Analysis of Data (LAD). LAD is a machine learning approach that is used to discover the hidden knowledge in historical data. The discovered knowledge, in the form of extracted patterns, is employed to construct a classification rule that is capable of characterizing the physical phenomena in the TEP, wherein one can detect and identify a fault and relate it to the causes that contribute to its occurrence. To evaluate our approach, the LAD is trained on a set of observations collected from different faults, and tested against an independent set of observations. The results in this paper show that the LAD approach achieves the highest accuracy compared to two common machine learning classification techniques: Artificial Neural Networks and Support Vector Machines.
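A toy sketch of the LAD workflow on invented data, to show what an "extracted pattern" looks like. Real LAD generates patterns via exact set-covering/optimization; the greedy single pass below is only a stand-in for the concept:

```python
# Toy LAD-style example: binarize each feature at a cutpoint, then greedily
# grow a conjunction ("pattern") that covers all faulty observations while
# excluding some normal ones. Data and cutpoint rule are invented.
import numpy as np

X = np.array([[0.2, 5.1], [0.9, 4.8], [0.8, 9.5], [0.1, 9.9]])
y = np.array([0, 0, 1, 1])                       # 1 = faulty observation
cutpoints = X.mean(axis=0)                       # one cutpoint per feature

B = X > cutpoints                                # binarized feature matrix
pattern = []
covered = np.ones(len(y), dtype=bool)
for j in range(B.shape[1]):
    for value in (True, False):
        keep = covered & (B[:, j] == value)
        # Accept the literal if it keeps all faults and drops some normals.
        if keep[y == 1].all() and keep.sum() < covered.sum():
            pattern.append((j, value))
            covered = keep
print("pattern literals (feature index, value):", pattern)
print("pattern covers only faults:", (y[covered] == 1).all())
```

The interpretability claim rests on exactly this form: each pattern is a readable conjunction of threshold conditions that an engineer can map back to physical causes.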
{"title":"Fault detection and diagnosis in the Tennessee Eastman Process using interpretable knowledge discovery","authors":"A. Ragab, M. El-Koujok, M. Amazouz, S. Yacout","doi":"10.1109/RAM.2017.7889650","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889650","url":null,"abstract":"This paper proposes an interpretable knowledge discovery approach to detect and diagnose faults in chemical processes. The approach is demonstrated using simulated data from the Tennessee Eastman Process (TEP), as a challenging benchmark problem. The TEP is a plant-wide industrial process that is commonly used to study and evaluate a variety of topics, including the design of process monitoring and control techniques. The proposed approach is called Logical Analysis of Data (LAD). LAD is a machine learning approach that is used to discover the hidden knowledge in historical data. The discovered knowledge in the form of extracted patterns is employed to construct a classification rule that is capable of characterizing the physical phenomena in the TEP, wherein one can detect and identify a fault and relate it to the causes that contribute to its occurrence. To evaluate our approach, the LAD is trained on a set of observations collected from different faults, and tested against an independent set of observations. The results in this paper show that the LAD approach achieves the highest accuracy compared to two common machine learning classification techniques; Artificial Neural Networks and Support Vector Machines.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127000741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A risk assessment method for production part approval process
DOI: 10.1109/RAM.2017.7889728
M. Lafayette, Z. Li, S. Webster
The production part approval process (PPAP) was originally designed and used in the automotive industry to assure the integrity of supplier parts and the maturity of manufacturing processes. As an effective risk reduction process prior to product/service release, PPAP has been widely used in many other industries, including the aerospace industry. In this research, the existing method of assessing part risks for PPAP implementation within United Technologies Corporation (UTC) is investigated. Risk assessment has been based on seven risk categories of a part, and a multiplicative risk calculation algorithm is used to determine whether a PPAP is needed. A refined risk assessment algorithm based on logistic regression is proposed using the seven risk categories, which include both quantitative and qualitative risk measurements. The logistic regression risk assessment model is trained and tested using past program PPAP data sets. The advantages of the proposed risk assessment method are illustrated through economic analyses of the two PPAP risk methods under the cost estimates of the PPAP standard and elements being practiced at UTC.
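A hedged sketch of such a logistic-regression risk screen over seven category scores. The feature encoding, synthetic training labels and 0.5 decision threshold are all assumptions for illustration, not UTC's data or model:

```python
# Sketch: train a logistic-regression classifier on seven per-part risk
# category scores and use the predicted probability to gate a PPAP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Seven risk-category scores per part (quantitative and coded qualitative
# measurements), plus a synthetic historical "PPAP required" label.
X_train = rng.uniform(0, 5, size=(200, 7))
y_train = (X_train.sum(axis=1) + rng.normal(0, 1, 200) > 17.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_part = np.array([[4.5, 3.0, 1.0, 2.5, 4.0, 0.5, 3.5]])
p_risk = model.predict_proba(new_part)[0, 1]
print(f"P(PPAP required) = {p_risk:.2f} ->", "run PPAP" if p_risk > 0.5 else "waive")
```

Unlike a multiplicative score, the fitted coefficients weight each category by its historical association with PPAP outcomes, which is presumably where the economic advantage the abstract mentions comes from.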
{"title":"A risk assessment method for production part approval process","authors":"M. Lafayette, Z. Li, S. Webster","doi":"10.1109/RAM.2017.7889728","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889728","url":null,"abstract":"Production part approval process (PPAP) is originally designed and used in the automotive industry to assure supplier parts integrity and manufacturing processes maturity. As an effective risk reduction process prior to product/service release, PPAP has been widely used in many other industries including aerospace industry. In this research, the existing method of assessing part risks for PPAP implementation within the United Technology Corporation (UTC) is investigated. Risk assessment has been based on seven risk categories of a part and a multiplicative risk calculation algorithm is used to determine if a PPAP is needed or not. A refined risk assessment algorithm based on logistic regression is proposed using the seven risk categories which include both quantitative and qualitative risk measurements. The logistic regression risk assessment model is trained and tested using past program PPAP data sets. The advantages of the proposed risk assessment method are illustrated through economic analyses of the two PPAP risk methods under the cost estimates of the PPAP standard and elements being practiced at UTC.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"56 50","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113957483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive maintenance applications for machine learning
DOI: 10.1109/RAM.2017.7889679
B. Cline, R. Niculescu, D. Huffman, Bob Deckel
Machine learning provides a complementary approach to maintenance planning by analyzing large data sets of individual machine performance and environment variables, identifying failure signatures and profiles, and providing an actionable prediction of failure for individual parts.
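A minimal illustration of that workflow on synthetic data; the feature semantics, the planted failure rule and the model choice are all assumptions, not the authors' pipeline:

```python
# Sketch: per-machine performance/environment features in, failure-within-
# horizon prediction out. Everything here is synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: vibration RMS, temperature, ambient humidity, hours since service.
X = rng.uniform(size=(1000, 4)) * [5.0, 90.0, 100.0, 5000.0]
y = ((X[:, 0] > 3.5) & (X[:, 3] > 3000)).astype(int)   # planted failure rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy = {clf.score(X_te, y_te):.2f}")
print("P(failure) for one part:", clf.predict_proba(X_te[:1])[0, 1])
```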
{"title":"Predictive maintenance applications for machine learning","authors":"B. Cline, R. Niculescu, D. Huffman, Bob Deckel","doi":"10.1109/RAM.2017.7889679","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889679","url":null,"abstract":"Machine Learning provides a complementary approach to maintenance planning by analyzing significant data sets of individual machine performance and environment variables, identifying failure signatures and profiles, and providing an actionable prediction of failure for individual parts.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114880965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A CBM policy for systems subject to finite maintenance times
DOI: 10.1109/RAM.2017.7889702
Tianyi Wu, Xiaobing Ma, Yu Zhao
This paper proposes a condition-based maintenance (CBM) policy for a gradually deteriorating system that can be repaired only a finite number of times. Periodic inspections are performed to measure the degradation level, and the system is preventively or correctively repaired when the level reaches the preventive or failure threshold, respectively. Both preventive and corrective maintenance actions are considered imperfect. After each maintenance action, the system is restored to a “better than old” state, but the effectiveness of maintenance is stochastically reduced as the number of actions increases. Consequently, after a sufficient number of maintenance actions the system can keep its desired function for only a very short period. The system therefore cannot remain in service indefinitely, and its usage life, defined as the number of maintenance actions, needs to be determined systematically. In this respect, the system service life is jointly optimized with the periodic inspection interval and the preventive threshold by minimizing the life-cycle cost rate. A nonhomogeneous Markov model is developed to describe the evolution of the maintained system and the corresponding cost function. Numerical examples illustrate the application of this maintenance policy.
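A simulation sketch in the spirit of this optimization (not the paper's nonhomogeneous Markov model): gamma-process degradation, imperfect repairs whose residual damage grows with repair count, and a grid search over the inspection interval and preventive threshold. All parameter values are invented:

```python
# Sketch: Monte Carlo estimate of the life-cycle cost rate for a CBM policy
# with a fixed repair budget, searched over (inspection interval, PM level).
import itertools
import numpy as np

rng = np.random.default_rng(2)
FAIL_LVL, C_INSP, C_PM, C_CM, MAX_REPAIRS = 10.0, 1.0, 20.0, 100.0, 5

def life_cycle_cost_rate(tau, pm_lvl, n_runs=200):
    rates = []
    for _ in range(n_runs):
        x, t, cost, repairs = 0.0, 0.0, 0.0, 0
        while repairs < MAX_REPAIRS:
            x += rng.gamma(shape=2.0 * tau, scale=0.5)   # degradation over tau
            t += tau
            cost += C_INSP
            if x >= FAIL_LVL:                            # corrective repair
                cost += C_CM
            elif x >= pm_lvl:                            # preventive repair
                cost += C_PM
            else:
                continue                                 # no repair needed yet
            repairs += 1
            x *= 0.2 + 0.1 * repairs                     # decaying repair effect
        rates.append(cost / t)
    return np.mean(rates)

best = min(itertools.product([1.0, 2.0, 4.0], [6.0, 7.0, 8.0]),
           key=lambda p: life_cycle_cost_rate(*p))
print("best (inspection interval, preventive threshold):", best)
```

The `x *= 0.2 + 0.1 * repairs` line is the stand-in for imperfect, count-dependent maintenance: each repair leaves more residual degradation than the last, which is what caps the useful service life at a finite number of repairs.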
{"title":"A CBM policy for systems subject to finite maintenance times","authors":"Tianyi Wu, Xiaobing Ma, Yu Zhao","doi":"10.1109/RAM.2017.7889702","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889702","url":null,"abstract":"This paper proposes a condition-based maintenance (CBM) policy for a gradually deteriorating system that could only be repaired for finite times. Periodical inspections are performed to measure the degradation level, and the system is preventively or correctively repaired when the level reaches the preventive and failure threshold, respectively. Both preventive and corrective maintenance actions in this paper are considered imperfect. After each maintenance action, the system is restored to a “better than old” state but the effectiveness of maintenance is stochastically reduced as its number increases. In this way, the system can only keep its desired function for a very small period after sufficient number of maintenances. Therefore, the system cannot be in service for infinite duration and its usage life which is defined as number of maintenance actions needs to be determined systematically. In this respect, system service life is jointly optimized with periodical inspection interval and preventive threshold by minimizing life-cycle cost rate. A nonhomogeneous Markov model is developed to describe the evolution of maintained system and corresponding cost function. Numerical examples are presented to illustrate the application of this maintenance policy.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127610427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}