Repairable 3-out-of-4: Cold standby system availability
DOI: 10.1109/RAM.2017.7889797
M. Grida, AbdelNaser Zaid, Ghada Kholief
Systems operating in risky environments strive to guarantee the highest possible availability. This paper addresses the effect of redundancy and components' economy of scale on achieving a high level of availability. An availability estimation model for a 3-out-of-4 cold standby system was developed and compared with a 6-out-of-8 system. The analysis of the two models revealed that at a relatively low availability target, using larger, more economical components results in higher availability. On the other hand, targeting an extremely high availability requires sacrificing the components' economy of scale.
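For context, here is a minimal sketch of the kind of availability model involved (not necessarily the authors' formulation): the steady-state availability of a k-out-of-n cold-standby system computed from a birth-death Markov chain, assuming exponential failure and repair times, a single repair crew, instantaneous switchover, and spares that cannot fail in standby.

```python
import numpy as np

def cold_standby_availability(k, n, lam, mu):
    """Steady-state availability of a k-out-of-n cold-standby system.

    Illustrative assumptions (not necessarily the paper's model):
    exponential unit failure rate lam, one repair crew with exponential
    repair rate mu, cold spares that cannot fail, instant switchover.
    State i = number of failed units; the system is up while i <= n - k.
    """
    s = n - k + 2                      # states 0 .. n-k+1 (last = system down)
    Q = np.zeros((s, s))               # generator of the birth-death chain
    for i in range(s):
        if i < s - 1:
            Q[i, i + 1] = k * lam      # k units are active while the system is up
        if i > 0:
            Q[i, i - 1] = mu           # single repair crew, one unit at a time
        Q[i, i] = -Q[i].sum()
    # Solve pi @ Q = 0 together with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(s)])
    b = np.zeros(s + 1)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi[:-1].sum()               # total probability of the up states

# The two structures compared in the paper (rates here are arbitrary)
print(cold_standby_availability(3, 4, lam=0.01, mu=0.5))
print(cold_standby_availability(6, 8, lam=0.01, mu=0.5))
```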
{"title":"Repairable 3-out-of-4: Cold standby system availability","authors":"M. Grida, AbdelNaser Zaid, Ghada Kholief","doi":"10.1109/RAM.2017.7889797","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889797","url":null,"abstract":"Systems operating in risky environments strive for guaranteeing the highest possible availability. This paper addresses the effect of redundancy and components' economy of scale on achieving a high level of availability. An availability estimation model for a 3-out-4 cold standby system was developed and compared with 6-out-8 system. The analysis of the two models revealed that at relatively low availability target, using larger economic components results in higher availability. On the other hand, targeting an extremely high availability requires to scarify the components' economy of scale.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127767863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the economic impact of complexity and reliability interactions in product development
DOI: 10.1109/RAM.2017.7889700
Mathew Thomas
Product development organizations have to deal with a portfolio of current products and new products in the pipeline. Many organizations manage the product portfolio at the individual-product level, leading to local optimization of cost, quality, and timing. A global optimization of cost, quality, and timing, however, involves understanding the complexity created by the total product portfolio and developing strategies to minimize that complexity, which can lead to better quality and reliability management at an optimal cost. Looking at complexity across products and categorizing it as value-added, non-value-added, or functionally value-added yields component and subsystem standardization. Organizations use the standard-part approach to minimize complexity and to manage reliability. However, developing such standard parts from the parts available in the existing product portfolio and from the newer technological options available at a given time requires systematic analysis and experimentation to yield the best possible results. As Six Sigma and Design for Six Sigma mature in an organization, the focus shifts from the initial low-hanging fruit to systematic reduction of complexity across product lines; at this maturity level, the project focus becomes lean development spanning the entire organization. The effectiveness of an approach based on Total Cost of Complexity (TCC), which takes into account cost elements such as variable cost and lifetime quality cost among others, is demonstrated with automotive case examples of non-value-added, functionally value-added, and value-added complexity scenarios.
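To make the TCC idea concrete, here is a hypothetical roll-up comparing two part variants against a standardized part. All cost elements and figures are invented for illustration; the paper only names variable cost and lifetime quality cost among the elements of its model.

```python
def total_cost_of_complexity(variants):
    """Illustrative TCC roll-up over a set of part variants.

    Hypothetical structure: each variant carries per-unit variable and
    lifetime quality costs plus a fixed complexity cost (tooling, part
    numbers, logistics). Not the paper's actual cost model.
    """
    return sum(v["volume"] * (v["variable_cost"] + v["lifetime_quality_cost"])
               + v["fixed_complexity_cost"]
               for v in variants)

# Scenario A: two non-value-added variants kept in the portfolio
two_variants = [
    {"volume": 50_000, "variable_cost": 4.10, "lifetime_quality_cost": 0.35,
     "fixed_complexity_cost": 120_000},
    {"volume": 30_000, "variable_cost": 3.90, "lifetime_quality_cost": 0.50,
     "fixed_complexity_cost": 120_000},
]
# Scenario B: the same demand served by one standardized part
standardized = [
    {"volume": 80_000, "variable_cost": 4.00, "lifetime_quality_cost": 0.30,
     "fixed_complexity_cost": 120_000},
]
print(total_cost_of_complexity(two_variants), total_cost_of_complexity(standardized))
```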
{"title":"Understanding the economic impact of complexity and reliability interactions in product development","authors":"Mathew Thomas","doi":"10.1109/RAM.2017.7889700","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889700","url":null,"abstract":"Product Development organizations have to deal with the product portfolio of current products and the new products in the pipeline. Many organizations strive to manage product portfolio at an individual product level leading to local optimization of cost, quality and timing. However, a global optimization of cost, quality and timing would involve understanding the complexity created by the total product portfolio and developing strategies to minimize complexity which potentially could lead to better quality and reliability management at an optimum cost. The approach of looking at complexity across products and categorizing them as value added, non-value added and functionally value added yield to components and subsystems standardization. Organizations utilize the standard part approach to minimize complexity and to manage reliability. However, development of such standard parts from the available parts in the existing product portfolio and the newer technological options available at a point of time involves systematic analysis and experimentation to yield the best possible results. Six Sigma, and Design for Six Sigma maturity levels in an organization lead to focusing on systematic reduction of complexity across product lines than the initial focus on low hanging fruits. At this maturity level, the project focus becomes lean development spanning the entire organization. Effectiveness of an approach based on Total Cost of Complexity (TCC), which takes into consideration cost elements such as variable cost, life time quality cost among other costs, is demonstrated with automotive case examples of non-value added, functionally value added, and value added complexity scenarios.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121608473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research and implementation of file security mechanisms based on file system filter driver
DOI: 10.1109/RAM.2017.7889772
Cong Zhang, Yumei Wu, Zhengwei Yu, Zhiqiang Li
First, the key components of the Windows kernel are analyzed and debugged using kernel tooling and verification programs. Each component's objects are examined at the executable level to understand its internal principles, and the kernel debugger is exercised, laying the foundation for subsequent in-depth kernel development. The paper then studies techniques commonly used by malicious programs, including hidden processes, images, and files, and various hooking techniques. On this basis, counter-measures implemented with a file system filter driver are given for each class of malicious behavior. A file system filter driver module is designed and implemented. The module realizes basic encryption and decryption; a simple XOR operation is used as the cipher, since the choice of cipher does not affect the research approach of studying the Windows kernel through filter-driver development. For the transparent encryption and decryption modules, the paper describes how each core routine is implemented using custom data structures in combination with the kernel's file-operation flow; detailed logic flow diagrams and textual descriptions are given for each core processing routine. The paper also explains the basic data structures used in Windows kernel driver development and, combining knowledge of the kernel components with the functional requirements, customizes several important data types, including an identifier describing on-disk file encryption and an in-memory process control block used to safeguard legitimate processes. The core of the paper is to sort out how file operations are processed in the kernel and, on that basis, to implement kernel-based transparent encryption and decryption code modules.
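The cipher itself is deliberately trivial in the paper. A user-mode sketch of the symmetric XOR transform (names and key are illustrative; the actual module is a kernel-mode filter driver written against the Windows driver APIs) shows why a single routine can serve both the encrypt path on write and the decrypt path on read:

```python
def xor_transform(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR transform of the kind the paper substitutes for a
    real cipher. Applying it twice with the same key restores the data,
    so the same routine handles both directions. Key is illustrative."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"transparent encryption demo"
key = b"\x5a\xc3"
ciphertext = xor_transform(plaintext, key)
assert xor_transform(ciphertext, key) == plaintext  # XOR is its own inverse
```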
{"title":"Research and implementation of file security mechanisms based on file system filter driver","authors":"Cong Zhang, Yumei Wu, Zhengwei Yu, Zhiqiang Li","doi":"10.1109/RAM.2017.7889772","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889772","url":null,"abstract":"First of all, from the aspect of key component in the Windows kernel, using the related tools of operating system kernel, to analyze and debug each kernel component by combining with the verification procedures, and objects of individual component in executable level is analyzed deeply, to be familiar with the internal principles of each executable component, and learn to use the kernel debugger, laying the foundations for subsequent in-depth kernel development. Then, this paper studied the techniques commonly used by malicious programs, including the hidden process, images, files and various hook techniques. On this basis, for all kinds of malicious behavior, this paper gives the principle of counter-measures, which is taken by file system filter driven. A file system filter driver module is designed and implemented in this paper. This module realizes the basic encryption and decryption, however a simple XOR operation is used in encryption operation. Because it does not affect research ideas through developing file system filter driver to study the Windows kernel. In the implement of transparent encryption and decryption modules, mainly introduce how to achieve each core routine problem according to the custom data structure combining with the kernel file operation process. The detailed logic flow diagrams and text description are given for each core processing routine. This paper explains basic data structure which is developed by the Windows kernel driver, combing this with the knowledge of the Windows kernel components and the understanding of functional needs permits the customization of a number of important data types. These customized data types include description disk file encryption identification, as well as the process control block in memory that is used to safeguard legitimate processes. The core of this paper is to sort out the processing of files operating the in the kernel, and using this to achieve a core based processing flow of transparent encryption and decryption of code modules.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134061421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability analysis and maintenance program for airline seats
DOI: 10.1109/RAM.2017.7889652
Ubair. H. Rehmanjan
All modern commercial aircraft are designed with a high degree of redundancy and with an emphasis on reliability and maintainability, having undergone a detailed MSG-3 (Maintenance Steering Group) type analysis, and they are delivered by the manufacturer with an evolving package of ongoing maintenance requirements to keep the aircraft serviceable and airworthy. Yet while the airframe and engines are among the assets subject to extensive reliability analysis and established maintenance systems, the aircraft interior, that is, the cabin product, rarely gets any reliability analysis or maintenance schedule specified by the cabin manufacturers. The aircraft manufacturers do not get involved because the cabin furnishings are often chosen and customized by the airlines; these cabin products are known as Buyer Furnished Equipment (BFE). A common belief is that it does not matter if the cabin product is unserviceable as long as the aircraft is well maintained. However, the customer (passenger) interacts with the cabin, in particular the seat, and passengers who do not feel that the interior is being maintained and cleaned often assume that the airline maintains its aircraft similarly. If a proper MSG-3 or other RCM (Reliability Centered Maintenance) type analysis of the product had been carried out at either the design phase or entry into service, and a suitable, continually evolving maintenance system implemented, there would be no need for the labor-intensive defect reviews discussed here.
{"title":"Reliability analysis and maintenance program for airline seats","authors":"Ubair. H. Rehmanjan","doi":"10.1109/RAM.2017.7889652","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889652","url":null,"abstract":"All modern commercial aircraft are designed with a high degree of redundancy and with an emphasis on reliability and maintainability, having undergone a detailed MSG-3 (Maintenance Steering Group) type analysis and are delivered by the manufacturer with an evolving package of ongoing maintenance requirements to keep the aircraft serviceable and airworthy. Even though the airframe and engines are amongst the assets that have an extremely high amount of reliability analysis carried out and maintenance systems put in place; the interior of the aircraft, that is the cabin product is one that rarely gets any reliability analysis or maintenance schedule specified by the cabin manufacturers. The aircraft manufacturers do not get involved because often the cabin furnishings are chosen and customized by the airlines and there cabin products are known as Buyer Furnished Equipment (BFE). One of the common beliefs is that it does not matter if the cabin product is unserviceable as long as the aircraft is well maintained, however the customer (passenger) interacts with the cabin, in particular their seat and if they do not feel that the interior is being maintained and cleaned, they often thing that this airline maintains their aircraft similarly. If a proper MSG-3 or any RCM (Reliability Centered Maintenance) type analysis on the product had been carried out at either the design phase or at entry-into-service and a suitable maintenance system implemented, which continued to evolve, there would be no requirement for the labor-intensive defect reviews discussed here.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131254233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Availability demonstration with confidence level based on reliability and maintainability
DOI: 10.1109/RAM.2017.7889757
F. Müller, P. Zeiler, B. Bertsche
After a first failure, a technical product or system can be restored to a functioning state with appropriate measures such as repair or maintenance; the first failure therefore usually does not end the system's service life. For the evaluation of such a repairable system, availability is an important parameter. Typically, when the reliability of a system or component is demonstrated, a confidence level is considered, especially if the demonstration is based on limited information. Consequently, an availability demonstration based on the same information needs to be expressed with a confidence level as well. In this paper, a new procedure for availability demonstration with a confidence level is presented. The procedure is based on the raw samples of failure and repair times. It allows individual sample sizes, i.e., the two samples do not have to be of equal size, and it does not require particular distribution types for the failure and repair behavior. It enables the demonstration of both the time-dependent and the average availability. The procedure is illustrated by a case study of a repairable system with known samples of failure and repair times, and the influence of the sample sizes is investigated. Finally, the potential of the new approach to be applied to more general scenarios is shown with an example.
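The paper's exact procedure is not reproduced here, but a distribution-free sketch under the same premises (raw failure and repair samples, possibly of unequal size, no assumed distribution types) is a bootstrap lower confidence bound on the average availability A = MTTF / (MTTF + MTTR):

```python
import numpy as np

rng = np.random.default_rng(1)

def availability_lcb(failure_times, repair_times, conf=0.90, n_boot=5000):
    """Bootstrap lower confidence bound on the average availability,
    computed directly from the raw samples. A generic nonparametric
    sketch sharing the paper's premises, not the paper's procedure."""
    f = np.asarray(failure_times, dtype=float)
    r = np.asarray(repair_times, dtype=float)
    stats = []
    for _ in range(n_boot):
        fb = rng.choice(f, size=f.size, replace=True)   # resample failure times
        rb = rng.choice(r, size=r.size, replace=True)   # resample repair times
        stats.append(fb.mean() / (fb.mean() + rb.mean()))
    return np.quantile(stats, 1.0 - conf)

failures = rng.weibull(1.5, size=25) * 400.0   # 25 observed failure times [h]
repairs = rng.lognormal(1.0, 0.5, size=18)     # 18 observed repair times [h]
print(f"90% lower bound on availability: {availability_lcb(failures, repairs):.4f}")
```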
{"title":"Availability demonstration with confidence level based on reliability and maintainability","authors":"F. Müller, P. Zeiler, B. Bertsche","doi":"10.1109/RAM.2017.7889757","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889757","url":null,"abstract":"After a first failure, with appropriate measures, such as repair or maintenance, a technical product or system can be put into the function state again. Thus, usually the service life of the system is not completed. For the evaluation of such a repairable system, the availability is an important parameter. Typically, if e.g. the reliability of a system or component is demonstrated, a confidence level is considered, especially if the demonstration is based on limited information. Consequently, the availability demonstration, based on the same information needs to be expressed including a confidence level, too. In this paper a new procedure for the availability demonstration with confidence level is presented. The procedure is based on the pure samples of failure and repair times. It allows individual samples sizes, i.e. they do not have to be equal. The procedure does not require special distribution types of failure and repair behavior. It enables the demonstration of the time-dependent and average availability. Furthermore, the procedure is illustrated by a case study of a repairable system with known samples of failure and repair times. The influence of the samples sizes is investigated. Finally, the potential of the new approach to be applied to more general scenarios is shown based on an example, too.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115841056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linking design reviews with FMEA to quickly mitigate the risk of change…design review based on failure modes
DOI: 10.1109/ram.2017.7889706
B. Haughey
Deming was once quoted as saying: "Hard work and best efforts will not by themselves dig us out of the pit" (The New Economics, 1994, Ch. 2, "The Heavy Losses", p. 23). It is equally true that hard work and best efforts will not always identify and mitigate the risks of product and process change. We must work together and create a company culture focused on engineering knowledge. The challenge for most companies is the divided responsibility for change (organizational silos) and the lack of coordination. Examples abound: assembly processes are changed frequently (and for many reasons), and the manufacturing organization may or may not consider the product risk of those changes; purchasing decides to change a supplier to reduce cost but does not consider the product or process risk in making that decision; and design makes a change without considering the impact on manufacturing, either internally or at the supplier. You can probably think of many other examples, but they all add risk that must be mitigated to ensure the product meets customer expectations. Toyota is well known for the quality and reliability of its products, but even better known for its ability to address all risks associated with change. Toyota recognized that it must build on the strength of its engineering processes and eliminate the waste of redundant meetings (i.e., Technical Design Reviews and Design Failure Modes and Effects Analysis). Recognizing that the intended result of both was to identify and mitigate the risk of change based on engineering knowledge, Toyota linked them together to develop Design Review Based on Failure Modes (DRBFM). DRBFM is a deep analysis process that focuses on engineering changes once a baseline design has been established. A key component of the DRBFM process is its emphasis on developing an organizational culture focused on meeting functional requirements and customer expectations. Supporting the engineer, both within the company and at all levels of the supply chain, is the foundation of the DRBFM methodology. DRBFM is an analytical process used to address design and process changes throughout the product development process, including running changes at launch and post-production (up to product retirement). The basis of the process is the front-loading of engineering efforts to clearly define the impact of change and eliminate the need for extended engineering activity caused by decoupled and sequential processes. The DRBFM process encompasses all systems engineering activities that impact quality/reliability/durability (QRD), service, cost, and delivery, and it links the analysis of impacts on design, validation, service, and manufacturing (including suppliers). Since DRBFM is focused on change, the process fits either directly into the product development cycle or within the change management process. Most manufacturers have well-defined systems engineering product development pro
{"title":"Linking design reviews with FMEA to quickly mitigate the risk of change…design review based on failure modes","authors":"B. Haughey","doi":"10.1109/ram.2017.7889706","DOIUrl":"https://doi.org/10.1109/ram.2017.7889706","url":null,"abstract":"Deming was once quoted saying: “Hard work and best efforts will not by themselves dig us out of the pit.” “The New Economics” 1994 — Ch. 2 — The Heavy Losses —, page 23. It is equally true that hard work and best efforts will not always identify and mitigate the risks of product and process change. We must work together and create a company culture focused on engineering knowledge. The challenge for most companies is the divided responsibility for change (organizational silos) and lack of coordination. Examples abound: assembly processes are changed frequently (and for many reasons) and the manufacturing organization may or may not consider the product risk of those changes; purchasing decides to change a supplier to reduce cost but does not consider the product or process risk while making those decisions; and design makes a change and does not consider the impact to manufacturing, either internally or at the supplier. You can probably think of many other examples but they all add risk that must be mitigated to ensure the product meets customer expectations. Toyota is well known for Quality and Reliability of their products but even better known for their ability to address all risks associated with change. Toyota recognized they must identify the strength of engineering processes and eliminate the waste of redundant meetings (i.e., Technical Design Reviews and Design Failure Modes and Effects Analysis). They recognized the intended results of both were to identify and mitigate the risk of change based on engineering knowledge. Therefore, they linked them together to develop Design Review Based on Failure Modes (DRBFM). Design Review Based on Failure Modes (DRBFM) is a deep analysis process that focuses on engineering changes once a baseline design has been established. A key component of the DRBFM process is the emphasis on development of an organizational culture that is focused on meeting functional requirements and customer expectations. Supporting the engineer, both within and at all levels of the supply chain, is the foundation of the DRBFM methodology. DRBFM is an analytical process used to address design and process changes throughout the product development process, including running changes at launch and postproduction (up to product retirement). The basis of the process is the front-loading of the engineering efforts to clearly define the impact of change and eliminate the need for extended engineering activity due to decoupled and sequential processes. The DRBFM process is inclusive of all systems engineering activities that impact quality/reliability/durability (QRD), service, cost and delivery. The process links the analysis of the impacts to design, validation, service, and manufacturing (including suppliers). Since DRBFM is focused on change, the process fits either directly into the product development cycle, or within the change management process. 
Most manufacturers have well-defined systems engineering product development pro","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114609906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text-mining on incident reports to find knowledge on industrial safety
DOI: 10.1109/RAM.2017.7889795
T. Nakata
To prevent accidents, it is very important to learn why and how past accidents occurred and escalated. Accident information is mostly recorded as natural-language text, which is inconvenient for analyzing the flow of events in an accident. This paper proposes a method to recognize typical flows of events in a large set of text reports. By focusing on pairs of adjacent sentences, our system detects typical pairs of predecessor and successor words, from which the typical flows of accidents can be recognized.
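A minimal sketch of this idea follows. The paper's actual tokenization, word filtering, and scoring are not specified, so the details here are illustrative:

```python
from collections import Counter
from itertools import product
import re

def event_flow_pairs(reports, top=10):
    """Count predecessor/successor word pairs across adjacent sentences
    in a set of incident reports: a sketch of the paper's idea."""
    pairs = Counter()
    for report in reports:
        sentences = [re.findall(r"[a-z]+", s.lower())
                     for s in re.split(r"[.!?]", report) if s.strip()]
        for prev, nxt in zip(sentences, sentences[1:]):
            pairs.update(product(prev, nxt))   # every word pair across the boundary
    return pairs.most_common(top)

reports = [
    "Valve stuck open. Pressure rose rapidly. Pipe ruptured.",
    "Operator ignored alarm. Pressure rose in the vessel. Vessel ruptured.",
]
print(event_flow_pairs(reports))
```

In practice one would restrict the counts to content words (e.g., by removing stopwords) so that pairs like (pressure, ruptured) dominate over function-word noise.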
{"title":"Text-mining on incident reports to find knowledge on industrial safety","authors":"T. Nakata","doi":"10.1109/RAM.2017.7889795","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889795","url":null,"abstract":"To prevent accidents, it is very important to learn why and how past accidents occurred and escalated. The information of accidents is mostly recorded in natural language texts, which is not convenient to analyze the flow of events in the accidents. This paper proposes a method to recognize typical flow of events in a large set of text reports. By focusing two adjacent sentences, our system succeeded to detect typical pairs of predecessor word and successor word. Then we can recognize the typical flows of accidents.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117130884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regularization techniques for recurrent failure prediction under Kijima models
DOI: 10.1109/RAM.2017.7889707
Vasiliy V. Krivtsov, Alexander Yevkin
The problem of recurrent failure prediction arises in forecasting warranty repairs and cost, maintenance optimization, and evaluation of repair quality. The most comprehensive prediction model is the g-renewal process proposed by Kijima [1], which allows modelling of both perfect and imperfect repairs through the so-called restoration factor. Krivtsov and Yevkin [2] showed that statistical estimation of the g-renewal process parameters is an ill-posed inverse problem (the solution is not unique and/or is sensitive to statistical errors). They proposed a regularization approach specifically suited to the g-renewal process: separating the estimation of the underlying life distribution parameters from that of the restoration factor in two consecutive steps. Using numerical studies, they showed that the estimation/prediction accuracy of the proposed method was considerably higher than that of existing methods. This paper elaborates on more advanced regularization techniques that further increase the estimation/prediction accuracy in the framework of both least squares and maximum likelihood estimation. The proposed regularization is especially useful for limited sample sizes. The accuracy and efficiency of the approach are validated through extensive numerical studies under various underlying lifetime distributions, including Weibull, Gaussian, and log-normal.
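For context, the sketch below simulates the g-renewal (Kijima Type I) process itself, with a Weibull underlying distribution and restoration factor q. It illustrates the model whose parameters are being estimated, not the authors' regularized estimation algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_g_renewal(beta, eta, q, horizon):
    """Simulate one failure history of a g-renewal (Kijima Type I)
    process with Weibull(beta, eta) underlying distribution. After each
    repair the virtual age becomes v + q*x, where x is the time since
    the previous failure: q=0 is perfect repair, q=1 is minimal-repair-
    like. A sketch of the model, not of an estimation procedure."""
    t, v, failures = 0.0, 0.0, []
    while True:
        u = rng.random()
        # Inverse-transform sample from the conditional Weibull survival
        # S(v + x) / S(v), where S(t) = exp(-(t/eta)**beta):
        x = eta * ((v / eta) ** beta - np.log(u)) ** (1.0 / beta) - v
        t += x
        if t > horizon:
            return failures
        failures.append(t)
        v += q * x            # imperfect repair removes only part of the age

print(simulate_g_renewal(beta=1.8, eta=1000.0, q=0.3, horizon=5000.0))
```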
{"title":"Regularization techniques for recurrent failure prediction under Kijima models","authors":"Vasiliy V. Krivtsov, Alexander Yevkin","doi":"10.1109/RAM.2017.7889707","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889707","url":null,"abstract":"The problem of recurrent failure prediction arises in forecasting warranty repairs/cost, maintenance optimization and evaluation of repair quality. The most comprehensive prediction model is the g-renewal process proposed by Kijima [1], which allows for modelling of both perfect and imperfect repairs through the use of the so-called restoration factor. Krivtsov and Yevkin [2] showed that statistical estimation of the g-renewal process parameters is an ill-posed inverse problem (the solution is not unique and/or is sensitive to statistical errors). They proposed a regularization approach specifically suited to the g-renewal process: separating the estimation of the underlying life distribution parameters from the restoration factor in two consecutive steps. Using numerical studies, they showed that the estimation/prediction accuracy of the proposed method was considerably higher than that of the existing methods. This paper elaborates on more advanced regularization techniques, which allow to even further increase the estimation/prediction accuracy in the framework of both Least Squares and Maximum Likelihood estimation. Proposed regularization becomes especially useful for limited sample sizes. The accuracy and efficiency of the proposed approach is validated through extensive numerical studies under various underlying lifetime distributions including Weibull, Gaussian and log-normal.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124502000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint optimization of maintenance and production scheduling
DOI: 10.1109/RAM.2017.7889710
Amirkiarash Kiani, S. Taghipour
Maintenance and production scheduling are interconnected activities that should be planned jointly to minimize their total cost as well as job tardiness. Although the joint optimization of maintenance planning and production scheduling has been addressed extensively in the literature, no study has considered production and maintenance optimization based on the concept of the delay-time model (DTM). The DTM has been used effectively in industry for inspection optimization of various systems, such as oil-hydraulic extrusion presses, production plants, and industrial vehicles. The DTM considers a two-stage failure process for a system, in which an initial defect will eventually lead to a failure if left unattended. The elapsed time between a defect's occurrence and the failure (in the absence of inspection) is called the delay time, which provides a window of opportunity to inspect the system and fix the defect. In this paper, we consider a single system in a manufacturing plant that is required to process n independent jobs, where a job cannot be preempted for another job. We assume that the system has a single dominant failure mode and model the system's failure using the DTM concept, in which the time to a defect's appearance and the delay time follow certain distributions; the delay-time distribution is independent of the time to defect. The system can be completely renewed by preventive replacement before a job to reduce the probability of a defect arriving and its subsequent failure while the job is being processed. An unattended defect may lead to a failure, which shuts the system down; the system is then replaced, and the job is restarted. We assume that the time required for a preventive replacement is shorter than the time required for a corrective replacement after a failure. We jointly optimize preventive maintenance and production scheduling to minimize the total expected cost, consisting of tardiness penalties and preventive and corrective maintenance costs. More specifically, we determine the optimal sequence of the jobs as well as the decision on whether preventive replacement should be performed before a specific job. We formulate the objective function and derive analytic expressions for the total expected cost of a given job sequence and preventive replacement scheme. The application of the proposed model is shown in a case study. The results indicate that the optimal job sequence obtained from the joint optimization problem can differ from the optimal sequence obtained in a standalone scheduling problem. Moreover, the optimal solution depends on the input parameters of the model, most notably the job processing times and the distributions of defect arrival and delay time.
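As a small illustration of the DTM ingredient (the distribution choices and age bookkeeping below are assumptions, not the paper's specification), the following computes the probability that the system fails while processing a job of length T, with and without a preventive replacement before the job:

```python
from scipy import integrate, stats

def prob_failure_during_job(T, defect_dist, delay_dist, v0=0.0):
    """Delay-time model: a defect arrives at system age v0 + u (v0 is
    the age at job start, conditioned on no defect by v0) and becomes a
    failure after a delay h; the system fails during the job if
    u + h <= T. Assumes at most one defect matters and no inspection
    during the job; distributions are illustrative choices."""
    def integrand(u):
        f_u = defect_dist.pdf(v0 + u) / defect_dist.sf(v0)  # conditional defect density
        return f_u * delay_dist.cdf(T - u)                  # delay completes in the job
    p, _ = integrate.quad(integrand, 0.0, T)
    return p

defect = stats.weibull_min(2.0, scale=200.0)  # aging time-to-defect, char. life 200 h
delay = stats.expon(scale=50.0)               # mean delay time 50 h
T = 40.0                                      # job length 40 h
print("PM before job (age 0):  ", prob_failure_during_job(T, defect, delay, v0=0.0))
print("no PM (system age 300 h):", prob_failure_during_job(T, defect, delay, v0=300.0))
```

The gap between the two numbers is exactly the trade-off the joint optimization weighs against the preventive replacement time and the resulting tardiness.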
{"title":"Joint optimization of maintenance and production scheduling","authors":"Amirkiarash Kiani, S. Taghipour","doi":"10.1109/RAM.2017.7889710","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889710","url":null,"abstract":"Maintenance and production scheduling are interconnected activities which should be planned jointly to minimize their total cost as well as jobs tardiness. Although, the joint optimization of maintenance planning and production scheduling has been addressed extensively in literature, no study has considered production and maintenance optimization based on the concept of delay-time model (DTM). DTM has been effectively utilized in industry for inspection optimization of various systems, such as oil-hydraulic extrusion press, production plant, and industrial vehicles. The DTM considers a two-stage failure process for a system, in which an initial defect will eventually lead to a failure, if left unattended. The elapsed time between a defect occurrence and the failure (in the absence of inspection) is called delay-time, which provides a window of opportunity to inspect the system and fix the defect. In this paper, we consider a single system in a manufacturing plant which is required to process n independent jobs, while a job cannot be preempted for another job. We assume that the system has a single dominant failure mode, and model the system's failure using the DTM concept, in which the time to a defect appearance and the delay time follow certain distributions. The delay time distribution is independent of the time to defect. The system can be completely renewed by preventive replacement before a job to reduce the probability of a defect arrival and its subsequent failure while the job is being processed. An unattended defect may lead to a failure, which causes the system shutdown. The system is then replaced after a failure, and the job is restarted. We assume that the time required for a preventive replacement of the system is shorter than the time required for corrective replacement after a failure. We will jointly optimize preventive maintenance and production scheduling which results in the minimum total expected cost consisting of tardiness penalty and preventive and corrective maintenance costs. More specifically, we will determine the optimal sequence of the jobs as well as the decision on whether or not preventive replacement should be performed before a specific job. We will formulate the objective function and derive analytic expressions to obtain the total expected cost for a given sequence of jobs and a preventive replacement scheme. The application of the proposed model is shown in a case study. The results of the study indicate the optimal job sequence obtained from the joint optimization problem could differ from the case where the optimal sequence is obtained in a standalone scheduling problem. 
Moreover, the optimal solution depends on the input parameters of the model, most specifically, the job processing times and the distributions of defect arrival and delay time.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129764754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single-shot device reliability challenges
DOI: 10.1109/RAM.2017.7889673
Daniel J. Foley, Darryl W. Kellner
Today's budgetary concerns impose constraints that limit the overall budget of many programs and, subsequently, the reliability program budget. Because of this, product testing is often significantly scaled down or not performed at all, leaving limited or no test information to support a reliability prediction for a development system, including single-shot devices. This lack of data early in and throughout the development phase can result in an inaccurate reliability prediction or, at the least, one with limited confidence. This paper discusses the challenges associated with addressing and mitigating these inaccuracies. It starts by emphasizing the need for, and an approach to gaining, a thorough understanding of the system's life cycle, including the system's operational and non-operational phases, the environments and durations of exposure, and the operating sequence (with durations). The paper then illustrates the use of this information to identify and employ reliability data, using an example problem to develop an accurate prediction for the single-shot device under study.
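For context, the standard binomial treatment of single-shot reliability (not necessarily the paper's method) gives a one-sided lower confidence bound from pass/fail trials; the paper's concern is precisely the situation where few or none of these trials are affordable:

```python
from scipy import stats

def reliability_lower_bound(successes, trials, conf=0.90):
    """One-sided Clopper-Pearson lower confidence bound on single-shot
    reliability from binomial test data. Standard textbook treatment,
    shown for context only."""
    if successes == trials:
        # Zero-failure case reduces to the classic (1 - C)**(1/n) bound
        return (1.0 - conf) ** (1.0 / trials)
    return stats.beta.ppf(1.0 - conf, successes, trials - successes + 1)

print(reliability_lower_bound(20, 20))  # all 20 firings succeeded
print(reliability_lower_bound(19, 20))  # one failure in 20 firings
```

Even 20 failure-free firings demonstrate only about 89% reliability at 90% confidence, which is why limited test budgets force the life-cycle-based prediction approach the paper describes.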
{"title":"Single-shot device reliability challenges","authors":"Daniel J. Foley, Darryl W. Kellner","doi":"10.1109/RAM.2017.7889673","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889673","url":null,"abstract":"Today's budgetary concerns represent constraints that limit the overall budget of many programs and, subsequently, also the reliability program budget. Because of this, often times product testing may be significantly scaled down or not performed at all. This results in limited to no test information to support a reliability prediction for a development system, including single-shot devices. This lack of data early and throughout the developmental phase can result in an inaccurate reliability prediction or, at the least, one with a limited confidence. This paper will discuss the challenges associated with addressing and mitigating these inaccuracies. It will start by emphasizing the need and approach to gaining a thorough understanding of the system's life cycle. This includes a detailed understanding of the system's operational and non-operational phases, including environments and durations of exposure, and operating sequence (with durations). The paper will then illustrate use of this information to identify and employ reliability information using an example problem to develop an accurate prediction for the single-shot device under study.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130131781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}