Fuel-switch decisions in the electric power industry under environmental regulations
Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1056391
Rafay Ishfaq, Uzma Raja, Mark M. Clark
ABSTRACT The changing landscape of environmental regulations, the discovery of new domestic sources of natural gas, and the economics of energy markets have resulted in a major shift in the choice of fuel for electric power generation. This research focuses on the relevant factors that impact a power plant's decision to switch fuel from coal to natural gas and the timing of such decisions. The factors studied in this article include capital costs of plant replacement, public policy, associated monetary penalties, availability of and access to gas supply networks, and the option of plant retirement. These factors are evaluated in a case study of power plants in the Southeastern United States, using mathematical programming and logistic regression models. The results show that environmental regulations can be effective if the monetary penalties imposed by such regulations are set at an appropriate level with respect to plant replacement costs. Although it is economical for large (power generation capacity > 600 MW) coal-fired power plants to switch fuel to natural gas, plant retirement is more suitable for smaller plants. This article also presents a multi-logit decision model that can help identify the best time for a power plant to switch fuel and whether such a decision is worthwhile in the context of plant replacement costs, fuel costs, electric power decommission limits, and environmental penalties.
{"title":"Fuel-switch decisions in the electric power industry under environmental regulations","authors":"Rafay Ishfaq, Uzma Raja, Mark M. Clark","doi":"10.1080/0740817X.2015.1056391","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1056391","url":null,"abstract":"ABSTRACT The changing landscape of environmental regulations, discovery of new domestic sources of natural gas, and the economics of energy markets has resulted in a major shift in the choice of fuel for electric power generation. This research focuses on the relevant factors that impact a power plant's decision to switch fuel from coal to natural gas and the timing of such decisions. The factors studied in this article include capital costs of plant replacement, public policy, associated monetary penalties, availability and access to gas supply networks, and the option of plant retirement. These factors are evaluated in a case study of power plants in the Southeastern United States, using mathematical programming and logistic regression models. The results show that environmental regulations can be effective if the monetary penalties imposed by such regulations are set at an appropriate level, with respect to plant replacement costs. Although it is economic for large-size (power generation capacity > 600 MW) coal-fired power plants to switch fuel to natural gas, plant retirement is more suitable for smaller-sized plants. This article also presents a multi-logit decision model that can help identify the best time for a power plant to switch fuel and whether such a decision is useful in the context of plant replacement costs, fuel costs, electric power decommission limits, and environmental penalties.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"205 - 219"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1056391","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Allocating Protection Resources to Facilities When the Effect of Protection is Uncertain
Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1078013
Hugh R. Medal, E. Pohl, M. Rossetti
We study a new facility protection problem in which one must allocate scarce protection resources to a set of facilities, given that allocating resources to a facility has only a probabilistic effect on the facility's post-disruption capacity. This study seeks to test three common assumptions made in the literature on modeling infrastructure systems subject to disruptions: (i) perfect protection, i.e., protecting an element makes it fail-proof; (ii) binary protection, i.e., an element is either fully protected or unprotected; and (iii) binary state, i.e., disrupted elements are either fully operational or non-operational. We model this facility protection problem as a two-stage stochastic program with endogenous uncertainty. Because this stochastic program is non-convex, we present a greedy algorithm and show that it has a worst-case performance guarantee of 0.63; that is, the greedy solution achieves at least 63% of the optimal objective value. However, empirical results indicate that its average performance is much better. In addition, experimental results indicate that the mean-value version of this model, in which parameters are set to their mean values, performs close to optimal. Results also indicate that the perfect and binary protection assumptions together significantly affect the performance of a model, whereas the binary state assumption was found to have a smaller effect.
{"title":"Allocating Protection Resources to Facilities When the Effect of Protection is Uncertain","authors":"Hugh R. Medal, E. Pohl, M. Rossetti","doi":"10.1080/0740817X.2015.1078013","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078013","url":null,"abstract":"We study a new facility protection problem in which one must allocate scarce protection resources to a set of facilities given that allocating resources to a facility only has a probabilistic effect on the facility’s post-disruption capacity. This study seeks to test three common assumptions made in the literature on modeling infrastructure systems subject to disruptions: 1) perfect protection, e.g., protecting an element makes it fail-proof, 2) binary protection, i.e., an element is either fully protected or unprotected, and 3) binary state, i.e., disrupted elements are fully operational or non-operational. We model this facility protection problem as a two-stage stochastic program with endogenous uncertainty. Because this stochastic program is non-convex we present a greedy algorithm and show that it has a worst-case performance of 0.63. However, empirical results indicate that the average performance is much better. In addition, experimental results indicate that the mean-value version of this model, in which parameters are set to their mean values, performs close to optimal. Results also indicate that the perfect and binary protection assumptions together significantly affect the performance of a model. On the other hand, the binary state assumption was found to have a smaller effect.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"220 - 234"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust dual-response optimization
Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1067737
Ihsan Yanikoglu, D. den Hertog, J. Kleijnen
ABSTRACT This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may be performed with either real or simulated systems; we focus on simulation experiments. For the environmental inputs, classic approaches assume known means, variances, or covariances and sometimes even a known distribution. We, however, develop a method that uses only experimental data, so it does not need a known probability distribution. Moreover, our approach yields a solution that is robust against the ambiguity in the probability distribution. We also propose an adjustable robust optimization method that enables adjusting the values of the controllable factors after observing the values of the environmental factors. We illustrate our novel methods through several numerical examples, which demonstrate their effectiveness.
{"title":"Robust dual-response optimization","authors":"Ihsan Yanikoglu, D. den Hertog, J. Kleijnen","doi":"10.1080/0740817X.2015.1067737","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1067737","url":null,"abstract":"ABSTRACT This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may be performed with either real or simulated systems; we focus on simulation experiments. For the environmental inputs, classic approaches assume known means, variances, or covariances and sometimes even a known distribution. We, however, develop a method that uses only experimental data, so it does not need a known probability distribution. Moreover, our approach yields a solution that is robust against the ambiguity in the probability distribution. We also propose an adjustable robust optimization method that enables adjusting the values of the controllable factors after observing the values of the environmental factors. We illustrate our novel methods through several numerical examples, which demonstrate their effectiveness.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"298 - 312"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1067737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bayesian variable selection method for joint diagnosis of manufacturing process and sensor faults
Pub Date: 2016-02-26 | DOI: 10.1080/0740817X.2015.1109739
Shan Li, Yong Chen
ABSTRACT This article presents a Bayesian variable selection–based diagnosis approach to simultaneously identify both process mean-shift faults and sensor mean-shift faults in manufacturing processes. The proposed method directly models the probability of fault occurrence and can easily incorporate prior knowledge of fault probabilities. Important concepts are introduced for understanding the diagnosability of the proposed method, and a guideline is given on how to select the hyper-parameter values. A conditional maximum likelihood method is proposed as an alternative that provides robustness to the selection of some key model parameters. Systematic simulation studies provide insights into the relationship between the success of the diagnosis method and related system structure characteristics. A real assembly example demonstrates the effectiveness of the proposed diagnosis method.
{"title":"A Bayesian variable selection method for joint diagnosis of manufacturing process and sensor faults","authors":"Shan Li, Yong Chen","doi":"10.1080/0740817X.2015.1109739","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109739","url":null,"abstract":"ABSTRACT This article presents a Bayesian variable selection–based diagnosis approach to simultaneously identify both process mean shift faults and sensor mean shift faults in manufacturing processes. The proposed method directly models the probability of fault occurrence and can easily incorporate prior knowledge on the probability of a fault occurrence. Important concepts are introduced to understand the diagnosability of the proposed method. A guideline on how to select the values of hyper-parameters is given. A conditional maximum likelihood method is proposed as an alternative method to provide robustness to the selection of some key model parameters. Systematic simulation studies are used to provide insights on the relationship between the success of the diagnosis method and related system structure characteristics. A real assembly example is used to demonstrate the effectiveness of the proposed diagnosis method.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"313 - 323"},"PeriodicalIF":0.0,"publicationDate":"2016-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109739","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability performance for dynamic systems with cycles of K regimes
Pub Date: 2016-02-25 | DOI: 10.1080/0740817X.2015.1110266
Jingyuan Shen, L. Cui
ABSTRACT The environment in which a system operates can have a crucial impact on its performance; for example, a machine may operate in mild or harsh environments, or the flow of a river may change between seasons. In this article, we consider a dynamic reliability system operating under a cycle of K regimes, modeled as a continuous-time Markov process with K different transition rate matrices describing the various regimes. Results for the availability of such a system and probability distributions of the first uptime are given. Three special cases are considered in detail: those in which the regime durations are constant and those in which the numbers of up states in different regimes are identical or increasing. Finally, numerical examples are given to validate the proposed approach.
{"title":"Reliability performance for dynamic systems with cycles of K regimes","authors":"Jingyuan Shen, L. Cui","doi":"10.1080/0740817X.2015.1110266","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1110266","url":null,"abstract":"ABSTRACT The environment in which a system operates can have a crucial impact on its performance; for example, a machine operating in mild or harsh environments or the flow of a river changing between seasons. In this article, we consider a dynamic reliability system operating under a cycle of K regimes, which is modeled as a continuous-time Markov process with K different transition rate matrices being used to describe the various regimes. Results for the availability of such a system and probability distributions of the first uptime are given. Three special cases, which occur due to situations where the durations of the regime are constant and where the number of up states in different regimes are identical or increasing, are considered in detail. Finally, some numerical examples are shown to validate the proposed approach.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"389 - 402"},"PeriodicalIF":0.0,"publicationDate":"2016-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1110266","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Axiomatic aggregation of incomplete rankings
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1109737
Erick Moreno-Centeno, Adolfo R. Escobedo
ABSTRACT In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.
{"title":"Axiomatic aggregation of incomplete rankings","authors":"Erick Moreno-Centeno, Adolfo R. Escobedo","doi":"10.1080/0740817X.2015.1109737","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109737","url":null,"abstract":"ABSTRACT In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"475 - 488"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fully integrated double-loop approach to the design of statistically and energy efficient accelerated life tests
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1109738
Dan Zhang, H. Liao
ABSTRACT Accelerated Life Testing (ALT) has been widely used in reliability estimation for highly reliable products, and many optimum ALT design methods have been developed to improve its efficiency. However, most existing methods focus solely on reliability estimation precision, without considering the significant amounts of energy consumed by the equipment that creates the harsher-than-normal operating conditions in such experiments. To ensure reliability estimation precision while reducing total energy consumption, this article presents a fully integrated double-loop approach to the design of statistically and energy-efficient ALT experiments. The new experimental design method is formulated as a multi-objective optimization problem with three objectives: (i) minimizing the experiment's total energy consumption; (ii) maximizing the reliability estimation precision; and (iii) minimizing the tracking error between the desired and actual stress loadings used in the experiment. A controlled elitist non-dominated sorting genetic algorithm is used to solve such large-scale optimization problems involving computer simulation. Numerical examples demonstrate the effectiveness and possible applications of the proposed experimental design method. Compared with traditional and sequential optimal ALT planning methods, this method further improves the energy and statistical efficiency of ALT experiments.
{"title":"A fully integrated double-loop approach to the design of statistically and energy efficient accelerated life tests","authors":"Dan Zhang, H. Liao","doi":"10.1080/0740817X.2015.1109738","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109738","url":null,"abstract":"ABSTRACT Accelerated Life Testing (ALT) has been widely used in reliability estimation for highly reliable products. To improve the efficiency of ALT, many optimum ALT design methods have been developed. However, most of the existing methods solely focus on the reliability estimation precision without considering the significant amounts of energy consumed by the equipment that creates the harsher-than-normal operating conditions in such experiments. In order to warrant the reliability estimation precision while reducing the total energy consumption, this article presents a fully integrated double-loop approach to the design of statistically and energy-efficient ALT experiments. As an important option, the new experimental design method is formulated as a multi-objective optimization problem with three objectives: (i) minimizing the experiment's total energy consumption; (ii) maximizing the reliability estimation precision; and (iii) minimizing the tracking error between the desired and actual stress loadings used in the experiment. A controlled elitist non-dominated sorting genetic algorithm is utilized to solve such large-scale optimization problems involving computer simulation. Numerical examples are provided to demonstrate the effectiveness and possible applications of the proposed experimental design method. Compared with the traditional and sequential optimal ALT planning methods, this method further improves the energy and statistical efficiency of ALT experiments.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"371 - 388"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximizing quantitative traits in the mating design problem via simulation-based Pareto estimation
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1096430
S. R. Hunter, B. McClosky
ABSTRACT Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution: (i) simulate to estimate the performance of each parent pair and (ii) solve an estimated version of the mating design problem, an integer program, using the simulation output. To reduce the computational burden of steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation for estimating the Pareto set that, in our numerical experiments, outperforms Multi-objective Optimal Computing Budget Allocation in reducing misclassifications. Given the estimated Pareto set, we provide a branch-and-bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared with naïve methods.
{"title":"Maximizing quantitative traits in the mating design problem via simulation-based Pareto estimation","authors":"S. R. Hunter, B. McClosky","doi":"10.1080/0740817X.2015.1096430","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1096430","url":null,"abstract":"ABSTRACT Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, out-performs Multi-objective Optimal Computing Budget Allocation in reducing misclassifications. Given the estimated Pareto set, we provide a branch-and-bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared with naïve methods.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"565 - 578"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1096430","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonparametric dynamic screening system for monitoring correlated longitudinal data
Pub Date: 2016-02-12 | DOI: 10.1080/0740817X.2016.1146423
Jun Yu Li, P. Qiu
ABSTRACT In many applications, including the early detection and prevention of diseases and performance evaluation of airplanes and other durable products, we need to sequentially monitor the longitudinal pattern of certain performance variables of a subject. A signal should be given as soon as possible after the pattern has become abnormal. Recently, a new statistical method, called a dynamic screening system (DySS), was proposed to solve this problem. It is a combination of longitudinal data analysis and statistical process control. However, the current DySS method can only handle cases where the observations are normally distributed and within-subject observations are independent or follow a specific time series model (e.g., AR(1) model). In this article, we propose a new nonparametric DySS method that can handle cases where the observation distribution and the correlation among within-subject observations are arbitrary. Therefore, it significantly broadens the application area of the DySS method. Numerical studies show that the new method works well in practice.
{"title":"Nonparametric dynamic screening system for monitoring correlated longitudinal data","authors":"Jun Yu Li, P. Qiu","doi":"10.1080/0740817X.2016.1146423","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1146423","url":null,"abstract":"ABSTRACT In many applications, including the early detection and prevention of diseases and performance evaluation of airplanes and other durable products, we need to sequentially monitor the longitudinal pattern of certain performance variables of a subject. A signal should be given as soon as possible after the pattern has become abnormal. Recently, a new statistical method, called a dynamic screening system (DySS), was proposed to solve this problem. It is a combination of longitudinal data analysis and statistical process control. However, the current DySS method can only handle cases where the observations are normally distributed and within-subject observations are independent or follow a specific time series model (e.g., AR(1) model). In this article, we propose a new nonparametric DySS method that can handle cases where the observation distribution and the correlation among within-subject observations are arbitrary. Therefore, it significantly broadens the application area of the DySS method. Numerical studies show that the new method works well in practice.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"772 - 786"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1146423","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal periodic maintenance policy under imperfect repair: A case study on the engines of off-road vehicles
Pub Date: 2016-02-12 | DOI: 10.1080/0740817X.2016.1147663
M. L. Toledo, Marta A. Freitas, E. Colosimo, Gustavo L. Gilardoni
ABSTRACT The repairable systems literature contains a great number of papers proposing maintenance policies under the assumption of minimal repair after each failure (such a repair leaves the system in the same condition as it was just before the failure—as bad as old). This article derives a statistical procedure to estimate the optimal periodic Preventive Maintenance (PM) policy under two assumptions: (i) perfect repair at each PM action (i.e., the system returns to the as-good-as-new state) and (ii) imperfect system repair after each failure (the system returns to an intermediate state between as bad as old and as good as new). Models for imperfect repair have already been presented in the literature; however, an inference procedure for the quantities of interest has not yet been fully studied. In the present article, statistical methods, including the likelihood function, Monte Carlo simulation, and bootstrap resampling, are used to (i) estimate the degree of efficiency of a repair and (ii) obtain the optimal PM check points that minimize the expected total cost. This study was motivated by a real situation involving the maintenance of engines in off-road vehicles.
{"title":"Optimal periodic maintenance policy under imperfect repair: A case study on the engines of off-road vehicles","authors":"M. L. Toledo, Marta A. Freitas, E. Colosimo, Gustavo L. Gilardoni","doi":"10.1080/0740817X.2016.1147663","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1147663","url":null,"abstract":"ABSTRACT In the repairable systems literature one can find a great number of papers that propose maintenance policies under the assumption of minimal repair after each failure (such a repair leaves the system in the same condition as it was just before the failure—as bad as old). This article derives a statistical procedure to estimate the optimal Preventive Maintenance (PM) periodic policy, under the following two assumptions: (i) perfect repair at each PM action (i.e., the system returns to the as-good-as-new state) and (ii) imperfect system repair after each failure (the system returns to an intermediate state between as bad as old and as good as new). Models for imperfect repair have already been presented in the literature. However, an inference procedure for the quantities of interest has not yet been fully studied. In the present article, statistical methods, including the likelihood function, Monte Carlo simulation, and bootstrap resampling methods, are used in order to (i) estimate the degree of efficiency of a repair and (ii) obtain the optimal PM check points that minimize the expected total cost. This study was motivated by a real situation involving the maintenance of engines in off-road vehicles.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"747 - 758"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1147663","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}