Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1057303
Chun-Hung Cheng, Y. Kuo
ABSTRACT In this work, we examine a staff scheduling problem in a governmental food safety center that is responsible for the surveillance of imported food at an international airport. In addition to the fact that staff members have different levels of efficiency and different preferences for work shifts, the Operations Manager of the food safety center would like to balance the dissimilarities among workers in order to provide unbiased work schedules for staff members. We adopt a two-phase approach, where the first phase schedules the work shifts of food safety inspectors (including rest days and shift types) with schedule fairness and staff preferences taken into account, and the second phase assigns inspectors to tasks so as to achieve the best skill matches and to diversify team formations. We also provide polyhedral results and devise valid inequalities for the two formulations. For the first-phase problem, we relax some constraints of the fairness criteria to reduce the problem size and, hence, the computational effort. We derive an upper bound for the objective value of the relaxation and provide computational results to show that the solutions devised from our proposed methodology are of good quality. For the second-phase problem, we develop a shift-by-shift assignment heuristic to obtain an upper bound on the maximum number of times any pair of workers is assigned to the same shift at the same location. We propose an enumeration algorithm that solves the problem for fixed values of this number until an optimality condition holds or the problem becomes infeasible. Computational results show that our proposed approach can produce solutions of good quality in much less time than a standalone commercial solver.
{"title":"A dissimilarities balance model for a multi-skilled multi-location food safety inspector scheduling problem","authors":"Chun-Hung Cheng, Y. Kuo","doi":"10.1080/0740817X.2015.1057303","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1057303","url":null,"abstract":"ABSTRACT In this work, we examine a staff scheduling problem in a governmental food safety center that is responsible for the surveillance of imported food at an international airport. In addition to the fact that the staff have different levels of efficiency and have different preference for work shifts, the Operations Manager of the food safety center would like to balance the dissimilarities of workers in order to provide unbiased work schedules for staff members. We adopt a two-phase approach, where the first phase is to schedule the work shifts of food safety inspectors (including rest days and shift types) with schedule fairness and staff preference taken into account and the second phase is to best-fit them to tasks in terms of skill-matches and create diversity of team formations. We also provide polyhedral results and devise valid inequalities for the two formulations. For the first-phase problem, we relax some constraints of the fairness criteria to reduce the problem size to reduce computational effort. We derive an upper bound for the objective value of the relaxation and provide computational results to show that the solutions devised from our proposed methodology are of good quality. For the second-phase problem, we develop a shift-by-shift assignment heuristic to obtain an upper bound for the maximum number of times any pair of workers is assigned to the same shift at the same location. We propose an enumeration algorithm, that solves the problems for fixed values of this number until an optimality condition holds or the problem is infeasible. Computational results show that our proposed approach can produce solutions of good quality in a much shorter period of time, compared with a standalone commercial solver.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"235 - 251"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1057303","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1078013
Hugh R. Medal, E. Pohl, M. Rossetti
We study a new facility protection problem in which one must allocate scarce protection resources to a set of facilities, given that allocating resources to a facility has only a probabilistic effect on the facility's post-disruption capacity. This study seeks to test three common assumptions made in the literature on modeling infrastructure systems subject to disruptions: (i) perfect protection, i.e., protecting an element makes it fail-proof; (ii) binary protection, i.e., an element is either fully protected or unprotected; and (iii) binary state, i.e., disrupted elements are either fully operational or non-operational. We model this facility protection problem as a two-stage stochastic program with endogenous uncertainty. Because this stochastic program is non-convex, we present a greedy algorithm and show that it has a worst-case performance guarantee of 0.63. However, empirical results indicate that its average performance is much better. In addition, experimental results indicate that the mean-value version of this model, in which parameters are set to their mean values, performs close to optimal. Results also indicate that the perfect and binary protection assumptions together significantly affect the performance of a model. On the other hand, the binary state assumption was found to have a smaller effect.
{"title":"Allocating Protection Resources to Facilities When the Effect of Protection is Uncertain","authors":"Hugh R. Medal, E. Pohl, M. Rossetti","doi":"10.1080/0740817X.2015.1078013","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078013","url":null,"abstract":"We study a new facility protection problem in which one must allocate scarce protection resources to a set of facilities given that allocating resources to a facility only has a probabilistic effect on the facility’s post-disruption capacity. This study seeks to test three common assumptions made in the literature on modeling infrastructure systems subject to disruptions: 1) perfect protection, e.g., protecting an element makes it fail-proof, 2) binary protection, i.e., an element is either fully protected or unprotected, and 3) binary state, i.e., disrupted elements are fully operational or non-operational. We model this facility protection problem as a two-stage stochastic program with endogenous uncertainty. Because this stochastic program is non-convex we present a greedy algorithm and show that it has a worst-case performance of 0.63. However, empirical results indicate that the average performance is much better. In addition, experimental results indicate that the mean-value version of this model, in which parameters are set to their mean values, performs close to optimal. Results also indicate that the perfect and binary protection assumptions together significantly affect the performance of a model. On the other hand, the binary state assumption was found to have a smaller effect.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"220 - 234"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-03-03 | DOI: 10.1080/0740817X.2015.1067737
Ihsan Yanikoglu, D. den Hertog, J. Kleijnen
ABSTRACT This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may be performed with either real or simulated systems; we focus on simulation experiments. For the environmental inputs, classic approaches assume known means, variances, or covariances and sometimes even a known distribution. We, however, develop a method that uses only experimental data, so it does not need a known probability distribution. Moreover, our approach yields a solution that is robust against the ambiguity in the probability distribution. We also propose an adjustable robust optimization method that enables adjusting the values of the controllable factors after observing the values of the environmental factors. We illustrate our novel methods through several numerical examples, which demonstrate their effectiveness.
{"title":"Robust dual-response optimization","authors":"Ihsan Yanikoglu, D. den Hertog, J. Kleijnen","doi":"10.1080/0740817X.2015.1067737","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1067737","url":null,"abstract":"ABSTRACT This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may be performed with either real or simulated systems; we focus on simulation experiments. For the environmental inputs, classic approaches assume known means, variances, or covariances and sometimes even a known distribution. We, however, develop a method that uses only experimental data, so it does not need a known probability distribution. Moreover, our approach yields a solution that is robust against the ambiguity in the probability distribution. We also propose an adjustable robust optimization method that enables adjusting the values of the controllable factors after observing the values of the environmental factors. We illustrate our novel methods through several numerical examples, which demonstrate their effectiveness.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"298 - 312"},"PeriodicalIF":0.0,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1067737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-26 | DOI: 10.1080/0740817X.2015.1109739
Shan Li, Yong Chen
ABSTRACT This article presents a Bayesian variable selection–based diagnosis approach to simultaneously identify both process mean shift faults and sensor mean shift faults in manufacturing processes. The proposed method directly models the probability of fault occurrence and can easily incorporate prior knowledge on the probability of a fault occurrence. Important concepts are introduced to understand the diagnosability of the proposed method. A guideline on how to select the values of hyper-parameters is given. A conditional maximum likelihood method is proposed as an alternative method to provide robustness to the selection of some key model parameters. Systematic simulation studies are used to provide insights on the relationship between the success of the diagnosis method and related system structure characteristics. A real assembly example is used to demonstrate the effectiveness of the proposed diagnosis method.
{"title":"A Bayesian variable selection method for joint diagnosis of manufacturing process and sensor faults","authors":"Shan Li, Yong Chen","doi":"10.1080/0740817X.2015.1109739","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109739","url":null,"abstract":"ABSTRACT This article presents a Bayesian variable selection–based diagnosis approach to simultaneously identify both process mean shift faults and sensor mean shift faults in manufacturing processes. The proposed method directly models the probability of fault occurrence and can easily incorporate prior knowledge on the probability of a fault occurrence. Important concepts are introduced to understand the diagnosability of the proposed method. A guideline on how to select the values of hyper-parameters is given. A conditional maximum likelihood method is proposed as an alternative method to provide robustness to the selection of some key model parameters. Systematic simulation studies are used to provide insights on the relationship between the success of the diagnosis method and related system structure characteristics. A real assembly example is used to demonstrate the effectiveness of the proposed diagnosis method.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"313 - 323"},"PeriodicalIF":0.0,"publicationDate":"2016-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109739","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-25 | DOI: 10.1080/0740817X.2015.1110266
Jingyuan Shen, L. Cui
ABSTRACT The environment in which a system operates can have a crucial impact on its performance; for example, a machine may operate in mild or harsh environments, or the flow of a river may change between seasons. In this article, we consider a dynamic reliability system operating under a cycle of K regimes, modeled as a continuous-time Markov process in which K different transition rate matrices describe the respective regimes. Results for the availability of such a system and the probability distribution of its first uptime are given. Three special cases are considered in detail: regimes with constant durations, and regimes in which the number of up states is identical or is increasing. Finally, some numerical examples are shown to validate the proposed approach.
{"title":"Reliability performance for dynamic systems with cycles of K regimes","authors":"Jingyuan Shen, L. Cui","doi":"10.1080/0740817X.2015.1110266","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1110266","url":null,"abstract":"ABSTRACT The environment in which a system operates can have a crucial impact on its performance; for example, a machine operating in mild or harsh environments or the flow of a river changing between seasons. In this article, we consider a dynamic reliability system operating under a cycle of K regimes, which is modeled as a continuous-time Markov process with K different transition rate matrices being used to describe the various regimes. Results for the availability of such a system and probability distributions of the first uptime are given. Three special cases, which occur due to situations where the durations of the regime are constant and where the number of up states in different regimes are identical or increasing, are considered in detail. Finally, some numerical examples are shown to validate the proposed approach.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"389 - 402"},"PeriodicalIF":0.0,"publicationDate":"2016-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1110266","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1109737
Erick Moreno-Centeno, Adolfo R. Escobedo
ABSTRACT In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.
{"title":"Axiomatic aggregation of incomplete rankings","authors":"Erick Moreno-Centeno, Adolfo R. Escobedo","doi":"10.1080/0740817X.2015.1109737","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109737","url":null,"abstract":"ABSTRACT In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"475 - 488"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109737","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1109738
Dan Zhang, H. Liao
ABSTRACT Accelerated Life Testing (ALT) has been widely used in reliability estimation for highly reliable products. To improve the efficiency of ALT, many optimum ALT design methods have been developed. However, most of the existing methods focus solely on reliability estimation precision, without considering the significant amounts of energy consumed by the equipment that creates the harsher-than-normal operating conditions in such experiments. To ensure reliability estimation precision while reducing total energy consumption, this article presents a fully integrated double-loop approach to the design of statistically and energy-efficient ALT experiments. As an important option, the new experimental design method is formulated as a multi-objective optimization problem with three objectives: (i) minimizing the experiment's total energy consumption; (ii) maximizing the reliability estimation precision; and (iii) minimizing the tracking error between the desired and actual stress loadings used in the experiment. A controlled elitist non-dominated sorting genetic algorithm is utilized to solve such large-scale optimization problems involving computer simulation. Numerical examples are provided to demonstrate the effectiveness and possible applications of the proposed experimental design method. Compared with traditional and sequential optimal ALT planning methods, this method further improves the energy and statistical efficiency of ALT experiments.
{"title":"A fully integrated double-loop approach to the design of statistically and energy efficient accelerated life tests","authors":"Dan Zhang, H. Liao","doi":"10.1080/0740817X.2015.1109738","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1109738","url":null,"abstract":"ABSTRACT Accelerated Life Testing (ALT) has been widely used in reliability estimation for highly reliable products. To improve the efficiency of ALT, many optimum ALT design methods have been developed. However, most of the existing methods solely focus on the reliability estimation precision without considering the significant amounts of energy consumed by the equipment that creates the harsher-than-normal operating conditions in such experiments. In order to warrant the reliability estimation precision while reducing the total energy consumption, this article presents a fully integrated double-loop approach to the design of statistically and energy-efficient ALT experiments. As an important option, the new experimental design method is formulated as a multi-objective optimization problem with three objectives: (i) minimizing the experiment's total energy consumption; (ii) maximizing the reliability estimation precision; and (iii) minimizing the tracking error between the desired and actual stress loadings used in the experiment. A controlled elitist non-dominated sorting genetic algorithm is utilized to solve such large-scale optimization problems involving computer simulation. Numerical examples are provided to demonstrate the effectiveness and possible applications of the proposed experimental design method. Compared with the traditional and sequential optimal ALT planning methods, this method further improves the energy and statistical efficiency of ALT experiments.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"371 - 388"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1109738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59753155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-21 | DOI: 10.1080/0740817X.2015.1096430
S. R. Hunter, B. McClosky
ABSTRACT Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, outperforms Multi-objective Optimal Computing Budget Allocation in reducing misclassifications. Given the estimated Pareto set, we provide a branch-and-bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared with naïve methods.
{"title":"Maximizing quantitative traits in the mating design problem via simulation-based Pareto estimation","authors":"S. R. Hunter, B. McClosky","doi":"10.1080/0740817X.2015.1096430","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1096430","url":null,"abstract":"ABSTRACT Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, out-performs Multi-objective Optimal Computing Budget Allocation in reducing misclassifications. Given the estimated Pareto set, we provide a branch-and-bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared with naïve methods.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"565 - 578"},"PeriodicalIF":0.0,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1096430","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-12 | DOI: 10.1080/0740817X.2016.1146423
Jun Yu Li, P. Qiu
ABSTRACT In many applications, including the early detection and prevention of diseases and performance evaluation of airplanes and other durable products, we need to sequentially monitor the longitudinal pattern of certain performance variables of a subject. A signal should be given as soon as possible after the pattern has become abnormal. Recently, a new statistical method, called a dynamic screening system (DySS), was proposed to solve this problem. It is a combination of longitudinal data analysis and statistical process control. However, the current DySS method can only handle cases where the observations are normally distributed and within-subject observations are independent or follow a specific time series model (e.g., AR(1) model). In this article, we propose a new nonparametric DySS method that can handle cases where the observation distribution and the correlation among within-subject observations are arbitrary. Therefore, it significantly broadens the application area of the DySS method. Numerical studies show that the new method works well in practice.
{"title":"Nonparametric dynamic screening system for monitoring correlated longitudinal data","authors":"Jun Yu Li, P. Qiu","doi":"10.1080/0740817X.2016.1146423","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1146423","url":null,"abstract":"ABSTRACT In many applications, including the early detection and prevention of diseases and performance evaluation of airplanes and other durable products, we need to sequentially monitor the longitudinal pattern of certain performance variables of a subject. A signal should be given as soon as possible after the pattern has become abnormal. Recently, a new statistical method, called a dynamic screening system (DySS), was proposed to solve this problem. It is a combination of longitudinal data analysis and statistical process control. However, the current DySS method can only handle cases where the observations are normally distributed and within-subject observations are independent or follow a specific time series model (e.g., AR(1) model). In this article, we propose a new nonparametric DySS method that can handle cases where the observation distribution and the correlation among within-subject observations are arbitrary. Therefore, it significantly broadens the application area of the DySS method. Numerical studies show that the new method works well in practice.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"772 - 786"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1146423","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-02-12 | DOI: 10.1080/0740817X.2016.1147662
M. Dehghani, H. Sherali
ABSTRACT In recent years, many resource allocation models have been developed to protect critical infrastructure by maximizing system resiliency or minimizing its vulnerability to disasters or disruptions. However, these models are often computationally intensive and require simplifying assumptions and approximations. In this study, we develop a robust and representative, yet tractable, model for optimizing maintenance planning of generic network-structured systems (transportation, water, power, communication). The proposed modeling framework examines models that consider both linear and nonlinear objective functions and enhances their structure through suitable manipulations. Moreover, the designed models inherently capture the network topology and the stochastic nature of disruptions and can be applied to network-structured systems where performance is assessed based on network flow efficiency and mobility. The developed models are applied to the Istanbul highway system in order to assess their relative computational effectiveness and robustness using several test cases that consider single- and multiple-treatment types, and the problems are solved on the NEOS server using different available software. The results demonstrate that our models are capable of obtaining optimal solutions within a very short time. Furthermore, the linear model is shown to yield a good approximation to the nonlinear model (it determined solutions within 0.3% of optimality, on average). Managerial insights are provided regarding the optimal policies obtained, which generally appear to favor selecting fewer links and applying a higher quality treatment to them.
{"title":"A resource allocation approach for managing critical network-based infrastructure systems","authors":"M. Dehghani, H. Sherali","doi":"10.1080/0740817X.2016.1147662","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1147662","url":null,"abstract":"ABSTRACT In recent years, many resource allocation models have been developed to protect critical infrastructure by maximizing system resiliency or minimizing its vulnerability to disasters or disruptions. However, these are often computationally intensive and require simplifying assumptions and approximations. In this study, we develop a robust and representative, yet tractable, model for optimizing maintenance planning of generic network-structured systems (transportation, water, power, communication). The proposed modeling framework examines models that consider both linear and nonlinear objective functions and enhances their structure through suitable manipulations. Moreover, the designed models inherently capture the network topography and the stochastic nature of disruptions and can be applied to network-structured systems where performance is assessed based on network flow efficiency and mobility. The developed models are applied to the Istanbul highway system in order to assess their relative computational effectiveness and robustness using several test cases that consider single- and multiple-treatment types, and the problems are solved on the NEOS server using different available software. The results demonstrate that our models are capable of obtaining optimal solutions within a very short time. Furthermore, the linear model is shown to yield a good approximation to the nonlinear model (it determined solutions within 0.3% of optimality, on average). Managerial insights are provided in regard to the optimal policies obtained, which generally appear to favor selecting fewer links and applying a higher quality treatment to them.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"826 - 837"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1147662","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}