Optimal periodic maintenance policy under imperfect repair: A case study on the engines of off-road vehicles
Pub Date: 2016-02-12, DOI: 10.1080/0740817X.2016.1147663
M. L. Toledo, Marta A. Freitas, E. Colosimo, Gustavo L. Gilardoni
ABSTRACT In the repairable systems literature one can find a great number of papers that propose maintenance policies under the assumption of minimal repair after each failure (such a repair leaves the system in the same condition as it was just before the failure—as bad as old). This article derives a statistical procedure to estimate the optimal periodic Preventive Maintenance (PM) policy, under the following two assumptions: (i) perfect repair at each PM action (i.e., the system returns to the as-good-as-new state) and (ii) imperfect system repair after each failure (the system returns to an intermediate state between as bad as old and as good as new). Models for imperfect repair have already been presented in the literature. However, an inference procedure for the quantities of interest has not yet been fully studied. In the present article, statistical methods, including the likelihood function, Monte Carlo simulation, and bootstrap resampling methods, are used in order to (i) estimate the degree of efficiency of a repair and (ii) obtain the optimal PM check points that minimize the expected total cost. This study was motivated by a real situation involving the maintenance of engines in off-road vehicles.
{"title":"Optimal periodic maintenance policy under imperfect repair: A case study on the engines of off-road vehicles","authors":"M. L. Toledo, Marta A. Freitas, E. Colosimo, Gustavo L. Gilardoni","doi":"10.1080/0740817X.2016.1147663","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1147663","url":null,"abstract":"ABSTRACT In the repairable systems literature one can find a great number of papers that propose maintenance policies under the assumption of minimal repair after each failure (such a repair leaves the system in the same condition as it was just before the failure—as bad as old). This article derives a statistical procedure to estimate the optimal Preventive Maintenance (PM) periodic policy, under the following two assumptions: (i) perfect repair at each PM action (i.e., the system returns to the as-good-as-new state) and (ii) imperfect system repair after each failure (the system returns to an intermediate state between as bad as old and as good as new). Models for imperfect repair have already been presented in the literature. However, an inference procedure for the quantities of interest has not yet been fully studied. In the present article, statistical methods, including the likelihood function, Monte Carlo simulation, and bootstrap resampling methods, are used in order to (i) estimate the degree of efficiency of a repair and (ii) obtain the optimal PM check points that minimize the expected total cost. This study was motivated by a real situation involving the maintenance of engines in off-road vehicles.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"747 - 758"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1147663","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal condition-based harvesting policies for biomanufacturing operations with failure risks
Pub Date: 2016-02-12, DOI: 10.1080/0740817X.2015.1101523
Tugce G. Martagan, A. Krishnamurthy, C. Maravelias
ABSTRACT The manufacture of biological products from live systems such as bacteria, mammalian, or insect cells is called biomanufacturing. The use of live cells introduces several operational challenges including batch-to-batch variability, parallel growth of both desired antibodies and unwanted toxic byproducts in the same batch, and random shocks leading to multiple competing failure processes. In this article, we develop a stochastic model that integrates the cell-level dynamics of biological processes with operational dynamics to identify optimal harvesting policies that balance the risks of batch failures and yield/quality tradeoffs in fermentation operations. We develop an infinite horizon, discrete-time Markov decision model to derive the structural properties of the optimal harvesting policies. We use IgG1 antibody production as an example to demonstrate the optimal harvesting policy and compare its performance against harvesting policies used in practice. We leverage insights from the optimal policy to propose smart stationary policies that are easier to implement in practice.
{"title":"Optimal condition-based harvesting policies for biomanufacturing operations with failure risks","authors":"Tugce G. Martagan, A. Krishnamurthy, C. Maravelias","doi":"10.1080/0740817X.2015.1101523","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1101523","url":null,"abstract":"ABSTRACT The manufacture of biological products from live systems such as bacteria, mammalian, or insect cells is called biomanufacturing. The use of live cells introduces several operational challenges including batch-to-batch variability, parallel growth of both desired antibodies and unwanted toxic byproducts in the same batch, and random shocks leading to multiple competing failure processes. In this article, we develop a stochastic model that integrates the cell-level dynamics of biological processes with operational dynamics to identify optimal harvesting policies that balance the risks of batch failures and yield/quality tradeoffs in fermentation operations. We develop an infinite horizon, discrete-time Markov decision model to derive the structural properties of the optimal harvesting policies. We use IgG1 antibody production as an example to demonstrate the optimal harvesting policy and compare its performance against harvesting policies used in practice. We leverage insights from the optimal policy to propose smart stationary policies that are easier to implement in practice.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"440 - 461"},"PeriodicalIF":0.0,"publicationDate":"2016-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1101523","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compressive sensing–based optimal sensor placement and fault diagnosis for multi-station assembly processes
Pub Date: 2016-02-08, DOI: 10.1080/0740817X.2015.1096431
K. Bastani, Z. Kong, Wenzhen Huang, Yingqing Zhou
ABSTRACT Developments in sensing technologies have created the opportunity to diagnose process faults in multi-station assembly processes by analyzing measurement data. Achieving sufficient diagnosability for process faults is a challenging issue, since sensors cannot be used without limit. A number of methods have therefore been reported in the literature that optimize the diagnosability of a diagnostic method for a given sensor cost, thus allowing the identification of process faults incurred in multi-station assembly processes. However, most of these methods assume that the number of sensors exceeds the number of process errors. Unfortunately, this assumption may not hold in many real industrial applications, and the diagnostic methods then have to solve underdetermined linear equations. In order to address this issue, we propose an optimal sensor placement method by devising a new diagnosability criterion based on compressive sensing theory, which is able to handle underdetermined linear equations. Our method seeks the optimal sensor placement by minimizing the average mutual coherence, thereby maximizing diagnosability. The proposed method is demonstrated and validated through case studies from actual industrial applications.
{"title":"Compressive sensing–based optimal sensor placement and fault diagnosis for multi-station assembly processes","authors":"K. Bastani, Z. Kong, Wenzhen Huang, Yingqing Zhou","doi":"10.1080/0740817X.2015.1096431","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1096431","url":null,"abstract":"ABSTRACT Developments in sensing technologies have created the opportunity to diagnose the process faults in multi-station assembly processes by analyzing measurement data. Sufficient diagnosability for process faults is a challenging issue, as the sensors cannot be excessively used. Therefore, there have been a number of methods reported in the literature for the optimization of the diagnosability of a diagnostic method for a given sensor cost, thus allowing the identification of process faults incurred in multi-station assembly processes. However, most of these methods assume that the number of sensors is more than that of the process errors. Unfortunately, this assumption may not hold in many real industrial applications. Thus, the diagnostic methods have to solve underdetermined linear equations. In order to address this issue, we propose an optimal sensor placement method by devising a new diagnosability criterion based on compressive sensing theory, which is able to handle underdetermined linear equations. Our method seeks the optimal sensor placement by minimizing the average mutual coherence to maximize the diagnosability. The proposed method is demonstrated and validated through case studies from actual industrial applications.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"462 - 474"},"PeriodicalIF":0.0,"publicationDate":"2016-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1096431","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software reliability growth modeling and analysis with dual fault detection and correction processes
Pub Date: 2016-02-08, DOI: 10.1080/0740817X.2015.1096432
Lujia Wang, Q. Hu, Jian Liu
ABSTRACT Computer software is widely applied in safety-critical systems. The ever-increasing complexity of software systems makes it extremely difficult to ensure software reliability, and this problem has drawn considerable attention from both industry and academia. Most software reliability models are built on a common assumption that the detected faults are immediately corrected; thus, the fault detection and correction processes can be regarded as the same process. In this article, a comprehensive study is conducted to analyze the time dependencies between the fault detection and correction processes. The model parameters are estimated using the Maximum Likelihood Estimation (MLE) method, which is based on an explicit likelihood function combining both the fault detection and correction processes. Numerical case studies are conducted under the proposed modeling framework. The obtained results demonstrate that the proposed MLE method can be applied to more general situations and provide more accurate results. Furthermore, the predictive capability of the MLE method is compared with that of the Least Squares Estimation (LSE) method. The prediction results indicate that the proposed MLE method performs better than the LSE method when the data are not large in size or are collected in the early phase of software testing.
{"title":"Software reliability growth modeling and analysis with dual fault detection and correction processes","authors":"Lujia Wang, Q. Hu, Jian Liu","doi":"10.1080/0740817X.2015.1096432","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1096432","url":null,"abstract":"ABSTRACT Computer software is widely applied in safety-critical systems. The ever-increasing complexity of software systems makes it extremely difficult to ensure software reliability, and this problem has drawn considerable attention from both industry and academia. Most software reliability models are built on a common assumption that the detected faults are immediately corrected; thus, the fault detection and correction processes can be regarded as the same process. In this article, a comprehensive study is conducted to analyze the time dependencies between the fault detection and correction processes. The model parameters are estimated using the Maximum Likelihood Estimation (MLE) method, which is based on an explicit likelihood function combining both the fault detection and correction processes. Numerical case studies are conducted under the proposed modeling framework. The obtained results demonstrate that the proposed MLE method can be applied to more general situations and provide more accurate results. Furthermore, the predictive capability of the MLE method is compared with that of the Least Squares Estimation (LSE) method. The prediction results indicate that the proposed MLE method performs better than the LSE method when the data are not large in size or are collected in the early phase of software testing.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"359 - 370"},"PeriodicalIF":0.0,"publicationDate":"2016-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1096432","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CUSUM procedures with probability control limits for monitoring processes with variable sample sizes
Pub Date: 2016-02-06, DOI: 10.1080/0740817X.2016.1146422
Wenpo Huang, L. Shu, W. Woodall, K. Tsui
ABSTRACT Control charts are usually designed with constant control limits. In this article, we consider the design of control charts with probability control limits aimed at controlling the conditional false alarm rate at the desired value at each time step. The resulting control limits are dynamic and thus are more general and capable of accommodating more complex situations in practice as compared with the use of a constant control limit. We consider the situation when the sample sizes are varying over time, with a primary focus on the CUmulative SUM (CUSUM)-type control charts. Unlike other methods, no assumptions about future sample sizes are required with our approach. An integral equation approach is developed to facilitate the design and analysis of the CUSUM control chart with probability control limits. The relationship between the CUSUM charts using probability control limits and the CUSUM charts with a fast initial response feature is investigated.
{"title":"CUSUM procedures with probability control limits for monitoring processes with variable sample sizes","authors":"Wenpo Huang, L. Shu, W. Woodall, K. Tsui","doi":"10.1080/0740817X.2016.1146422","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1146422","url":null,"abstract":"ABSTRACT Control charts are usually designed with constant control limits. In this article, we consider the design of control charts with probability control limits aimed at controlling the conditional false alarm rate at the desired value at each time step. The resulting control limits are dynamic and thus are more general and capable of accommodating more complex situations in practice as compared with the use of a constant control limit. We consider the situation when the sample sizes are varying over time, with a primary focus on the CUmulative SUM (CUSUM)-type control charts. Unlike other methods, no assumptions about future sample sizes are required with our approach. An integral equation approach is developed to facilitate the design and analysis of the CUSUM control chart with probability control limits. The relationship between the CUSUM charts using probability control limits and the CUSUM charts with a fast initial response feature is investigated.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"759 - 771"},"PeriodicalIF":0.0,"publicationDate":"2016-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1146422","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability analysis of multiple-component series systems subject to hard and soft failures with dependent shock effects
Pub Date: 2016-02-06, DOI: 10.1080/0740817X.2016.1140922
S. Song, D. Coit, Q. Feng
ABSTRACT New reliability models have been developed for systems subject to competing hard and soft failure processes with shocks that have dependent effects. In the new model, hard failure occurs when transmitted system shocks are large enough to cause any component in a series system to fail immediately, soft failure occurs when any component deteriorates to a certain failure threshold, and system shocks affect both failure processes for all components. Our new research extends previous reliability models that had dependent failure processes, where the dependence arose only from the shared number of shock exposures and not from the shock effects associated with individual system shocks. Dependency of transmitted shock sizes and shock damages to the specific failure processes for all components has not been sufficiently considered, and yet for some actual examples, this can be important. In practice, the effects of shock damages to the multiple failure processes among components are often dependent. In this article, we combine both probabilistic and physical degradation modeling concepts to develop the new system reliability model. Four different dependent patterns/scenarios of shock effects on multiple failure processes for all components are considered for series systems. This represents a significant extension from previous research because it is more realistic yet also more difficult for reliability modeling. The model is demonstrated by several examples.
{"title":"Reliability analysis of multiple-component series systems subject to hard and soft failures with dependent shock effects","authors":"S. Song, D. Coit, Q. Feng","doi":"10.1080/0740817X.2016.1140922","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1140922","url":null,"abstract":"ABSTRACT New reliability models have been developed for systems subject to competing hard and soft failure processes with shocks that have dependent effects. In the new model, hard failure occurs when transmitted system shocks are large enough to cause any component in a series system to fail immediately, soft failure occurs when any component deteriorates to a certain failure threshold, and system shocks affect both failure processes for all components. Our new research extends previous reliability models that had dependent failure processes, where the dependency was only because of the shared number of shock exposures and not the shock effects associated with individual system shocks. Dependency of transmitted shock sizes and shock damages to the specific failure processes for all components has not been sufficiently considered, and yet for some actual examples, this can be important. In practice, the effects of shock damages to the multiple failure processes among components are often dependent. In this article, we combine both probabilistic and physical degradation modeling concepts to develop the new system reliability model. Four different dependent patterns/scenarios of shock effects on multiple failure processes for all components are considered for series systems. This represents a significant extension from previous research because it is more realistic yet also more difficult for reliability modeling. The model is demonstrated by severalexamples.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"720 - 735"},"PeriodicalIF":0.0,"publicationDate":"2016-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1140922","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59754300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability analysis and optimal structure of series-parallel phased-mission systems subject to fault-level coverage
Pub Date: 2016-02-06, DOI: 10.1080/0740817X.2016.1146424
R. Peng, Qing-gang Zhai, L. Xing, Jun Yang
ABSTRACT Many practical systems have multiple consecutive and non-overlapping phases of operations during their mission and are generally referred to as phased-mission systems (PMSs). This article considers a general type of PMS consisting of subsystems connected in series, where each subsystem contains components with different capacities. The components within the same subsystem are divided into several disjoint work-sharing groups (WSGs). The capacity of each WSG is equal to the summation of the capacities of its working components, and the capacity of each subsystem is equal to the capacity of the WSG with the maximum capacity. The system capacity is bottlenecked by the capacity of the subsystem with the minimum capacity. The system survives the mission only if its capacity meets the predetermined mission demand in all phases. Such PMSs can be commonly found in the power transmission and telecommunication industries. A universal generating function–based method is first proposed for the reliability analysis of the capacitated series-parallel PMSs with the consideration of imperfect fault coverage. As different partitions of the WSGs inside a subsystem can lead to different system reliabilities, the optimal structure that maximizes the system reliability is investigated. Examples are presented to illustrate the proposed reliability evaluation method and optimization procedure.
{"title":"Reliability analysis and optimal structure of series-parallel phased-mission systems subject to fault-level coverage","authors":"R. Peng, Qing-gang Zhai, L. Xing, Jun Yang","doi":"10.1080/0740817X.2016.1146424","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1146424","url":null,"abstract":"ABSTRACT Many practical systems have multiple consecutive and non-overlapping phases of operations during their mission and are generally referred to as phased-mission systems (PMSs). This article considers a general type of PMS consisting of subsystems connected in series, where each subsystem contains components with different capacities. The components within the same subsystem are divided into several disjoint work-sharing groups (WSGs). The capacity of each WSG is equal to the summation of the capacities of its working components, and the capacity of each subsystem is equal to the capacity of the WSG with the maximum capacity. The system capacity is bottlenecked by the capacity of the subsystem with the minimum capacity. The system survives the mission only if its capacity meets the predetermined mission demand in all phases. Such PMSs can be commonly found in the power transmission and telecommunication industries. A universal generating function–based method is first proposed for the reliability analysis of the capacitated series-parallel PMSs with the consideration of imperfect fault coverage. As different partitions of the WSGs inside a subsystem can lead to different system reliabilities, the optimal structure that maximizes the system reliability is investigated. Examples are presented to illustrate the proposed reliability evaluation method and optimization procedure.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"736 - 746"},"PeriodicalIF":0.0,"publicationDate":"2016-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1146424","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59755153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheduling in two-machine robotic cells with a self-buffered robot
Pub Date: 2016-02-01, DOI: 10.1080/0740817X.2015.1047475
Emine Gundogdu, Hakan Gultekin
ABSTRACT This study considers a production cell consisting of two machines and a material handling robot. The robot has a buffer space that moves with it. Identical parts are to be produced repetitively in this flowshop environment. The problem is to determine the cyclic schedule of the robot moves that maximizes the throughput rate. After developing the necessary framework to analyze such cells, we separately consider the single-, double-, and infinite-capacity buffer cases. For single- and double-capacity cases, consistent with the literature, we consider one-unit cycles that produce a single part in one repetition. We compare these cycles with each other and determine the set of undominated cycles. For the single-capacity case, we determine the parameter regions where each cycle is optimal, whereas for the double-capacity case, we determine efficient cycles and their worst-case performance bounds. For the infinite-capacity buffer case, we define a new class of cycles that better utilizes the benefits of the buffer space. We derive all such cycles and determine the set of undominated ones. We perform a computational study where we investigate the benefits of robots with a buffer space and the effects of the size of the buffer space on the performance. We compare the performances of self-buffered robots, dual-gripper robots, and robots with swap ability.
{"title":"Scheduling in two-machine robotic cells with a self-buffered robot","authors":"Emine Gundogdu, Hakan Gultekin","doi":"10.1080/0740817X.2015.1047475","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1047475","url":null,"abstract":"ABSTRACT This study considers a production cell consisting of two machines and a material handling robot. The robot has a buffer space that moves with it. Identical parts are to be produced repetitively in this flowshop environment. The problem is to determine the cyclic schedule of the robot moves that maximizes the throughput rate. After developing the necessary framework to analyze such cells, we separately consider the single-, double-, and infinite-capacity buffer cases. For single- and double-capacity cases, consistent with the literature, we consider one-unit cycles that produce a single part in one repetition. We compare these cycles with each other and determine the set of undominated cycles. For the single-capacity case, we determine the parameter regions where each cycle is optimal, whereas for the double-capacity case, we determine efficient cycles and their worst-case performance bounds. For the infinite-capacity buffer case, we define a new class of cycles that better utilizes the benefits of the buffer space. We derive all such cycles and determine the set of undominated ones.We perform a computational study where we investigate the benefits of robots with a buffer space and the effects of the size of the buffer space on the performance. We compare the performances of self-buffered robots, dual-gripper robots, and robots with swap ability.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"170 - 191"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1047475","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mitigating hard capacity constraints with inventory in facility location modeling
Pub Date: 2016-02-01, DOI: 10.1080/0740817X.2015.1078015
K. Maass, M. Daskin, Siqian Shen
ABSTRACT Although the traditional capacitated facility location model uses inflexible, limited capacities, facility managers often have many operational tools to extend capacity or to allow a facility to accept demands in excess of the capacity constraint for short periods of time. We present a mixed-integer program that captures these operational extensions. In particular, demands are not restricted by the capacity constraint, as we allow for unprocessed materials from one day to be held over in inventory and processed on a following day. We also consider demands at a daily level, which allows us to explicitly incorporate the daily variation in, and possibly correlated nature of, demands. Large problem instances, in terms of the number of demand nodes, candidate nodes, and number of days in the time horizon, are generated from United States census population data. We demonstrate that, in some instances, optimal locations identified by the new model differ from those of the traditional capacitated facility location problem and result in significant cost savings.
{"title":"Mitigating hard capacity constraints with inventory in facility location modeling","authors":"K. Maass, M. Daskin, Siqian Shen","doi":"10.1080/0740817X.2015.1078015","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1078015","url":null,"abstract":"ABSTRACT Although the traditional capacitated facility location model uses inflexible, limited capacities, facility managers often have many operational tools to extend capacity or to allow a facility to accept demands in excess of the capacity constraint for short periods of time. We present a mixed-integer program that captures these operational extensions. In particular, demands are not restricted by the capacity constraint, as we allow for unprocessed materials from one day to be held over in inventory and processed on a following day. We also consider demands at a daily level, which allows us to explicitly incorporate the daily variation in, and possibly correlated nature of, demands. Large problem instances, in terms of the number of demand nodes, candidate nodes, and number of days in the time horizon, are generated from United States census population data. We demonstrate that, in some instances, optimal locations identified by the new model differ from those of the traditional capacitated facility location problem and result in significant cost savings.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"120 - 133"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1078015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59752075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic multi-technology production-inventory problem with emissions trading
Pub Date: 2016-02-01, DOI: 10.1080/0740817X.2015.1011357
Wei Zhang, Zhongsheng Hua, Yu Xia, Baofeng Huo
ABSTRACT We study a periodic-review multi-technology production-inventory problem of a single product with emissions trading over a planning horizon consisting of multiple periods. A manufacturer selects among multiple technologies with different unit production costs and emissions allowance consumption rates to produce the product to meet independently distributed random market demands. The manufacturer receives an emissions allowance at the beginning of the planning horizon and is allowed to trade allowances through an outside market in each of the following periods. To solve the dynamic multi-technology production-inventory problem, we virtually separate the problem into an inner layer and an outer layer. Based on the structural properties of the two layers, we find that the optimal emissions trading policy follows a target interval policy with two thresholds, whereas the optimal production policy has a composite base-stock structure. Our theoretical results show that no more than two technologies should be selected simultaneously at any state. However, different groups of technologies may be selected at different states. Our numerical tests confirm that it can be economically beneficial for a manufacturer to maintain multiple available technologies.
{"title":"Dynamic multi-technology production-inventory problem with emissions trading","authors":"Wei Zhang, Zhongsheng Hua, Yu Xia, Baofeng Huo","doi":"10.1080/0740817X.2015.1011357","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1011357","url":null,"abstract":"ABSTRACT We study a periodic-review multi-technology production-inventory problem of a single product with emissions trading over a planning horizon consisting of multiple periods. A manufacturer selects among multiple technologies with different unit production costs and emissions allowance consumption rates to produce the product to meet independently distributed random market demands. The manufacturer receives an emissions allowance at the beginning of the planning horizon and is allowed to trade allowances through an outside market in each of the following periods. To solve the dynamic multi-technology production-inventory problem, we virtually separate the problem into an inner layer and an outer layer. Based on the structural properties of the two layers, we find that the optimal emissions trading policy follows a target interval policy with two thresholds, whereas the optimal production policy has a composite base-stock structure. Our theoretical results show that no more than two technologies should be selected simultaneously at any state. However, different groups of technologies may be selected at different states. Our numerical tests confirm that it can be economically beneficial for a manufacturer to maintain multiple available technologies.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"110 - 119"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1011357","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59749512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}