Efficient control chart calibration by simulated stochastic approximation
G. Capizzi, G. Masarotto
IIE Transactions, vol. 48, pp. 57–65. Pub Date: 2016-01-02. DOI: 10.1080/0740817X.2015.1055392
ABSTRACT The accurate determination of control limits is crucial in statistical process control. The usual approach consists in computing the limits so that the in-control run-length distribution has some desired properties; for example, a prescribed mean. However, as a consequence of the increasing complexity of process data, the run-length of many control charts discussed in the recent literature can be studied only through simulation. Furthermore, in some scenarios, such as profile and autocorrelated data monitoring, the limits cannot be tabulated in advance, and when different charts are combined, the control limits depend on a multidimensional vector of parameters. In this article, we propose the use of stochastic approximation methods for control chart calibration and discuss enhancements for their implementation (e.g., the initialization of the algorithm, an adaptive choice of the gain, a suitable stopping rule for the iterative process, and the advantages of using multicore workstations). Examples are used to show that simulated stochastic approximation provides a reliable and fully automatic approach for computing the control limits in complex applications. An R package implementing the algorithm is available in the supplemental materials.
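A minimal Python sketch of the Robbins–Monro recursion that underlies this kind of calibration, applied to the upper limit of a one-sided EWMA chart for N(0, 1) data: the chart design, gain sequence, starting value, and iteration budget are illustrative assumptions, not the authors' adaptive algorithm or their R package.

```python
import numpy as np

def ewma_run_length(h, lam=0.1, rng=None, cap=50_000):
    """Simulate one in-control run length of a one-sided EWMA chart
    for N(0, 1) observations, smoothing constant lam, upper limit h."""
    rng = rng or np.random.default_rng()
    z, t = 0.0, 0
    while t < cap:
        t += 1
        z = (1 - lam) * z + lam * rng.standard_normal()
        if z > h:
            break
    return t

def calibrate_limit(arl0=370.0, h0=0.5, n_iter=5_000, c=2.0, seed=1):
    """Robbins-Monro recursion: drive the simulated in-control ARL
    toward arl0 using one simulated run length per iteration."""
    rng = np.random.default_rng(seed)
    h = h0
    for k in range(1, n_iter + 1):
        rl = ewma_run_length(h, rng=rng)
        h += (c / (k * arl0)) * (arl0 - rl)  # raise h when runs are short
        h = max(h, 1e-6)
    return h

print(calibrate_limit())
```

Convergence of the raw iterates is slow because run lengths are highly skewed; averaging the trailing iterates (Polyak averaging) is a standard stabilization, and the enhancements discussed in the paper (initialization, adaptive gain, stopping rule, parallelism) address exactly these practical issues.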
{"title":"Efficient control chart calibration by simulated stochastic approximation","authors":"G. Capizzi, G. Masarotto","doi":"10.1080/0740817X.2015.1055392","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1055392","url":null,"abstract":"ABSTRACT The accurate determination of control limits is crucial in statistical process control. The usual approach consists in computing the limits so that the in-control run-length distribution has some desired properties; for example, a prescribed mean. However, as a consequence of the increasing complexity of process data, the run-length of many control charts discussed in the recent literature can be studied only through simulation. Furthermore, in some scenarios, such as profile and autocorrelated data monitoring, the limits cannot be tabulated in advance, and when different charts are combined, the control limits depend on a multidimensional vector of parameters. In this article, we propose the use of stochastic approximation methods for control chart calibration and discuss enhancements for their implementation (e.g., the initialization of the algorithm, an adaptive choice of the gain, a suitable stopping rule for the iterative process, and the advantages of using multicore workstations). Examples are used to show that simulated stochastic approximation provides a reliable and fully automatic approach for computing the control limits in complex applications. An R package implementing the algorithm is available in the supplemental materials.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"57 - 65"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1055392","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The EWMA median chart with estimated parameters
P. Castagliola, P. Maravelakis, Fernanda Figueiredo
IIE Transactions, vol. 48, pp. 66–74. Pub Date: 2016-01-02. DOI: 10.1080/0740817X.2015.1056861
ABSTRACT The usual practice in control charting is to assume that the chart parameters are known or can be accurately estimated from in-control historical samples and that the data are free from outliers. Neither assumption is realistic in practice: a control chart may involve the estimation of process parameters from a very limited number of samples, and the data may contain outliers. To overcome these issues, in this article we develop an Exponentially Weighted Moving Average (EWMA) median chart with estimated parameters to monitor the mean of a normal process. We study the run-length properties of the proposed chart using a Markov chain approach and compare its performance to that of the EWMA median chart with known parameters. Several tables for the design of the proposed chart are given in order to expedite its use by practitioners. An illustrative example is also given, along with recommendations about the minimum number of initial subgroups m, for different sample sizes n, that must be collected for parameter estimation so that the proposed chart performs identically to the chart with known parameters. From the results we deduce that (i) there is a large difference between the known- and estimated-parameters cases unless the initial number of subgroups m is large; and (ii) this difference can be reduced by using dedicated chart parameters.
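The known-parameters case can be sketched directly. Below is a compact Python implementation of the classical Brook–Evans Markov chain approximation for a two-sided EWMA chart of sample medians; the subgroup size, smoothing constant, limit, and discretization are illustrative assumptions, and the paper's estimated-parameters analysis additionally accounts for the variability of the estimated process parameters.

```python
import numpy as np
from scipy.stats import norm, binom

def median_cdf(x, n, mu=0.0, sigma=1.0):
    """CDF of the sample median of n (odd) iid N(mu, sigma^2) values:
    the median is <= x iff at least (n + 1)/2 observations are <= x."""
    p = norm.cdf((x - mu) / sigma)
    k = (n + 1) // 2
    return binom.sf(k - 1, n, p)

def ewma_median_arl(h, lam=0.2, n=5, mu=0.0, n_states=201):
    """Brook-Evans Markov chain approximation of the ARL of a
    two-sided EWMA chart of sample medians with limits -h and +h."""
    w = 2 * h / n_states                         # subinterval width
    mid = -h + w * (np.arange(n_states) + 0.5)   # state midpoints
    Q = np.empty((n_states, n_states))
    for i, ci in enumerate(mid):
        upper = (mid + w / 2 - (1 - lam) * ci) / lam
        lower = (mid - w / 2 - (1 - lam) * ci) / lam
        Q[i] = median_cdf(upper, n, mu) - median_cdf(lower, n, mu)
    arl = np.linalg.solve(np.eye(n_states) - Q, np.ones(n_states))
    return arl[n_states // 2]                    # chart starts at z0 = 0

print(ewma_median_arl(h=0.5))   # in-control ARL for an illustrative h
```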
{"title":"The EWMA median chart with estimated parameters","authors":"P. Castagliola, P. Maravelakis, Fernanda Figueiredo","doi":"10.1080/0740817X.2015.1056861","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1056861","url":null,"abstract":"ABSTRACT The usual practice in control charts is to assume that the chart parameters are known or can be accurately estimated from in-control historical samples and the data are free from outliers. Both of these assumptions are not realistic in practice: a control chart may involve the estimation of process parameters from a very limited number of samples and the data may contain some outliers. In order to overcome these issues, in this article, we develop an Exponentially Weighted Moving Average (EWMA) median chart with estimated parameters to monitor the mean value of a normal process. We study the run length properties of the proposed chart using a Markov Chain approach and the performance of the proposed chart is compared to the EWMA median chart with known parameters. Several tables for the design of the proposed chart are given in order to expedite the use of the chart by practitioners. An illustrative example is also given along with some recommendations about the minimum number of initial subgroups m for different sample sizes n that must be collected for the estimation of the parameters so that the proposed chart has identical performance as the chart with known parameters. From the results we deduce that (i) there is a large difference between the known and estimated parameters cases unless the initial number of subgroups m is large; and (ii) the difference between the known and estimated parameters cases can be reduced by using dedicated chart parameters.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"66 - 74"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1056861","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring wafers’ geometric quality using an additive Gaussian process model
Linmiao Zhang, Kaibo Wang, Nan Chen
IIE Transactions, vol. 48, pp. 1–15. Pub Date: 2016-01-02. DOI: 10.1080/0740817X.2015.1027455
ABSTRACT The geometric quality of a wafer is an important quality characteristic in the semiconductor industry. However, this characteristic is difficult to monitor during manufacturing because of the complexity of the data structure. In this article, we propose an Additive Gaussian Process (AGP) model to approximate the standard geometric profile of a wafer while quantifying deviations from that standard when the manufacturing process is in an in-control state. Based on the AGP model, two statistical tests are developed to determine whether a newly produced wafer is conforming. Extensive numerical simulations and real case studies indicate that the proposed method is effective and has potentially wide application.
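Purely as an illustration of the additive-kernel idea (a smooth shared profile plus shorter-scale in-control deviations), here is a one-dimensional scikit-learn sketch; the kernels, length scales, toy data, and the band-based flagging rule in the comments are assumptions, not the authors' AGP formulation or their two tests for wafer surfaces.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D stand-in for a thickness profile measured along one radius:
# a smooth global trend plus a shorter-scale local deviation.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)[:, None]
y = (np.sin(2 * np.pi * x[:, 0])
     + 0.15 * np.sin(12 * np.pi * x[:, 0])
     + 0.05 * rng.standard_normal(80))

# Additive kernel: a long length-scale term for the standard profile,
# a short length-scale term for in-control deviations, plus noise.
kernel = RBF(length_scale=0.5) + RBF(length_scale=0.05) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

mean, sd = gp.predict(x, return_std=True)
# In the spirit of the paper's tests, a new wafer whose measurements
# fall far outside mean +/- 3 * sd bands would be flagged.
print(gp.kernel_)   # fitted length scales of the additive terms
```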
{"title":"Monitoring wafers’ geometric quality using an additive Gaussian process model","authors":"Linmiao Zhang, Kaibo Wang, Nan Chen","doi":"10.1080/0740817X.2015.1027455","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1027455","url":null,"abstract":"ABSTRACT The geometric quality of a wafer is an important quality characteristic in the semiconductor industry. However, it is difficult to monitor this characteristic during the manufacturing process due to the challenges created by the complexity of the data structure. In this article, we propose an Additive Gaussian Process (AGP) model to approximate a standard geometric profile of a wafer while quantifying the deviations from the standard when a manufacturing process is in an in-control state. Based on the AGP model, two statistical tests are developed to determine whether or not a newly produced wafer is conforming. We have conducted extensive numerical simulations and real case studies, the results of which indicate that our proposed method is effective and has potentially wide application.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"1 - 15"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1027455","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A preposterior analysis to predict identifiability in the experimental calibration of computer models
Paul D. Arendt, D. Apley, Wei Chen
IIE Transactions, vol. 48, pp. 75–88. Pub Date: 2016-01-02. DOI: 10.1080/0740817X.2015.1064554
ABSTRACT When using physical experimental data to adjust, or calibrate, computer simulation models, two general sources of uncertainty that must be accounted for are calibration parameter uncertainty and model discrepancy. This is complicated by the well-known fact that systems to be calibrated are often subject to identifiability problems, in the sense that it is difficult to precisely estimate the parameters and to distinguish between the effects of parameter uncertainty and model discrepancy. We develop a form of preposterior analysis that can be used, prior to conducting physical experiments but after conducting the computer simulations, to predict the degree of identifiability that will result after conducting the physical experiments for a given experimental design. Specifically, we calculate the preposterior covariance matrix of the calibration parameters and demonstrate that, in the examples that we consider, it provides a reasonable prediction of the actual posterior covariance that is calculated after the experimental data are collected. Consequently, the preposterior covariance can be used as a criterion for designing physical experiments to help achieve better identifiability in calibration problems.
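The mechanism is easiest to see in a linear-Gaussian surrogate, where the posterior covariance of the calibration parameters depends on the experimental design but not on the responses, so it can be evaluated before any physical experiment is run. The Python sketch below compares two candidate designs on this basis; the linear emulator, discrepancy kernel, and variances are illustrative assumptions, whereas the paper works with nonlinear Gaussian process models.

```python
import numpy as np

def preposterior_cov(X, prior_cov, sigma2=0.05, ls=0.3, disc_var=0.1):
    """Posterior covariance of theta in y = G(X) theta + delta(X) + eps,
    with a Gaussian prior on theta and a zero-mean GP discrepancy delta
    with RBF covariance. It depends on the design X only, so it can
    serve as a preposterior design criterion."""
    G = np.column_stack([np.ones(len(X)), X])        # toy linear emulator
    d2 = (X[:, None] - X[None, :]) ** 2
    S = disc_var * np.exp(-0.5 * d2 / ls**2) + sigma2 * np.eye(len(X))
    return np.linalg.inv(np.linalg.inv(prior_cov) + G.T @ np.linalg.solve(S, G))

prior = np.eye(2)
for X in (np.linspace(0, 1, 6), np.linspace(0.4, 0.6, 6)):
    # The spread-out design typically yields the smaller trace,
    # i.e., better predicted identifiability of theta.
    print(np.trace(preposterior_cov(X, prior)))
```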
{"title":"A preposterior analysis to predict identifiability in the experimental calibration of computer models","authors":"Paul D. Arendt, D. Apley, Wei Chen","doi":"10.1080/0740817X.2015.1064554","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1064554","url":null,"abstract":"ABSTRACT When using physical experimental data to adjust, or calibrate, computer simulation models, two general sources of uncertainty that must be accounted for are calibration parameter uncertainty and model discrepancy. This is complicated by the well-known fact that systems to be calibrated are often subject to identifiability problems, in the sense that it is difficult to precisely estimate the parameters and to distinguish between the effects of parameter uncertainty and model discrepancy. We develop a form of preposterior analysis that can be used, prior to conducting physical experiments but after conducting the computer simulations, to predict the degree of identifiability that will result after conducting the physical experiments for a given experimental design. Specifically, we calculate the preposterior covariance matrix of the calibration parameters and demonstrate that, in the examples that we consider, it provides a reasonable prediction of the actual posterior covariance that is calculated after the experimental data are collected. Consequently, the preposterior covariance can be used as a criterion for designing physical experiments to help achieve better identifiability in calibration problems.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"75 - 88"},"PeriodicalIF":0.0,"publicationDate":"2016-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1064554","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59751606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designed sampling from large databases for controlled trials
Liwen Ouyang, D. Apley, Sanjay Mehrotra
IIE Transactions, vol. 48, pp. 1087–1097. Pub Date: 2015-09-22. DOI: 10.1080/0740817X.2016.1189633
ABSTRACT Controlled trials are widely used to investigate the effect of a medical treatment. The trial outcome can depend on a set of patient covariates. Traditional approaches have relied primarily on randomized patient sampling and allocation to treatment and control groups. However, when covariate data for a large set of patients are available and the dependence of the outcome on the covariates is of interest, one can potentially design treatment/control groups that provide better estimates of the covariate-dependent effects of the treatment, or that provide similarly accurate estimates with a smaller trial cohort. In this article, we develop an approach that uses optimal Design Of Experiments (DOE) concepts to select the patients for the treatment and control groups upfront, based on their covariate values, in a manner that optimizes the information content in the data. For the selection of the optimal treatment and control groups, we develop simple guidelines and an optimization algorithm that achieve much more accurate estimates of the covariate-dependent effects of the treatment than random sampling. We demonstrate the advantage of our method through both theoretical and numerical performance comparisons. The advantages are more pronounced when the trial cohort size is small relative to the number of records in the database. Moreover, our approach causes no sampling bias in the estimated effects, for the same reason that DOE principles do not bias estimated effects. Although we focus on medical treatment assessment, the approach is applicable in many analytics domains where one wants to conduct a controlled experimental study to identify the covariate-dependent effects of a factor (e.g., a marketing sales promotion), based on a sample of study subjects selected optimally from a large database of covariates.
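As a hedged sketch of the DOE flavor of this selection step, the greedy procedure below chooses study subjects from a synthetic covariate database by maximizing the D-optimality criterion log det(XᵀX); the criterion, seeding, and greedy rule are illustrative stand-ins for the paper's guidelines and optimization algorithm.

```python
import numpy as np

def greedy_d_optimal(X, k, seed=0):
    """Greedily select k rows of the covariate matrix X to maximize
    log det(X_S^T X_S). By the matrix determinant lemma, adding row x
    multiplies det(M) by 1 + x^T M^{-1} x, so each step picks the
    candidate with the largest leverage score."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    S = list(rng.choice(n, size=p, replace=False))   # seed with p rows
    M = X[S].T @ X[S] + 1e-8 * np.eye(p)             # regularized start
    while len(S) < k:
        Minv = np.linalg.inv(M)
        scores = np.einsum('ij,jk,ik->i', X, Minv, X)
        scores[S] = -np.inf                          # no re-selection
        best = int(np.argmax(scores))
        S.append(best)
        M += np.outer(X[best], X[best])
    return np.sort(S)

X = np.random.default_rng(1).normal(size=(5000, 4))  # covariate database
print(greedy_d_optimal(X, k=40)[:10])
```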
{"title":"Designed sampling from large databases for controlled trials","authors":"Liwen Ouyang, D. Apley, Sanjay Mehrotra","doi":"10.1080/0740817X.2016.1189633","DOIUrl":"https://doi.org/10.1080/0740817X.2016.1189633","url":null,"abstract":"ABSTRACT Controlled trials are ubiquitously used to investigate the effect of a medical treatment. The trial outcome can be dependent on a set of patient covariates. Traditional approaches have relied primarily on randomized patient sampling and allocation to treatment and control groups. However, when covariate data for a large set of patients are available and the dependence of the outcome on the covariates is of interest, one can potentially design treatment/control groups that provide better estimates of the covariate-dependent effects of the treatment or provide similarly accurate estimates with a smaller trial cohort size. In this article, we develop an approach that uses optimal Design Of Experiments (DOE) concepts to select the patients for the treatment and control groups upfront, based on their covariate values, in a manner that optimizes the information content in the data. For the optimal treatment and control groups selection, we develop simple guidelines and an optimization algorithm that achieves much more accurate estimates of the covariate-dependent effects of the treatment than random sampling. We demonstrate the advantage of our method through both theoretical and numerical performance comparisons. The advantages are more pronounced when the trial cohort size is smaller, relative to the number of records in the database. Moreover, our approach causes no sampling bias in the estimated effects, for the same reason that DOE principles do not bias estimated effects. Although we focus on medical treatment assessment, the approach has applicability in many analytics application domains where one wants to conduct a controlled experimental study to identify the covariate-dependent effects of a factor (e.g., a marketing sales promotion), based on a sample of study subjects selected optimally from a large database of covariates.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"48 1","pages":"1087 - 1097"},"PeriodicalIF":0.0,"publicationDate":"2015-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2016.1189633","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59756050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling and solving a closed-loop scheduling problem with two types of setups
Subhamoy Ganguly, M. Laguna
IIE Transactions, vol. 47, pp. 880–891. Pub Date: 2015-08-03. DOI: 10.1080/0740817X.2014.928963
Production systems with closed-loop facilities must deal with the problem of sequencing batches in consecutive loops. This article studies a problem encountered in a production facility in which plastic parts of several shapes must be painted in different colors to satisfy the demand given by a set of production orders. The shapes and the colors produce a dual-setup problem that, to the best of our knowledge, has not been considered in the literature. The problem is formulated as a mixed-integer program, and the limitations of this formulation as a viable solution method are discussed. Two alternative heuristic solution approaches are described: a specialized procedure developed from scratch and another built within the framework of commercial software. The computational experiments presented were designed to assess the advantages and disadvantages of both approaches.
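For intuition about the dual-setup structure only, here is a toy nearest-neighbor sequencing heuristic in Python; the setup weights and order data are invented, and this is neither the authors' specialized procedure nor their commercial-software implementation.

```python
def greedy_sequence(orders):
    """Nearest-neighbor heuristic: each batch is a (shape, color) pair,
    and a shape change or a color change between consecutive batches
    incurs its own setup cost (weights below are invented)."""
    SHAPE_SETUP, COLOR_SETUP = 2.0, 1.0
    remaining = list(orders)
    seq, cost = [remaining.pop(0)], 0.0
    while remaining:
        last = seq[-1]
        def setup(o):
            return (SHAPE_SETUP * (o[0] != last[0])
                    + COLOR_SETUP * (o[1] != last[1]))
        nxt = min(remaining, key=setup)   # cheapest next batch
        cost += setup(nxt)
        remaining.remove(nxt)
        seq.append(nxt)
    return seq, cost

orders = [("cup", "red"), ("lid", "red"), ("cup", "blue"), ("lid", "blue")]
print(greedy_sequence(orders))
```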
{"title":"Modeling and solving a closed-loop scheduling problem with two types of setups","authors":"Subhamoy Ganguly, M. Laguna","doi":"10.1080/0740817X.2014.928963","DOIUrl":"https://doi.org/10.1080/0740817X.2014.928963","url":null,"abstract":"Production systems with closed-loop facilities must deal with the problem of sequencing batches in consecutive loops. This article studies a problem encountered in a production facility in which plastic parts of several shapes must be painted with different colors to satisfy the demand given by a set of production orders. The shapes and the colors produce a dual-setup problem that to the best of our knowledge has not been considered in the literature. The problem is formulated as a mixed-integer program and the limitations of this approach as a viable solution method are discussed. Two alternative solution approaches are described that are heuristic in nature: one specialized procedure developed from scratch and the other one built in the framework of commercial software. The presented computational experiments were designed to assess the advantages and disadvantages of both approaches.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"880 - 891"},"PeriodicalIF":0.0,"publicationDate":"2015-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2014.928963","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59745083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust purchasing and information asymmetry in supply chains with a price-only contract
Michael R. Wagner
IIE Transactions, vol. 47, pp. 819–840. Pub Date: 2015-08-03. DOI: 10.1080/0740817X.2014.953644
This article proves that information can be a double-edged sword in supply chains. A simple supply chain is studied that consists of one supplier and one retailer, interacting via a wholesale price contract, where one firm knows the probability distribution of demand and the other knows only its mean and variance. The firm with limited distributional knowledge applies simple robust optimization techniques. It is proved that a firm’s informational advantage is not necessarily beneficial and can lead to a reduction in that firm’s profit, demonstrating the detriment of information. It is shown how the direction of the asymmetry, demand variability, and product economics affect both firms’ profits. These results also clarify how asymmetric information affects the double-marginalization effect on the cumulative profits of the supply chain, in certain cases reducing the effect. The symmetric incomplete-information case, where both firms know only the mean and variance of demand, is also studied, and it is shown that it is possible for both firms to benefit from their collective lack of information. Throughout the article, practical guidelines are identified for when a supplier or retailer is motivated to share, hide, or seek information.
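The robust ingredient is of the Scarf type: with only the mean and variance of demand, a newsvendor can use Scarf's distribution-free order quantity (in the Gallego–Moon form, assuming zero salvage value, no shortage penalty, and that ordering is profitable). Below is a small Python sketch comparing it with the critical-fractile quantity an informed firm would use under normal demand; the numbers are illustrative, and the paper embeds such decisions in a wholesale price contract between two firms.

```python
import math
from statistics import NormalDist

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's max-min newsvendor quantity when only the mean mu and
    standard deviation sigma of demand are known."""
    r = (price - cost) / cost            # margin per unit of cost
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

mu, sigma, price, cost = 100.0, 30.0, 10.0, 6.0
q_robust = scarf_order_quantity(mu, sigma, price, cost)
# Informed benchmark: critical fractile (price - cost) / price
# under N(mu, sigma^2) demand.
q_normal = mu + sigma * NormalDist().inv_cdf((price - cost) / price)
print(q_robust, q_normal)
```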
{"title":"Robust purchasing and information asymmetry in supply chains with a price-only contract","authors":"Michael R. Wagner","doi":"10.1080/0740817X.2014.953644","DOIUrl":"https://doi.org/10.1080/0740817X.2014.953644","url":null,"abstract":"This article proves that information can be a double-edged sword in supply chains. A simple supply chain is studied that consists of one supplier and one retailer, interacting via a wholesale price contract, where one firm knows the probabilistic distribution of demand and the other only knows the mean and variance. The firm with limited distributional knowledge applies simple robust optimization techniques. It is proved that a firm’s informational advantage is not necessarily beneficial and can lead to a reduction of the firm’s profit, demonstrating the detriment of information. It is shown how the direction of asymmetry, demand variability, and product economics affect both firms’ profits. These results also provide an understanding of how asymmetric information impacts the double-marginalization effect for the cumulative profits of the supply chain in certain cases reducing the effect. The symmetric incomplete informational case, where both firms only know the mean and variance of demand, is also studied and it is shown that it is possible that both firms can benefit from their collective lack of information. Throughout this article, practical guidelines where a supplier or retailer is motivated to share, hide, or seek information are identified.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"819 - 840"},"PeriodicalIF":0.0,"publicationDate":"2015-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2014.953644","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59746353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Product allocation problem for an AS/RS with multiple in-the-aisle pick positions
F. Ramtin, Jennifer A. Pazour
IIE Transactions, vol. 47, pp. 1379–1396. Pub Date: 2015-07-17. DOI: 10.1080/0740817X.2015.1027458
An automated storage/retrieval system with multiple in-the-aisle pick positions is a semi-automated, case-level order fulfillment technology that is widely used in distribution centers. We study the impact of product-to-pick-position assignments on the expected throughput for different operating policies, demand profiles, and shape factors. We develop efficient algorithms of complexity O(n log n) that provide the assignment minimizing the expected travel time. Also, for different operating policies, shape configurations, and demand curves, we explore the structure of the optimal assignment of products to pick positions and quantify the difference between a simple, practical assignment policy and the optimal assignment. Finally, we derive closed-form analytical travel time models by approximating the optimal assignment's expected travel time using continuous demand curves and assuming an infinite number of pick positions in the aisle. We show that these continuous models estimate the travel time of a discrete rack well and use them to find optimal design configurations.
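The sorting idea behind an O(n log n) assignment can be sketched directly: by the rearrangement inequality, pairing the highest-demand products with the quickest pick positions minimizes expected travel under a single-command policy. In the Python sketch below, the demand and travel-time data are invented, and the paper's algorithms cover more general operating policies and rack shape factors.

```python
def assign_products(demand, travel_time):
    """Pair products (sorted by descending demand) with pick positions
    (sorted by ascending travel time); expected single-command travel
    is sum_i P(pick product i) * travel time of its position, which
    this pairing minimizes. Cost is dominated by the two sorts."""
    products = sorted(range(len(demand)), key=lambda i: -demand[i])
    positions = sorted(range(len(travel_time)), key=lambda j: travel_time[j])
    assignment = dict(zip(products, positions))      # product -> position
    expected = (sum(demand[i] * travel_time[assignment[i]] for i in assignment)
                / sum(demand))
    return assignment, expected

demand = [120, 45, 300, 80]        # picks per day for each product
travel = [1.0, 1.4, 2.2, 3.0]      # minutes to each pick position
print(assign_products(demand, travel))
```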
{"title":"Product allocation problem for an AS/RS with multiple in-the-aisle pick positions","authors":"F. Ramtin, Jennifer A. Pazour","doi":"10.1080/0740817X.2015.1027458","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1027458","url":null,"abstract":"An automated storage/retrieval system with multiple in-the-aisle pick positions is a semi-automated case-level order fulfillment technology that is widely used in distribution centers. We study the impact of product to pick position assignments on the expected throughput for different operating policies, demand profiles, and shape factors. We develop efficient algorithms of complexity O(nlog(n)) that provide the assignment that minimizes the expected travel time. Also, for different operating policies, shape configurations, and demand curves, we explore the structure of the optimal assignment of products to pick positions and quantify the difference between using a simple, practical assignment policy versus the optimal assignment. Finally, we derive closed-form analytical travel time models by approximating the optimal assignment's expected travel time using continuous demand curves and assuming an infinite number of pick positions in the aisle. We illustrate that these continuous models work well in estimating the travel time of a discrete rack and use them to find optimal design configurations.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"1379 - 1396"},"PeriodicalIF":0.0,"publicationDate":"2015-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1027458","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation optimization in inventory replenishment: a classification
H. Jalali, I. Nieuwenhuyse
IIE Transactions, vol. 47, pp. 1217–1235. Pub Date: 2015-06-26. DOI: 10.1080/0740817X.2015.1019162
Simulation optimization is increasingly popular for solving complicated and mathematically intractable business problems. Focusing on academic articles published between 1998 and 2013, the present survey aims to determine the extent to which simulation optimization has been used to solve practical inventory problems (as opposed to small, theoretical “toy problems”) and to detect any trends that have arisen (e.g., popular topics, effective simulation optimization methods, frequently studied inventory system structures). We find that metaheuristics (especially genetic algorithms) and methods that combine several simulation optimization techniques are the most popular. The resulting categorizations provide a useful overview for researchers studying complex inventory management problems, giving detailed information on the inventory system characteristics and the employed simulation optimization techniques and highlighting articles that involve stochastic constraints (e.g., expected fill rate constraints) or that employ a robust simulation optimization approach. Finally, in highlighting both trends and gaps in the research field, this review suggests avenues for further research.
{"title":"Simulation optimization in inventory replenishment: a classification","authors":"H. Jalali, I. Nieuwenhuyse","doi":"10.1080/0740817X.2015.1019162","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1019162","url":null,"abstract":"Simulation optimization is increasingly popular for solving complicated and mathematically intractable business problems. Focusing on academic articles published between 1998 and 2013, the present survey aims to unveil the extent to which simulation optimization has been used to solve practical inventory problems (as opposed to small, theoretical “toy problem”), and to detect any trends that might have arisen (e.g., popular topics, effective simulation optimization methods, frequently studied inventory system structures). We find that metaheuristics (especially genetic algorithms) and methods that combine several simulation optimization techniques are the most popular. The resulting categorizations provide a useful overview for researchers studying complex inventory management problems, by providing detailed information on the inventory system characteristics and the employed simulation optimization techniques, highlighting articles that involve stochastic constraints (e.g., expected fill rate constraints) or that employ a robust simulation optimization approach. Finally, in highlighting both trends and gaps in the research field, this review suggests avenues for further research.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"86 1 1","pages":"1217 - 1235"},"PeriodicalIF":0.0,"publicationDate":"2015-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1019162","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59749726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Production planning and inventory control for a two-product recovery system
Jie Pan, Yi Tao, L. Lee, E. P. Chew
IIE Transactions, vol. 47, pp. 1342–1362. Pub Date: 2015-06-22. DOI: 10.1080/0740817X.2015.1056389
The significance of product recovery through remanufacturing has been widely recognized and has compelled manufacturers to incorporate product recovery activities into normal manufacturing processes. Consequently, increasing attention has been paid to the production and inventory management of product recovery systems, where demand is satisfied either by manufacturing brand-new products or by remanufacturing returned products into new ones. In this work, we investigate a recovery system with two product types and two return flows. A periodic-review inventory problem is addressed in the two-product recovery system, and an approximate dynamic programming approach is proposed to obtain production and recovery decisions. A single-period problem is first solved, and the optimal solution is characterized by a multilevel threshold policy. For the multi-period problem, we show that the threshold levels of each period depend solely on the gradients of the cost-to-go function at points of interest after approximation. The gradients are estimated by an infinitesimal perturbation analysis–based method, and a backward induction approach is then applied to derive the threshold levels of each period. Numerical experiments are conducted under different scenarios, and the threshold policy is shown to outperform two other heuristic policies.
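As a loose illustration of how a threshold policy operates, the Python sketch below simulates a stationary remanufacture-up-to-R / manufacture-up-to-S rule for a single product type with random demand and returns; the paper instead derives period-dependent multilevel thresholds for two product types via approximate dynamic programming, so the policy form, costs, and distributions here are all assumptions.

```python
import numpy as np

def simulate_threshold_policy(S=20, R=12, periods=520, seed=0):
    """Each period: remanufacture returned cores up to threshold R
    (cheaper), top serviceable stock up to S by manufacturing, then
    face Poisson demand and receive Poisson returns."""
    rng = np.random.default_rng(seed)
    serviceable, cores, cost = 0, 0, 0.0
    C_MAN, C_REMAN, C_HOLD, C_BACK = 5.0, 2.0, 0.5, 8.0  # invented costs
    for _ in range(periods):
        reman = min(cores, max(R - serviceable, 0))
        man = max(S - serviceable - reman, 0)
        cores -= reman
        serviceable += reman + man
        cost += C_REMAN * reman + C_MAN * man
        serviceable -= rng.poisson(15)          # demand (backorders < 0)
        cores += rng.poisson(6)                 # return flow
        cost += C_HOLD * max(serviceable, 0) + C_BACK * max(-serviceable, 0)
    return cost / periods                       # average cost per period

print(simulate_threshold_policy())
```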
{"title":"Production planning and inventory control for a two-product recovery system","authors":"Jie Pan, Yi Tao, L. Lee, E. P. Chew","doi":"10.1080/0740817X.2015.1056389","DOIUrl":"https://doi.org/10.1080/0740817X.2015.1056389","url":null,"abstract":"The significance of product recovery through remanufacturing has been widely recognized and has compelled manufacturers to incorporate product recovery activities into normal manufacturing processes. Consequently, increasing attention has been paid to production and inventory management of the product recovery system where demand is satisfied through either manufacturing brand-new products or remanufacturing returned products into new ones. In this work, we investigate a recovery system with two product types and two return flows. A periodic-review inventory problem is addressed in the two-product recovery system and an approximate dynamic programming approach is proposed to obtain production and recovery decisions. A single-period problem is first solved and the optimal solution is characterized by a multilevel threshold policy. For the multi-period problem, we show that the threshold levels of each period are solely dependent on the gradients of the cost-to-go function at points of interest after approximation. The gradients are estimated by an infinitesimal perturbation analysis–based method and a backward induction approach is then applied to derive the threshold levels of each period. Numerical experiments are conducted under different scenarios and the threshold policy is shown to outperform two other heuristic policies.","PeriodicalId":13379,"journal":{"name":"IIE Transactions","volume":"47 1","pages":"1342 - 1362"},"PeriodicalIF":0.0,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/0740817X.2015.1056389","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59750631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}