Mean queue time approximation for a workstation with cascading
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822304
Kan Wu, Ning Zhao
Queueing models can be used to evaluate the performance of manufacturing systems. Due to the emergence of cluster tools in contemporary production systems, proper queueing models have to be derived to evaluate the performance of machines with complex configurations. Job cascading is a common structure among cluster tools. Because of the blocking and starvation effects among servers, queue time analysis for a cluster tool with job cascading is difficult in general. Based on insights from the reduction method, we propose an approximate model for the mean queue time of a cascading machine subject to breakdowns. The model is validated by simulation and performs well in the examined cases.
{"title":"Mean queue time approximation for a workstation with cascading","authors":"Kan Wu, Ning Zhao","doi":"10.1109/WSC.2016.7822304","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822304","url":null,"abstract":"Queueing models can be used to evaluate the performance of manufacturing systems. Due to the emergence of cluster tools in contemporary production systems, proper queueing models have to be derived to evaluate the performance of machines with complex configurations. Job cascading is a common structure among cluster tools. Because of the blocking and starvation effects among servers, queue time analysis for a cluster tool with job cascading is difficult in general. Based on the insight from the reduction method, we proposed the approximate model for the mean queue time of a cascading machine subject to breakdowns. The model is validated by simulation and performs well in the examined cases.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132591428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate dynamic programming algorithms for United States Air Force officer sustainment
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822341
Joseph C. Hoecherl, M. Robbins, R. Hill, D. Ahner
We consider the problem of making accession and promotion decisions in the United States Air Force officer sustainment system. Accession decisions determine how many officers should be hired into the system at the lowest grade for each career specialty. Promotion decisions determine how many officers should be promoted to the next highest grade. We formulate a Markov decision process model to examine this military workforce planning problem. The large size of the problem instance motivating this research suggests that classical exact dynamic programming methods are inappropriate. As such, we develop and test approximate dynamic programming (ADP) algorithms to determine high-quality personnel policies relative to current practice. Our best ADP algorithm attains a statistically significant 2.8 percent improvement over the sustainment line policy currently employed by the USAF, which serves as the benchmark policy.
{"title":"Approximate dynamic programming algorithms for United States Air Force officer sustainment","authors":"Joseph C. Hoecherl, M. Robbins, R. Hill, D. Ahner","doi":"10.1109/WSC.2016.7822341","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822341","url":null,"abstract":"We consider the problem of making accession and promotion decisions in the United States Air Force officer sustainment system. Accession decisions determine how many officers should be hired into the system at the lowest grade for each career specialty. Promotion decisions determine how many officers should be promoted to the next highest grade. We formulate a Markov decision process model to examine this military workforce planning problem. The large size of the problem instance motivating this research suggests that classical exact dynamic programming methods are inappropriate. As such, we develop and test approximate dynamic programming (ADP) algorithms to determine high-quality personnel policies relative to current practice. Our best ADP algorithm attains a statistically significant 2.8 percent improvement over the sustainment line policy currently employed by the USAF which serves as the benchmark policy.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122419340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speeding up pairwise comparisons for large scale ranking and selection
Pub Date : 2016-12-11 DOI: 10.5555/3042094.3042199
L. Hong, Jun Luo, Ying Zhong
Classical sequential ranking-and-selection (R&S) procedures require all pairwise comparisons after collecting one additional observation from each surviving system, which is typically an O(k²) operation, where k is the number of systems. When the number of systems is large (e.g., millions), these comparisons can be very costly and may significantly slow down the R&S procedures. In this paper we revise the KN procedure slightly and show that the computational complexity of all pairwise comparisons may be reduced to an O(k) operation, thus significantly reducing the computational burden. Numerical experiments show that the computational time is reduced by orders of magnitude even for moderate numbers of systems.
{"title":"Speeding up pairwise comparisons for large scale ranking and selection","authors":"L. Hong, Jun Luo, Ying Zhong","doi":"10.5555/3042094.3042199","DOIUrl":"https://doi.org/10.5555/3042094.3042199","url":null,"abstract":"Classical sequential ranking-and-selection (R&S) procedures require all pairwise comparisons after collecting one additional observation from each surviving system, which is typically an O(k2) operation where k is the number of systems. When the number of systems is large (e.g., millions), these comparisons can be very costly and may significantly slow down the R&S procedures. In this paper we revise KN procedure slightly and show that one may reduce the computational complexity of all pairwise comparisons to an O(k) operation, thus significantly reducing the computational burden. Numerical experiments show that the computational time reduces by orders of magnitude even for moderate numbers of systems.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"50 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120987496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Programming agent-based demographic models with cross-state and message-exchange dependencies: A study with speculative PDES and automatic load-sharing
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822156
Alessandro Pellegrini, F. Quaglia, Cristina Montañola-Sales, Josep Casanovas-García
Agent-based modeling and simulation is a versatile and promising methodology to capture complex interactions among entities and their surrounding environment. A great advantage is its ability to model phenomena at a macro scale by exploiting simpler descriptions at a micro level. It has proven effective in many fields, and it is rapidly becoming a de facto standard in the study of population dynamics. In this article we study programmability and performance aspects of the latest-generation ROOT-Sim speculative PDES environment for multi/many-core shared-memory architectures. ROOT-Sim transparently offers a programming model where interactions can be based on both explicit message passing and in-place state accesses. We introduce programming guidelines for systematic exploitation of these facilities in agent-based simulations, and we study the performance effects of an innovative load-sharing policy targeting these types of dependencies. An experimental assessment with synthetic and real-world applications is provided to validate our proposal.
{"title":"Programming agent-based demographic models with cross-state and message-exchange dependencies: A study with speculative PDES and automatic load-sharing","authors":"Alessandro Pellegrini, F. Quaglia, Cristina Montañola-Sales, Josep Casanovas-García","doi":"10.1109/WSC.2016.7822156","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822156","url":null,"abstract":"Agent-based modeling and simulation is a versatile and promising methodology to capture complex interactions among entities and their surrounding environment. A great advantage is its ability to model phenomena at a macro scale by exploiting simpler descriptions at a micro level. It has been proven effective in many fields, and it is rapidly becoming a de-facto standard in the study of population dynamics. In this article we study programmability and performance aspects of the last-generation ROOT-Sim speculative PDES environment for multi/many-core shared-memory architectures. ROOT-Sim transparently offers a programming model where interactions can be based on both explicit message passing and in-place state accesses. We introduce programming guidelines for systematic exploitation of these facilities in agent-based simulations, and we study the effects on performance of an innovative load-sharing policy targeting these types of dependencies. An experimental assessment with synthetic and real-world applications is provided, to assess the validity of our proposal.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"378 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115916080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal computing budget allocation with exponential underlying distribution
Pub Date : 2016-12-11 DOI: 10.5555/3042094.3042191
Fei Gao, Siyang Gao
In this paper, we consider the simulation budget allocation problem of maximizing the probability of selecting the best simulated design in ordinal optimization. This problem has been studied extensively on the basis of the normal distribution. In this research, we consider the budget allocation problem when the underlying distribution is exponential, a case widely seen in simulation practice. We derive an asymptotic closed-form allocation rule that is easy to compute and implement in practice, and provide some useful insights for the optimal budget allocation problem with an exponential underlying distribution.
{"title":"Optimal computing budget allocation with exponential underlying distribution","authors":"Fei Gao, Siyang Gao","doi":"10.5555/3042094.3042191","DOIUrl":"https://doi.org/10.5555/3042094.3042191","url":null,"abstract":"In this paper, we consider the simulation budget allocation problem to maximize the probability of selecting the best simulated design in ordinal optimization. This problem has been studied extensively on the basis of the normal distribution. In this research, we consider the budget allocation problem when the underlying distribution is exponential. This case is widely seen in simulation practice. We derive an asymptotic closed-form allocation rule which is easy to compute and implement in practice, and provide some useful insights for the optimal budget allocation problem with exponential underlying distribution.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116699438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simulation analytics approach to dynamic risk monitoring
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822110
Guangxin Jiang, L. J. Hong, Barry L. Nelson
Simulation has been widely used as a tool to estimate risk measures of financial portfolios. However, the sample paths generated in a simulation study are often discarded once the estimate of the risk measure is obtained. In this article, we suggest storing the simulation data and propose a logistic regression-based approach to mining them. We show that, at any time and conditional on the market conditions at that time, we can quickly estimate the portfolio risk measures and classify the portfolio into either a low-risk or a high-risk category. We call this problem dynamic risk monitoring. We study the properties of our estimators and classifiers, and demonstrate the effectiveness of our approach through numerical studies.
{"title":"A simulation analytics approach to dynamic risk monitoring","authors":"Guangxin Jiang, L. J. Hong, Barry L. Nelson","doi":"10.1109/WSC.2016.7822110","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822110","url":null,"abstract":"Simulation has been widely used as a tool to estimate risk measures of financial portfolios. However, the sample paths generated in the simulation study are often discarded after the estimate of the risk measure is obtained. In this article, we suggest to store the simulation data and propose a logistic regression based approach to mining them. We show that, at any time and conditioning on the market conditions at the time, we can quickly estimate the portfolio risk measures and classify the portfolio into either low risk or high risk categories. We call this problem dynamic risk monitoring. We study the properties of our estimators and classifiers, and demonstrate the effectiveness of our approach through numerical studies.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115466462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data analytics for modeling WAT parameter variation induced by process tool in semiconductor manufacturing and empirical study
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822290
Chen-Fu Chien, Ying-Jen Chen, Jei-Zheng Wu
With feature size shrinkage in advanced technology nodes, the modeling of process variations has become more critical for troubleshooting and yield enhancement. Misalignment among equipment tools or chambers in process stages is a major source of process variations. Because a process flow contains hundreds of stages during semiconductor fabrication, tool/chamber misalignment can significantly affect the variation of transistor parameters in the wafer acceptance test (WAT). This study proposes a big data analytic framework that simultaneously considers the mean difference between tools and the wafer-to-wafer variation, and identifies possible root causes for yield enhancement. An empirical study was conducted to demonstrate the effectiveness of the proposed approach, with promising results.
{"title":"Big data analytics for modeling WAT parameter variation induced by process tool in semiconductor manufacturing and empirical study","authors":"Chen-Fu Chien, Ying-Jen Chen, Jei-Zheng Wu","doi":"10.1109/WSC.2016.7822290","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822290","url":null,"abstract":"With the feature size shrinkage in advanced technology nodes, the modeling of process variations has become more critical for troubleshooting and yield enhancement. Misalignment among equipment tools or chambers in process stages is a major source of process variations. Because a process flow contains hundreds of stages during semiconductor fabrication, tool/chamber misalignment may more significantly affect the variation of transistor parameters in a wafer acceptance test. This study proposes a big data analytic framework that simultaneously considers the mean difference between tools and wafer-to-wafer variation and identifies possible root causes for yield enhancement. An empirical study was conducted to demonstrate the effectiveness of proposed approach and obtained promising results.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114702202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Model Thinking
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822072
S. Page
Models help us to understand, explain, predict, and act. They do so by simplifying reality or by constructing artificial analogs. As a result, any one model may be insufficient to capture the complexity of a process. By applying ensembles of diverse models, we can reach deeper understanding, make better predictions, take wiser actions, implement better designs, and reveal multiple logics. This many-to-one approach offers the possibility of approaching truth at what Richard Levins has called “the intersection of independent lies.”
{"title":"Many Model Thinking","authors":"S. Page","doi":"10.1109/WSC.2016.7822072","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822072","url":null,"abstract":"Models help us to understand, explain, predict, and act. They do so by simplifying reality or by constructing artificial analogs. As a result, any one model by be insufficient to capture the complexity of a process. By applying ensembles of diverse models, we can reach deeper understanding, make better predictions, take wiser actions, implement better designs, and reveal multiple logics. This many to one approach offers the possibility of near truth exists at what Richard Levins has called “the intersection of independent lies.”","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"312 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114718358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AlphaGo and Monte Carlo tree search: The simulation optimization perspective
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822130
M. Fu
In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy! (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of upper confidence bounds (UCBs) for the Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.
{"title":"AlphaGo and Monte Carlo tree search: The simulation optimization perspective","authors":"M. Fu","doi":"10.1109/WSC.2016.7822130","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822130","url":null,"abstract":"In March of 2016, Google DeepMind's AlphaGo, a computer Go-playing program, defeated the reigning human world champion Go player, 4-1, a feat far more impressive than previous victories by computer programs in chess (IBM's Deep Blue) and Jeopardy (IBM's Watson). The main engine behind the program combines machine learning approaches with a technique called Monte Carlo tree search. Current versions of Monte Carlo tree search used in Go-playing algorithms are based on a version developed for games that traces its roots back to the adaptive multi-stage sampling simulation optimization algorithm for estimating value functions in finite-horizon Markov decision processes (MDPs) introduced by Chang et al. (2005), which was the first use of Upper Confidence Bounds (UCBs) for Monte Carlo simulation-based solution of MDPs. We review the main ideas in UCB-based Monte Carlo tree search by connecting it to simulation optimization through the use of two simple examples: decision trees and tic-tac-toe.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121217168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of modeling tools for autocorrelated input processes
Pub Date : 2016-12-11 DOI: 10.1109/WSC.2016.7822164
Tobias Uhlig, O. Rose, S. Rank
Queueing systems in any domain oftentimes exhibit correlated arrivals that considerably influence system behavior. Unfortunately, the vast majority of simulation modeling applications and programming languages do not provide the means to properly model the corresponding input processes. In order to obtain valid models, there is a substantial need for tools capable of modeling autocorrelated input processes. Accordingly, this paper provides a review of available tools to fit and model these processes. In addition to a brief theoretical discussion of the approaches, we provide a tool evaluation from a practitioner's perspective. The assessment of the tools is based on their ability to model input processes that are either fitted to a trace or defined explicitly by their characteristics, i.e., the marginal distribution and autocorrelation coefficients. In our experiments, we found that tools relying on autoregressive models performed best.
{"title":"Evaluation of modeling tools for autocorrelated input processes","authors":"Tobias Uhlig, O. Rose, S. Rank","doi":"10.1109/WSC.2016.7822164","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822164","url":null,"abstract":"Queuing systems of any domain oftentimes exhibit correlated arrivals that considerably influence system behavior. Unfortunately, the vast majority of simulation modeling applications and programming languages do not provide the means to properly model the corresponding input processes. In order to obtain valid models, there is a substantial need for tools capable of modeling autocorrelated input processes. Accordingly, this paper provides a review of available tools to fit and model these processes. In addition to a brief theoretical discussion of the approaches, we provide tool evaluation from a practitioners perspective. The assessment of the tools is based on their ability to model input processes that are either fitted to a trace or defined explicitly by their characteristics, i.e., the marginal distribution and autocorrelation coefficients. In our experiments we found that tools relying on autoregressive models performed the best.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"210 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123385709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}