Hafi—Highest Autocorrelated First: A new priority rule to control autocorrelated input processes at merges
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822330
S. Rank, F. Schulze, T. Schmidt
Many intralogistics systems exhibit autocorrelated arrival processes that significantly influence system performance. Unfortunately, no control strategies are available that take this into account. Instead, standard strategies such as First Come First Served are applied; these perform well for uncorrelated processes, but with autocorrelated arrivals they lead to systems with long queues and high volatility. There is therefore a strong need for control strategies that manage autocorrelated arrival processes. Accordingly, this paper introduces HAFI (Highest Autocorrelated First), a new strategy that prioritizes processes according to their autocorrelation. The paper focuses on controlling autocorrelated arrival processes at a merge, with First Come First Served and Longest Queue First serving as references. For properly designed facilities, HAFI leads to comparatively short queues and waiting times as well as balanced 95th-percentile queue lengths across the autocorrelated input processes.
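The abstract defines HAFI only at the level of its ordering principle. As a rough illustration (not the authors' implementation), the sketch below picks, at each service decision of the merge, the non-empty input whose recent interarrival times show the highest lag-1 autocorrelation; the lag-1 estimator, the sliding window, and the longest-queue tie-break are assumptions.

```python
import numpy as np

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation of a series of interarrival times."""
    x = np.asarray(series, dtype=float)
    if len(x) < 3 or x.std() == 0:
        return 0.0
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def hafi_pick(queue_lengths, interarrival_history, window=50):
    """Choose the next input to serve at the merge: among non-empty inputs,
    take the one whose recent arrivals show the strongest autocorrelation;
    break ties in favor of the longer queue."""
    candidates = [k for k, q in queue_lengths.items() if q > 0]
    if not candidates:
        return None
    return max(candidates,
               key=lambda k: (lag1_autocorrelation(interarrival_history[k][-window:]),
                              queue_lengths[k]))

# toy usage: input "B" has positively correlated interarrival times
rng = np.random.default_rng(0)
hist_a = list(rng.exponential(1.0, 200))          # roughly uncorrelated arrivals
hist_b = [1.0]
for _ in range(199):                              # simple AR(1)-style dependence, kept positive
    hist_b.append(0.8 * hist_b[-1] + 0.2 * rng.exponential(1.0))
print(hafi_pick({"A": 4, "B": 3}, {"A": hist_a, "B": hist_b}))
```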
{"title":"Hafi—Highest Autocorrelated First: A new priority rule to control autocorrelated input processes at merges","authors":"S. Rank, F. Schulze, T. Schmidt","doi":"10.1109/WSC.2016.7822330","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822330","url":null,"abstract":"Many intralogistics systems expose autocorrelated arrival processes with significant influence on the systems' performance. Unfortunately there are no control strategies available which take this into account. Instead standard strategies like First Come First Served are applied which lead to systems tending to exhibit long queues and high volatility, even though these strategies perform well in the case of uncorrelated processes. So, there is a strong need for control strategies managing autocorrelated arrival processes. Accordingly this paper introduces HAFI (Highest Autocorrelated First), a new strategy which determines the processes' priority in accordance to their autocorrelation. The paper focuses on controlling autocorrelated arrival processes at a merge. The strategies First Come First Served and Longest Queue First will serve as reference. As a result and in respect to properly designed facilities, HAFI leads to comparatively short queues and waiting times as well as balanced 95th percentile values of the queue lengths of autocorrelated input processes.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"14 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127593405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outplacement time and probability estimation using discrete event simulation
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822159
S. Singh, R. Pimplikar, Ritwik Chaudhuri, G. Parija
In today's rapidly changing technological landscape, tech giants revise their strategic alignment every couple of years. As a result, their workforce has to be adapted to the organization's strategy. Members of the workforce who are neither relevant to the strategic alignment nor can be made relevant by reskilling have to be either outplaced (i.e., placed in another job within the organization) or separated from the organization. In geographies like Europe, where the cost of separation is very high, it is very important to make the right decision for each employee. In this paper, we describe a simulation-based methodology to estimate the probability and time of outplacement of an employee. These estimates are inputs to a global problem of making the optimal decision for the entire workforce.
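The abstract does not detail the discrete-event model. Purely as a hypothetical stand-in, the following Monte Carlo sketch estimates an employee's outplacement probability and expected time to placement from repeated runs over weekly matching opportunities; the weekly match probability and the one-year horizon are invented parameters.

```python
import random

def simulate_outplacement(match_prob_per_week=0.04, horizon_weeks=52,
                          n_runs=10_000, seed=1):
    """Monte Carlo estimate of P(outplaced within horizon) and the mean
    time to outplacement, conditional on being outplaced."""
    rng = random.Random(seed)
    placed_weeks = []
    for _ in range(n_runs):
        for week in range(1, horizon_weeks + 1):
            # a suitable internal opening appears and the employee is matched
            if rng.random() < match_prob_per_week:
                placed_weeks.append(week)
                break
    p_outplaced = len(placed_weeks) / n_runs
    mean_time = sum(placed_weeks) / len(placed_weeks) if placed_weeks else float("nan")
    return p_outplaced, mean_time

p, t = simulate_outplacement()
print(f"P(outplaced within a year) ~ {p:.2f}, mean time ~ {t:.1f} weeks")
```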
{"title":"Outplacement time and probability estimation using discrete event simulation","authors":"S. Singh, R. Pimplikar, Ritwik Chaudhuri, G. Parija","doi":"10.1109/WSC.2016.7822159","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822159","url":null,"abstract":"In today's rapidly changing technological scenario, tech giants revise their strategic alignment every couple of years. As a result, their workforce has to be adapted to the organization's strategy. Members of the workforce who are neither relevant to the strategic alignment, nor can be made relevant by reskilling, have to be either outplaced (i.e. placed in an another job within organization) or separated from the organization. In geographies like Europe, where the cost of separation is very high, it becomes very important to make the right decision for each employee. In this paper, we describe a simulation based methodology to find the probability and time of outplacement of an employee. These numbers are inputs to a global problem of making the optimal decision for the entire workforce.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"65 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128020637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sources of unresolvable uncertainties in weakly predictive distributed virtual environments
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822346
Jeremy R. Millar, Jason A. Blake, D. Hodson, J.O. Miller, R. Hill
This work expands the notion of unresolvable uncertainties due to modeling issues in weakly predictive simulations to include unique, implementation-induced sources that originate from fundamental trade-offs in distributed virtual environments. We consider these trade-offs in terms of the Consistency, Availability, and Partition tolerance (CAP) theorem to abstract away technical implementation details. Doing so illuminates systemic properties of weakly predictive simulations, including their ability to produce plausible responses. The plausibility property in particular is related to fairness concerns in distributed gaming and other interactive environments.
{"title":"Sources of unresolvable uncertainties in weakly predictive distributed virtual environments","authors":"Jeremy R. Millar, Jason A. Blake, D. Hodson, J.O. Miller, R. Hill","doi":"10.1109/WSC.2016.7822346","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822346","url":null,"abstract":"This work expands the notion of unresolvable uncertainties due to modeling issues in weakly predictive simulations to include unique implementation induced sources that originate from fundamental trade-offs associated with distributed virtual environments. We consider these trade-offs in terms of the Consistency, Availability, and Partition tolerance (CAP) theorem to abstract away technical implementation details. Doing so illuminates systemic properties of weakly predictive simulations, including their ability to produce plausible responses. The plausibility property in particular is related to fairness concerns in distributed gaming and other interactive environments.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133437624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning stochastic model discrepancy
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822108
M. Plumlee, H. Lam
The vast majority of stochastic simulation models are imperfect in that they fail to fully emulate the real dynamics. Despite this, imperfect models are still useful in practice, as long as one knows how the model is inexact. This inexactness is measured by a discrepancy between the proposed stochastic model and the true stochastic distribution across multiple values of some decision variables. In this paper, we propose a method to learn the discrepancy of a stochastic simulation using data collected from the system of interest. Our approach is a novel Bayesian framework that addresses the requirements for estimating probability measures.
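The Bayesian framework itself is not described in the abstract. The sketch below only illustrates the target quantity with a much simpler, non-Bayesian stand-in: an empirical total-variation distance between simulated and observed output distributions at each value of the decision variable. The binning and the distance metric are illustrative choices, not the authors' method.

```python
import numpy as np

def empirical_pmf(samples, bins):
    """Histogram-based estimate of an output distribution."""
    counts, _ = np.histogram(samples, bins=bins)
    return counts / counts.sum()

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

def discrepancy_by_decision(sim_outputs, real_outputs, bins):
    """sim_outputs / real_outputs: dict decision_value -> array of outcomes."""
    return {d: total_variation(empirical_pmf(sim_outputs[d], bins),
                               empirical_pmf(real_outputs[d], bins))
            for d in sim_outputs}

# toy example: the simulator misses the extra variability at decision x = 2
rng = np.random.default_rng(0)
bins = np.linspace(0, 10, 21)
sim  = {1: rng.normal(4, 1.0, 5000), 2: rng.normal(6, 1.0, 5000)}
real = {1: rng.normal(4, 1.0, 5000), 2: rng.normal(6, 2.0, 5000)}
print(discrepancy_by_decision(sim, real, bins))
```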
{"title":"Learning stochastic model discrepancy","authors":"M. Plumlee, H. Lam","doi":"10.1109/WSC.2016.7822108","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822108","url":null,"abstract":"The vast majority of stochastic simulation models are imperfect in that they fail to fully emulate the entirety of real dynamics. Despite this, these imperfect models are still useful in practice, so long as one knows how the model is inexact. This inexactness is measured by a discrepancy between the proposed stochastic model and a true stochastic distribution across multiple values of some decision variables. In this paper, we propose a method to learn the discrepancy of a stochastic simulation using data collected from the system of interest. Our approach is a novel Bayesian framework that addresses the requirements for estimation of probability measures.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131932539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big data analytics for modeling WAT parameter variation induced by process tool in semiconductor manufacturing and empirical study
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822290
Chen-Fu Chien, Ying-Jen Chen, Jei-Zheng Wu
With feature-size shrinkage at advanced technology nodes, modeling process variations has become more critical for troubleshooting and yield enhancement. Misalignment among equipment tools or chambers across process stages is a major source of process variation. Because a process flow contains hundreds of stages during semiconductor fabrication, tool/chamber misalignment can significantly affect the variation of transistor parameters in the wafer acceptance test (WAT). This study proposes a big data analytic framework that simultaneously considers the mean difference between tools and wafer-to-wafer variation and identifies possible root causes for yield enhancement. An empirical study was conducted to demonstrate the effectiveness of the proposed approach, with promising results.
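The abstract names the two variance sources the framework considers (mean differences between tools and wafer-to-wafer variation). One standard way to separate them, shown here purely as an illustration rather than the authors' method, is a per-stage one-way ANOVA of a WAT parameter grouped by the tool that processed each wafer; the data layout and parameter values are assumed.

```python
import numpy as np

def tool_mean_difference_score(values_by_tool):
    """One-way ANOVA decomposition for one WAT parameter at one stage:
    returns (between-tool mean square, within-tool mean square, F ratio)."""
    groups = [np.asarray(v, dtype=float) for v in values_by_tool.values()]
    grand = np.concatenate(groups).mean()
    k = len(groups)
    n = sum(len(g) for g in groups)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between, ms_within, ms_between / ms_within

# toy example: tool_C is misaligned at this stage
rng = np.random.default_rng(7)
wat = {"tool_A": rng.normal(1.00, 0.02, 300),
       "tool_B": rng.normal(1.00, 0.02, 300),
       "tool_C": rng.normal(1.05, 0.02, 300)}
ms_b, ms_w, f = tool_mean_difference_score(wat)
print(f"F ratio = {f:.1f}  (large values flag the stage for root-cause review)")
```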
{"title":"Big data analytics for modeling WAT parameter variation induced by process tool in semiconductor manufacturing and empirical study","authors":"Chen-Fu Chien, Ying-Jen Chen, Jei-Zheng Wu","doi":"10.1109/WSC.2016.7822290","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822290","url":null,"abstract":"With the feature size shrinkage in advanced technology nodes, the modeling of process variations has become more critical for troubleshooting and yield enhancement. Misalignment among equipment tools or chambers in process stages is a major source of process variations. Because a process flow contains hundreds of stages during semiconductor fabrication, tool/chamber misalignment may more significantly affect the variation of transistor parameters in a wafer acceptance test. This study proposes a big data analytic framework that simultaneously considers the mean difference between tools and wafer-to-wafer variation and identifies possible root causes for yield enhancement. An empirical study was conducted to demonstrate the effectiveness of proposed approach and obtained promising results.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114702202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many Model Thinking
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822072
S. Page
Models help us to understand, explain, predict, and act. They do so by simplifying reality or by constructing artificial analogs. As a result, any one model may be insufficient to capture the complexity of a process. By applying ensembles of diverse models, we can reach deeper understanding, make better predictions, take wiser actions, implement better designs, and reveal multiple logics. This many-to-one approach offers the possibility that near truth exists at what Richard Levins has called "the intersection of independent lies."
{"title":"Many Model Thinking","authors":"S. Page","doi":"10.1109/WSC.2016.7822072","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822072","url":null,"abstract":"Models help us to understand, explain, predict, and act. They do so by simplifying reality or by constructing artificial analogs. As a result, any one model by be insufficient to capture the complexity of a process. By applying ensembles of diverse models, we can reach deeper understanding, make better predictions, take wiser actions, implement better designs, and reveal multiple logics. This many to one approach offers the possibility of near truth exists at what Richard Levins has called “the intersection of independent lies.”","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"312 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114718358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simulation analytics approach to dynamic risk monitoring
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822110
Guangxin Jiang, L. J. Hong, Barry L. Nelson
Simulation has been widely used as a tool to estimate risk measures of financial portfolios. However, the sample paths generated in a simulation study are often discarded once the estimate of the risk measure has been obtained. In this article, we suggest storing the simulation data and propose a logistic-regression-based approach to mining them. We show that, at any time and conditional on the market conditions at that time, we can quickly estimate the portfolio's risk measures and classify the portfolio as either low risk or high risk. We call this problem dynamic risk monitoring. We study the properties of our estimators and classifiers, and demonstrate the effectiveness of our approach through numerical studies.
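A minimal sketch of the pipeline the abstract describes, under assumed details: store the simulated paths, label each path high- or low-risk from its terminal loss, and fit a logistic regression from the market state at the monitoring time to that label. The toy risk factor, the 95th-percentile label threshold, and the use of scikit-learn are assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# 1) "Stored" simulation data: paths of a toy risk factor and the portfolio loss.
n_paths, n_steps = 5000, 50
shocks = rng.normal(0, 0.02, (n_paths, n_steps))
factor = np.cumsum(shocks, axis=1)                      # market state over time
loss = -factor[:, -1] + rng.normal(0, 0.01, n_paths)    # terminal portfolio loss

# 2) Label each path: high risk if its terminal loss exceeds the 95th percentile.
threshold = np.quantile(loss, 0.95)
label = (loss > threshold).astype(int)

# 3) At a monitoring time t, the observed market state is the feature;
#    fit the classifier on the stored paths.
t = 25
X = factor[:, [t]]
clf = LogisticRegression().fit(X, label)

# 4) Online use: given today's observed market state, score the portfolio.
state_today = np.array([[-0.15]])
print("P(high risk | state) ~", clf.predict_proba(state_today)[0, 1])
```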
{"title":"A simulation analytics approach to dynamic risk monitoring","authors":"Guangxin Jiang, L. J. Hong, Barry L. Nelson","doi":"10.1109/WSC.2016.7822110","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822110","url":null,"abstract":"Simulation has been widely used as a tool to estimate risk measures of financial portfolios. However, the sample paths generated in the simulation study are often discarded after the estimate of the risk measure is obtained. In this article, we suggest to store the simulation data and propose a logistic regression based approach to mining them. We show that, at any time and conditioning on the market conditions at the time, we can quickly estimate the portfolio risk measures and classify the portfolio into either low risk or high risk categories. We call this problem dynamic risk monitoring. We study the properties of our estimators and classifiers, and demonstrate the effectiveness of our approach through numerical studies.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115466462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of modeling tools for autocorrelated input processes
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822164
Tobias Uhlig, O. Rose, S. Rank
Queuing systems in any domain often exhibit correlated arrivals that considerably influence system behavior. Unfortunately, the vast majority of simulation modeling applications and programming languages do not provide the means to properly model the corresponding input processes. To obtain valid models, there is a substantial need for tools capable of modeling autocorrelated input processes. Accordingly, this paper reviews the available tools for fitting and modeling these processes. In addition to a brief theoretical discussion of the approaches, we evaluate the tools from a practitioner's perspective. The assessment is based on the tools' ability to model input processes that are either fitted to a trace or defined explicitly by their characteristics, i.e., the marginal distribution and the autocorrelation coefficients. In our experiments, tools relying on autoregressive models performed best.
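The abstract reports that autoregressive approaches performed best; the widely used ARTA idea (drive a Gaussian AR(1) base process through the normal CDF and the inverse CDF of the desired marginal) is one way such tools work, sketched below with assumed parameters. Note that the base coefficient is not the output's lag-1 autocorrelation; calibrating that mapping is exactly what the dedicated tools automate.

```python
import numpy as np
from scipy.stats import norm, expon

def arta_exponential(n, base_phi=0.8, mean=1.0, seed=0):
    """Generate n autocorrelated interarrival times with an exponential
    marginal by transforming a stationary Gaussian AR(1) base process."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.normal()
    innovation_sd = np.sqrt(1.0 - base_phi**2)   # keeps Var(z_t) = 1
    for t in range(1, n):
        z[t] = base_phi * z[t - 1] + innovation_sd * rng.normal()
    u = norm.cdf(z)                    # uniform marginals, correlated
    return expon.ppf(u, scale=mean)    # exponential marginals, still correlated

x = arta_exponential(10_000)
x_c = x - x.mean()
lag1 = np.dot(x_c[:-1], x_c[1:]) / np.dot(x_c, x_c)
print(f"sample mean ~ {x.mean():.2f}, lag-1 autocorrelation ~ {lag1:.2f}")
```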
{"title":"Evaluation of modeling tools for autocorrelated input processes","authors":"Tobias Uhlig, O. Rose, S. Rank","doi":"10.1109/WSC.2016.7822164","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822164","url":null,"abstract":"Queuing systems of any domain oftentimes exhibit correlated arrivals that considerably influence system behavior. Unfortunately, the vast majority of simulation modeling applications and programming languages do not provide the means to properly model the corresponding input processes. In order to obtain valid models, there is a substantial need for tools capable of modeling autocorrelated input processes. Accordingly, this paper provides a review of available tools to fit and model these processes. In addition to a brief theoretical discussion of the approaches, we provide tool evaluation from a practitioners perspective. The assessment of the tools is based on their ability to model input processes that are either fitted to a trace or defined explicitly by their characteristics, i.e., the marginal distribution and autocorrelation coefficients. In our experiments we found that tools relying on autoregressive models performed the best.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"210 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123385709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simulation-based comparison of maximum entropy and copula methods for capturing non-linear probability dependence
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822112
E. Salimi, A. Abbas
Modeling complex service systems entails capturing many sub-components of the system, and the dependencies that exist among them, in the form of a joint probability distribution. Two common methods for constructing joint probability distributions from partial information provided by experts are maximum entropy methods and copula methods. In this paper, we explore the performance of these methods in capturing the dependence between random variables using correlation coefficients and lower-order pairwise assessments. We focus on the case of discrete random variables and compare the performance of these methods using Monte Carlo simulation when the variables exhibit both independence and non-linear dependence structures. We show that the maximum entropy method with correlation coefficients and the Gaussian copula method perform similarly, while the maximum entropy method with pairwise assessments performs better, particularly when the variables exhibit non-linear dependence.
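Neither construction is spelled out in the abstract. As a small illustration of the copula side only, the sketch below samples two discrete variables with given marginals through a Gaussian copula with a chosen correlation; the marginal pmfs, the copula correlation, and the sample size are assumptions.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_discrete(pmf_x, pmf_y, rho, n, seed=0):
    """Sample two discrete variables with given marginal pmfs, coupled
    through a Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)                                   # correlated uniforms
    x = np.searchsorted(np.cumsum(pmf_x), u[:, 0])    # inverse CDF of the discrete marginal
    y = np.searchsorted(np.cumsum(pmf_y), u[:, 1])
    return x, y

x, y = gaussian_copula_discrete([0.2, 0.5, 0.3], [0.6, 0.4], rho=0.7, n=100_000)
print("empirical correlation ~", np.corrcoef(x, y)[0, 1])
```

Note that the empirical correlation of the discrete output is lower than the copula correlation rho; matching a target correlation exactly requires solving for the appropriate rho, which is part of what the compared methods address.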
{"title":"A simulation-based comparison of maximum entropy and copula methods for capturing non-linear probability dependence","authors":"E. Salimi, A. Abbas","doi":"10.1109/WSC.2016.7822112","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822112","url":null,"abstract":"The modeling of complex service systems entails capturing many sub-components of the system, and the dependencies that exist among them in the form of a joint probability distribution. Two common methods for constructing joint probability distributions from experts using partial information include maximum entropy methods and copula methods. In this paper we explore the performance of these methods in capturing the dependence between random variables using correlation coefficients and lower-order pairwise assessments. We focus on the case of discrete random variables, and compare the performance of these methods using a Monte Carlo simulation when the variables exhibit both independence and non-linear dependence structures. We show that the maximum entropy method with correlation coefficients and the Gaussian copula method perform similarly, while the maximum entropy method with pairwise assessments performs better particularly when the variables exhibit non-linear dependence.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123005269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Norovirus outbreaks: Using agent-based modeling to evaluate school policies
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822182
A. Hill
Norovirus is a highly contagious gastrointestinal illness that causes the rapid onset of vomiting, diarrhea, and fever. The virus relies on fecal-oral transmission, making children particularly susceptible because of their increased incidence of hand-to-mouth contact. Side effects of the virus's symptoms, such as severe dehydration, can be especially problematic for children. This paper examines transmission of the virus among elementary school classrooms, evaluating policies to reduce the number of children who become infected. The model focuses on the daily activities that expose students to the virus, including classroom activities and lunch/recess. Two policies that limit the amount of student-student interaction, derived from guidelines published by the Centers for Disease Control, were explored. The results demonstrate that implementing either policy helps reduce the number of students who become ill, and that the sooner a policy is implemented, the shorter the duration of the outbreak.
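The paper's agent-based model is only summarized above. The sketch below is a heavily reduced stand-in (one classroom, random daily contacts, fixed per-contact transmission probability, recovery after a few days) meant to show where a contact-limiting policy enters such a model; all parameter values are invented, not the paper's calibration.

```python
import random

def run_outbreak(n_students=25, days=30, contacts_per_day=8,
                 p_transmit=0.05, infectious_days=3, policy_factor=1.0, seed=3):
    """Minimal classroom model: returns the total number of students ever infected.
    policy_factor < 1 models a policy that limits student-student interaction."""
    rng = random.Random(seed)
    infectious = [0] * n_students            # days of infectiousness remaining
    ever_infected = [False] * n_students
    infectious[0], ever_infected[0] = infectious_days, True   # index case
    for _ in range(days):
        new_infections = set()
        daily_contacts = int(contacts_per_day * policy_factor)
        for i in range(n_students):
            if infectious[i] == 0:
                continue
            for _ in range(daily_contacts):  # classroom + lunch/recess mixing
                j = rng.randrange(n_students)
                if j != i and not ever_infected[j] and rng.random() < p_transmit:
                    new_infections.add(j)
        for i in range(n_students):
            if infectious[i] > 0:
                infectious[i] -= 1
        for j in new_infections:
            infectious[j], ever_infected[j] = infectious_days, True
    return sum(ever_infected)

print("no policy:        ", run_outbreak())
print("reduced contacts: ", run_outbreak(policy_factor=0.5))
```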
{"title":"Norovirus outbreaks: Using agent-based modeling to evaluate school policies","authors":"A. Hill","doi":"10.1109/WSC.2016.7822182","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822182","url":null,"abstract":"Norovirus is a highly contagious gastrointestinal illness that causes the rapid onset of vomiting, diarrhea and fever. The virus relies on fecal-oral transmission making children particularly susceptible because of their increased incidence of hand-to-mouth contact. Side effects from the virus' symptoms can be problematic for children, i.e. severe dehydration. This paper examines transmission of the virus among elementary school classrooms, evaluating policies to reduce the number of children who become infected. The model focuses on the daily activities that allow for students' exposure to the virus including classroom activities and lunch/recess. Two policies that limit the amount of student-student interaction and were derived from guidelines published by the Center for Disease Control were explored. The results demonstrated that implementation of either policy helps reduce the number of students who become ill and that the sooner the policy is implemented the shorter the duration of the outbreak.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122720092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}