Title: Hafi—Highest Autocorrelated First: A new priority rule to control autocorrelated input processes at merges
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822330
S. Rank, F. Schulze, T. Schmidt
Many intralogistics systems exhibit autocorrelated arrival processes that significantly influence system performance. Unfortunately, no control strategies are available that take this into account. Instead, standard strategies like First Come First Served are applied, under which systems tend to exhibit long queues and high volatility, even though these strategies perform well for uncorrelated processes. There is therefore a strong need for control strategies that can manage autocorrelated arrival processes. Accordingly, this paper introduces HAFI (Highest Autocorrelated First), a new strategy that assigns each process a priority according to its autocorrelation. The paper focuses on controlling autocorrelated arrival processes at a merge, with First Come First Served and Longest Queue First serving as references. For properly designed facilities, HAFI leads to comparatively short queues and waiting times, as well as balanced 95th-percentile queue lengths across the autocorrelated input processes.
{"title":"Hafi—Highest Autocorrelated First: A new priority rule to control autocorrelated input processes at merges","authors":"S. Rank, F. Schulze, T. Schmidt","doi":"10.1109/WSC.2016.7822330","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822330","url":null,"abstract":"Many intralogistics systems expose autocorrelated arrival processes with significant influence on the systems' performance. Unfortunately there are no control strategies available which take this into account. Instead standard strategies like First Come First Served are applied which lead to systems tending to exhibit long queues and high volatility, even though these strategies perform well in the case of uncorrelated processes. So, there is a strong need for control strategies managing autocorrelated arrival processes. Accordingly this paper introduces HAFI (Highest Autocorrelated First), a new strategy which determines the processes' priority in accordance to their autocorrelation. The paper focuses on controlling autocorrelated arrival processes at a merge. The strategies First Come First Served and Longest Queue First will serve as reference. As a result and in respect to properly designed facilities, HAFI leads to comparatively short queues and waiting times as well as balanced 95th percentile values of the queue lengths of autocorrelated input processes.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"14 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127593405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Outplacement time and probability estimation using discrete event simulation
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822159
S. Singh, R. Pimplikar, Ritwik Chaudhuri, G. Parija
In today's rapidly changing technological scenario, tech giants revise their strategic alignment every couple of years. As a result, their workforce has to be adapted to the organization's strategy. Members of the workforce who are neither relevant to the strategic alignment nor can be made relevant by reskilling have to be either outplaced (i.e., placed in another job within the organization) or separated from the organization. In geographies like Europe, where the cost of separation is very high, it becomes very important to make the right decision for each employee. In this paper, we describe a simulation-based methodology to estimate the probability and time of outplacement of an employee. These numbers are inputs to a global problem of making the optimal decision for the entire workforce.
{"title":"Outplacement time and probability estimation using discrete event simulation","authors":"S. Singh, R. Pimplikar, Ritwik Chaudhuri, G. Parija","doi":"10.1109/WSC.2016.7822159","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822159","url":null,"abstract":"In today's rapidly changing technological scenario, tech giants revise their strategic alignment every couple of years. As a result, their workforce has to be adapted to the organization's strategy. Members of the workforce who are neither relevant to the strategic alignment, nor can be made relevant by reskilling, have to be either outplaced (i.e. placed in an another job within organization) or separated from the organization. In geographies like Europe, where the cost of separation is very high, it becomes very important to make the right decision for each employee. In this paper, we describe a simulation based methodology to find the probability and time of outplacement of an employee. These numbers are inputs to a global problem of making the optimal decision for the entire workforce.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"65 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128020637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Sources of unresolvable uncertainties in weakly predictive distributed virtual environments
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822346
Jeremy R. Millar, Jason A. Blake, D. Hodson, J.O. Miller, R. Hill
This work expands the notion of unresolvable uncertainties due to modeling issues in weakly predictive simulations to include unique, implementation-induced sources that originate from fundamental trade-offs associated with distributed virtual environments. We consider these trade-offs in terms of the Consistency, Availability, and Partition tolerance (CAP) theorem to abstract away technical implementation details. Doing so illuminates systemic properties of weakly predictive simulations, including their ability to produce plausible responses. The plausibility property in particular relates to fairness concerns in distributed gaming and other interactive environments.
{"title":"Sources of unresolvable uncertainties in weakly predictive distributed virtual environments","authors":"Jeremy R. Millar, Jason A. Blake, D. Hodson, J.O. Miller, R. Hill","doi":"10.1109/WSC.2016.7822346","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822346","url":null,"abstract":"This work expands the notion of unresolvable uncertainties due to modeling issues in weakly predictive simulations to include unique implementation induced sources that originate from fundamental trade-offs associated with distributed virtual environments. We consider these trade-offs in terms of the Consistency, Availability, and Partition tolerance (CAP) theorem to abstract away technical implementation details. Doing so illuminates systemic properties of weakly predictive simulations, including their ability to produce plausible responses. The plausibility property in particular is related to fairness concerns in distributed gaming and other interactive environments.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133437624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Learning stochastic model discrepancy
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822108
M. Plumlee, H. Lam
The vast majority of stochastic simulation models are imperfect in that they fail to fully emulate the real dynamics in their entirety. Despite this, imperfect models are still useful in practice, so long as one knows how the model is inexact. This inexactness is measured by a discrepancy between the proposed stochastic model and the true stochastic distribution across multiple values of some decision variables. In this paper, we propose a method to learn the discrepancy of a stochastic simulation using data collected from the system of interest. Our approach is a novel Bayesian framework that addresses the requirements for estimating probability measures.
{"title":"Learning stochastic model discrepancy","authors":"M. Plumlee, H. Lam","doi":"10.1109/WSC.2016.7822108","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822108","url":null,"abstract":"The vast majority of stochastic simulation models are imperfect in that they fail to fully emulate the entirety of real dynamics. Despite this, these imperfect models are still useful in practice, so long as one knows how the model is inexact. This inexactness is measured by a discrepancy between the proposed stochastic model and a true stochastic distribution across multiple values of some decision variables. In this paper, we propose a method to learn the discrepancy of a stochastic simulation using data collected from the system of interest. Our approach is a novel Bayesian framework that addresses the requirements for estimation of probability measures.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131932539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Decentralized dispatching for blocking avoidance in automated material handling systems
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822296
Yen-Shao Chen, Cheng-Hung Wu, Shi-Chung Chang
Advancements in communication and computing technologies have made decentralized control of automated material handling systems (AMHS) a promising way to alleviate blocking and congestion of production flows and to raise productivity in large-scale automated factories. With the growing availability of edge computing and low-cost mobile communication, whether among vehicles (V2V) or between vehicles and machines (V2M), decentralized vehicle control can exploit frequent, low-latency exchanges of neighborhood information and local control computation to increase AMHS operating efficiency. In this study, a decentralized control algorithm, BALI (Blocking Avoidance by exploiting Location Information), exploits V2X exchanges of local information for transport-job matching, blocking inference, and job exchange in AMHS vehicle dispatching. Performance evaluation of the BALI algorithm by discrete-event simulation shows that it can significantly reduce blocking and congestion in production flows compared to commonly used Nearest Job First rule-based heuristics.
Title: Null hypothesis significance testing in simulation
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822118
Marko A. Hofmann
Several papers have recently criticized the use of null hypothesis significance testing (NHST) in scientific applications of stochastic computer simulation. Their criticism is underpinned by numerous articles from statistical methodologists, who have argued that focusing on p-values is not conducive to science and that NHST is often dangerously misunderstood. A critical reflection of the arguments against NHST shows, however, that although NHST is indeed ill-suited for many simulation applications and objectives, it is by no means superfluous, either in general or for simulation in particular.
{"title":"Null hypothesis significance testing in simulation","authors":"Marko A. Hofmann","doi":"10.1109/WSC.2016.7822118","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822118","url":null,"abstract":"Several papers have recently criticized the use of null hypothesis significance testing (NHST) in scientific applications of stochastic computer simulation. Their criticism can be underpinned by numerous articles from statistical methodologists. They have argued that focusing on p-values is not conducive to science, and that NHST is often dangerously misunderstood. A critical reflection of the arguments contra NHST shows, however, that although NHST is indeed ill-suited for many simulation applications and objectives it is by no means superfluous, neither in general, nor in particular for simulation.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125437852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Tractable sampling strategies for quantile-based ordinal optimization
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822147
Dongwook Shin, M. Broadie, A. Zeevi
This paper describes and analyzes the problem of selecting the best of several alternatives (“systems”), where the alternatives are compared based on quantiles of their performance. The quantiles cannot be evaluated analytically, but it is possible to sample sequentially from each system. The objective is to dynamically allocate a finite sampling budget to minimize the probability of falsely selecting a non-best system. To formulate this problem in a tractable form, we introduce an objective associated with the probability of false selection using large deviations theory, and leverage it to design well-performing dynamic sampling policies. We first propose a naive policy that optimizes this objective when the sampling budget is sufficiently large. We then introduce two variants of the naive policy with the aim of improving finite-time performance; these policies retain the asymptotic performance of the naive one in some cases while dramatically improving its finite-time performance.
{"title":"Tractable sampling strategies for quantile-based ordinal optimization","authors":"Dongwook Shin, M. Broadie, A. Zeevi","doi":"10.1109/WSC.2016.7822147","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822147","url":null,"abstract":"This paper describes and analyzes the problem of selecting the best of several alternatives (“systems”), where they are compared based on quantiles of their performances. The quantiles cannot be evaluated analytically but it is possible to sequentially sample from each system. The objective is to dynamically allocate a finite sampling budget to minimize the probability of falsely selecting non-best systems. To formulate this problem in a tractable form, we introduce an objective associated with the probability of false selection using large deviations theory and leverage it to design well-performing dynamic sampling policies. We first propose a naive policy that optimizes the aforementioned objective when the sampling budget is sufficiently large. We introduce two variants of the naive policy with the aim of improving finite-time performance; these policies retain the asymptotic performance of the naive one in some cases, while dramatically improving its finite-time performance.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131775918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Mixed optimization for constrained resource allocation, an application to a local bus service
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822149
F. Vázquez-Abad, L. Fenn
The present paper follows up on Vázquez-Abad (2013), where we applied the ghost simulation model to a public transportation problem. The ghost simulation model replaces the faster point processes (passenger arrivals) with a “fluid” model while retaining a discrete event simulation for the remaining processes (bus dynamics). This is not an approximation but an exact conditional expectation when the fast process is Poisson, and it can be interpreted as a Filtered Monte Carlo method for fast simulation. In the current paper we develop the theory required to implement a mixed optimization procedure that finds the optimal fleet size under a stationary probability constraint. The optimization is hybrid because the fleet size is integer-valued, while for each fleet size the optimal headway is real-valued. We exploit the structure of the problem to implement a stopped target tracking method combined with stochastic binary search.
{"title":"Mixed optimization for constrained resource allocation, an application to a local bus service","authors":"F. Vázquez-Abad, L. Fenn","doi":"10.1109/WSC.2016.7822149","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822149","url":null,"abstract":"The present paper follows up on Vázquez-Abad (2013), where we applied the ghost simulation model to a public transportation problem. The ghost simulation model replaces faster point processes (passenger arrivals) with a “fluid” model while retaining a discrete event simulation for the rest of the processes (bus dynamics). This is not an approximation, but an exact conditional expectation when the fast process is Poisson. It can be interpreted as a Filtered Monte Carlo method for fast simulation. In the current paper we develop the required theory to implement a mixed optimization procedure to find the optimal fleet size under a stationary probability constraint. It is a hybrid optimization because for each fleet size, the optimal headway is real-valued, while the fleet size is integer-valued. We exploit the structure of the problem to implement a stopped target tracking method combined with stochastic binary search.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129127269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Discretization error of reflected fractional Brownian motion
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822095
Patricia C. McGlaughlin, Alexandra Chronopoulou
The long-range dependence and self-similarity of fractional Brownian motion make it an attractive model for traffic in many data transfer networks. Reflected fractional Brownian motion appears in the storage process of such a network. In this paper, we focus on the simulation of reflected fractional Brownian motion using a straightforward discretization scheme, and we show that its strong error is of order h^H, where h is the discretization step and H ∈ (0,1) is the Hurst index.
{"title":"Discretization error of reflected fractional Brownian motion","authors":"Patricia C. McGlaughlin, Alexandra Chronopoulou","doi":"10.1109/WSC.2016.7822095","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822095","url":null,"abstract":"The long-range dependence and self-similarity of fractional Brownian motion make it an attractive model for traffic in many data transfer networks. Reflected fractional Brownian Motion appears in the storage process of such a network. In this paper, we focus on the simulation of reflected fractional Brownian motion using a straightforward discretization scheme and we show that its strong error is of order hH, where h is the discretization step and H ∈ (0,1) is the Hurst index.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134188368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A randomized algorithm for continuous optimization
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822152
A. Joseph, S. Bhatnagar
The cross-entropy (CE) method is a model-based search method for solving optimization problems in which the objective function has minimal structure. The Monte Carlo version of the CE method employs naive sample averaging, which is inefficient both computationally and in storage. We provide a novel stochastic approximation version of the CE method in which the sample averaging is replaced with bootstrapping. Our approach reuses previous samples through discounted averaging, saving overall computational and storage cost. The algorithm is incremental in nature and possesses attractive features such as computational and storage efficiency, accuracy, and stability. We provide conditions required for the algorithm to converge to the global optimum. We evaluated the algorithm on a variety of global optimization benchmark problems, and the results corroborate our theoretical findings.
{"title":"A randomized algorithm for continuous optimization","authors":"A. Joseph, S. Bhatnagar","doi":"10.1109/WSC.2016.7822152","DOIUrl":"https://doi.org/10.1109/WSC.2016.7822152","url":null,"abstract":"The cross entropy (CE) method is a model based search method to solve optimization problems where the objective function has minimal structure. The Monte-Carlo version of the CE method employs the naive sample averaging technique which is inefficient, both computationally and space wise. We provide a novel stochastic approximation version of the CE method, where the sample averaging is replaced with bootstrapping. In our approach, we reuse the previous samples based on discounted averaging, and hence it can save the overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as computational and storage efficiency, accuracy and stability. We provide conditions required for the algorithm to converge to the global optimum. We evaluated the algorithm on a variety of global optimization benchmark problems and the results obtained corroborate our theoretical findings.","PeriodicalId":367269,"journal":{"name":"2016 Winter Simulation Conference (WSC)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132500277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}