A randomized algorithm for continuous optimization
A. Joseph, S. Bhatnagar
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822152
The cross entropy (CE) method is a model-based search method for solving optimization problems in which the objective function has minimal structure. The Monte Carlo version of the CE method employs naive sample averaging, which is inefficient both computationally and in storage. We provide a novel stochastic approximation version of the CE method in which the sample averaging is replaced with bootstrapping. Our approach reuses previous samples through discounted averaging and hence saves overall computational and storage cost. The algorithm is incremental in nature and possesses attractive features such as computational and storage efficiency, accuracy, and stability. We provide conditions required for the algorithm to converge to the global optimum. We evaluated the algorithm on a variety of global optimization benchmark problems, and the results corroborate our theoretical findings.
Mean queue time approximation for a workstation with cascading
Kan Wu, Ning Zhao
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822304
Queueing models can be used to evaluate the performance of manufacturing systems. With the emergence of cluster tools in contemporary production systems, proper queueing models have to be derived to evaluate the performance of machines with complex configurations. Job cascading is a common structure among cluster tools. Because of blocking and starvation effects among servers, queue time analysis for a cluster tool with job cascading is difficult in general. Based on insight from the reduction method, we propose an approximate model for the mean queue time of a cascading machine subject to breakdowns. The model is validated by simulation and performs well in the examined cases.
Discretization error of reflected fractional Brownian motion
Patricia C. McGlaughlin, Alexandra Chronopoulou
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822095
The long-range dependence and self-similarity of fractional Brownian motion make it an attractive model for traffic in many data transfer networks. Reflected fractional Brownian motion appears in the storage process of such a network. In this paper, we focus on the simulation of reflected fractional Brownian motion using a straightforward discretization scheme, and we show that its strong error is of order h^H, where h is the discretization step and H ∈ (0,1) is the Hurst index.
Tractable sampling strategies for quantile-based ordinal optimization
Dongwook Shin, M. Broadie, A. Zeevi
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822147
This paper describes and analyzes the problem of selecting the best of several alternatives (“systems”), which are compared based on quantiles of their performance. The quantiles cannot be evaluated analytically, but it is possible to sample sequentially from each system. The objective is to dynamically allocate a finite sampling budget so as to minimize the probability of falsely selecting a non-best system. To formulate this problem in a tractable form, we introduce an objective associated with the probability of false selection using large deviations theory and leverage it to design well-performing dynamic sampling policies. We first propose a naive policy that optimizes this objective when the sampling budget is sufficiently large. We then introduce two variants of the naive policy aimed at improving finite-time performance; these policies retain the asymptotic performance of the naive one in some cases while dramatically improving its finite-time performance.
Evaluation of small volume production solutions in semiconductor manufacturing: Analysis from a complexity perspective
Can Sun, H. Ehm, T. Rose
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822292
In the volatile semiconductor market, leading semiconductor manufacturers aim to keep their competitive advantage by providing better customization. In light of this situation, various technologies have been proposed, but they may also increase complexity. This paper attempts to select the best strategy from the complexity perspective. We borrow from the theory of change management and view each new technology as a change to the as-is one. A generic framework for deciding on the best approach via complexity measurement is proposed. It is applied to a case study with three technologies (shared reticle, compound lot, and a combination of both), and for each one we analyze its change impact and the complexity it adds. This paper delivers both a guideline on how to build a complexity index that supplements cost-benefit analysis and a practical application of the framework to the decision-making process for small-volume production.
Improving a Linearly Implicit Quantized State System Method
Franco Di Pietro, G. Migoni, E. Kofman
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822167
In this article we propose a modification to the first-order Linearly Implicit Quantized State System method (LIQSS1), an algorithm for continuous system simulation that replaces classic time discretization with quantization of the state variables. LIQSS was designed to simulate stiff systems efficiently, but it only works when the system has a particular structure. The proposed modification overcomes this limitation, allowing the algorithm to efficiently simulate stiff systems with more general structures. Besides describing the new method and its software implementation, the article analyzes the algorithm's performance in the simulation of a complex power electronic converter.
Reducing computation time of stochastic simulation-based optimization using parallel computing on a single multi-core system
Mohammed Mawlana, A. Hammad
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822356
This paper presents a framework for implementing a simulation-based optimization model in a parallel computing environment on a single multi-core processor. The behavior of the model on a multi-core architecture is studied. In addition, the impact of multithreading on the performance of simulation-based optimization is examined. The framework is implemented using the master/slave paradigm. A case study is used to demonstrate the benefits of the proposed framework.
The effects of estimation of heteroscedasticity on stochastic kriging
Wenjing Wang, Xi Chen
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822100
In this paper, we study the effects of using smoothed variance estimates in place of sample variances on the performance of stochastic kriging (SK). Different variance estimation methods are investigated, and numerical examples show that such a replacement leads to improved predictive performance of SK. An SK-based dual metamodeling approach is further proposed to obtain an efficient simulation budget allocation rule and, consequently, more accurate prediction results.
Data driven Adaptive Traffic simulation of an expressway
Abhinav Sunderrajan, Vaisagh Viswanathan, Wentong Cai, A. Knoll
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822176
Ubiquitous data from a variety of sources, such as smartphones, vehicles equipped with GPS receivers, and fixed sensors, makes this an exciting time for the implementation of Advanced Traffic Information and Management Systems (ATMS). Leveraging these data for current traffic state estimation, along with short-term predictions of traffic flow, can have far-reaching implications for the next generation of Intelligent Transportation Services (ITS). In this paper, we present a proof of concept of such a data-driven traffic simulation for the short-term prediction and control of traffic flow, simulating a real-world expressway under dynamic ramp metering.
Optimizing HVAC operation in commercial buildings: A genetic algorithm multi-objective optimization framework
Sokratis Papadopoulos, Elie Azar
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822220
Heating, Ventilation, and Air Conditioning (HVAC) systems account for a large share of the energy consumed in commercial buildings. Simple strategies such as adjusting HVAC set point temperatures can lead to significant energy savings at no additional financial cost. Despite their promising results, it is currently unclear whether such operation strategies have unintended consequences for other building performance metrics, such as occupants' thermal comfort and productivity. In this paper, a genetic algorithm multi-objective optimization framework is proposed to optimize HVAC temperature set points in commercial buildings. Three objectives are considered: energy consumption, thermal comfort, and productivity. A reference medium-sized office building located in Baltimore, MD, is used as a case study to illustrate the framework's capabilities. Results highlight important tradeoffs between the considered metrics, which can guide the design of effective and comprehensive HVAC operation strategies.