{"title":"A case study examining the impact of factor screening for Neural Network metamodels","authors":"S. Rosen, S. Guharay","doi":"10.1109/WSC.2013.6721444","DOIUrl":null,"url":null,"abstract":"Metamodeling of large-scale simulations consisting of a large number of input parameters can be very challenging. Neural Networks have shown great promise in fitting these large-scale simulations even without performing factor screening. However, factor screening is an effective method for logically reducing the dimensionality of an input space and thus enabling more feasible metamodel calibration. Applying factor screening methods before calibrating Neural Network metamodels or any metamodel can have both positive and negative effects. The critical assumption for factor screening under investigation involves the prevalence of two-way interactions that contain a variable without a significant main effect by itself. In a simulation with a large parameter space, the prevalence of two-way interactions and their contribution to the total variability in the model output is far from transparent. Important questions therefore arise regarding factor screening and Neural Network metamodels: (a) is this a process worth doing with today's more powerful computing processors, which provide a larger library of runs to do metamodeling; and (b), does erroneously screening these buried interaction terms critically impact the level of metamodel fidelity that one can achieve. In this paper we examine these questions through the construction of a case study on a large-scale simulation. This study projects regional homelessness levels per county of interest based on a large array of budget decisions and resource allocations that expand out to hundreds of input parameters.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 Winter Simulations Conference (WSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WSC.2013.6721444","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5
Abstract
Metamodeling of large-scale simulations with a large number of input parameters can be very challenging. Neural Networks have shown great promise in fitting these large-scale simulations even without factor screening. However, factor screening is an effective method for logically reducing the dimensionality of an input space and thus enabling more feasible metamodel calibration. Applying factor screening methods before calibrating Neural Network metamodels, or any metamodel, can have both positive and negative effects. The critical factor-screening assumption under investigation concerns the prevalence of two-way interactions that involve a variable with no significant main effect of its own. In a simulation with a large parameter space, the prevalence of two-way interactions and their contribution to the total variability in the model output is far from transparent. Important questions therefore arise regarding factor screening and Neural Network metamodels: (a) is this a process worth doing with today's more powerful computing processors, which provide a larger library of runs for metamodeling; and (b) does erroneously screening out these buried interaction terms critically impact the level of metamodel fidelity that one can achieve? In this paper we examine these questions through a case study of a large-scale simulation. This study projects regional homelessness levels for each county of interest based on a large array of budget decisions and resource allocations that expand to hundreds of input parameters.
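As a rough illustration of the screening risk described in the abstract, the sketch below (not taken from the paper; the synthetic response, factor count, and screening threshold are invented for illustration) shows how a main-effects-only screen can discard two factors whose contribution enters solely through their two-way interaction, leaving any downstream metamodel unable to recover that variability.

import numpy as np

# Hypothetical example: main-effects-only factor screening can drop factors
# whose influence appears only through a two-way interaction.
rng = np.random.default_rng(0)
n, k = 500, 6                      # 500 simulation runs, 6 candidate input factors
X = rng.uniform(-1.0, 1.0, size=(n, k))

# Synthetic "simulation output": x0 and x1 matter only through their
# interaction; x2 has a genuine main effect; x3..x5 are inert noise factors.
y = 3.0 * X[:, 0] * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0.0, 0.1, size=n)

# Main-effects-only screen: least-squares fit of y on the individual factors,
# keeping factors whose estimated coefficient exceeds a crude threshold.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
main_effects = coef[1:]
keep = np.abs(main_effects) > 0.5   # arbitrary screening threshold

print("estimated main effects:", np.round(main_effects, 2))
print("factors kept by screening:", np.flatnonzero(keep))
# Typically only factor 2 survives; factors 0 and 1 are screened out even
# though their interaction drives most of the output variability, so a
# Neural Network metamodel fit on the screened inputs cannot capture it.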