Uncertainty Reduction for Model Error Detection in Multiphase Shock Tube Simulation

Chanyoung Park, Samaun Nili, Justin T. Mathew, F. Ouellet, R. Koneru, N. Kim, S. Balachandar, R. Haftka

Journal of Verification, Validation and Uncertainty Quantification, published June 6, 2021. DOI: 10.1115/1.4051407
Citations: 2
Abstract
Uncertainty quantification (UQ) is an important step in the verification and validation of scientific computing. Validation can be inconclusive when uncertainties are larger than acceptable ranges for both simulation and experiment, so uncertainty reduction (UR) is important for achieving meaningful validation. A unique approach in this paper is to separate model error from uncertainty so that UR can reveal the model error. This paper shares lessons learned from UQ and UR of a horizontal shock tube simulation whose goal is to validate the particle drag force model for compressible multiphase flow. First, simulation UQ revealed an inconsistency in simulation predictions due to the numerical flux scheme, which was clearly exposed by a parametric design of experiments. Improving the numerical flux scheme removed the uncertainty due to the inconsistency, although it increased the overall prediction error. Second, the mismatch between the geometry of the experiments and the simplified one-dimensional simulation model was identified as a lack of knowledge. After the simulation conditions and experiments were modified, the error due to the mismatch turned out to be small, which was unexpected given expert opinion. Finally, the uncertainty in the initial particle volume fraction was reduced through rigorous UQ. Together, these UR measures revealed the hidden modeling error in the simulation predictions, which can guide model improvement in the future. We summarize the lessons learned from this exercise in terms of empty success, useful failure, and deceptive success.
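The core idea of the abstract — that reducing input uncertainty can turn an inconclusive validation into a detected model error — can be illustrated with a minimal sketch. This is not the paper's code or models; the surrogate function, all numerical values, and the k-sigma comparison rule below are hypothetical stand-ins for the actual shock tube simulation and experiment.

```python
import random
import statistics

# Illustrative sketch (not the paper's actual method or data): once input
# uncertainty is reduced, a fixed prediction-experiment discrepancy can
# exceed the combined uncertainty band and reveal model error.

def toy_drag_model(volume_fraction):
    # Hypothetical surrogate: predicted particle-front location (mm) as a
    # simple linear function of the initial particle volume fraction.
    return 100.0 + 250.0 * volume_fraction

def predict_with_uncertainty(vf_mean, vf_std, n_samples=10000, seed=0):
    # Propagate the uncertain input through the model by Monte Carlo
    # sampling; return the mean and standard deviation of the prediction.
    rng = random.Random(seed)
    outputs = [toy_drag_model(rng.gauss(vf_mean, vf_std))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

def model_error_detectable(pred_mean, pred_std, exp_mean, exp_std, k=2.0):
    # Model error is revealed when the prediction-experiment discrepancy
    # exceeds a k-sigma combined uncertainty band.
    combined = k * (pred_std ** 2 + exp_std ** 2) ** 0.5
    return abs(pred_mean - exp_mean) > combined

exp_mean, exp_std = 110.0, 0.5  # hypothetical measurement

# Large uncertainty in the initial volume fraction: validation inconclusive.
m1, s1 = predict_with_uncertainty(vf_mean=0.05, vf_std=0.02)
print(model_error_detectable(m1, s1, exp_mean, exp_std))  # prints False

# After uncertainty reduction, the same discrepancy exceeds the band.
m2, s2 = predict_with_uncertainty(vf_mean=0.05, vf_std=0.002)
print(model_error_detectable(m2, s2, exp_mean, exp_std))  # prints True
```

In the first case the prediction band swallows the discrepancy, so the comparison says nothing about the drag model; only after the input uncertainty is reduced does the residual mismatch become attributable to model error — the role UR plays in the paper.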