Ya-tsʻêng d. Chao, Nicholas C. Lopes, Mark A. Ricklick, S. Boetcher
Validating turbulence models for cooling supercritical carbon dioxide (sCO2) in a horizontal pipe is challenging due to the lack of experimental data with spatially resolved local temperature measurements. Although many variables may cause discrepancies between numerical and experimental data, this study focuses on how the choice of reference temperatures (both the wall reference temperature and the fluid bulk reference temperature) used when calculating the heat transfer coefficient influences turbulence-model validation results. While it may seem straightforward to simply use the same parameters as the experimental setup, this has not been observed in practice. In this work, numerical simulations are performed for cooling sCO2 in a horizontal pipe for p = 8 MPa, d = 6 mm, G = 200 and 400 kg/(m²s), and q_w = 12, 24, and 33 kW/m². Local and average heat transfer coefficients computed with different reference temperatures, found to be frequently used in the literature, are presented and compared with commonly used experimental data. The choice of reference temperatures was found to have a significant influence on the results of the numerical validation. Historically, the higher heat flux cases have been more difficult to validate, which has been theorized to result from using reference temperatures that differ from the experiment; here, however, good agreement was found when using the reference temperatures that most closely matched the experiment. This not only highlights the need for careful selection of reference temperatures in simulations, but also the importance of clearly defining the reference temperature employed when reporting experimental results.
"Effect of the Heat Transfer Coefficient Reference Temperatures on Validating Numerical Models of Supercritical CO2," Journal of Verification, Validation and Uncertainty Quantification, published 2021-12-01. DOI: 10.1115/1.4051637.
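The sensitivity the abstract describes is easy to see with a back-of-the-envelope sketch. The following is a minimal illustration, not taken from the paper: all temperatures are hypothetical, and only the study's nominal heat flux of 24 kW/m² is reused. Two plausible reference-temperature pairs give noticeably different heat transfer coefficients from the same heat flux.

```python
def htc(q_wall, t_bulk_ref, t_wall_ref):
    """Heat transfer coefficient, W/(m^2 K). Cooling convention:
    heat flows from fluid to wall, so T_bulk > T_wall."""
    return q_wall / (t_bulk_ref - t_wall_ref)

q_w = 24e3             # W/m^2, one of the study's nominal heat fluxes
t_bulk_mean = 320.0    # hypothetical mass-flow-averaged local bulk temperature, K
t_bulk_inlet = 322.0   # hypothetical inlet (mixing-cup) temperature, K
t_wall_avg = 314.0     # hypothetical circumferentially averaged wall temperature, K
t_wall_bottom = 312.5  # hypothetical bottom-wall temperature (buoyancy stratification), K

h_a = htc(q_w, t_bulk_mean, t_wall_avg)      # one common definition
h_b = htc(q_w, t_bulk_inlet, t_wall_bottom)  # another plausible definition
print(h_a, h_b)  # same flow, same heat flux, different reported h
```

With these made-up values the two definitions differ by roughly 35%, which is the scale of discrepancy that can masquerade as turbulence-model error during validation.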
A. Frankel, E. Wagman, R. Keedy, B. Houchens, Sarah N. Scott
Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation using thermogravimetric analysis (TGA). This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process. The simplification induces model-form errors that the fitting process cannot capture. In this work, we propose a methodology for calibrating decomposition models to TGA data that accounts for uncertainty in the model-form and experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show a good overlap between the model predictions and TGA data. Uncertainty bounds capture deviations of the model from the data. The calibrated parameter distributions are also presented. The distributions may be used in forward propagation of uncertainty in models that leverage this material.
"Embedded-Error Bayesian Calibration of Thermal Decomposition of Organic Materials," Journal of Verification, Validation and Uncertainty Quantification, published 2021-12-01. DOI: 10.1115/1.4051638.
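The forward model being calibrated in work like this is a set of Arrhenius rate equations integrated against a TGA temperature ramp. As a hedged sketch only, the snippet below integrates a single-step model (the paper uses a three-stage reaction network); the pre-exponential factor, activation energy, and heating rate are all hypothetical values, not the calibrated parameters.

```python
import math

R = 8.314           # gas constant, J/(mol K)
A = 1.0e10          # pre-exponential factor, 1/s (hypothetical)
E = 1.5e5           # activation energy, J/mol (hypothetical)
beta = 10.0 / 60.0  # heating rate: 10 K/min expressed in K/s

def tga_mass_fraction(t_start=300.0, t_end=900.0, dt=0.05):
    """Integrate dm/dt = -A exp(-E/RT) m with T = t_start + beta*t (explicit Euler).
    Returns the residual mass fraction at the end of the ramp."""
    m, temp = 1.0, t_start
    while temp < t_end:
        k = A * math.exp(-E / (R * temp))  # Arrhenius rate at current temperature
        m += dt * (-k * m)
        temp += beta * dt
    return m

residual = tga_mass_fraction()
```

Calibration then amounts to adjusting A and E (per reaction stage) until the simulated mass-loss curve matches the measured TGA curve, with the embedded-error approach additionally inflating the likelihood to absorb model-form error.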
Property variations in a structure strongly impact macroscopic mechanical performance, as regions with lower strength will be prone to damage initiation or acceleration. Consideration of the variability in material properties is critical for high-resolution simulations of damage initiation and propagation. While recent progressive damage analyses consider randomness in property fields, accurately quantifying the uncertainty in damage measures remains computationally expensive. Stochastic damage analyses require extensive sampling of random property fields and numerous replications of the underlying nonlinear deterministic simulations. This paper demonstrates that a Quasi Monte Carlo (QMC) method, which uses a multidimensional low-discrepancy Sobol sequence, is a computationally economical way to obtain the means and standard deviations of cracks evolving in composites. An Extended Finite Element Method (XFEM) model with spatially random strength fields simulates the damage initiation and evolution in a model composite. We compared the number of simulations required by Monte Carlo (MC) and QMC techniques to measure the influence of input variability on the mean crack length in an open-hole angle-ply tensile test. We conclude that the QMC technique with low-discrepancy sampling converges substantially faster than traditional MC methods.
Emil Pitz and K. Pochiraju, "Quantifying Uncertainty of Damage in Composites Using a Quasi Monte Carlo Technique," Journal of Verification, Validation and Uncertainty Quantification, published 2021-11-03. DOI: 10.1115/1.4052895.
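The MC-versus-QMC convergence advantage can be demonstrated on a toy mean-estimation problem. The paper uses a Sobol sequence (available in libraries such as SciPy); the sketch below substitutes the simpler base-2 van der Corput sequence, which exhibits the same low-discrepancy behavior in one dimension. The integrand and sample count are illustrative choices, not from the paper.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence
    (the radical-inverse / bit-reversal construction)."""
    pts = []
    for i in range(n):
        j, x, denom = i, 0.0, 1.0
        while j > 0:
            denom *= base
            j, rem = divmod(j, base)
            x += rem / denom
        pts.append(x)
    return pts

def mean_estimate(points, f):
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x  # toy integrand: exact mean over [0, 1) is 1/3
n = 4096
random.seed(0)
mc = mean_estimate([random.random() for _ in range(n)], f)   # pseudo-random MC
qmc = mean_estimate(van_der_corput(n), f)                    # low-discrepancy QMC
```

For smooth integrands the QMC error shrinks roughly as O(log N / N), against the O(1/sqrt(N)) of plain MC, which is the effect the paper exploits to reduce the number of expensive XFEM replications.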
When stress concentration factors are not available in handbooks, finite element analysis has become the predominant method for determining their values. For such determinations, there is a need to know if they have sufficient accuracy. Tuned Test Problems can provide a way of assessing the accuracy of stress concentration factors found with finite elements. Here we offer a means of constructing such test problems for stress concentrations within boundaries that have local constant radii of curvature. These problems are tuned to their originating applications by sharing the same global geometries and having slightly higher peak stresses. They also have exact solutions, thereby enabling a precise determination of the errors incurred in their finite element analysis.
G. Sinclair and A. Kardak, "On the Generation of Tuned Test Problems for Stress Concentrations," Journal of Verification, Validation and Uncertainty Quantification, published 2021-10-27. DOI: 10.1115/1.4052833.
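The kind of exact solution such test problems rely on is exemplified by the classical Kirsch solution for a circular hole in an infinite plate under remote uniaxial tension (this standard textbook case is offered here as an illustration, not as one of the paper's tuned problems). Its closed form gives the peak stress exactly, so any finite element estimate can be checked against it to machine precision.

```python
import math

def kirsch_hoop_stress(sigma, a, r, theta):
    """Tangential (hoop) stress at polar position (r, theta) around a hole of
    radius a in an infinite plate, theta measured from the remote load axis."""
    s1 = 0.5 * sigma * (1.0 + (a / r) ** 2)
    s2 = 0.5 * sigma * (1.0 + 3.0 * (a / r) ** 4) * math.cos(2.0 * theta)
    return s1 - s2

sigma, a = 100.0, 1.0
peak = kirsch_hoop_stress(sigma, a, a, math.pi / 2)  # hole edge, 90 deg from load
kt = peak / sigma  # exact stress concentration factor: 3 for this geometry
```

Comparing an FE-computed peak stress against this exact value yields the precise discretization error, which is the role the tuned test problems play for geometries where no handbook solution exists.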
Fire scene reconstruction and determining the fire evolution (i.e., item-to-item ignition events) from the post-fire compartment is an extremely difficult task because of the time-integrated nature of the observed damages. Bayesian methods are ideal for making inferences among hypotheses given observations and are able to naturally incorporate uncertainties. A Bayesian methodology for assigning probabilities to items that may have initiated the fire in a compartment from damage signatures is developed. Exercise of this methodology requires uncertainty quantification of these damage signatures. A simple compartment configuration was used to quantify the uncertainty in damage predictions by the Fire Dynamics Simulator (FDS) and a compartment evolution program, JT-risk, as compared to experimentally derived damage signatures. Surrogate sensors spaced within the compartment use heat flux data collected over the course of the simulations to inform damage models. Experimental repeatability showed up to 4% uncertainty in damage signatures between replicates. Uncertainties for FDS and JT-risk ranged from 12% up to 32% when compared to experimental damages. Separately, the evolution physics of a simple three-fuel-package problem with surrogate damage sensors were characterized in a compartment using experimental data, FDS, and JT-risk predictions. A simple ignition model was used for each of the fuel packages. The Bayesian methodology was exercised using the damage signatures collected, cycling through each of the three fuel packages, and combined with the previously quantified uncertainties. Only reconstruction using experimental data was able to confidently predict the true hypothesis from the three scenarios.
J. Cabrera, R. Moser, and O. Ezekoye, "Experimental and Modeling Uncertainty Considerations for Determining the First Item Ignited in a Compartment Using a Bayesian Method," Journal of Verification, Validation and Uncertainty Quantification, published 2021-10-21. DOI: 10.1115/1.4052796.
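The core Bayesian comparison can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the three hypothesis names, the predicted and observed damage fractions, and the Gaussian likelihood with a single pooled uncertainty are all hypothetical stand-ins for the quantified damage signatures and uncertainties described above.

```python
import math

def gaussian_like(observed, predicted, sigma):
    """Unnormalized Gaussian likelihood of the observed damage signature."""
    return math.exp(-0.5 * ((observed - predicted) / sigma) ** 2)

# Hypothetical predicted damage signatures, one per "first item ignited" hypothesis
predictions = {"couch": 0.62, "trash_can": 0.35, "bookshelf": 0.48}
observed = 0.58  # hypothetical measured damage signature at a surrogate sensor
sigma = 0.08     # pooled model + experiment uncertainty (from the UQ step)

prior = {h: 1.0 / len(predictions) for h in predictions}  # uniform prior
unnorm = {h: prior[h] * gaussian_like(observed, p, sigma)
          for h, p in predictions.items()}
z = sum(unnorm.values())
posterior = {h: v / z for h, v in unnorm.items()}  # Bayes' rule
```

With several sensors, the per-sensor likelihoods multiply before normalization; a larger sigma (as found for the FDS and JT-risk predictions) flattens the posterior, which is why only the experimentally derived signatures yielded a confident identification.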
This paper documents a computational fluid dynamics (CFD) validation benchmark experiment for flow through three parallel, heated channels from one plenum to another. The test section was installed in a facility designed for natural convection benchmark validation experiments. The focus of these experiments was the highly coupled thermal-fluid dynamics that occur between mixing jets in the upper plenum of the wind tunnel. A thermal instability in mixing jets, called thermal striping, can cause damage to structures, which is a concern for High Temperature Gas Reactors. Nine experimental cases were explored by varying the relative channel temperature or blower speed. The boundary conditions for CFD validation were measured and tabulated along with their uncertainties. Geometry measurements of the triple-channel test section were used to make an as-built solid model for use in simulation. The outer tunnel and channel surface temperatures, the pressure drop across the test section, atmospheric conditions, and the inflow into the upper plenum were measured or calculated for the boundary conditions. The air velocity and temperature were measured in the jet mixing region of the upper plenum as system response quantities.
A. W. Parker and Barton L. Smith, "Benchmark Validation Experiment of Plenum-to-Plenum Flow Through Heated Parallel Channels," Journal of Verification, Validation and Uncertainty Quantification, published 2021-10-15. DOI: 10.1115/1.4052763.
G. Hazelrigg and G. Klutke, "Response to 'Closure on the Discussion of "Models, Uncertainty, and the Sandia V&V Challenge Problem"'" (Oberkampf, W. L., and Balch, M. S., ASME J. Verif. Valid. Uncert., 2020, 5(3), p. 035501-1), Journal of Verification, Validation and Uncertainty Quantification, published 2021-09-01. DOI: 10.1115/1.4051591.
Modeling and Simulation (M&S) is seen as a means to mitigate the difficulties associated with the increased system complexity, integration, and cross-coupling effects encountered during development of aircraft subsystems. As a consequence, knowledge of model validity is necessary for making robust and justified design decisions. This paper presents a method for using coverage metrics to formulate an optimal model validation strategy. Three fundamentally different and industrially relevant use-cases are presented. The first use-case entails the successive identification of validation settings, and the second considers the simultaneous identification of n validation settings. The latter of these two use-cases is finally expanded to incorporate a secondary model-based objective into the optimization problem in a third use-case. The presented approach is designed to be scalable and generic to models of industrially relevant complexity. As a result, selecting experiments for validation is done objectively with little manual effort required.
Robert Hällqvist, R. Braun, M. Eek, and P. Krus, "Optimal Selection of Model Validation Experiments: Guided by Coverage," Journal of Verification, Validation and Uncertainty Quantification, published 2021-09-01. DOI: 10.1115/1.4051497.
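The paper's specific coverage metric is not reproduced here; as a stand-in, the sketch below greedily picks validation settings that maximize the minimum distance to those already selected, a simple maximin space-filling proxy for "coverage" of a model's operating envelope. The candidate operating points are hypothetical normalized coordinates, and the greedy loop mirrors the successive-identification use-case in spirit only.

```python
def dist2(p, q):
    """Squared Euclidean distance between two operating points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def greedy_maximin(candidates, n, start=0):
    """Successively select n settings, each farthest (in minimum distance)
    from the settings chosen so far."""
    chosen = [candidates[start]]
    remaining = [c for i, c in enumerate(candidates) if i != start]
    while len(chosen) < n:
        best = max(remaining, key=lambda c: min(dist2(c, s) for s in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical normalized operating points (e.g., scaled altitude/Mach pairs)
candidates = [(0.0, 0.0), (0.2, 0.1), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
picks = greedy_maximin(candidates, 3)
```

Swapping dist2 for a model-based coverage measure, or adding a secondary objective to the max(...) key, corresponds to the paper's second and third use-cases respectively.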
G. Banyay, Clarence Worrell, S. E. Sidener, Joshua S. Kaizer
We present a framework for establishing credibility of a machine learning (ML) model used to predict a key process control variable setting to maximize product quality in a component manufacturing application. Our model coupled a purely data-based ML model with a physics-based adjustment that encoded subject matter expertise of the physical process. Establishing credibility of the resulting model provided the basis for eliminating a costly intermediate testing process that was previously used to determine the control variable setting.
"Credibility Assessment of Machine Learning in a Manufacturing Process Application," Journal of Verification, Validation and Uncertainty Quantification, published 2021-09-01. DOI: 10.1115/1.4051717.
Chanyoung Park, Samaun Nili, Justin T. Mathew, F. Ouellet, R. Koneru, N. Kim, S. Balachandar, R. Haftka
Uncertainty quantification (UQ) is an important step in the verification and validation of scientific computing. Validation can be inconclusive when uncertainties are larger than acceptable ranges for both simulation and experiment. Therefore, uncertainty reduction (UR) is important to achieve meaningful validation. A unique approach in this paper is to separate model error from uncertainty such that UR can reveal the model error. This paper aims to share lessons learned from UQ and UR of a horizontal shock tube simulation, whose goal is to validate the particle drag force model for compressible multiphase flow. First, simulation UQ revealed an inconsistency in simulation predictions due to the numerical flux scheme, which was clearly shown using a parametric design of experiments. By improving the numerical flux scheme, the uncertainty due to inconsistency was removed, while increasing the overall prediction error. Second, the mismatch between the geometry of the experiments and the simplified 1D simulation model was identified as a lack of knowledge. After modifying simulation conditions and experiments, it turned out that the error due to the mismatch was small, which was unexpected based on expert opinions. Finally, the uncertainty in the initial volume fraction of particles was reduced based on rigorous UQ. All these UR measures worked together to reveal the hidden modeling error in the simulation predictions, which can lead to a model improvement in the future. We summarized the lessons learned from this exercise in terms of empty success, useful failure, and deceptive success.
"Uncertainty Reduction for Model Error Detection in Multiphase Shock Tube Simulation," Journal of Verification, Validation and Uncertainty Quantification, published 2021-06-06. DOI: 10.1115/1.4051407.