The determination of the transverse tip deflection of an elastic, hollow, tapered, cantilever box beam under a uniform loading applied over half the length of the beam, presented in the V&V10.1 standard, is used to compare the application of the validation procedures of the V&V10.1 and V&V20 standards. Both procedures aim to estimate the modeling error of the mathematical/computational model used in the simulations, taking into account the variability of the modulus of elasticity of the beam material and the rotational flexibility at the clamped end of the beam. The paper discusses the four steps of the two error quantification procedures: (1) characterization of the problem, including all assumptions and approximations made to obtain the experimental and simulation data; (2) selection of the validation variable; (3) determination of the quantities required by the validation metrics in the two procedures; and (4) the outcome of the two validation procedures and its discussion. The paper also discusses the inclusion of experimental, input, and numerical uncertainties (assumed or demonstrated to be negligible in V&V10.1) in the two validation approaches. This simple exercise shows that different choices are made in the two alternative approaches, which lead to different ways of characterizing the modeling error. The topics of accuracy requirements and validation comparisons (model acceptance/rejection) for engineering applications are not addressed in this paper.
{"title":"Comparison of the V&V10.1 and V&V20 Modeling Error Quantification Procedures for the V&V10.1 Example","authors":"L. Eça, K. Dowding, D. Moorcroft, U. Ghia","doi":"10.1115/1.4053881","DOIUrl":"https://doi.org/10.1115/1.4053881","url":null,"abstract":"\u0000 The determination of the transverse tip deflection of an elastic, hollow, tapered, cantilever, box beam under a uniform loading applied over half the length of the beam presented in the V&V10.1 standard is used to compare the application of the validation procedures presented in the V&V10.1 and V&V20 standards. Both procedures aim to estimate the modeling error of the mathematical/computational model used in the simulations taking into account the variability of the modulus of elasticity of the material used in the beam and the rotational flexibility at the clamped end of the beam.\u0000 The paper discusses the four steps of the two error quantification procedures: 1- characterization of the problem including all the assumptions and approximations made to obtain the experimental and simulation data; 2-selection of the validation variable; 3- determination of the different quantities required by the validation metrics in the two error quantification procedures; 4- outcome of the two validation procedures and its discussion. The paper also discusses the inclusion of experimental, input and numerical uncertainties (assumed or demonstrated to be negligible in V&V10.1) in the two validation approaches. This simple exercise shows that different choices are made in the two alternative approaches, which lead to different ways of characterizing the modeling error. The topics of accuracy requirements and validation comparisons (model acceptance/rejection) for engineering applications are not addressed in this paper.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2022-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44039917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Averages are measured in many circumstances for diagnostic, predictive, or surveillance purposes. Examples include average stress along a beam, average speed along a section of highway, average alcohol consumption per month, average GDP over a large region, and a student's average grade over four years of study. However, the average value of a variable reveals nothing about fluctuations of the variable along the path that is averaged. Extremes – stress concentrations, speeding violations, binge drinking, poverty and wealth, intellectual incompetence in particular topics – may be more significant than the average. This paper explores the choice of design variables and performance requirements to achieve robustness against uncertainty when interpreting an average, in the face of uncertain fluctuations of the averaged variable. Extremes are not observed, but robustness against those extremes enhances the ability to interpret the observed average in terms of the extremes. The opportuneness from favorable uncertainty is also explored. We examine the design of a cantilever beam with uncertain loads. We derive four generic propositions, based on info-gap decision theory, that establish necessary and sufficient conditions for robust or opportune dominance, and for sympathetic relations between robustness to pernicious uncertainty and opportuneness from propitious uncertainty.
{"title":"Inferring Extreme Values from Measured Averages Under Deep Uncertainty","authors":"Y. Ben-Haim","doi":"10.1115/1.4053411","DOIUrl":"https://doi.org/10.1115/1.4053411","url":null,"abstract":"\u0000 Averages are measured in many circumstances for diagnostic, predictive, or surveillance purposes. Examples include: average stress along a beam, average speed along a section of highway, average alcohol consumption per month, average GDP over a large region, a student's average grade over 4 years of study. However, the average value of a variable reveals nothing about fluctuations of the variable along the path that is averaged. Extremes – stress concentrations, speeding violations, binge drinking, poverty and wealth, intellectual incompetence in particular topics – may be more significant than the average. This paper explores the choice of design variables and performance requirements to achieve robustness against uncertainty when interpreting an average, in face of uncertain fluctuations of the averaged variable. Extremes are not observed, but robustness against those extremes enhances the ability to interpret the observed average in terms of the extremes. The opportuneness from favorable uncertainty is also explored. We examine the design of a cantilever beam with uncertain loads. We derive 4 generic propositions, based on info-gap decision theory, that establish necessary and sufficient conditions for robust or opportune dominance, and for sympathetic relations between robustness to pernicious uncertainty and opportuneness from propitious uncertainty.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2022-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44561673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solution verification is crucial for establishing the reliability of simulations. A central challenge is to estimate the discretization error accurately and reliably. Many approaches to this estimation are based on the observed order of accuracy; however, such approaches may fail when the numerical solutions lie outside the asymptotic range. Here we propose a grid refinement method, the Prescribed Orders Expansion Method (POEM), which adopts constant orders given by the user. Through an iterative procedure, the user is guaranteed to obtain the dominant orders of the discretization error. The user can also compare the corresponding terms to quantify the degree of asymptotic convergence of the numerical solutions. These features ensure that the estimation of the discretization error is accurate and reliable. Moreover, the implementation of POEM is the same for any dimension and refinement path. We demonstrate these capabilities using advection and diffusion problems and standard refinement paths. The computational cost of using POEM is lower if the refinement ratio is larger; however, the number of shared grid points where POEM applies also decreases, causing greater uncertainty in the global estimates of the discretization error. We find that the proportion of shared grid points is maximized when the refinement ratios take a certain fractional form. Furthermore, we develop the Method of Interpolating Differences between Approximate Solutions (MIDAS) for creating shared grid points in the domain. These approaches allow users of POEM to obtain a global estimate of the discretization error with lower uncertainty at a reduced computational cost.
{"title":"Estimating Discretization Error with Preset Orders of Accuracy and Fractional Refinement Ratios","authors":"S. C. Y. Lo","doi":"10.1115/1.4056491","DOIUrl":"https://doi.org/10.1115/1.4056491","url":null,"abstract":"\u0000 Solution verification is crucial for establishing the reliability of simulations. A central challenge is to estimate the discretization error accurately and reliably. Many approaches to this estimation are based on the observed order of accuracy; however, it may fail when the numerical solutions lie outside the asymptotic range. Here we propose a grid refinement method which adopts constant orders given by the user, called the Prescribed Orders Expansion Method (POEM). Through an iterative procedure, the user is guaranteed to obtain the dominant orders of the discretization error. The user can also compare the corresponding terms to quantify the degree of asymptotic convergence of the numerical solutions. These features ensure that the estimation of the discretization error is accurate and reliable. Moreover, the implementation of POEM is the same for any dimensions and refinement paths. We demonstrate these capabilities using some advection and diffusion problems and standard refinement paths. The computational cost of using POEM is lower if the refinement ratio is larger; however, the number of shared grid points where POEM applies also decreases, causing greater uncertainty in the global estimates of the discretization error. We find that the proportion of shared grid points is maximized when the refinement ratios are in a certain form of fractions. Furthermore, we develop the Method of Interpolating Differences between Approximate Solutions (MIDAS) for creating shared grid points in the domain. These approaches allow users of POEM to obtain a global estimate of the discretization error of lower uncertainty at a reduced computational cost.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"abs/2201.00264 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"63503457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A specialized hydrodynamic simulation code has been developed to simulate one-dimensional unsteady problems involving the detonation and deflagration of high explosives. To model all the relevant physical processes in these problems, the code must simulate compressible hydrodynamics, unsteady thermal conduction, and chemical reactions with complex rate laws. Several verification exercises are presented which test the implementation of these capabilities. The code also requires models for physical processes such as equations of state and conductivity for pure materials and mixtures, as well as rate laws for chemical reactions; additional verification tests are required to ensure these models are implemented correctly. Though this code is limited in the types of problems it can simulate, its computationally efficient formulation allows it to be used in calibration studies for reactive burn models for high explosives.
{"title":"Verification of a Specialized Hydrodynamic Simulation Code for Modeling Deflagration and Detonation of High Explosives","authors":"Stephen A. Andrews, T. Aslam","doi":"10.1115/1.4053340","DOIUrl":"https://doi.org/10.1115/1.4053340","url":null,"abstract":"\u0000 A specialized hydrodynamic simulation code has been developed to simulate one-dimensional unsteady problems involving the detonation and deflagration of high explosives. To model all the relevant physical processes in these problems, a code is required to simulate compressible hydrodynamics, unsteady thermal conduction and chemical reactions with complex rate laws. Several verification exercises are presented which test the implementation of these capabilities. The code also requires models for physics processes such as equations of state and conductivity for pure materials and mixtures as well as rate laws for chemical reactions. Additional verification tests are required to ensure these models are implemented correctly. Though this code is limited in the types of problems it can simulate, its computationally efficient formulation allow it to be used in calibration studies for reactive burn models for high explosives.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41801975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We summarise the results of a computational study of Uncertainty Quantification (UQ) in a benchmark turbulent burner flame simulation. UQ analysis of this simulation makes it possible to analyse the convergence performance of one of the most widely used uncertainty propagation techniques, Polynomial Chaos Expansion (PCE), at varying levels of system smoothness. This is possible because, in the burner flame simulations, the smoothness of the time-dependent temperature, which is the study's quantity of interest (QoI), is found to evolve with the flame development state. This analysis is important because PCE cannot accurately surrogate non-smooth QoIs and thus cannot perform convergent UQ for them. While this restriction is known and accounted for, it has not been established whether there is a quantifiable scaling relationship between PCE's convergence metrics and the level of the QoI's smoothness. We find that the level of QoI smoothness can be quantified by its standard deviation, which allows the effect of the QoI's smoothness on PCE's convergence performance to be observed. For our flow scenario, there exists a power-law relationship between a comparative parameter, defined to measure PCE's convergence performance relative to Monte Carlo sampling, and the QoI's standard deviation, which allows a more informed choice of uncertainty propagation technique.
{"title":"Uncertainty Quantification of Time-Dependent Quantities in a System with Adjustable Level of Smoothness","authors":"Marks Legkovskis, P. Thomas, M. Auinger","doi":"10.1115/1.4053161","DOIUrl":"https://doi.org/10.1115/1.4053161","url":null,"abstract":"\u0000 We summarise the results of a computational study involved with Uncertainty Quantification (UQ) in a benchmark turbulent burner flame simulation. UQ analysis of this simulation enables one to analyse the convergence performance of one of the most widely-used uncertainty propagation techniques, Polynomial Chaos Expansion (PCE) at varying levels of system smoothness. This is possible because in the burner flame simulations, the smoothness of the time-dependent temperature, which is the study's QoI is found to evolve with the flame development state. This analysis is deemed important as it is known that PCE cannot accurately surrogate non-smooth QoIs and thus perform convergent UQ. While this restriction is known and gets accounted for, there is no understanding whether there is a quantifiable scaling relationship between the PCE's convergence metrics and the level of QoI's smoothness. It is found that the level of QoI-smoothness can be quantified by its standard deviation allowing to observe the effect of QoI's level of smoothness on the PCE's convergence performance. It is found that for our flow scenario, there exists a power-law relationship between a comparative parameter, defined to measure the PCE's convergence performance relative to Monte Carlo sampling, and the QoI's standard deviation, which allows us to make a more weighted decision on the choice of the uncertainty propagation technique.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45278630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Campione, J. A. Stephens, Nevin Martin, Aubrey Eckert, L. Warne, Gabriel Huerta, R. Pfeiffer, Adam Jones
High-quality-factor resonant cavities are challenging structures to model in electromagnetics owing to their large sensitivity to minute parameter changes. Uncertainty quantification (UQ) strategies are therefore pivotal to understanding the key parameters affecting the cavity response. We discuss here some of these strategies, focusing on the shielding effectiveness (SE) properties of a canonical slotted cylindrical cavity, which will be used to develop credibility evidence in support of predictions made using computational simulations for this application.
{"title":"Developing Uncertainty Quantification Strategies in Electromagnetic Problems Involving Highly Resonant Cavities","authors":"S. Campione, J. A. Stephens, Nevin Martin, Aubrey Eckert, L. Warne, Gabriel Huerta, R. Pfeiffer, Adam Jones","doi":"10.1115/1.4051906","DOIUrl":"https://doi.org/10.1115/1.4051906","url":null,"abstract":"\u0000 High-quality factor resonant cavities are challenging structures to model in electromagnetics owing to their large sensitivity to minute parameter changes. Therefore, uncertainty quantification (UQ) strategies are pivotal to understanding key parameters affecting the cavity response. We discuss here some of these strategies focusing on shielding effectiveness (SE) properties of a canonical slotted cylindrical cavity that will be used to develop credibility evidence in support of predictions made using computational simulations for this application.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47861455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ya-tsʻêng d. Chao, Nicholas C. Lopes, Mark A. Ricklick, S. Boetcher
Validating turbulence models for cooling supercritical carbon dioxide (sCO2) in a horizontal pipe is challenging due to the lack of experimental data with spatially resolved local temperature measurements. Although many variables may cause discrepancies between numerical and experimental data, this study focuses on how the choice of reference temperatures (both the wall reference temperature and the fluid bulk reference temperature) used in calculating the heat transfer coefficient influences turbulence-model validation results. While it may seem straightforward to simply use the same parameters as the experimental setup, this has not been observed in practice. In this work, numerical simulations are performed for cooling sCO2 in a horizontal pipe for p = 8 MPa, d = 6 mm, G = 200 and 400 kg/(m²s), and qw = 12, 24, and 33 kW/m². Local and average heat transfer coefficients with different reference temperatures, found to be frequently used in the literature, are presented and compared with commonly used experimental data. It was found that the choice of reference temperatures has a significant influence on the results of the numerical validation. Historically, the higher-heat-flux cases have been more difficult to validate, a difficulty theorized to arise from using reference temperatures that differ from those of the experiment; however, good agreement was found here using the reference temperatures that most closely matched the experiment. This not only highlights the need for careful selection of reference temperatures in simulations, but also the importance of clearly defining the reference temperature employed when reporting experimental results.
{"title":"Effect of the Heat Transfer Coefficient Reference Temperatures on Validating Numerical Models of Supercritical CO2","authors":"Ya-tsʻêng d. Chao, Nicholas C. Lopes, Mark A. Ricklick, S. Boetcher","doi":"10.1115/1.4051637","DOIUrl":"https://doi.org/10.1115/1.4051637","url":null,"abstract":"Validating turbulence models for cooling supercritical carbon dioxide (sCO2) in a horizontal pipe is challenging due to the lack of experimental data with spatially resolved local temperature measurements. Although many variables may be present to cause discrepancies between numerical and experimental data, this study focuses on how the choice of reference temperatures (both wall reference temperature and fluid bulk reference temperature) when calculating the heat transfer coefficient influences turbulence-model validation results. While it may seem straightforward to simply use the same parameters as the experimental setup, this has not been observed in practice. In this work, numerical simulations are performed for cooling sCO2 in a horizontal pipe for p = 8 MPa, d = 6 mm, G = 200, and 400 kg/(m2s), and qw = 12, 24, and 33 kW/m2. Local and average heat transfer coefficients with different reference temperatures, found to be frequently used in the literature, are presented and compared with commonly used experimental data. It was found that the choice of reference temperatures has a significant influence on the results of the numerical validation. Historically, the higher heat flux cases have been more difficult to validate, theorized due to using reference temperatures differing from the experiment; however, good agreement was found here using the reference temperatures that most closely matched the experiment. This not only highlights the need for careful selection of reference temperatures in simulations, but also the importance of clearly defining the reference temperature employed when reporting experimental results.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47948235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Frankel, E. Wagman, R. Keedy, B. Houchens, Sarah N. Scott
Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation, characterized using thermogravimetric analysis (TGA). This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters, even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process, and the simplification induces model-form errors that the fitting process cannot capture. In this work, we propose a methodology for calibrating decomposition models to TGA data that accounts for uncertainty in the model form and the experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show good overlap between the model predictions and the TGA data, with uncertainty bounds that capture deviations of the model from the data. The calibrated parameter distributions are also presented; they may be used in forward propagation of uncertainty in models that leverage this material.
{"title":"Embedded-Error Bayesian Calibration of Thermal Decomposition of Organic Materials","authors":"A. Frankel, E. Wagman, R. Keedy, B. Houchens, Sarah N. Scott","doi":"10.1115/1.4051638","DOIUrl":"https://doi.org/10.1115/1.4051638","url":null,"abstract":"\u0000 Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation using thermogravimetric analysis (TGA). This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process. The simplification induces model-form errors that the fitting process cannot capture. In this work, we propose a methodology for calibrating decomposition models to TGA data that accounts for uncertainty in the model-form and experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show a good overlap between the model predictions and TGA data. Uncertainty bounds capture deviations of the model from the data. The calibrated parameter distributions are also presented. The distributions may be used in forward propagation of uncertainty in models that leverage this material.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49088684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Property variations in a structure strongly impact its macroscopic mechanical performance, as regions with lower strength will be prone to damage initiation or acceleration. Consideration of the variability in material properties is therefore critical for high-resolution simulations of damage initiation and propagation. While recent progressive damage analyses consider randomness in property fields, accurately quantifying the uncertainty in damage measures remains computationally expensive: stochastic damage analyses require extensive sampling of random property fields and numerous replications of the underlying nonlinear deterministic simulations. This paper demonstrates that a Quasi-Monte Carlo (QMC) method, which uses a multi-dimensional low-discrepancy Sobol sequence, is a computationally economical way to obtain the means and standard deviations of cracks evolving in composites. An Extended Finite Element Method (XFEM) model with spatially random strength fields simulates the damage initiation and evolution in a model composite. We compared the number of simulations required by Monte Carlo (MC) and QMC techniques to measure the influence of input variability on the mean crack length in an open-hole angle-ply tensile test. We conclude that the low-discrepancy QMC technique converges substantially faster than traditional MC methods.
{"title":"Quantifying Uncertainty of Damage in Composites Using a Quasi Monte Carlo Technique","authors":"Emil Pitz, K. Pochiraju","doi":"10.1115/1.4052895","DOIUrl":"https://doi.org/10.1115/1.4052895","url":null,"abstract":"\u0000 Property variations in a structure strongly impact the macroscopic mechanical performance as regions with lower strength will be prone to damage initiation or acceleration. Consideration of the variability in material property is critical for high-resolution simulations of damage initiation and propagation. While the recent progressive damage analyses consider randomness in property fields, accurately quantifying the uncertainty in damage measures remains computationally expensive. Stochastic damage analyses require extensive sampling of random property fields and numerous replications of the underlying non-linear deterministic simulations. This paper demonstrates that a Quasi Monte Carlo (QMC) method, which uses a multi-dimensional low discrepancy Sobol sequence, is a computationally economical way to obtain the mean and standard deviations in cracks evolving in composites. An Extended Finite Element Method (XFEM) method with spatially random strength fields simulates the damage initiation and evolution in a model composite. We compared the number of simulations required for Monte Carlo (MC) and QMC techniques to measure the influence of input variability on the mean crack-length in an open-hole angle-ply tensile test. We conclude that the low discrepancy sampling and QMC technique converges substantially faster than traditional MC methods.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45802103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When stress concentration factors are not available in handbooks, finite element analysis has become the predominant method for determining their values. For such determinations, there is a need to know whether they have sufficient accuracy. Tuned Test Problems can provide a way of assessing the accuracy of stress concentration factors found with finite elements. Here we offer a means of constructing such test problems for stress concentrations within boundaries that have locally constant radii of curvature. These problems are tuned to their originating applications by sharing the same global geometries and having slightly higher peak stresses. They also have exact solutions, thereby enabling a precise determination of the errors incurred in their finite element analysis.
{"title":"On the Generation of Tuned Test Problems for Stress Concentrations","authors":"G. Sinclair, A. Kardak","doi":"10.1115/1.4052833","DOIUrl":"https://doi.org/10.1115/1.4052833","url":null,"abstract":"\u0000 When stress concentration factors are not available in handbooks, finite element analysis has become the predominant method for determining their values. For such determinations, there is a need to know if they have sufficient accuracy. Tuned Test Problems can provide a way of assessing the accuracy of stress concentration factors found with finite elements. Here we offer a means of constructing such test problems for stress concentrations within boundaries that have local constant radii of curvature. These problems are tuned to their originating applications by sharing the same global geometries and having slightly higher peak stresses. They also have exact solutions, thereby enabling a precise determination of the errors incurred in their finite element analysis.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2021-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43792154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}