Numerical Errors in Unsteady Flow Simulations. L. Eça, G. Vaz, S. Toxopeus, M. Hoekstra. Journal of Verification, Validation and Uncertainty Quantification, 2019-06-01. DOI: 10.1115/1.4043975

This article discusses numerical errors in unsteady flow simulations, which may include round-off, statistical, iterative, and time and space discretization errors. The estimation of iterative and discretization errors and the influence of the initial condition on unsteady flows that become periodic are discussed. In the latter case, the goal is to determine the simulation time required to reduce the influence of the initial condition to negligible levels. Two one-dimensional, unsteady manufactured solutions are used to illustrate the interference between the different types of numerical errors. One solution is periodic and the other includes a transient region before it reaches a steady state. The results show that for a selected grid and time-step, statistical convergence of the periodic solution may be achieved at significantly lower error levels than those of the iterative and discretization errors. However, statistical convergence deteriorates when iterative convergence criteria become less demanding, grids are refined, and the Courant number is increased. For statistically converged solutions of the periodic flow and for the transient solution, the iterative convergence criteria required to obtain a negligible influence of the iterative error when compared to the discretization error are stricter than typical values found in the open literature. More demanding criteria are required when the grid is refined and/or the Courant number is increased. When the numerical error is dominated by the iterative error, it is pointless to refine the grid and/or reduce the time-step. For solutions with a numerical error dominated by the discretization error, three different techniques are applied to illustrate how the discretization uncertainty can be estimated using grid/time refinement studies: three data points at a fixed Courant number; five data points involving three time-steps for the same grid and three grids for the same time-step; and five data points including at least two grids and two time-steps. The latter two techniques distinguish between space and time convergence, whereas the first combines the effect of the two discretization errors.
Prediction of Transient Statistical Energy Response for Two-Subsystem Models Considering Interval Uncertainty. Chen Qiang, Q. Fei, Shaoqing Wu, Yanbin Li. Journal of Verification, Validation and Uncertainty Quantification, 2019-06-01. DOI: 10.1115/1.4045201

Transient response analysis is important for the design and evaluation of uncertain engineering systems under impact excitations. In this paper, statistical energy analysis (SEA) is developed to evaluate the high-frequency transient energy response of two-subsystem models considering interval uncertainties. Affine arithmetic (AA) and a subinterval technique are introduced into SEA to improve the computational accuracy. Numerical simulations on a coupled-plate and a plate-cavity system considering interval uncertainties are performed, and the analysis precision of the proposed approach is validated against the Monte Carlo (MC) method. The results show that the analysis precision of the proposed method decreases with increasing uncertainty level of the parameters, and that the computational accuracy can be significantly improved by employing AA and the subinterval technique.
An Adaptive Response Surface Methodology Based on Active Subspaces for Mixed Random and Interval Uncertainties. Xingzhi Hu, Yanhui Duan, Ruili Wang, Xiao Liang, Jiangtao Chen. Journal of Verification, Validation and Uncertainty Quantification, 2019-06-01. DOI: 10.1115/1.4045200

The popular use of response surface methodology (RSM) accelerates the solution of parameter identification and response analysis problems. However, accurate RSM models subject to aleatory and epistemic uncertainties are still challenging to construct, especially for multidimensional inputs, which are widespread in real-world problems. In this study, an adaptive interval response surface methodology (AIRSM) based on extended active subspaces is proposed for mixed random and interval uncertainties. Based on the idea of subspace dimension reduction, extended active subspaces are defined for mixed uncertainties, and an interval active variable representation is derived for the construction of the AIRSM. A weighted response surface strategy is introduced and tested for predicting the accurate boundary. Moreover, an interval dynamic correlation index is defined, and significance checks and cross validation are reformulated in the active subspace to evaluate the AIRSM. The effectiveness of the AIRSM is demonstrated on two test examples: a three-dimensional nonlinear function and a speed reducer design. Both possess a dominant one-dimensional active subspace with small estimation error, and the accuracy of the AIRSM is verified by comparison with full-dimensional Monte Carlo simulations, providing a potential template for tackling high-dimensional problems involving mixed aleatory and interval uncertainties.
A Bayesian Inference-Based Approach to Empirical Training of Strongly Coupled Constituent Models. G. Flynn, Evan Chodora, S. Atamturktur, D. Brown. Journal of Verification, Validation and Uncertainty Quantification, 2019-06-01. DOI: 10.1115/1.4044804

Partitioned analysis enables the numerical representation of complex systems through the coupling of smaller, simpler constituent models, each representing a different phenomenon, domain, scale, or functional component. Through this coupling, inputs and outputs of the constituent models are exchanged iteratively until a converged solution satisfies all constituents. In practical applications, numerical models may not be available for all constituents, owing to a lack of understanding of a constituent's behavior and the inability to conduct separate-effect experiments that investigate that behavior in isolation. In such cases, empirical representations of the missing constituents can be inferred from integral-effect experiments, which capture the behavior of the system as a whole. Herein, we propose a Bayesian inference-based approach to estimate missing constituent models from available integral-effect experiments. The significance of this approach is demonstrated through the inference of a material plasticity constituent integrated with a finite element model to enable efficient multiscale elasto-plastic simulations.
Prediction of Structural Reliability Through an Alternative Variability-Based Methodology. K. Haas. Journal of Verification, Validation and Uncertainty Quantification, 2019-05-15. DOI: 10.1115/vvs2019-5150

The often-competing goals of optimization and reliability design amplify the importance of verification, validation, and uncertainty quantification (VVUQ) in achieving sufficient reliability. Evaluating a system's reliability presents practical challenges given the large number of permutations of conditions that may exist over the system's operational lifecycle. Uncertainty and variability sources are not always well defined and are sometimes impossible to predict, rendering traditional uncertainty quantification (UQ) techniques insufficient. A variability-based method is proposed to bridge this gap in state-of-the-art UQ practice where sources of uncertainty and variability cannot be readily quantified. At the point of incipient structural failure, the structural response becomes highly variable and sensitive to minor perturbations in conditions. This characteristic provides a powerful opportunity to determine the critical failure conditions and to assess the resulting structural reliability through an alternative variability-based method. Nonhierarchical clustering, proximity analysis, and stability indicators are combined to identify the loci of conditions that lead to a rapid evolution of the response toward a failure condition. The method's utility is demonstrated through its application to a simple nonlinear dynamic single-degree-of-freedom structural model. In addition to the L2 norm, a new stability indicator is proposed, the "instability index," which is a function of both the L2 norm and the calculated proximity to adjacent loci of conditions with differing structural response. The instability index provides a rapidly computed quantitative measure of the relative stability of the system for all possible loci of conditions.
Data Analysis and Model Validation of Natural Gas Transmission Pipeline With Compressor Station. David Cheng. Journal of Verification, Validation and Uncertainty Quantification, 2019-05-15. DOI: 10.1115/1.4045386

Data from the distributed control system (DCS) or supervisory control and data acquisition (SCADA) system provide useful information critical to the evaluation of the performance and transportation efficiency of a gas pipeline system with compressor stations. The pipeline performance data provide correction factors for compressors as part of the operation optimization of natural gas transmission pipelines. This paper presents methods, procedures, and an example of model validation-based performance analysis of a gas pipeline using actual system operational data. An analysis approach based on statistical methods is demonstrated with actual DCS gas pipeline measurement data. These methods offer practical ways to validate the pipeline hydraulics model using the DCS data. The validated models are then used as performance analysis tools in assessing the pipeline hydraulic parameters that influence the pressure drop in the pipeline, such as corrosion (inside-diameter change), roughness changes, or basic sediment and water deposition.
Analytic Solutions as a Tool for Verification and Validation of a Multiphysics Model. I. Tregillis. Journal of Verification, Validation and Uncertainty Quantification, 2019-05-15. DOI: 10.2172/1542799

Computational physicists are commonly faced with the task of resolving discrepancies between the predictions of a complex, integrated multiphysics numerical simulation and corresponding experimental datasets. Such efforts commonly require a slow iterative procedure. However, a different approach is available in cases where the multiphysics system of interest admits closed-form analytic solutions. In this situation, the ambiguity is conveniently broken into separate consideration of theory–simulation comparisons (issues of verification) and theory–data comparisons (issues of validation). We demonstrate this methodology via application to the specific example of a fluid-instability-based ejecta source model under development at Los Alamos National Laboratory and implemented in FLAG, a Los Alamos continuum mechanics code. The formalism is conducted in the forward sense (i.e., from source to measurement) and enables us to compute, purely analytically, time-dependent piezoelectric ejecta mass measurements for a specific class of explosively driven metal coupon experiments. We incorporate published measurement uncertainties on relevant experimental parameters to estimate a time-dependent uncertainty on these analytic predictions. This motivates the introduction of a "compatibility score" metric, our primary tool for quantitative analysis of the RMI + SSVD model. Finally, we derive a modification to the model, based on boundary condition considerations, that substantially improves its predictions.
High-Resolution RANS Simulations of Flow Past a Surface-Mounted Cube Using Eddy-Viscosity Closure Models. M. Goldbach, M. Uddin. Journal of Verification, Validation and Uncertainty Quantification, 2019-03-01. DOI: 10.1115/1.4044695

While Reynolds-averaged simulations have found success in the evaluation of many canonical shear flows and moderately separated flows, their application to highly separated flows has shown notable deficiencies. This study investigates these deficiencies in the eddy-viscosity formulation of four commonly used turbulence models under separated flow, with the aim of aiding the improved formulation of such models. Analyses are performed on the flow field around a wall-mounted cube (WMC) at a Reynolds number of 40,000 based on the cube height, h, and freestream velocity, U0. While a common occurrence in industrial applications, this type of flow exhibits a complex structure with a large separated wake region, high anisotropy, and multiple vortex structures. In addition, interactions between vortices developed off different faces of the cube significantly alter the overall flow characteristics, posing a significant challenge for commonly used industrial turbulence models. Comparison of mean flow characteristics shows remarkable agreement between experimental values and the turbulence models capable of predicting transitional flow. Evaluation of turbulence parameters shows that the transitional models generally underestimate the Reynolds stress, while the fully turbulent models overestimate it, resulting in completely disparate representations of the mean flow structures between the two classes of models (transitional and fully turbulent).
Validation of Notch Sensitivity Factors. B. Szabó, R. Actis, D. Rusk. Journal of Verification, Validation and Uncertainty Quantification, 2019-03-01. DOI: 10.1115/1.4044236

An end-to-end example of the application of the procedures of verification, validation, and uncertainty quantification (VVUQ) is presented with reference to mathematical models formulated for the prediction of fatigue failure in the high cycle range. A validation metric based on the log-likelihood function is defined. It is shown that the functional forms of the notch sensitivity factors proposed by Neuber and Peterson cannot be validated but a revised form can be. Calibration and validation are based on published records of fatigue tests performed on notch-free and notched test coupons fabricated from aluminum alloy and alloy steel sheets.
A Systematic Validation of a Francis Turbine Under Design and Off-Design Loads. C. Trivedi. Journal of Verification, Validation and Uncertainty Quantification, 2019-03-01. DOI: 10.1115/1.4043965

Computational fluid dynamics (CFD) techniques have played a significant role in improving the efficiency of hydraulic turbines. To achieve safe and reliable designs, numerical results should be trustworthy and free from suspicion, and proper verification and validation (V&V) are vital to obtain credible results. In this work, we first present verification of a numerical model of a Francis turbine, using different approaches to ensure minimum discretization errors and proper convergence. We then present a detailed validation of the numerical model. Two operating conditions, the best efficiency point (BEP, 100% load) and part load (67.2% load), are selected for the study. Turbine head, power, efficiency, and local pressure are used for validation, and the pressure data are validated in the time and frequency domains at sensitive locations in the turbine. We also investigated different boundary conditions, turbulence intensities, and time-steps. The results showed that, when assessing the convergence history, convergence of the local pressure/velocity in the turbine is important in addition to the mass and momentum parameters. Furthermore, the error in hydraulic efficiency can be misleading, and effort should be made to determine the errors in torque, head, and flow rate separately. The total error is 9.82% at critical locations in the turbine. The paper describes a customized V&V approach for turbines that will help users determine the total error and establish the credibility of numerical models of hydraulic turbines.