Patient-specific computational modeling is increasingly used to assist with visualization, planning, and execution of medical treatments. This trend is placing more reliance on medical imaging to provide accurate representations of anatomical structures. Digital image analysis is used to extract anatomical data for use in clinical assessment/planning. However, the presence of image artifacts, whether due to interactions between the physical object and the scanning modality or the scanning process, can degrade image accuracy. The process of extracting anatomical structures from the medical images introduces additional sources of variability, e.g., when thresholding or when eroding along apparent edges of biological structures. An estimate of the uncertainty associated with extracting anatomical data from medical images would therefore assist with assessing the reliability of patient-specific treatment plans. To this end, two image datasets were developed and analyzed using standard image analysis procedures. The first dataset was developed by performing a "virtual voxelization" of a CAD model of a sphere, representing the idealized scenario of no error in the image acquisition and reconstruction algorithms (i.e., a perfect scan). The second dataset was acquired by scanning three spherical balls using a laboratory-grade CT scanner. For the idealized sphere, the error in sphere diameter was less than or equal to 2% if 5 or more voxels were present across the diameter. The measurement error degraded to approximately 4% for a similar degree of voxelization of the physical phantom. The adaptation of established thresholding procedures to improve segmentation accuracy was also investigated.
{"title":"Towards Estimating the Uncertainty Associated with Three-Dimensional Geometry Reconstructed from Medical Image Data.","authors":"M. Horner, Stephen M. Luke, K. Genc, T. Pietila, R. Cotton, Benjamin Ache, Z. Levine, Kevin Townsend","doi":"10.1115/1.4045487","DOIUrl":"https://doi.org/10.1115/1.4045487","url":null,"abstract":"Patient-specific computational modeling is increasingly used to assist with visualization, planning, and execution of medical treatments. This trend is placing more reliance on medical imaging to provide accurate representations of anatomical structures. Digital image analysis is used to extract anatomical data for use in clinical assessment/planning. However, the presence of image artifacts, whether due to interactions between the physical object and the scanning modality or the scanning process, can degrade image accuracy. The process of extracting anatomical structures from the medical images introduces additional sources of variability, e.g., when thresholding or when eroding along apparent edges of biological structures. An estimate of the uncertainty associated with extracting anatomical data from medical images would therefore assist with assessing the reliability of patient-specific treatment plans. To this end, two image datasets were developed and analyzed using standard image analysis procedures. The first dataset was developed by performing a \"virtual voxelization\" of a CAD model of a sphere, representing the idealized scenario of no error in the image acquisition and reconstruction algorithms (i.e., a perfect scan). The second dataset was acquired by scanning three spherical balls using a laboratory-grade CT scanner. For the idealized sphere, the error in sphere diameter was less than or equal to 2% if 5 or more voxels were present across the diameter. The measurement error degraded to approximately 4% for a similar degree of voxelization of the physical phantom. The adaptation of established thresholding procedures to improve segmentation accuracy was also investigated.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"155 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77614129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soil drilling has become one of the most important topics of interest to researchers due to its many applications in engineering systems. Auger drilling is an ideal method in many applications such as pile foundation engineering, geological sampling tests, and space sciences. However, the dominant factor in the determination of drilling parameters is operational experience. Therefore, the soil-drilling process using an auger is studied to obtain the controlling parameters and to optimize them to improve drilling performance, which enables proper selection of a machine for a required job. One of the main challenges researchers face when using modeling techniques to define the soil drilling problem is the complex nonlinear behavior of the drilled medium itself due to its discontinuity and heterogeneous formation. This article presents two models that can be used to predict the total resistive forces acting on the auger during soil drilling operations. The first proposed model treats the problem analytically in a way that depends on empirical data collected from previous experience. The second model treats the problem numerically, with less dependence on empirical data. The analytical model is developed using the MATLAB® interface, while the numerical model is developed with the discrete element method (DEM) using EDEM software. A simplified auger drilling machine was built in the soil–tool interaction laboratory at the Military Technical College to obtain experimental results that can be used to verify the presented models. A data acquisition measuring system was established to obtain experimental results using LabVIEW® software, which enables displaying and recording the measured data collected mainly from transducers mounted in the test rig. Both analytical and numerical model results are compared to experimental values to support the presented parametric study, which can be used to define the working parameters during drilling operations in different types of soils. Uncertainty calculations have been applied to ensure the reliability of the models. The combined calculated uncertainty leads to a level of confidence of about 95%.
{"title":"Analytical and Numerical Modeling of Soil Cutting and Transportation During Auger Drilling Operation","authors":"M. Abdeldayem, M. Mabrouk, M. Abo-Elnor","doi":"10.1115/imece2019-10311","DOIUrl":"https://doi.org/10.1115/imece2019-10311","url":null,"abstract":"\u0000 Soil drilling operation has become one of the most important interests to researchers due to its many applications in engineering systems. Auger drilling is one of the ideal methods in many applications such as pile foundation engineering, sampling test for geological, and space sciences. However, the dominant factor in determination of drilling parameters drilling operations experience. Therefore, soil-drilling process using auger drilling is studied to obtain the controlling parameters and to optimize these parameters to improve drilling performance which enables proper selection of machine for a required job. One of the main challenges that faces researchers during using of modeling techniques to define the soil drilling problem is the complex nonlinear behavior of the drilled medium itself due to its discontinuity and heterogeneous formation. This article presents two models that can be used to predict the total resistive forces which affect the auger during soil drilling operations. The first proposed model discusses the problem analytically in a way that depends on empirical data that can be collected from previous experience. The second model discusses the problem numerically with less depending on empirical experienced data. The analytical model is developed using matlab® interface, while the numerical model is developed using discrete element method (DEM) using edem software. A simplified auger drilling machine is built in the soil–tool interaction laboratory, Military Technical College to obtain experimental results that can be used to verify the presented models. Data acquisition measuring system is established to obtain experimental results using a labview® software which enables displaying and recording the measured data collected mainly from transducers planted in the test rig. Both analytical and numerical model results are compared to experimental values to aid in developing the presented parametric study that can be used to define the working parameters during drilling operations in different types of soils. Uncertainty calculations have been applied to ensure the reliability of the models. The combined calculated uncertainty leads to the level of confidence of about 95%.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48003820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses the modeling and simulation of deteriorated turbulent heat transfer (DTHT) for wall-heated fluid flows, which can be observed in gas-cooled nuclear power reactors during a pressurized conduction cooldown (PCC) event due to loss of forced circulation flow. The DTHT regime is defined as the deterioration of normal turbulent heat transport due to increased acceleration and buoyancy forces. Computational fluid dynamics (CFD) tools such as Nek5000 and STAR-CCM+ can help to analyze the DTHT phenomena in reactors for efficient thermal-fluid designs. Three-dimensional (3D) nonisothermal CFD modeling and simulations were performed in a wall-heated circular tube. The simulation results were compared between the two CFD tools, Nek5000 and STAR-CCM+, and validated against experimental data. The predicted bulk temperatures were identical in both CFD tools, as expected. Good agreement between simulated results and measured data was obtained for wall temperatures along the tube axis using Nek5000. In STAR-CCM+, the under-predicted wall temperatures were mainly due to higher turbulence in the wall region. In STAR-CCM+, the predicted DTHT was over 48% at the outlet when compared to inlet heat transfer values.
{"title":"Modeling and Simulations of Deteriorated Turbulent Heat Transfer in Wall-Heated Cylindrical Tube","authors":"P. Vegendla, R. Hu","doi":"10.1115/1.4045522","DOIUrl":"https://doi.org/10.1115/1.4045522","url":null,"abstract":"\u0000 This paper discusses the modeling and simulations of deteriorated turbulent heat transfer (DTHT) for a wall-heated fluid flows, which can be observed in gas-cooled nuclear power reactors during pressurized conduction cooldown (PCC) event due to loss of force circulation flow. The DTHT regime is defined as the deterioration of normal turbulent heat transport due to increase of acceleration and buoyancy forces. The computational fluid dynamics (CFD) tools such as Nek5000 and STAR-CCM+ can help to analyze the DTHT phenomena in reactors for efficient thermal-fluid designs. Three-dimensional (3D) CFD nonisothermal modeling and simulations were performed in a wall-heated circular tube. The simulation results were validated with two different CFD tools, Nek5000 and STAR-CCM+, and validated with an experimental data. The predicted bulk temperatures were identical in both CFD tools, as expected. Good agreement between simulated results and measured data were obtained for wall temperatures along the tube axis using Nek5000. In STAR-CCM+, the under-predicted wall temperatures were mainly due to higher turbulence in the wall region. In STAR-CCM+, the predicted DTHT was over 48% at outlet when compared to inlet heat transfer values.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42035878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The present research inspects the performance of a rotor–bearing–coupling system in the presence of active magnetic bearings (AMBs). A methodology is suggested to quantify various fault characteristics along with the AMB characteristic parameters of a coupled turbine–generator system. The simplest possible turbogenerator system is modeled to analyze coupling misalignment. The conventional methodology for estimating dynamic system parameters from forced response information is not sufficient for an AMB-integrated rotor system, because current information is required along with displacement information. The controlling current of the AMB is tuned and controlled with a proportional–integral–derivative (PID) controller. Lagrange's equation is applied to derive the equations of motion (EOM), and the Runge–Kutta technique is used to solve the EOM and obtain the time-domain responses. The fast Fourier transform (FFT) is applied to the obtained responses to acquire responses in the frequency domain, and the full spectrum technique is applied in the proposed methodology. A methodology based on the least squares regression approach is proposed to evaluate the multifault parameters of the AMB-integrated rotor system. The robustness of the algorithm is checked against various levels of noise and modeling error and found to be efficient. An appreciable reduction in misalignment forces and moments is observed by using AMBs.
{"title":"Characteristic Parameters Estimation of Active Magnetic Bearings in a Coupled Rotor System","authors":"Sampath Kumar Kuppa, M. Lal","doi":"10.1115/1.4045295","DOIUrl":"https://doi.org/10.1115/1.4045295","url":null,"abstract":"\u0000 Present research inspects the performance of rotor–bearing–coupling system in the presence of active magnetic bearings (AMBs). A methodology is suggested to quantify various fault characteristics along with AMB characteristic parameters of a coupled turbine generator system. A simplest possible turbogenerator system is modeled to analyze coupling misalignment. Conventional methodology to estimate dynamic system parameters based on forced response information is not enough for AMB-integrated rotor system because it requires current information along with displacement information. The controlling current of AMB is tuned and controlled with a controller of proportional–integral–derivative (PID) type. A numerical technique (Lagrange's equation) is applied to get equations of motion (EOM). Runge–Kutta technique is used to obtain EOM to acquire the time domain responses. The fast Fourier transformation (FFT) is applied on obtained responses to acquire responses in the frequency domain, and full spectrum technique is applied to propose the methodology. A methodology that depends on the least squares regression approach is proposed to evaluate the multifault parameters of AMB-integrated rotor system. The robustness of the algorithm is checked against various levels of noise and modeling error and observed efficient. An appreciable reduction in misalignment forces and moments is observed by using AMBs.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44928609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Submodeling enables finite element engineers to focus analysis on the subregion containing the stress concentrator of interest with consequent computational savings. Such benefits are only really gained if the boundary conditions on the edges of the subregion that are drawn from an initial global finite element analysis (FEA) are verified to have been captured sufficiently accurately. Here, we offer a two-pronged approach aimed at realizing such solution verification. The first element of this approach is an improved means of assessing the error induced by submodel boundary conditions. The second element is a systematic sizing of the submodel region so that boundary-condition errors become acceptable. The resulting submodel procedure is demonstrated on a series of two-dimensional (2D) configurations with significant stress concentrations: four test problems and one application. For the test problems, the assessment means are uniformly successful in determining when submodel boundary conditions are accurate and when they are not. When, at first, they are not, the sizing approach is also consistently successful in enlarging submodel regions until submodel boundary conditions do become sufficiently accurate.
{"title":"Verification of Submodeling for the Finite Element Analysis of Stress Concentrations","authors":"A. Kardak, G. Sinclair","doi":"10.1115/1.4045232","DOIUrl":"https://doi.org/10.1115/1.4045232","url":null,"abstract":"\u0000 Submodeling enables finite element engineers to focus analysis on the subregion containing the stress concentrator of interest with consequent computational savings. Such benefits are only really gained if the boundary conditions on the edges of the subregion that are drawn from an initial global finite element analysis (FEA) are verified to have been captured sufficiently accurately. Here, we offer a two-pronged approach aimed at realizing such solution verification. The first element of this approach is an improved means of assessing the error induced by submodel boundary conditions. The second element is a systematic sizing of the submodel region so that boundary-condition errors become acceptable. The resulting submodel procedure is demonstrated on a series of two-dimensional (2D) configurations with significant stress concentrations: four test problems and one application. For the test problems, the assessment means are uniformly successful in determining when submodel boundary conditions are accurate and when they are not. When, at first, they are not, the sizing approach is also consistently successful in enlarging submodel regions until submodel boundary conditions do become sufficiently accurate.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41626747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a statistical methodology for a quantified validation of the OCARINa simulation tool, which models unprotected transient overpower (UTOP) accidents. This validation on CABRI experiments is based on a best-estimate plus uncertainties (BEPU) approach. To achieve this, a general methodology based on recent statistical techniques is developed. In particular, a method for the quantification of multivariate data is applied for the visualization of simulator outputs and their comparison with experiments. Still for validation purposes, a probabilistic indicator is proposed to quantify the degree of agreement between the OCARINa simulator and the experiments, taking into account both the experimental uncertainties and those on the OCARINa inputs. Going beyond a qualitative validation, this work is of great interest for verification, validation, and uncertainty quantification (VVUQ) and evaluation model development and assessment process (EMDAP) approaches, which lead to the qualification of scientific calculation tools. Finally, for an in-depth analysis of the influence of uncertain parameters, a sensitivity analysis based on recent dependence measures is also performed. The usefulness of the statistical methodology is demonstrated on the CABRI-E7 and CABRI-E12 tests. For each case, the BEPU propagation study is carried out by performing 1000 Monte Carlo simulations with the OCARINa tool, with nine uncertain input parameters. The validation indicators provide a quantitative conclusion on the validation of the OCARINa tool for both transients and highlight future efforts to strengthen the validation demonstration of safety tools. The sensitivity analysis improves the understanding of the OCARINa tool and the underlying UTOP scenario.
{"title":"Statistical Methodology for a Quantified Validation of Sodium Fast Reactor Simulation Tools","authors":"N. Marie, A. Marrel, K. Herbreteau","doi":"10.1115/1.4045233","DOIUrl":"https://doi.org/10.1115/1.4045233","url":null,"abstract":"\u0000 This paper presents a statistical methodology for a quantified validation of the OCARINa simulation tool, which models the unprotected transient overpower (UTOP) accidents. This validation on CABRI experiments is based on a best-estimate plus uncertainties (BEPU) approach. To achieve this, a general methodology based on recent statistical techniques is developed. In particular, a method for the quantification of multivariate data is applied for the visualization of simulator outputs and their comparison with experiments. Still for validation purposes, a probabilistic indicator is proposed to quantify the degree of agreement between the simulator OCARINa and the experiments, taking into account both experimental uncertainties and those on OCARINa inputs. Going beyond a qualitative validation, this work is of great interest for the verification, validation and uncertainty quantification or evaluation model development and assessment process approaches, which leads to the qualification of scientific calculation tools. Finally, for an in-depth analysis of the influence of uncertain parameters, a sensitivity analysis based on recent dependence measures is also performed. The usefulness of the statistical methodology is demonstrated on CABRI-E7 and CABRI-E12 tests. For each case, the BEPU propagation study is carried out performing 1000 Monte Carlo simulations with the OCARINa tool, with nine uncertain input parameters. The validation indicators provide a quantitative conclusion on the validation of the OCARINa tool on both transients and highlight future efforts to strengthen the demonstration of validation of safety tools. The sensitivity analysis improves the understanding of the OCARINa tool and the underlying UTOP scenario.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41957003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution of output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences have been utilized for this task, often selected based on user experience and without reference to the wider variety available. As a result, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. In this paper, two families of measures for quantifying differences between distributions, which encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergences and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy and offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.
{"title":"A Unifying Framework for Probabilistic Validation Metrics","authors":"P. Gardner, C. Lord, R. Barthorpe","doi":"10.1115/1.4045296","DOIUrl":"https://doi.org/10.1115/1.4045296","url":null,"abstract":"\u0000 Probabilistic modeling methods are increasingly being employed in engineering applications. These approaches make inferences about the distribution for output quantities of interest. A challenge in applying probabilistic computer models (simulators) is validating output distributions against samples from observational data. An ideal validation metric is one that intuitively provides information on key differences between the simulator output and observational distributions, such as statistical distances/divergences. Within the literature, only a small set of statistical distances/divergences have been utilized for this task; often selected based on user experience and without reference to the wider variety available. As a result, this paper offers a unifying framework of statistical distances/divergences, categorizing those implemented within the literature, providing a greater understanding of their benefits, and offering new potential measures as validation metrics. In this paper, two families of measures for quantifying differences between distributions, that encompass the existing statistical distances/divergences within the literature, are analyzed: f-divergence and integral probability metrics (IPMs). Specific measures from these families are highlighted, providing an assessment of current and new validation metrics, with a discussion of their merits in determining simulator adequacy, offering validation metrics with greater sensitivity in quantifying differences across the range of probability mass.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42413232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values are generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model is then developed using thin airfoil theory. This simplified model is then assessed using the synthetic experimental data. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available.
{"title":"Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty","authors":"N. W. Whiting","doi":"10.1115/1.4056285","DOIUrl":"https://doi.org/10.1115/1.4056285","url":null,"abstract":"\u0000 Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used to either quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values is generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model is then developed using thin airfoil theory. This simplified model is then assessed using the synthetic experimental data. Each of these validation/calibration approaches are assessed for the ability to tightly encapsulate the true value in nature at locations both where experimental results are provided and prediction locations where no experimental data are available.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43124512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Here, we develop a statistical basis for limited adverse testing. This type of testing simultaneously evaluates system performance against minimum requirements and minimizes costs, particularly for large-scale engineering projects. Because testing is often expensive and narrow in scope, the data obtained are relatively limited—precisely the opposite of the recent big data movement but no less compelling. Although a remarkably common approach for industrial and large-scale government projects, a statistical basis for adverse testing remains poorly explored. Here, we prove mathematically, under specific conditions, that setting each independent variable to an adverse condition leads to a similar level of adversity in the dependent variable. For example, setting all normally distributed independent variables to at least their 95th percentile values leads to a result at the 95th percentile. The analysis considers sample size estimates to clarify the value of replicates in this type of testing, determines how many of the independent variables must be set to adverse condition values, and highlights the essential assumptions, so that engineers, statisticians, and subject matter experts know when this statistical framework may be applied successfully and design testing to satisfy statistical requisites.
{"title":"Statistics for Testing Under Adverse Conditions","authors":"L. Pease, K. Anderson, J. Bamberger, M. Minette","doi":"10.1115/1.4045117","DOIUrl":"https://doi.org/10.1115/1.4045117","url":null,"abstract":"\u0000 Here, we develop a statistical basis for limited adverse testing. This type of testing simultaneously evaluates system performance against minimum requirements and minimizes costs, particularly for large-scale engineering projects. Because testing is often expensive and narrow in scope, the data obtained are relatively limited—precisely the opposite of the recent big data movement but no less compelling. Although a remarkably common approach for industrial and large-scale government projects, a statistical basis for adverse testing remains poorly explored. Here, we prove mathematically, under specific conditions, that setting each independent variable to an adverse condition leads to a similar level of adversity in the dependent variable. For example, setting all normally distributed independent variables to at least their 95th percentile values leads to a result at the 95th percentile. The analysis considers sample size estimates to clarify the value of replicates in this type of testing, determines how many of the independent variables must be set to adverse condition values, and highlights the essential assumptions, so that engineers, statisticians, and subject matter experts know when this statistical framework may be applied successfully and design testing to satisfy statistical requisites.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49505621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper summarizes an emerging process to establish credibility for surrogate models that cover multidimensional, continuous solution spaces. Various features lead to disagreement between the surrogate model's results and results from more precise computational benchmark solutions. In our verification process, this disagreement is quantified using descriptive statistics to support uncertainty quantification, sensitivity analysis, and surrogate model assessments. Our focus is stress-intensity factor (SIF) solutions. SIFs can be evaluated from simulations (e.g., finite element analyses), but these simulations require significant preprocessing, computational resources, and expertise to produce a credible result. It is not tractable (or necessary) to simulate a SIF for every crack front. Instead, most engineering analyses of fatigue crack growth (FCG) employ surrogate SIF solutions based on some combination of mechanics, interpolation, and SIF solutions extracted from earlier analyses. SIF values from surrogate solutions vary with local stress profiles and nondimensional degrees-of-freedom that define the geometry. The verification process evaluates the selected stress profiles and the sampled geometries using the surrogate model and a benchmark code (Abaqus). The benchmark code employs a Python scripting interface to automate model development, execution, and extraction of key results. The ratio of the test code SIF to the benchmark code SIF measures the credibility of the solution. Descriptive statistics of these ratios provide convenient measures of relative surrogate quality. Thousands of analyses support visualization of the surrogate model's credibility, e.g., by rank-ordering of the credibility measure.
{"title":"Verification of Stress-Intensity Factor Solutions by Uncertainty Quantification","authors":"J. Sobotka, R. Mcclung","doi":"10.1115/1.4044868","DOIUrl":"https://doi.org/10.1115/1.4044868","url":null,"abstract":"\u0000 This paper summarizes an emerging process to establish credibility for surrogate models that cover multidimensional, continuous solution spaces. Various features lead to disagreement between the surrogate model's results and results from more precise computational benchmark solutions. In our verification process, this disagreement is quantified using descriptive statistics to support uncertainty quantification, sensitivity analysis, and surrogate model assessments. Our focus is stress-intensity factor (SIF) solutions. SIFs can be evaluated from simulations (e.g., finite element analyses), but these simulations require significant preprocessing, computational resources, and expertise to produce a credible result. It is not tractable (or necessary) to simulate a SIF for every crack front. Instead, most engineering analyses of fatigue crack growth (FCG) employ surrogate SIF solutions based on some combination of mechanics, interpolation, and SIF solutions extracted from earlier analyses. SIF values from surrogate solutions vary with local stress profiles and nondimensional degrees-of-freedom that define the geometry. The verification process evaluates the selected stress profiles and the sampled geometries using the surrogate model and a benchmark code (abaqus). The benchmark code employs a Python scripting interface to automate model development, execution, and extraction of key results. The ratio of the test code SIF to the benchmark code SIF measures the credibility of the solution. Descriptive statistics of these ratios provide convenient measures of relative surrogate quality. Thousands of analyses support visualization of the surrogate model's credibility, e.g., by rank-ordering of the credibility measure.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47970700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}