Matthew R. Balcer, M. Aristizábal, Juan Sebastian Rincon Tabares, Arturo Montoya, David Restrepo, H. Millwater
A derivative-based uncertainty quantification (UQ) method called HYPAD-UQ, which utilizes sensitivities from a computational model, was developed to approximate the statistical moments and Sobol' indices of the model output. HYPercomplex Automatic Differentiation (HYPAD) was used to obtain accurate high-order partial derivatives from computational models such as finite element analyses. These sensitivities are used to construct a surrogate model of the output via a Taylor series expansion and subsequently to estimate statistical moments (mean, variance, skewness, and kurtosis) and Sobol' indices using algebraic expansions. The uncertainty in a transient linear heat transfer analysis was quantified with HYPAD-UQ using first- through seventh-order partial derivatives with respect to seven random variables encompassing material properties, geometry, and boundary conditions. Random sampling of the analytical solution and the regression-based stochastic perturbation finite element method were also conducted to compare accuracy and computational cost. The results indicate that HYPAD-UQ achieves superior accuracy for the same computational effort compared to the regression-based stochastic perturbation finite element method. Sensitivities calculated with HYPAD can thus make higher-order Taylor series expansions an effective and practical UQ method.
"HYPAD-UQ: A Derivative-based Uncertainty Quantification Method Using a Hypercomplex Finite Element Method." Journal of Verification, Validation and Uncertainty Quantification, published 2023-05-03. DOI: 10.1115/1.4062459.
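As a minimal illustration of the idea behind this surrogate (not HYPAD itself, which computes derivatives via hypercomplex arithmetic), the sketch below builds a seventh-order Taylor surrogate of a toy model from its known derivatives and compares Monte Carlo moment estimates from the surrogate against the exact model. The toy function exp(0.3x) and all numbers are assumptions for illustration.

```python
import math
import numpy as np

# Toy model standing in for a finite element response; its derivatives at
# the expansion point are known analytically here, playing the role of
# HYPAD-computed sensitivities. All values are illustrative assumptions.
x0, sigma = 1.0, 0.1                 # mean and std dev of the random input
f = lambda x: np.exp(0.3 * x)
derivs = [0.3**k * math.exp(0.3 * x0) for k in range(8)]  # orders 0..7

def taylor_surrogate(x):
    """Seventh-order Taylor expansion of f around x0."""
    dx = x - x0
    return sum(d * dx**k / math.factorial(k) for k, d in enumerate(derivs))

rng = np.random.default_rng(0)
samples = rng.normal(x0, sigma, 200_000)
approx, exact = taylor_surrogate(samples), f(samples)
print(f"mean: surrogate {approx.mean():.6f} vs exact {exact.mean():.6f}")
print(f"var:  surrogate {approx.var():.2e} vs exact {exact.var():.2e}")
```

Note that the paper derives the moments algebraically from the expansion coefficients; sampling the surrogate, as done here, is a simpler stand-in for that step.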
L. Liew, D. Read, May L. Martin, P. Bradley, J. Geaney
It is well documented that the microstructure and properties of electrodeposited films, such as LIGA Ni and its alloys, are highly sensitive to processing conditions; hence, the literature shows large discrepancies in mechanical properties, even for similar alloys. Given this expected material variability, as well as the experimental challenges of small-scale mechanical testing, measurement uncertainties are needed for property values to be applied appropriately, yet they are uncommon in micro- and meso-scale tensile testing studies. In a separate paper we reported the elastic-plastic properties of 200 μm-thick freestanding films of LIGA-fabricated nanocrystalline Ni-10%Fe and microcrystalline Ni-10%Co, with specimen gauge widths ranging from 75 μm to 700 μm, tensile tested at strain rates of 0.001 s⁻¹ and 1 s⁻¹. The loads were applied by commercial miniature and benchtop load frames, and strain was measured by digital image correlation. In this paper we examine the measurement uncertainties in the ultimate tensile strength, apparent Young's modulus, 0.2% offset yield strength, and strain-hardening parameters. For several of these properties, the standard deviation cannot be interpreted as statistical scatter because the measurement uncertainty was larger. Microplasticity affects the modulus measurement; we therefore recommend measuring the modulus after cyclic loading. These measurement uncertainty issues might be relevant to similar work on small-scale tensile testing and might help the reader interpret the discrepancies in literature values of mechanical properties for LIGA and electrodeposited films.
"Elastic-plastic Properties of Meso-scale Electrodeposited LIGA Nickel Alloy Films: Analysis of Measurement Uncertainties." Journal of Verification, Validation and Uncertainty Quantification, published 2023-03-09. DOI: 10.1115/1.4062106.
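The distinction the authors draw between scatter and measurement uncertainty rests on a combined uncertainty budget. A generic GUM-style root-sum-of-squares sketch (with made-up component values, not the paper's actual budget) looks like:

```python
import math

# Root-sum-of-squares combination of independent relative uncertainty
# components for a tensile-strength measurement (illustrative numbers,
# not the paper's uncertainty budget).
u_load  = 0.5   # % from load-cell calibration
u_area  = 1.2   # % from cross-section (width x thickness) measurement
u_align = 0.8   # % from specimen misalignment and gripping
u_c = math.sqrt(u_load**2 + u_area**2 + u_align**2)  # combined standard unc.
U95 = 2.0 * u_c                                      # expanded, coverage k = 2
print(f"u_c = {u_c:.2f}%  U(95%) = {U95:.2f}%")
```

When the expanded uncertainty exceeds the observed standard deviation across specimens, the spread cannot be attributed to material variability, which is the situation the abstract describes.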
The position and orientation of a moving object can be tracked by a camera by attaching a 2D marker with a specific pattern to the object. Recently, we developed a projection-based surgical navigation system that can accurately guide, in real time, the execution of a pre-operative resection plan in orthopedic surgery, such as joint replacement or wide resection of osteosarcoma (bone tumor). To this end, it is important to study the accuracy of registration and tracking in the presence of various sources of error, such as the printing resolution and quality of the 2D marker. In this study, we investigate and analyze the error and uncertainty of real-time tracking of a 2D marker with a camera. Experiments and computational simulations were conducted to quantify the errors in position and orientation due to the printing error of 2D markers produced with a 600-dpi laser printer. In addition, a theory of uncertainty propagation in the form of a congruence transformation was derived for such systems and is illustrated with experimental results.
"Experimental and Computational Study of Error and Uncertainty in Real-Time Camera-Based Tracking of a 2D Marker for Orthopedic Surgical Navigation," by Guangyu He, A. Fakhari, F. Khan, I. Kao. Journal of Verification, Validation and Uncertainty Quantification, published 2023-03-01. DOI: 10.1115/1.4062137.
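The congruence-transformation form of uncertainty propagation mentioned in the abstract can be sketched as follows; the rotation angle and covariance values are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

# First-order propagation of a 2D position covariance through a rigid
# transform y = R x + t: Sy = R Sx R^T, a congruence transformation.
theta = np.deg2rad(30.0)                  # assumed camera-to-marker rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Sx = np.diag([0.04, 0.01])                # assumed marker-point covariance, mm^2
Sy = R @ Sx @ R.T                         # propagated covariance
print(Sy)
print(np.trace(Sx), np.trace(Sy))         # total variance is rotation-invariant
```

The invariance of the trace under the rotation is a quick sanity check: a rigid transform redistributes positional uncertainty between axes but cannot create or destroy it.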
D. Campos, Andrés Elías Ajras, Lucas Guillermo Goytiño, M. Piovan
This paper evaluates the uncertainty involved in the study of aeolian vibrations of Optical Ground Wire (OPGW) cable systems installed on overhead power transmission lines. The Energy Balance Method (EBM) is widely used to estimate the severity of steady-state aeolian vibrations. Although the EBM requires experimental characterization of system parameters (as indicated by international standards), such a procedure is subject to uncertainties, which complicates the proper homologation of the cable systems. In this article, a parametric probabilistic approach is employed to quantify the level of uncertainty associated with the EBM in the study of aeolian vibrations of OPGW. The relevant parameters of the EBM (damper properties, cable self-damping, and the power imparted by the wind) are treated as random variables whose distributions are deduced by means of the Maximum Entropy Principle. A Monte Carlo simulation is then performed, and the input and output uncertainties are contrasted. Finally, a global sensitivity analysis is conducted to identify the Sobol' indices. Results indicate that the parameters related to self-damping and the damper are the most influential on output uncertainty and variability. In this sense, the present framework constitutes a powerful tool for the robust design of damper systems for OPGW cables.
"Uncertainties Propagation and Global Sensitivity Analysis of the Aeolian Vibration of OPGW Cables." Journal of Verification, Validation and Uncertainty Quantification, published 2023-02-27. DOI: 10.1115/1.4056976.
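The Monte Carlo plus Sobol'-index workflow can be sketched generically with a pick-freeze estimator; the toy three-input response below stands in for the energy-balance model (the functional form, sample sizes, and distributions are all assumptions).

```python
import numpy as np

# Saltelli-style pick-freeze estimator of first-order Sobol' indices for a
# toy response (not the paper's EBM model). Analytically: S1 ~ 0.197,
# S2 ~ 0.787, S3 = 0 (x3 has no main effect, only an interaction with x1).
rng = np.random.default_rng(1)
model = lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(yA)

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # resample only input i
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)
print([f"{s:.3f}" for s in S])
```

The same machinery applies unchanged when `model` is replaced by a call into an EBM solver; only the cost per evaluation changes.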
Matthew C. Ledwith, R. Hill, L. Champagne, Edward D. White
Determining whether a computational model is valid for its intended use requires rigorous assessment of the agreement between the observed responses of the computational model and the corresponding real-world system or process of interest. In this article, a new method for assessing the validity of computational models is proposed based upon the probability of agreement (PoA) approach. The proposed method quantifies the probability that observed differences between simulation and system responses are small enough to be considered acceptable, so that the two systems can be used interchangeably. Rather than relying on Boolean-based statistical tests and procedures, the distance-based probability of agreement validation metric (PoAVM) assesses the similarity of the responses used to predict system behavior by comparing the distributions of output behavior. The corresponding PoA plot serves as a useful tool for summarizing agreement transparently and directly while accounting for potentially complicated bias and variability structures. A general procedure for employing the proposed validation method is provided, which leverages bootstrapping to address the limited ability to collect real-world data in most situations where computational models are employed. The new method is demonstrated and contextualized through an illustrative application based on empirical data from a transient-phase assembly-line manufacturing process, together with a discussion of its desirability within an established validation framework.
"Probabilities of Agreement for Computational Model Validation." Journal of Verification, Validation and Uncertainty Quantification, published 2023-02-08. DOI: 10.1115/1.4056862.
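A minimal sketch of a probability-of-agreement style computation, assuming a simple paired-differences setup with an acceptable margin delta; the synthetic data, margin, and bootstrap details are illustrative and not the PoAVM's exact construction.

```python
import numpy as np

# Estimate the probability that model-minus-system differences fall within
# an acceptably small margin, with a bootstrap interval for that estimate
# (bootstrapping compensates for the small real-world sample).
rng = np.random.default_rng(2)
system = rng.normal(10.0, 1.0, 30)     # scarce real-world observations
model = rng.normal(10.2, 1.0, 30)      # simulation output with slight bias
delta = 2.0                            # assumed practically-negligible margin

diffs = model - system
poa = np.mean(np.abs(diffs) <= delta)
boot = [np.mean(np.abs(rng.choice(diffs, diffs.size)) <= delta)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"PoA = {poa:.2f}, 95% bootstrap interval = ({lo:.2f}, {hi:.2f})")
```

A PoA near 1 with a tight interval supports interchangeability; a wide interval signals that more real-world data is needed before declaring the model valid.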
S. Lemaire, G. Vaz, Menno Deij-van Rijswijk, S. Turnock
The overset method and its associated interpolation schemes are usually verified thoroughly only on synthetic or academic test cases, whose conclusions may not translate directly to real engineering problems. In the present work, an overset grid method is used to simulate a rudder-propeller flow, for which a comprehensive verification and validation study is performed. Three overset interpolation schemes (from first to third order) are tested to quantify and qualify numerical errors in integral quantities, mass imbalance, flow features, and rudder pressure distributions. The performance overhead is also measured to support accuracy-performance trade-off decisions. Rigorous solution verification is performed to estimate time and space discretisation, iterative, and statistical uncertainties. The rudder flow is also validated against experimental data. The results show that, while the choice of interpolation scheme has minimal impact on time-averaged integral quantities (such as forces), it does influence the smoothness of the time signals, with the first-order scheme producing high-intensity, high-frequency temporal oscillations. Lower-order interpolation methods also produce more interpolation artefacts in fringe cells, which are then convected downstream. Mass imbalance is likewise affected by the interpolation scheme, with the third-order scheme yielding an order of magnitude lower flux errors. The limitations of the first-order scheme do not, however, translate into significantly lower computational overhead: the second-order scheme was even cheaper than the first-order one in the tested implementation. Lastly, validation shows promising results, with rudder forces within 10% of the experiments.
"Influence of Interpolation Scheme On the Accuracy of Overset Method for Computing Rudder-propeller Interaction." Journal of Verification, Validation and Uncertainty Quantification, published 2023-01-12. DOI: 10.1115/1.4056681.
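Solution verification of the kind described above typically rests on Richardson extrapolation and a grid convergence index (GCI); a sketch with made-up solution values, not the paper's results:

```python
import math

# Observed-order / GCI estimate from solutions on three systematically
# refined grids (illustrative values chosen to give an observed order of 2).
r = 2.0                              # refinement ratio between grids
f1, f2, f3 = 1.002, 1.010, 1.042     # fine, medium, coarse solutions
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)     # observed order
f_extrap = f1 + (f1 - f2) / (r**p - 1.0)              # Richardson extrapolation
gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)  # GCI, safety factor 1.25
print(f"p = {p:.2f}, extrapolated = {f_extrap:.5f}, GCI = {gci_fine:.3%}")
```

The GCI converts the change between grid levels into a banded numerical-uncertainty estimate on the fine-grid solution, which is what "estimating space discretisation uncertainty" amounts to in practice.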
Joshua J. E. Blauer, R. Gray, D. Swenson, P. Pathmanathan
Survival rates for sudden cardiac death treated with external defibrillation are estimated to be up to five times greater than with cardiopulmonary resuscitation alone. Computational modeling can be used to investigate the relationship between patch location and defibrillation efficacy; however, the credibility of model predictions is unclear. The aims of this paper are to (1) assess the credibility of a commonly used computational approach for predicting the impact of patch relocation on defibrillation efficacy, and (2) provide a concrete biomedical example of a model validation study with a supporting applicability analysis, systematically assessing the relevance of the validation study to a proposed model context of use (COU). Using an electrostatic heart-and-torso computational model, simulations were compared against experimental recordings from a swine subject with external patches and multiple body-surface and intracardiac recording electrodes. The applicability of this swine validation study to the human COU was assessed using an applicability analysis framework. Knowledge gaps identified by the applicability analysis were addressed using sensitivity analysis. In the swine validation study, quantitative agreement (R² = 0.85) was observed between predicted and observed potentials at both surface and intracardiac electrodes for a left-right patch placement. The applicability analysis identified uncertainty in tissue conductivities as one of the main potential sources of unreliability; however, a sensitivity analysis demonstrated that uncertainty in conductivity parameters had relatively little impact on model predictions (less than 10% relative change for two-fold conductivity changes). We believe these results support pursuing human simulations to evaluate the impact of patch relocation.
"Validation and Applicability Analysis of a Computational Model of External Defibrillation." Journal of Verification, Validation and Uncertainty Quantification, published 2023-01-05. DOI: 10.1115/1.4056596.
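The quantitative-agreement figure reported above is a coefficient of determination between predicted and measured potentials; computing it from paired values is straightforward (synthetic numbers below, not the study's data):

```python
import numpy as np

# R^2 between measured and model-predicted electrode potentials
# (synthetic values standing in for the swine-study recordings).
rng = np.random.default_rng(3)
observed = rng.normal(0.0, 1.0, 50)                 # synthetic measurements
predicted = observed + rng.normal(0.0, 0.35, 50)    # model with some error
ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```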
Juan Diego Ceballos Payares, Maika Karen Gambús Ordaz, Samuel Fernando Muñoz Navarro
Surfactant flooding has emerged as a potential enhanced oil recovery method for heavy oil exploitation, offering a solution to the energy losses of thermal processes. This research uses an inverse problem, solved via reservoir simulation, to determine the most suitable application scenario for this process at the laboratory scale. First, a numerical laboratory model was built representing an alkali-surfactant flooding test of crude oil with a viscosity of 1800 mPa·s; second, an optimization was carried out in which operational parameters such as alkali and surfactant concentration, size of the main chemical slug, and injection rate were evaluated. Results showed that the most suitable scenario consisted of injecting 0.5 PV of the combined mixture 0.2% Na2CO3 + 0.2% NaOH + 100 ppm surfactant at an injection rate of 0.1 cm³/min, yielding a final chemical-to-oil ratio of about 0.0010 cm³ of chemical per cm³ of oil.
"Technical Evaluation of a Surfactant Injection Process for Heavy Oil Recovery by Laboratory-Scale Numerical Simulation." Journal of Verification, Validation and Uncertainty Quantification, published 2022-12-22. DOI: 10.1115/1.4056550.
Andrew White, S. Mahadevan, Jason Schmucker, Alexander Karl
Model validation for real-world systems involves multiple sources of uncertainty, multivariate model outputs, and often a limited number of measurement samples. These factors preclude the use of many existing validation metrics, or at least limit a practitioner's ability to derive insights from the computed metrics. This paper extends the area metric (univariate only) and the model reliability metric (univariate and multivariate) to account for these issues. The model reliability metric was found to be more readily extendable to multivariate outputs, whereas the area metric presented some difficulties. Metrics of different types (area and model reliability), dimensionality (univariate and multivariate), and objective (bias effects, shape effects, or both) are used together in a 'multi-metric' approach that provides a more informative validation assessment. The univariate metrics support output-by-output model diagnosis, and the multivariate metrics contribute an overall model assessment that includes correlation among the outputs. The extensions proposed here address limited measurement sample size, improve the interpretability of the metric results by separating the effects of distribution bias and shape, and enhance the model reliability metric's tolerance parameter. The proposed validation approach is demonstrated with a bivariate numerical example and then applied to a gas turbine engine heat transfer model.
"Multi-Metric Validation Under Uncertainty for Multivariate Model Outputs and Limited Measurements." Journal of Verification, Validation and Uncertainty Quantification, published 2022-12-22. DOI: 10.1115/1.4056548.
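A sketch of the univariate area metric the paper starts from: the area between the model's and the data's empirical CDFs. The synthetic normal samples and the imposed 0.3 mean shift are assumptions for illustration; the shift should dominate the computed area.

```python
import numpy as np

# Area validation metric: integrate the absolute gap between two empirical
# CDFs over a grid covering both samples (synthetic data for illustration).
rng = np.random.default_rng(4)
model_out = rng.normal(0.3, 1.0, 400)   # model predictions, biased by 0.3
data_out = rng.normal(0.0, 1.0, 400)    # measurements

grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
ecdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / s.size
area = np.sum(np.abs(ecdf(model_out) - ecdf(data_out))) * dx
print(f"area metric ~ {area:.2f}")   # roughly the mean shift plus sampling noise
```

Because the metric integrates an absolute difference of distributions, it conflates bias and shape effects; separating those two contributions is one of the extensions this paper proposes.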