Automatic Ground-Truth Image Labeling for Deep Neural Network Training and Evaluation Using Industrial Robotics and Motion Capture
Harrison Helmich, Charles J. Doherty, Donald Costello, Michael Kutzer
The United States Navy intends to increase the number of uncrewed aircraft in a carrier air wing. To support this increase, carrier-based uncrewed aircraft will require some level of autonomy, as there will be situations where a human cannot be in or on the loop. However, there is no existing, approved method to certify autonomy within Naval Aviation. In support of generating certification evidence for autonomy, the United States Naval Academy has created a training and evaluation system to provide quantifiable metrics for feedback performance in autonomous systems. The preliminary use case for this work focuses on autonomous aerial refueling. Prior demonstrations of autonomous aerial refueling have leveraged a deep neural network (DNN) to process visual feedback and approximate the relative position of an aerial refueling drogue. The training and evaluation system proposed in this work simulates the relative motion between the aerial refueling drogue and the feedback camera system using industrial robotics. Ground-truth measurements of the pose between camera and drogue are obtained using a commercial motion capture system. Preliminary results demonstrate calibration methods providing ground-truth measurements with millimeter precision. Leveraging this calibration, the proposed system can provide large-scale data sets for DNN training and evaluation against a precise ground truth.
{"title":"Automatic Ground-Truth Image Labeling for Deep Neural Network Training and Evaluation Using Industrial Robotics and Motion Capture","authors":"Harrison Helmich, Charles J. Doherty, Donald Costello, Michael Kutzer","doi":"10.1115/1.4064311","DOIUrl":"https://doi.org/10.1115/1.4064311","url":null,"abstract":"The United States Navy intends to increase the amount of uncrewed aircraft in a carrier air wing. To support this increase, carrier based uncrewed aircraft will be required to have some level of autonomy as there will be situations where a human cannot be in/on the loop. However, there is no existing and approved method to certify autonomy within Naval Aviation. In support of generating certification evidence for autonomy, the United States Naval Academy has created a training and evaluation system to provide quantifiable metrics for feedback performance in autonomous systems. The preliminary use-case for this work focuses on autonomous aerial refueling. Prior demonstrations of autonomous aerial refueling have leveraged a deep neural network (DNN) for processing visual feedback to approximate the relative position of an aerial refueling drogue. The training and evaluation system proposed in this work simulates the relative motion between the aerial refueling drogue and feedback camera system using industrial robotics. Ground truth measurements of the pose between camera and drogue is measured using a commercial motion capture system. Preliminary results demonstrate calibration methods providing ground truth measurements with millimeter precision. Leveraging this calibration, the proposed system is capable of providing large-scale data sets for DNN training and evaluation against a precise ground truth.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"9 4","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139170985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Responsive Feedback in Scaling a Gender Norms-Shifting Adolescent Sexual and Reproductive Health Intervention in the Democratic Republic of Congo
Kathryn M Barker, Jennifer Gayles, Mariam Diakité, Florentine Gracia Diantisa, Rebecka Lundgren
Program description: Growing Up GREAT! (GUG) is a sexual and reproductive health (SRH) program for adolescents aged 10-14 years in Kinshasa, Democratic Republic of the Congo (DRC). The multilevel program takes an ecological approach to foster community examination of gender-inequitable norms and to increase adolescents' SRH knowledge, skills, and gender-equitable attitudes. GUG design was informed by a theory of change, and responsive feedback mechanisms (RFMs) informed both piloting and scale-up.
Responsive feedback mechanisms: The program engaged stakeholders via quarterly learning meetings to review monitoring data, evaluation results, and practice-based knowledge and to subsequently identify challenges and develop solutions. The program commissioned rapid research on specific intervention elements to improve implementation and documented scale-up learnings using the World Health Organization/ExpandNet framework.
Achievements: RFMs used in the pilot period allowed the program to address community concerns by intensifying orientation activities with parents and schools, shifting the calendar of activities to increase male engagement, and increasing facilitator training length to improve facilitation quality. Using RFMs during scale-up prompted further adaptations for program sustainability, including recommendations for task-shifting from NGO facilitators to community health workers.
Conclusion: GUG used RFMs from pilot through scale-up to foster a learning culture among local partners, implementers at headquarters, and global research partners. Using responsive feedback (RF) enabled timely responses to the evolving implementation context, resulting in strategic program adaptations that fostered increased community support for the project. Other successes due, at least in part, to this RF approach include the incorporation of the program into DRC's national adolescent health strategy and the rapid adaptation of educational strategies for program beneficiaries in response to the COVID-19 pandemic.
{"title":"Using Responsive Feedback in Scaling a Gender Norms-Shifting Adolescent Sexual and Reproductive Health Intervention in the Democratic Republic of Congo.","authors":"Kathryn M Barker, Jennifer Gayles, Mariam Diakité, Florentine Gracia Diantisa, Rebecka Lundgren","doi":"10.9745/GHSP-D-22-00208","DOIUrl":"10.9745/GHSP-D-22-00208","url":null,"abstract":"<p><strong>Program description: </strong>Growing Up GREAT! (GUG) is a sexual and reproductive health (SRH) program for adolescents aged 10-14 years in Kinshasa, Democratic Republic of the Congo (DRC). The multilevel program takes an ecological approach to foster community examination of gender inequitable norms and to increase adolescents' SRH knowledge, skills, and gender-equitable attitudes. GUG design, piloting, and scale-up were informed by a theory of change and responsive feedback mechanisms (RFMs) during piloting and scale-up.</p><p><strong>Responsive feedback mechanisms: </strong>The program engaged stakeholders via quarterly learning meetings to review monitoring data, evaluation results, and practice-based knowledge and to subsequently identify challenges and develop solutions. The program commissioned rapid research on specific intervention elements to improve implementation and documented scale-up learnings using the World Health Organization/ExpandNet framework.</p><p><strong>Achievements: </strong>RFMs used in the pilot period allowed the program to address community concerns by intensifying orientation activities with parents and schools, shifting the calendar of activities to increase male engagement, and increasing facilitator training length to improve facilitation quality. Using RFMs during scale-up prompted further adaptations for program sustainability, including recommendations for task-shifting from NGO facilitators to community health workers.</p><p><strong>Conclusion: </strong>GUG used RFMs from pilot through scale-up to foster a learning culture among local partners, implementers at headquarters, and global research partners. Using responsive feedback (RF) enabled timely response to the evolving implementation context, resulting in strategic program adaptations that fostered increased community support of the project. Other successes due, at least in part, to this RF approach include incorporation of the program into DRC's national adolescent health strategy, and rapid response to the COVID-19 pandemic in educational strategies for program beneficiaries.</p>","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"1 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10727463/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75098227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Solution Verification Study for URANS Simulations of Flow Over a 5:1 Rectangular Cylinder Using Grid Convergence Index and Least Squares Procedures
Tarak N. Nandi, DongHun Yeo
A verification study was conducted on a URANS (unsteady Reynolds-averaged Navier–Stokes) simulation of flow around a 5:1 rectangular cylinder at a Reynolds number of 56,700 (based on the cylinder depth) using the k-ω SST (shear stress transport) turbulence model and the γ-Reθ transition model for three types of grids (a fully structured grid and two hybrid grids generated using Delaunay and advancing-front techniques). The Grid Convergence Index (GCI) and Least Squares (LS) procedures were employed to estimate the discretization error and the associated uncertainties. The results indicate that the LS procedure provides the most reliable estimates of discretization error uncertainties for solution variables on the structured grid with the k-ω SST model. Of the six solution variables, the highest relative uncertainty was typically observed in the rms of the lift coefficient, followed by the time-averaged reattachment length and the peak of the rms of the pressure coefficient. The solution variable with the lowest uncertainty was the Strouhal number, followed by the time-averaged drag coefficient. It is also noted that the GCI and LS procedures produce noticeably different uncertainty estimates, primarily due to inconsistencies in their estimated observed orders of accuracy and safety factors. To successfully apply the procedures to practical problems, further research is required to reliably estimate uncertainties in solutions with “noisy” grid convergence behaviors and observed orders of accuracy.
{"title":"A Solution Verification Study For Urans Simulations of Flow Over a 5:1 Rectangular Cylinder Using Grid Convergence Index And Least Squares Procedures","authors":"TarakN Nandi, DongHun Yeo","doi":"10.1115/1.4063818","DOIUrl":"https://doi.org/10.1115/1.4063818","url":null,"abstract":"Abstract A verification study was conducted on an URANS (Unsteady Reynolds-Averaged Navier-Stoke) simulation of flow around a 5:1 rectangular cylinder at a Reynolds number of 56,700 (based on the cylinder depth) using the k-ω SST (Shear Stress Transport) turbulence model and the γ-Reθ transition model for three types of grids (a fully structured grid and two hybrid grids generated using Delaunay and advancing front techniques). The Grid Convergence Index (GCI) and Least Squares (LS) procedures were employed to estimate discretization error and associated uncertainties. The result indicates that the LS procedure provides the most reliable estimates of discretization error uncertainties for solution variables in the structure grid from the k-ω SST model. From the six solution variables, the highest relative uncertainty was typically observed in the rms of lift coefficient, followed by time-averaged reattachment length and peak of rms of pressure coefficient. The solution variable with the lowest uncertainty was Strouhal number, followed by time-averaged drag coefficient. It is also noted that the GCI and LS procedures produce noticeably different uncertainty estimates, primarily due to inconsistences in their estimated observed orders of accuracy and safety factors. To successfully apply the procedures to practical problems, further research is required to reliably estimate uncertainties in solutions with “noisy” grid convergence behaviors and observed orders of accuracy.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"46 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135824231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategies for Computational Fluid Dynamics Validation Experiments
Aldo Gargiulo, Julie E Duetsch-Patel, Aurelien Borgoltz, William Devenport, Christopher J Roy, K. Todd Lowe
The Benchmark Validation Experiment for RANS/LES Investigations (BeVERLI) aims to produce an experimental dataset of three-dimensional non-equilibrium turbulent boundary layers with various levels of separation that, for the first time, meets the most exacting requirements of computational fluid dynamics validation. The application of simulations and modeling in high-consequence engineering environments has become increasingly prominent in the past two decades, considerably raising the standards and demands of model validation and forcing a significant paradigm shift in the design of corresponding validation experiments. In this paper, based on the experiences of project BeVERLI, we present strategies for designing and executing validation experiments, hoping to ease the transition into this new era of fluid dynamics experimentation and help upcoming validation experiments succeed. We discuss the selection of a flow for validation, the synergistic use of simulations and experiments, cross-institutional collaborations, and tools such as model scans, time-dependent measurements, and repeated and redundant measurements. The proposed strategies are shown to successfully mitigate risks and enable the methodical identification, measurement, uncertainty quantification, and characterization of critical flow features, boundary conditions, and corresponding sensitivities, promoting the highest levels of model validation experiment completeness per Oberkampf and Smith. Furthermore, we show how these strategies can be applied to estimate critical and difficult-to-obtain bias error uncertainties of different measurement systems (e.g., the underprediction of high-order statistical moments from particle image velocimetry velocity field data due to spatial filtering effects) and to systematically assess the quality of uncertainty estimates.
{"title":"Strategies for Computational Fluid Dynamics Validation Experiments","authors":"Aldo Gargiulo, Julie E Duetsch-Patel, Aurelien Borgoltz, William Devenport, Christopher J Roy, K. Todd Lowe","doi":"10.1115/1.4063639","DOIUrl":"https://doi.org/10.1115/1.4063639","url":null,"abstract":"Abstract The Benchmark Validation Experiment for RANS/LES Investigations (BeVERLI) aims to produce an experimental dataset of three-dimensional non-equilibrium turbulent boundary layers with various levels of separation that, for the first time, meets the most exacting requirements of computational fluid dynamics validation. The application of simulations and modeling in high-consequence engineering environments has become increasingly prominent in the past two decades, considerably raising the standards and demands of model validation and forcing a significant paradigm shift in the design of corresponding validation experiments. In this paper, based on the experiences of project BeVERLI, we present strategies for designing and executing validation experiments, hoping to ease the transition into this new era of fluid dynamics experimentation and help upcoming validation experiments succeed. We discuss the selection of a flow for validation, the synergistic use of simulations and experiments, cross-institutional collaborations, and tools, such as model scans, time-dependent measurements, and repeated and redundant measurements. The proposed strategies are shown to successfully mitigate risks and enable the methodical identification, measurement, uncertainty quantification, and characterization of critical flow features, boundary conditions, and corresponding sensitivities, promoting the highest levels of model validation experiment completeness per Oberkampf and Smith. Furthermore, the applicability of these strategies to estimating critical and difficult-to-obtain bias error uncertainties of different measurement systems, e.g., the underprediction of high-order statistical moments from particle image velocimetry velocity field data due to spatial filtering effects, and to systematically assessing the quality of uncertainty estimates is shown.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135351524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Verification of Finite Element Determinations of Stress Concentration Factors for Handbooks
A. Kardak, G. Sinclair
Here we offer an approach for being reasonably sure that finite element determinations of stress concentration factors are accurate enough to be included in engineering handbooks. The approach has two contributors. The first consists of analyzing a stress concentration on a sequence of systematically refined meshes until the error estimates of ASME indicate that sufficient accuracy has been achieved. The second consists of constructing a test problem with an exact and somewhat higher value of its stress concentration factor, then analyzing this test problem with the same sequence of meshes and showing that, in fact, sufficient accuracy has been achieved. In combination, these two means of verification are applied to a series of U-notches in a plate under tension. Together they show that it is reasonable to regard finite element values of stress concentration factors on the finest meshes as being accurate to three significant figures. Given this level of accuracy, it is then also reasonable to use the approach to verify other existing stress concentration factors and resolve any discrepancies between them, as well as to verify new stress concentration factors.
{"title":"On the Verification of Finite Element Determinations of Stress Concentration Factors for Handbooks","authors":"A. Kardak, G. Sinclair","doi":"10.1115/1.4063064","DOIUrl":"https://doi.org/10.1115/1.4063064","url":null,"abstract":"\u0000 Here we offer an approach for being reasonably sure that finite element determinations of stress concentration factors are accurate enough to be included in engineering handbooks. The approach has two contributors. The first consists of analyzing a stress concentration on a sequence of systematically refined meshes until the error estimates of ASME have that sufficient accuracy has been achieved. The second consists of constructing a test problem with an exact and somewhat higher value of its stress concentration factor, then analyzing this test problem with the same sequence of meshes and showing that, in fact, sufficient accuracy has been achieved. In combination, these two means of verification are applied to a series of U-notches in a plate under tension. Together they show that it is reasonable to regard finite element values of stress concentration factors on the finest meshes as being accurate to three significant figures. Given this level of accuracy it is then also reasonable to use the approach to verify other existing stress concentration factors and resolve any discrepancies between them, as well as to verify new stress concentration factors.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47703623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Roll Decay for Surface-Ship Model Experiments with Uncertainty Estimates
J. Park
Roll decay of David Taylor Model Basin (DTMB) Model 5720, a 1/23rd-scale free-running model of the research vessel (R/V) Melville, is evaluated with uncertainty estimates. The experimental roll-decay time series was accurately modeled as an exponentially decaying cosine function, which is the solution of a second-order ordinary differential equation for a damping coefficient of less than one (N < 1). The curve fit provides the damping coefficient (N), period (T), and offset. The roll period in calm water depended on the Froude number (Fr) and the initial roll angle (a). Roll decay data are from 76 runs at three nominal Froude numbers, Fr = 0, 0.15, and 0.22. The initial roll angle varied from 3° to 25°. The natural roll period was 2.139 ± 0.041 s (±1.9 %). The decay coefficient data were approximated by a plane in three dimensions with Fr and initial roll amplitude (a) as the independent variables. Curve-fit results are compared to the decay coefficient from the log decrement and the period from the time between zero crossings. Examples demonstrate that average values for a single roll decay event from the log decrement agree with values from the curve-fitting method within the uncertainty estimates. The uncertainty estimate for the decay coefficient is significantly smaller for the curve-fit method than for the log-decrement method. With the log decrement, the relative uncertainty increases with decreasing roll amplitude peak; consequently, focus should be on the damping coefficient at the largest peaks, where the uncertainty is smallest.
{"title":"Analysis of Roll Decay for Surface-ship Model Experiments with Uncertainty Estimates","authors":"J. Park","doi":"10.1115/1.4063010","DOIUrl":"https://doi.org/10.1115/1.4063010","url":null,"abstract":"\u0000 Roll decay of David Taylor Model Basin (DTMB) Model 5720, a 23rd scale free-running model of the research vessel (R/V) Melville, is evaluated with uncertainty estimates. Experimental roll-decay time series was accurately modeled as an exponentially decaying cosine function, which is the solution of a second-order ordinary differential equation for damping coefficient of less than one (N < 1). The curve-fit provides damping coefficient (N), period (T), and offset. Roll period in calm water was dependent on Froude number (Fr) and initial roll angle (a). Roll decay data are from 76 runs for three nominal Froude numbers, Fr = 0, 0.15, and 0.22. The initial roll angle variation was 30 to 250. The natural roll period was 2.139 10.041 s 11.9 %). The decay coefficient data were approximated by a plane in three dimensions with Fr and initial roll amplitudes (a) as the independent variables. Curve-fit results are compared to decay coefficient by log decrement and period from time between zero crossings. Examples demonstrate average values for a single roll decay event from log decrement are the same as values by the curve-fitting method within uncertainty estimates. The uncertainty estimate for the decay coefficient is significantly less by curve-fit method in comparison to log-decrement method. By log decrement, the relative uncertainty increases with decreasing roll amplitude peak; consequently, focus should be on the damping coefficient at the largest peaks, where the uncertainty is the smallest.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41477660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Variability Estimation in a Non-Linear Crack Growth Simulation Model with Controlled Parameters Using Designed Experiments Testing
Seungju Yeo, Paul Funkenbusch, H. Askari
Variability in multiple independent input parameters makes it difficult to estimate the resultant variability in a system's overall response. The Propagation of Errors (PE) and Monte-Carlo (MC) techniques are two major methods to predict the variability of a system. However, the formalism of PE can lead to an inaccurate estimate for systems that have parameters varying over a wide range. MC gives a direct estimate of the variance of the response, but for complex systems with many parameters, the number of trials necessary to yield an accurate estimate can be so large that the technique becomes impractical. The effectiveness of a designed-experiment (orthogonal array) methodology, as employed in the Taguchi Tolerance Design (TD) method, to estimate variability in complex systems is studied. We use a linear elastic three-point bending beam model and a nonlinear extended finite element crack growth model to test and compare the PE and MC methods with the TD method. Results from an MC estimate using 10,000 trials serve as a reference to verify the results in both cases. We find that the PE method performs suboptimally for a coefficient of variation above 5% in the input variables. In addition, we find that the TD method works very well with moderately sized designed-experiment trials for both models. Our results demonstrate how the variability estimation methods perform in the deterministic domain of numerical simulations and can assist in designing physical tests by providing a guideline performance measure.
{"title":"Variability Estimation in a Non-Linear Crack Growth Simulation Model with Controlled Parameters Using Designed Experiments Testing","authors":"Seungju Yeoa, Paul Funkenbuscha, H. Askari","doi":"10.1115/1.4064053","DOIUrl":"https://doi.org/10.1115/1.4064053","url":null,"abstract":"Variability in multiple independent input parameters makes it difficult to estimate the resultant variability in a system's overall response. The Propagation of Errors (PE) and Monte-Carlo (MC) techniques are two major methods to predict the variability of a system. However, the formalism of PE can lead to an inaccurate estimate for systems that have parameters varying over a wide range. For the latter, the results give a direct estimate of the variance of the response, but for complex systems with many parameters, the number of trials necessary to yield an accurate estimate can be sizeable to the point the technique becomes impractical. The effectiveness of a designed experiment (orthogonal array) methodology, as employed in Taguchi Tolerance Design (TD) method to estimate variability in complex systems is studied. We use a linear elastic 3-point bending beam model and a nonlinear extended finite elements crack growth model to test and compare the PE and MC methods with the TD method. Results from an MC estimate, using 10,000 trials, serve as a reference to verify the result in both cases. We find that the PE method works suboptimal for a coefficient of variation above 5% in the input variables. In addition, we find that the TD method works very well with moderately sized trials of designed experiment for both models. Our results demonstrate how the variability estimation methods perform in the deterministic domain of numerical simulations and can assist in designing physical tests by providing a guideline performance measure.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":"59 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139357521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitivity Analysis of Direct Numerical Simulation of a Spatially Developing Turbulent Mixing Layer to the Domain Dimensions
J. Colmenares F., M. Abuhegazy, Y. Peet, S. Murman, S. Poroseva
Understanding the spatial development of a turbulent mixing layer is essential for many engineering applications. However, the flow development is difficult to replicate in physical or numerical experiments. For this reason, the most attractive method for mixing layer analysis is direct numerical simulation (DNS), which offers the most control over the simulation inputs and is free from modeling assumptions. However, the cost of DNS often prevents conducting a sensitivity analysis of the simulation results to variations in the numerical procedure and, thus, separating numerical and physical effects. In the current paper, the effects of the computational domain dimensions on statistics collected from DNS of a spatially developing incompressible turbulent mixing layer are analyzed, with the focus on determining the domain dimensions suitable for studying the flow's asymptotic state. In the simulations, the mixing layer develops between two co-flowing laminar boundary layers formed on the two sides of a sharp-ended splitter plate of finite thickness, with characteristics close to those of the un-tripped boundary layers in the experiments by J. H. Bell and R. D. Mehta, AIAA Journal, 28(12), 2034 (1990). The simulations were conducted using the spectral-element code Nek5000.
{"title":"Sensitivity Analysis of Direct Numerical Simulation of a Spatially Developing Turbulent Mixing Layer to the Domain Dimensions","authors":"J. Colmenares F., M. Abuhegazy, Y. Peet, S. Murman, S. Poroseva","doi":"10.1115/1.4062770","DOIUrl":"https://doi.org/10.1115/1.4062770","url":null,"abstract":"\u0000 Understanding spatial development of a turbulent mixing layer is essential for many engineering applications. However, the flow development is difficult to replicate in physical or numerical experiments. For this reason, the most attractive method for the mixing layer analysis is the direct numerical simulation (DNS), with the most control over the simulation inputs and free from modeling assumptions. However, the DNS cost often prevents conducting the sensitivity analysis of the simulation results to variations in the numerical procedure and thus, separating numerical and physical effects. In the current paper, effects of the computational domain dimensions on statistics collected from DNS of a spatially developing incompressible turbulent mixing layer are analyzed with the focus on determining the domain dimensions suitable for studying the flow asymptotic state. In the simulations, the mixing layer develops between two co-flowing laminar boundary layers formed on two sides of a sharp-ended splitter plate of a finite thickness with characteristics close to those of the un-tripped boundary layers in the experiments by J. H. Bell, R. D. Mehta, AIAA Journal, 28 (12), 2034 (1990). The simulations were conducted using the spectral-element code Nek5000.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42671684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-Parametric Functional Calibration Using Uncertainty Quantification Based Decision Support
Anton van Beek, A. Giuntoli, Nitin K. Hansoge, S. Keten, Wei Chen
While most calibration methods focus on inferring a set of model parameters that are unknown but assumed to be constant, many models have parameters that have a functional relation with the controllable input variables. Formulating a low-dimensional approximation of these calibration functions allows modelers to use low-fidelity models to explore phenomena at length and time scales unattainable with their high-fidelity sources. While functional calibration methods are available for low-dimensional problems (e.g., one to three unknown calibration functions), exploring high-dimensional spaces of unknown calibration functions (e.g., more than ten) is still a challenging task due to its computational cost and the risk of identifiability issues. To address this challenge, we introduce a semiparametric calibration method that uses an approximate Bayesian computation scheme to quantify the uncertainty in the unknown calibration functions and uses this insight to identify which functions can be replaced with low-dimensional approximations. Through a test problem and a coarse-grained model of an epoxy resin, we demonstrate that the introduced method enables the identification of a low-dimensional set of calibration functions with a limited compromise in calibration accuracy. The novelty of the presented method is the ability to synthesize domain knowledge from various sources (i.e., physical experiments, simulation models, and expert insight) to enable high-dimensional functional calibration without the need for prior knowledge of the class of unknown calibration functions.
{"title":"Semi-Parametric Functional Calibration Using Uncertainty Quantification Based Decision Support","authors":"Anton van Beek, A. Giuntoli, Nitin K. Hansoge, S. Keten, Wei Chen","doi":"10.1115/1.4062694","DOIUrl":"https://doi.org/10.1115/1.4062694","url":null,"abstract":"\u0000 While most calibration methods focus on inferring a set of model parameters that are unknown but assumed to be constant, many models have parameters that have a functional relation with the controllable input variables. Formulating a low-dimensional approximation of these calibration functions allows modelers to use low-fidelity models to explore phenomena at lengths and time scales unattainable with their high-fidelity sources. While functional calibration methods are available for low-dimensional problems (e.g., one to three unknown calibration functions), exploring high-dimensional spaces of unknown calibration functions (e.g., more than ten) is still a challenging task due to its computational cost and the risk for identifiability issues. To address this challenge, we introduce a semiparametric calibration method that uses an approximate Bayesian computation scheme to quantify the uncertainty in the unknown calibration functions and uses this insight to identify what functions can be replaced with low-dimensional approximations. Through a test problem and a coarse-grained model of an epoxy resin, we demonstrate that the introduced method enables the identification of a low-dimensional set of calibration functions with a limited compromise in calibration accuracy. The novelty of the presented method is the ability to synthesize domain knowledge from various sources (i.e., physical experiments, simulation models, and expert insight) to enable high-dimensional functional calibration without the need for prior knowledge on the class of unknown calibration functions.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42600694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Code Verification for the SENSEI CFD Code
Weicheng Xue, Hongyu Wang, Christopher J. Roy
This work performs systematic code verification studies for turbulence modeling in our research CFD code SENSEI. Turbulence modeling verification cases, including cross-term sinusoidal manufactured solutions and a few exact solutions, are used to verify the correct implementation of the Spalart-Allmaras and Menter SST turbulence models in the SENSEI CFD code. The observed order of accuracy matches fairly well with the formal order for both the 2D/3D steady-state and 2D unsteady flows when using the cross-term sinusoidal manufactured solutions. This work concludes that it is important to keep the spatial discretization error at a similar order of magnitude as the temporal error in order to avoid erroneous analysis when performing combined spatial and temporal order analysis. Since explicit time marching schemes typically require smaller time step sizes than implicit time marching schemes due to stability constraints, multiple implicit schemes, such as the singly diagonally implicit Runge-Kutta multi-stage scheme and the three-point backward scheme, are used in our work to mitigate the stability constraints.
{"title":"Code Verification For The SENSEI CFD Code","authors":"Weicheng Xue, Hongyu Wang, Christopher J. Roy","doi":"10.1115/1.4062609","DOIUrl":"https://doi.org/10.1115/1.4062609","url":null,"abstract":"\u0000 This work performs systematic studies for code verification for turbulence modeling in our research CFD code SENSEI. Turbulence modeling verification cases including cross term sinusoidal manufactured solutions and a few exact solutions are used to justify the proper Spalart-Allmaras and Menter's SST turbulence modeling implementation of the SENSEI CFD code. The observed order of accuracy matches fairly well with the formal order for both the 2D/3D steady-state and 2D unsteady flows when using the cross term sinusoidal manufactured solutions. This work concludes that it is important to keep the spatial discretization error in a similar order of magnitude as the temporal error in order to avoid erroneous analysis when performing combined spatial and temporal order analysis. Since explicit time marching scheme typically requires smaller time step size compared to implicit time marching schemes due to stability constraints, multiple implicit schemes such as the Singly-Diagonally Implicit Runge-Kutta multi-stage scheme and three point backward scheme are used in our work to mitigate the stability constraints.","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47268661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}