Stable Likelihood Computation for Machine Learning of Linear Differential Operators with Gaussian Processes
O. Chatrabgoun, M. Esmaeilbeigi, M. Cheraghi, A. Daneshkhah
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022038966

$L^p$ Convergence of Approximate Maps and Probability Densities for Forward and Inverse Problems in Uncertainty Quantification
T. Butler, T. Wildey, Wenjuan Zhang
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022038086

Maximum Entropy Uncertainty Modeling at the Finite Element Level for Heated Structures
P. Song, X.Q. Wang, M. Mignolet
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022038338

Method for the Analysis of Epistemic and Aleatory Uncertainties for a Reliable Evaluation of Failure of Engineering Structures
Niklas Miska, D. Balzani
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022042145

An enhanced framework for Morris by combining with a sequential sampling strategy
Hanyan Huang, Qizhe Li, Sha Xie, Lin Chen, Zecong Liu
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022044335

Bayesian identification of pyrolysis model parameters for thermal protection materials using an adaptive gradient-informed sampling algorithm with application to a Mars atmospheric entry
J. Coheur, T. Magin, P. Chatelain, M. Arnst
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022042928

For space missions involving atmospheric entry, a thermal protection system is essential to shield the spacecraft and its payload from the severe aerothermal loads. Carbon/phenolic composite materials have gained renewed interest to serve as ablative thermal protection materials (TPMs). New experimental data relevant to the pyrolytic decomposition of the phenolic resin used in such carbon/phenolic composite TPMs have recently been published in the literature. In this paper, we infer from these new experimental data an uncertainty-quantified pyrolysis model. We adopt a Bayesian probabilistic approach to account for uncertainties in the model identification. We use an approximate likelihood function involving a weighted distance between the model predictions and the time-dependent experimental data. To sample from the posterior, we use a gradient-informed Markov chain Monte Carlo method, namely, a method based on an Itô stochastic differential equation, with an adaptive selection of the numerical parameters. To select the decomposition mechanisms to be represented in the pyrolysis model, we proceed by progressively increasing the complexity of the pyrolysis model until a satisfactory fit to the data is ultimately obtained. The pyrolysis model thus obtained involves six reactions and has 48 parameters. We demonstrate the use of the identified pyrolysis model in a numerical simulation of heat shield surface recession in a Martian entry.

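The gradient-informed sampler described in this abstract belongs to the family of Langevin-type MCMC methods derived from an Itô stochastic differential equation. As a hedged illustration of that family (not the authors' adaptive algorithm, and with a made-up one-dimensional standard-normal posterior in place of the 48-parameter pyrolysis model), a minimal Metropolis-adjusted Langevin (MALA) sampler looks like this:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D target for illustration: standard normal log-density.
def log_post(x):
    return -0.5 * x * x

def grad_log_post(x):
    return -x

def mala(n_steps=20000, eps=0.5, x0=0.0):
    """Metropolis-adjusted Langevin algorithm: an Euler-Maruyama step of
    the overdamped Langevin SDE, corrected by a Metropolis accept/reject."""
    x, samples = x0, []
    for _ in range(n_steps):
        # Proposal: drift along the gradient plus Gaussian noise.
        prop = x + 0.5 * eps ** 2 * grad_log_post(x) + eps * random.gauss(0, 1)

        def log_q(a, b):
            # Unnormalized log-density of proposing a when currently at b
            # (the normalizing constant cancels in the acceptance ratio).
            mean = b + 0.5 * eps ** 2 * grad_log_post(b)
            return -((a - mean) ** 2) / (2 * eps ** 2)

        log_alpha = (log_post(prop) + log_q(x, prop)
                     - log_post(x) - log_q(prop, x))
        if math.log(random.random()) < log_alpha:
            x = prop
        samples.append(x)
    return samples

samples = mala()
mala_mean = sum(samples) / len(samples)
mala_var = sum(s * s for s in samples) / len(samples)
```

For the standard-normal toy target, the empirical mean and second moment of the chain should be close to 0 and 1, which is a quick sanity check that the proposal and acceptance ratio are implemented consistently.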
Stochastic Galerkin Finite Element Method for Nonlinear Elasticity and Application to Reinforced Concrete Members
Mohammad S. Ghavami, Bedrich Sousedik, Hooshang Dabbagh, Morad Ahmadnasab
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022038435

We develop a stochastic Galerkin finite element method for nonlinear elasticity and apply it to reinforced concrete members with random material properties. The strategy is based on the modified Newton-Raphson method, which consists of an incremental loading process and a linearization scheme applied at each load increment. We consider that the material properties are given by a stochastic expansion in the so-called generalized polynomial chaos (gPC) framework. We seek the gPC expansion of the displacement, which is then used to update the gPC expansions of the stress, strain, and internal forces. The proposed method is applied to a reinforced concrete beam with uncertain initial concrete modulus of elasticity and a shear wall with uncertain maximum compressive stress of concrete, and the results are compared to those of stochastic collocation and Monte Carlo methods. Since the systems of equations obtained in the linearization scheme using the stochastic Galerkin method are very large, and there are typically many load increments, we also study iterative solution using preconditioned conjugate gradients. The efficiency of the proposed method is illustrated by a set of numerical experiments.

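The preconditioned conjugate gradient iteration mentioned for the linearized Galerkin systems can be sketched generically. The snippet below is a minimal Jacobi-preconditioned CG on a small symmetric positive definite system; the matrix and right-hand side are invented for illustration, and this is not the authors' solver or their stochastic Galerkin matrices:

```python
def matvec(A, v):
    """Dense matrix-vector product for a list-of-lists matrix."""
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def pcg(A, b, tol=1e-12, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner,
    starting from the zero vector."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual b - A x, x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]     # Jacobi preconditioner
    z = [mi * ri for mi, ri in zip(Minv, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(Minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Toy SPD system standing in for one linearized Galerkin solve.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

In practice the appeal of PCG in this setting is that only matrix-vector products with the (large, structured) stochastic Galerkin matrix are needed, never an explicit factorization.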
Multilevel Quasi-Monte Carlo for Interval Analysis
Robin R.P. Callens, Matthias G.R. Faess, David Moens
International Journal for Uncertainty Quantification. Pub Date: 2022-01-01. DOI: 10.1615/int.j.uncertaintyquantification.2022039245

This paper presents a multilevel quasi-Monte Carlo method for interval analysis, as a computationally efficient method for high-dimensional linear models. Interval analysis typically requires a global optimization procedure to calculate the interval bounds on the output side of a computational model. The main issue of such a procedure is that it requires numerous full-scale model evaluations. Even when simplified approaches such as the vertex method are applied, the required number of model evaluations scales combinatorially with the number of input intervals. This increase in required model evaluations is especially problematic for highly detailed numerical models containing thousands or even millions of degrees of freedom. In the context of probabilistic forward uncertainty propagation, multifidelity techniques such as multilevel quasi-Monte Carlo show great potential to reduce the computational cost. However, their translation to an interval context is not straightforward due to the fundamental differences between interval and probabilistic methods. In this work, we introduce a multilevel quasi-Monte Carlo framework. First, the input intervals are transformed into Cauchy random variables. Then, based on these Cauchy random variables, a multilevel sampling is designed. Finally, the corresponding model responses are post-processed to estimate the intervals on the output quantities with high accuracy. Two numerical examples show that, compared with traditional propagation approaches for interval analysis, the technique is very efficient for a medium to high number of input intervals, with results well within a predefined tolerance.

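The transformation of input intervals into Cauchy random variables echoes the classical Cauchy-deviate idea: perturb each input by a Cauchy variable whose scale is the interval half-width; for a linear model, the output deviate is then itself Cauchy, with scale equal to the exact output half-width. A minimal single-level sketch of that property (the linear model, midpoints, and half-widths are invented; the paper's multilevel quasi-Monte Carlo machinery is not reproduced here):

```python
import math
import random

random.seed(1)

# Hypothetical linear model with three interval inputs [mid_i +/- half_i].
coeffs = [2.0, -1.0, 0.5]
mids = [1.0, 2.0, 3.0]
half = [0.1, 0.2, 0.05]

def model(x):
    return sum(c * xi for c, xi in zip(coeffs, x))

def cauchy(scale):
    """Sample Cauchy(0, scale) by inverse-CDF transform."""
    return scale * math.tan(math.pi * (random.random() - 0.5))

def interval_halfwidth(n=20000):
    """Cauchy-deviate estimate of the output interval half-width.
    For a linear model, model(mid + d) - model(mid) is Cauchy with scale
    equal to the exact half-width, and the median of the absolute
    deviates is a consistent estimator of that scale."""
    y0 = model(mids)
    devs = sorted(
        abs(model([m + cauchy(h) for m, h in zip(mids, half)]) - y0)
        for _ in range(n)
    )
    return devs[n // 2]

est = interval_halfwidth()
# For this linear model the exact half-width is sum(|c_i| * half_i).
exact = sum(abs(c) * h for c, h in zip(coeffs, half))
```

The point of the sketch is only the input-to-Cauchy transformation and the scale-recovery step; the multilevel allocation across model fidelities is where the paper's method does the real work.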
An adaptive strategy for sequential designs of multilevel computer experiments
Ayao Ehara, S. Guillas
International Journal for Uncertainty Quantification. Pub Date: 2021-04-05. DOI: 10.1615/int.j.uncertaintyquantification.2023038376

Investigating uncertainties in computer simulations can be prohibitive in terms of computational cost, since the simulator must be run over a large number of input values. Building an emulator, i.e., a statistical surrogate model of the simulator constructed from a design of experiments comprising a comparatively small number of evaluations of the forward solver, greatly alleviates the computational burden of such investigations. Nevertheless, this can still exceed the computational budget of many studies. Two major approaches have been used to reduce the budget needed to build the emulator: efficient designs of experiments, such as sequential designs, and the combination of training data of different degrees of sophistication in so-called multi-fidelity methods, called multilevel when the fidelities are ordered, typically by increasing resolution. We present a novel method that combines both approaches: the multilevel adaptive sequential design of computer experiments (MLASCE), in the framework of Gaussian process (GP) emulators. We use reproducing kernel Hilbert spaces as a tool for our GP approximations of the differences between two consecutive levels. This dual strategy allows us to efficiently allocate limited computational resources over simulations of different levels of fidelity and to build the GP emulator. The allocation of computational resources is shown to be the solution of a simple optimization problem in a special case for which we theoretically prove the validity of our approach. The proposed method is compared with other existing models of multi-fidelity Gaussian process emulation, and gains of orders of magnitude in accuracy or computing budget are demonstrated in several of the numerical examples.

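The core multilevel idea, emulating the cheap level directly and the *difference* between consecutive levels separately, can be illustrated without the paper's full GP and RKHS machinery. The sketch below uses noise-free squared-exponential kernel interpolation (the GP posterior mean with zero noise) on a toy cheap simulator plus a sparsely sampled correction; the two simulators, design points, and length scale are all assumptions, and the adaptive sequential allocation that is MLASCE's actual contribution is not shown:

```python
import math

# Hypothetical two-level simulator: a cheap level-0 model and an
# expensive level-1 model that differs by a smooth correction.
def f_coarse(x):
    return math.sin(x)

def f_fine(x):
    return math.sin(x) + 0.1 * x * x

def rbf_interp(xs, ys, length=1.0):
    """Noise-free GP posterior mean with a squared-exponential kernel
    (kernel interpolation): returns x -> sum_i w_i k(x, x_i), w = K^-1 y."""
    n = len(xs)
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    K = [[k(xi, xj) + (1e-8 if i == j else 0.0) for j, xj in enumerate(xs)]
         for i, xi in enumerate(xs)]
    # Solve K w = y by Gaussian elimination with partial pivoting
    # (perfectly adequate at this tiny size).
    M = [row[:] + [yi] for row, yi in zip(K, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return lambda x: sum(wi * k(x, xi) for wi, xi in zip(w, xs))

# Many cheap level-0 runs, only a few expensive runs for the correction.
xs0 = [i * 0.75 for i in range(9)]   # 9 coarse runs on [0, 6]
xs1 = [0.0, 2.0, 4.0, 6.0]           # 4 fine runs
em0 = rbf_interp(xs0, [f_coarse(x) for x in xs0])
em_d = rbf_interp(xs1, [f_fine(x) - f_coarse(x) for x in xs1])
emulator = lambda x: em0(x) + em_d(x)
```

Because the correction is smoother (cheaper to learn) than the fine model itself, the few expensive runs go much further than they would in a single-level emulator, which is the budget argument the abstract makes precise.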
Hamiltonian Monte Carlo in Inverse Problems; Ill-Conditioning and Multi-Modality
I. Langmore, M. Dikovsky, S. Geraedts, P. Norgaard, R. V. Behren
International Journal for Uncertainty Quantification. Pub Date: 2021-03-12. DOI: 10.1615/int.j.uncertaintyquantification.2022038478

The Hamiltonian Monte Carlo (HMC) method allows sampling from continuous densities. Favorable scaling with dimension has led to wide adoption of HMC by the statistics community. Modern auto-differentiating software should allow more widespread usage in Bayesian inverse problems. This paper analyzes two major difficulties encountered when using HMC for inverse problems: poor conditioning and multi-modality. Novel results on preconditioning and on replica exchange Monte Carlo parameter selection are presented in the context of spectroscopy. Recommendations are given for the number of integration steps, the step size, the preconditioner type and fitting, and the annealing form and schedule. These recommendations are analyzed rigorously in the Gaussian case and shown to generalize in a fusion plasma reconstruction.

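For orientation, the HMC method analyzed in this paper can be sketched in its simplest form: leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction. The snippet below targets a one-dimensional standard Gaussian, the setting in which the paper derives its rigorous results, but it illustrates the generic algorithm only, not the paper's preconditioning or replica-exchange recommendations; the step size and trajectory length are arbitrary choices for this toy problem:

```python
import math
import random

random.seed(2)

# Target: standard normal, so potential U(q) = q^2 / 2 and grad U = q.
def U(q):
    return 0.5 * q * q

def grad_U(q):
    return q

def hmc(n_samples=5000, eps=0.3, n_leapfrog=10, q0=0.0):
    """One-dimensional HMC: resample momentum, run a leapfrog trajectory,
    then accept/reject to correct the integrator's energy error."""
    q, out = q0, []
    for _ in range(n_samples):
        p = random.gauss(0, 1)                 # fresh momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)     # initial half step
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new
            p_new -= eps * grad_U(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)     # final half step
        # Accept with probability min(1, exp(H_old - H_new)).
        dH = (U(q) + 0.5 * p * p) - (U(q_new) + 0.5 * p_new * p_new)
        if math.log(random.random()) < dH:
            q = q_new
        out.append(q)
    return out

draws = hmc()
hmc_mean = sum(draws) / len(draws)
hmc_var = sum(d * d for d in draws) / len(draws)
```

The paper's subject is precisely what this toy hides: how `eps`, `n_leapfrog`, and a preconditioner must be chosen when the posterior is badly conditioned or multi-modal, where naive settings like the ones above fail.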