Nada Cvetković, Han Cheng Lie, Harshit Bansal, Karen Veroy
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 3, Page 723-758, September 2024. Abstract. In statistical inference, a discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference can lead to misspecified likelihoods and thus to incorrect estimates. In many inverse problems, the parameter-to-observable map is the composition of a linear state-to-observable map called an “observation operator” and a possibly nonlinear parameter-to-state map called the “model.” We consider such Bayesian inverse problems where the discrepancy in the parameter-to-observable map is due to the use of an approximate model that differs from the best model, i.e., to nonzero “model error.” Multiple approaches have been proposed to address such discrepancies, each leading to a specific posterior. We show how to use local Lipschitz stability estimates of posteriors with respect to likelihood perturbations to bound the Kullback–Leibler divergence of the posterior of each approach with respect to the posterior associated with the best model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error for Bayesian inverse problems of this type. We illustrate the feasibility of one such criterion on an advection-diffusion-reaction PDE inverse problem and use this example to discuss the importance and challenges of model-error-aware inference.
“Choosing Observation Operators to Mitigate Model Error in Bayesian Inverse Problems.” DOI: 10.1137/23m1602140. Published online July 10, 2024.
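The Kullback–Leibler divergence between posteriors under two forward maps can be made concrete in the simplest conjugate setting. The following is a toy sketch for a 1D linear-Gaussian inverse problem with hypothetical forward coefficients (1.0 for the "best" model, 0.9 for the approximate one); it illustrates the quantity being bounded, not the paper's stability estimates.

```python
import math

def gaussian_posterior(g, y, prior_var=1.0, noise_var=0.1):
    """Conjugate posterior for y = g * theta + noise, theta ~ N(0, prior_var)."""
    post_var = 1.0 / (1.0 / prior_var + g * g / noise_var)
    post_mean = post_var * g * y / noise_var
    return post_mean, post_var

def kl_gaussians(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) in closed form."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

y = 1.2                                      # one (hypothetical) noisy observation
m_best, v_best = gaussian_posterior(1.0, y)  # posterior under the "best" model
m_apx, v_apx = gaussian_posterior(0.9, y)    # posterior under the approximate model
kl = kl_gaussians(m_apx, v_apx, m_best, v_best)
```

Even a 10% model error in the forward coefficient yields a strictly positive divergence between the two posteriors, which is exactly what the paper's bounds control.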
Margot Herin, Marouane Il Idrissi, Vincent Chabridon, Bertrand Iooss
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 667-692, June 2024. Abstract. Performing (variance-based) global sensitivity analysis (GSA) with dependent inputs has recently benefited from cooperative game theory concepts, leading to meaningful sensitivity indices suitable for dependent inputs. The “Shapley effects,” i.e., the Shapley values transposed to variance-based GSA problems, are an example of such indices. However, these indices exhibit a particular behavior that can be undesirable: an exogenous input (i.e., one that is not explicitly included in the structural equations of the model) can be associated with a strictly positive index when it is correlated with endogenous inputs. This paper investigates using a different allocation, called the “proportional values,” for GSA purposes. First, an extension of this allocation is proposed to make it suitable for variance-based GSA. A novel GSA index is then defined: the proportional marginal effect (PME). The notion of exogeneity is formally defined in the context of variance-based GSA. It is shown that the PMEs are more discriminant than the Shapley values and allow the distinction of exogenous variables, even when they are correlated with endogenous inputs. Moreover, their behavior is compared to the Shapley effects on analytical toy cases and more realistic use cases.
“Proportional Marginal Effects for Global Sensitivity Analysis.” DOI: 10.1137/22m153032x. Published online June 26, 2024.
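The undesirable behavior described above, an exogenous input receiving a strictly positive Shapley effect through correlation alone, can be reproduced in a small closed-form example. The sketch below uses a hypothetical linear model Y = X1 + X2 with jointly Gaussian inputs and an exogenous X3 correlated with X1; since Var(E[Y | X_S]) is available in closed form, the Shapley effects can be computed exactly by enumerating permutations. It illustrates the Shapley effects only, not the paper's proportional marginal effects.

```python
import itertools
import math

import numpy as np

a = np.array([1.0, 1.0, 0.0])      # model: Y = X1 + X2; X3 never enters Y
rho = 0.7                          # but X3 is correlated with X1
Sigma = np.array([[1.0, 0.0, rho],
                  [0.0, 1.0, 0.0],
                  [rho, 0.0, 1.0]])

def cost(S):
    """c(S) = Var(E[Y | X_S]), closed form for a linear model with Gaussian inputs."""
    S = list(S)
    if not S:
        return 0.0
    proj = Sigma[:, S] @ np.linalg.solve(Sigma[np.ix_(S, S)], Sigma[S, :])
    return float(a @ proj @ a)

def shapley_effects(d=3):
    """Shapley allocation of the variance c, normalized by Var(Y) = c(full set)."""
    phi = np.zeros(d)
    for perm in itertools.permutations(range(d)):
        seen = []
        for i in perm:
            prev = cost(seen)
            seen.append(i)
            phi[i] += cost(seen) - prev
    return phi / (math.factorial(d) * cost(range(d)))

phi = shapley_effects()   # phi[2] > 0 although X3 is exogenous
```

The effects sum to one, yet the exogenous X3 receives a strictly positive share purely through its correlation with X1: the phenomenon the PME is designed to avoid.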
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 693-721, June 2024. Abstract. We present a combination technique based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables to solve efficiently a class of optimal control problems (OCPs) constrained by random partial differential equations. The method requires solving the OCP for several low-fidelity spatial grids and quadrature formulae for the objective functional. All the computed solutions are then linearly combined to obtain a final approximation which, under suitable regularity assumptions, preserves the same accuracy as fine tensor product approximations, while drastically reducing the computational cost. The combination technique involves only tensor product quadrature formulae, and thus the discretized OCPs preserve the (possible) convexity of the continuous OCP. Hence, the combination technique avoids the inconveniences of multilevel Monte Carlo and/or sparse grids approaches but remains suitable for high-dimensional problems. The manuscript presents an a priori procedure to choose the most important mixed differences and an analysis showing that the asymptotic complexity is exclusively determined by the spatial solver. Numerical experiments validate the results.
“A Combination Technique for Optimal Control Problems Constrained by Random PDEs,” by Fabio Nobile and Tommaso Vanzan. DOI: 10.1137/22m1532263. Published online June 26, 2024.
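The combination idea, linearly mixing anisotropic low-fidelity tensor approximations so that mixed differences cancel, can be sketched on a plain 2D quadrature problem. This is the classical combination technique for integration with assumed trapezoidal component rules and a hypothetical integrand, not the paper's OCP setting.

```python
import numpy as np

def trap_weights(n):
    """Trapezoidal quadrature weights on n equispaced points over [0, 1]."""
    w = np.full(n, 1.0)
    w[0] = w[-1] = 0.5
    return w / (n - 1)

def tensor_quad(f, lx, ly):
    """Full-tensor trapezoidal rule on [0, 1]^2 with 2**l + 1 points per axis."""
    x = np.linspace(0.0, 1.0, 2**lx + 1)
    y = np.linspace(0.0, 1.0, 2**ly + 1)
    return float(trap_weights(x.size) @ f(x[:, None], y[None, :]) @ trap_weights(y.size))

def combination(f, L):
    """Classical combination formula: sum over |l| = L minus sum over |l| = L - 1."""
    total = sum(tensor_quad(f, lx, L - lx) for lx in range(L + 1))
    total -= sum(tensor_quad(f, lx, L - 1 - lx) for lx in range(L))
    return total

f = lambda x, y: np.exp(x + y)          # hypothetical smooth integrand
exact = (np.e - 1.0) ** 2
err_combi = abs(combination(f, 6) - exact)
```

The combined estimate uses only coarse anisotropic grids (at most 65 points per axis on one grid at a time), yet beats a moderately fine full tensor grid, which is the cost saving the abstract describes.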
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 646-666, June 2024. Abstract. The multivariate adaptive regression spline (MARS) approach of Friedman [J. H. Friedman, Ann. Statist., 19 (1991), pp. 1–67] and its Bayesian counterpart [D. Francom et al., Statist. Sinica, 28 (2018), pp. 791–816] are effective approaches for the emulation of computer models. The traditional assumption of Gaussian errors limits the usefulness of MARS, and of many popular alternatives, when dealing with stochastic computer models. We propose a generalized Bayesian MARS (GBMARS) framework which admits the broad class of generalized hyperbolic distributions as the induced likelihood function. This allows us to develop tools for the emulation of stochastic simulators which are parsimonious, scalable, and interpretable and require minimal tuning, while providing powerful predictive and uncertainty quantification capabilities. GBMARS is capable of robust regression with t distributions, quantile regression with asymmetric Laplace distributions, and a general form of “Normal-Wald” regression in which the shape of the error distribution and the structure of the mean function are learned simultaneously. We demonstrate the effectiveness of GBMARS on various stochastic computer models, and we show that it compares favorably to several popular alternatives.
“Generalized Bayesian MARS: Tools for Stochastic Computer Model Emulation,” by Kellin N. Rumsey, Devin Francom, and Andy Shen. DOI: 10.1137/23m1577122. Published online June 20, 2024.
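The quantile-regression capability mentioned above rests on a standard fact: maximizing an asymmetric Laplace likelihood is equivalent to minimizing the pinball (check) loss, whose minimizer over constants is the corresponding quantile. A minimal sketch with hypothetical simulated data (an Exp(1) sample, tau = 0.9), unrelated to the GBMARS implementation itself:

```python
import numpy as np

def pinball(u, tau):
    """Pinball (check) loss; its minimizer over a constant is the tau-quantile."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(1)
y = rng.exponential(size=10_000)   # hypothetical data: Exp(1) draws
tau = 0.9

# Grid search for the constant c minimizing the mean pinball loss; this is the
# maximum-likelihood fit of a constant under asymmetric Laplace errors.
grid = np.linspace(0.0, 5.0, 2001)
losses = np.array([pinball(y - c, tau).mean() for c in grid])
q_hat = grid[int(np.argmin(losses))]
```

The fitted constant recovers the true 0.9-quantile of Exp(1), which is -log(0.1); replacing the constant with a spline basis gives the quantile-regression setting the abstract refers to.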
Philipp A. Guth, Claudia Schillings, Simon Weissmann
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 614-645, June 2024. Abstract. We propose a general framework for machine-learning-based optimization under uncertainty. Our approach replaces the complex forward model with a surrogate, which is learned simultaneously in a one-shot sense when solving the optimal control problem. Our approach relies on a reformulation of the problem as a penalized empirical risk minimization problem, for which we provide a consistency analysis in the limit of large data and an increasing penalty parameter. To solve the resulting problem, we suggest a stochastic gradient method with adaptive control of the penalty parameter and prove convergence under suitable assumptions on the surrogate model. Numerical experiments illustrate the results for linear and nonlinear surrogate models.
“One-Shot Learning of Surrogates in PDE-Constrained Optimization under Uncertainty.” DOI: 10.1137/23m1553170. Published online June 12, 2024.
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 549-578, June 2024. Abstract. Most inverse problems in the physical sciences are formulated as PDE-constrained optimization problems. This involves identifying unknown parameters in equations by optimizing the model to generate PDE solutions that closely match measured data. The formulation is powerful and widely used in many science and engineering fields. However, one crucial assumption is that the unknown parameter must be deterministic. In reality, many problems are stochastic in nature, and the unknown parameter is random. The challenge then becomes recovering the full distribution of this unknown random parameter, a much more complex task. In this paper, we examine this problem in a general setting. In particular, we conceptualize the PDE solver as a push-forward map that pushes the parameter distribution to the generated data distribution. In this way, the SDE-constrained optimization translates to minimizing the distance between the generated distribution and the measurement distribution. We then formulate a gradient flow equation to seek the ground-truth parameter probability distribution. This opens up a new paradigm for extending many techniques in PDE-constrained optimization to optimization for systems with stochasticity.
“Differential Equation–Constrained Optimization with Stochasticity,” by Qin Li, Li Wang, and Yunan Yang. DOI: 10.1137/23m1571162. Published online June 7, 2024.
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 579-613, June 2024. Abstract. This paper is concerned with the random effect of noise dispersion for the stochastic logarithmic Schrödinger equation arising from optical fibres with dispersion management. The well-posedness of the logarithmic Schrödinger equation with white noise dispersion is established via the regularization energy approximation and a spatial scaling property. For the small noise case, the effect of the noise dispersion is quantified by the proven large deviation principle under additional regularity assumptions on the initial datum. As an application, we show that for the regularized model, the exit from a neighborhood of the attractor of the deterministic equation occurs on a sufficiently large time scale. Furthermore, the exit time and exit point in the small noise case, as well as the effect of large noise dispersion, are also discussed for the stochastic logarithmic Schrödinger equation.
“Quantifying the Effect of Random Dispersion for Logarithmic Schrödinger Equation,” by Jianbo Cui and Liying Sun. DOI: 10.1137/23m1578619. Published online June 7, 2024.
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 524-548, June 2024. Abstract. Bayesian hierarchical models have been demonstrated to provide efficient algorithms for finding sparse solutions to ill-posed inverse problems. The models typically comprise a conditionally Gaussian prior model for the unknown, augmented by a hyperprior model for the variances. A widely used choice for the hyperprior is a member of the family of generalized gamma distributions. Most of the work in the literature has concentrated on numerical approximation of the maximum a posteriori estimates, and less attention has been paid to sampling methods or other means for uncertainty quantification. Sampling from the hierarchical models is challenging mainly for two reasons: The hierarchical models are typically high dimensional, thus suffering from the curse of dimensionality, and the strong correlation between the unknown of interest and its variance can make sampling rather inefficient. This work mainly addresses the first of these obstacles. By using a novel reparametrization, it is shown how the posterior distribution can be transformed into one dominated by a Gaussian white noise, allowing sampling by using the preconditioned Crank–Nicolson (pCN) scheme that has been shown to be efficient for sampling from distributions dominated by a Gaussian component. Furthermore, a novel idea for speeding up the pCN in a special case is developed, and the question of how strongly the hierarchical models are concentrated on sparse solutions is addressed in light of a computed example.
“Computationally Efficient Sampling Methods for Sparsity Promoting Hierarchical Bayesian Models,” by D. Calvetti and E. Somersalo. DOI: 10.1137/23m1564043. Published online June 7, 2024.
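The pCN scheme referred to above has a simple generic form for a standard Gaussian prior: the proposal preserves the prior, so the acceptance ratio involves only the likelihood, which is what makes the sampler robust to high dimension. A minimal sketch on a toy 1D conjugate problem (hypothetical observation and noise level, not the paper's hierarchical model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta, y, sigma=0.5):
    """Toy Gaussian log-likelihood, y ~ N(theta, sigma^2)."""
    return -0.5 * np.sum((y - theta) ** 2) / sigma**2

def pcn(y, dim=1, beta=0.3, n_steps=20000):
    """pCN chain targeting the posterior under a N(0, I) prior.

    Proposal: theta' = sqrt(1 - beta^2) * theta + beta * xi, xi ~ N(0, I).
    This proposal is reversible with respect to the prior, so the
    Metropolis-Hastings acceptance ratio is likelihood-only.
    """
    theta = np.zeros(dim)
    ll = log_like(theta, y)
    chain = np.empty((n_steps, dim))
    for t in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * theta + beta * rng.standard_normal(dim)
        ll_prop = log_like(prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain

chain = pcn(np.array([1.0]))
post_mean = chain[5000:].mean()   # exact posterior mean is y / (1 + sigma^2) = 0.8
```

In this conjugate setting the chain's mean can be checked against the closed-form posterior, which is a useful sanity test before moving to models where no closed form exists.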
Helmut Harbrecht, Viacheslav Karnaev, Marc Schmidlin
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 503-523, June 2024. Abstract. The present article considers the quantification of uncertainty for the equations of linear elasticity on random domains. To this end, we model the random domains as the images of some given fixed, nominal domain under random domain mappings, which are defined by a Karhunen–Loève expansion. We then prove the analytic regularity of the random solution with respect to the countable random input parameters which enter the problem through the Karhunen–Loève expansion of the random domain mappings. In particular, we provide appropriate bounds on arbitrary derivatives of the random solution with respect to those input parameters. These enable the use of state-of-the-art quadrature methods to compute deterministic statistics of quantities of interest, such as the mean and the variance of the random solution itself or the random von Mises stress, as integrals over the countable random input parameters in a dimensionally robust way. Numerical examples qualify and quantify the theoretical findings.
“Quantifying Domain Uncertainty in Linear Elasticity.” DOI: 10.1137/23m1578589. Published online May 30, 2024.
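A truncated Karhunen–Loève expansion of the kind used to define the random domain mappings can be sketched discretely: eigendecompose a covariance matrix and combine the leading modes with independent standard normal coefficients. The squared-exponential kernel and correlation length below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def kl_decomposition(n=64, ell=0.2, n_terms=10):
    """Discrete KL modes of a squared-exponential covariance on [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_terms]      # keep the n_terms largest modes
    return vals, vecs, idx

vals, vecs, idx = kl_decomposition()
captured = vals[idx].sum() / vals.sum()         # fraction of total variance retained

def sample_mapping(rng):
    """One realization of the truncation sum_k sqrt(lambda_k) * xi_k * phi_k."""
    xi = rng.standard_normal(len(idx))
    return vecs[:, idx] @ (np.sqrt(vals[idx]) * xi)
```

The rapid eigenvalue decay of smooth kernels is what makes the countable parametrization tractable: a handful of modes carries nearly all the variance, and each retained coefficient xi_k becomes one of the countable random input parameters.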
Yi Ji, Henry Shaowu Yuchi, Derek Soeder, J.-F. Paquet, Steffen A. Bass, V. Roshan Joseph, C. F. Jeff Wu, Simon Mak
SIAM/ASA Journal on Uncertainty Quantification, Volume 12, Issue 2, Page 473-502, June 2024. Abstract. In an era where scientific experimentation is often costly, multi-fidelity emulation provides a powerful tool for predictive scientific computing. While there has been notable work on multi-fidelity modeling, existing models do not incorporate an important “conglomerate” property of multi-fidelity simulators, where the accuracies of different simulator components are controlled by different fidelity parameters. Such conglomerate simulators are widely encountered in complex nuclear physics and astrophysics applications. We thus propose a new CONglomerate multi-FIdelity Gaussian process (CONFIG) model, which embeds this conglomerate structure within a novel non-stationary covariance function. We show that the proposed CONFIG model can capture prior knowledge on the numerical convergence of conglomerate simulators, which allows for cost-efficient emulation of multi-fidelity systems. We demonstrate the improved predictive performance of CONFIG over state-of-the-art models in a suite of numerical experiments and two applications, the first for emulation of cantilever beam deflection and the second for emulating the evolution of the quark-gluon plasma, which was theorized to have filled the universe shortly after the Big Bang.
“Conglomerate Multi-fidelity Gaussian Process Modeling, with Application to Heavy-Ion Collisions.” DOI: 10.1137/22m1525004. Published online May 30, 2024.