Yanni Papandreou, Jon Cockayne, Mark Girolami, Andrew Duncan
SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1278-1307, December 2023. Abstract. The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesized with the numerical solution of a PDE intended to describe it in a coherent statistical framework, in order to compensate for model error. This work presents a new theoretical analysis of StatFEM, demonstrating that it has convergence properties similar to those of the finite element method on which it is based. Our results constitute a bound on the 2-Wasserstein distance between the ideal prior and posterior and the StatFEM approximation thereof, and show that this distance converges at the same mesh-dependent rate at which finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate the theory, including an example that tests the robustness of StatFEM when extended to nonlinear quantities of interest.
{"title":"Theoretical Guarantees for the Statistical Finite Element Method","authors":"Yanni Papandreou, Jon Cockayne, Mark Girolami, Andrew Duncan","doi":"10.1137/21m1463963","DOIUrl":"https://doi.org/10.1137/21m1463963","url":null,"abstract":"SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1278-1307, December 2023. <br/> Abstract. The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesized with the numerical solution of a PDE intended to describe it in a coherent statistical framework, to compensate for model error. This work presents a new theoretical analysis of the StatFEM demonstrating that it has similar convergence properties to the finite element method on which it is based. Our results constitute a bound on the 2-Wasserstein distance between the ideal prior and posterior and the StatFEM approximation thereof, and show that this distance converges at the same mesh-dependent rate as finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate our theory, including an example which tests the robustness of StatFEM when extended to nonlinear quantities of interest.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138512772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
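The 2-Wasserstein distance appearing in the bound above has a closed form between Gaussian measures, which makes it convenient to evaluate for Gaussian priors and posteriors. A minimal NumPy sketch of that closed form (this is the metric itself, not the paper's StatFEM analysis; function names are illustrative):

```python
import numpy as np

def _sqrtm_psd(A):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def w2_gaussian(m1, C1, m2, C2):
    """2-Wasserstein distance between N(m1, C1) and N(m2, C2):
    W2^2 = |m1 - m2|^2 + tr(C1 + C2 - 2 (C2^{1/2} C1 C2^{1/2})^{1/2})."""
    r = _sqrtm_psd(C2)
    cross = _sqrtm_psd(r @ C1 @ r)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))
```

For equal covariances the trace term vanishes and the distance reduces to the Euclidean distance between the means.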
SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1258-1277, December 2023. Abstract. Quantifying errors caused by indeterminacy in data is currently computationally expensive, even in relatively simple PDE problems. Efficient methods could prove very useful in, for example, scientific experiments carried out with simulations. In this paper, we create and test neural networks that quantify uncertainty errors in the case of a linear one-dimensional boundary value problem. Training and testing data are generated numerically. We create three training datasets and three testing datasets and train four neural networks with differing architectures. The performance of the neural networks is compared to known analytical bounds on the errors caused by uncertain data. We find that the trained neural networks accurately approximate the exact error quantity in almost all cases, and that the network outputs always lie between the analytical upper and lower bounds. These results show that, after training on a suitable dataset, even a relatively compact neural network can successfully predict the quantitative effects of uncertain data. If these methods can be extended to more difficult PDE problems, they could have a multitude of real-world applications.
{"title":"Quantification of Errors Generated by Uncertain Data in a Linear Boundary Value Problem Using Neural Networks","authors":"Vilho Halonen, Ilkka Pölönen","doi":"10.1137/22m1538855","DOIUrl":"https://doi.org/10.1137/22m1538855","url":null,"abstract":"SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1258-1277, December 2023. <br/> Abstract. Quantifying errors caused by indeterminacy in data is currently computationally expensive even in relatively simple PDE problems. Efficient methods could prove very useful in, for example, scientific experiments done with simulations. In this paper, we create and test neural networks which quantify uncertainty errors in the case of a linear one-dimensional boundary value problem. Training and testing data is generated numerically. We created three training datasets and three testing datasets and trained four neural networks with differing architectures. The performance of the neural networks is compared to known analytical bounds of errors caused by uncertain data. We find that the trained neural networks accurately approximate the exact error quantity in almost all cases and the neural network outputs are always between the analytical upper and lower bounds. The results of this paper show that after a suitable dataset is used for training even a relatively compact neural network can successfully predict quantitative effects generated by uncertain data. 
If these methods can be extended to more difficult PDE problems they could potentially have a multitude of real-world applications.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138512768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
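The data-generation step described above can be mimicked with a few lines of linear algebra: solve the discretized BVP for nominal and perturbed data and record the solution error. A hedged sketch, assuming the model problem −u″ = f on [0,1] with homogeneous Dirichlet conditions (the specific problem, grid, and perturbation are illustrative choices, not the paper's setup):

```python
import numpy as np

def solve_bvp(f_vals, h):
    """Solve -u'' = f on a uniform interior grid with u(0) = u(1) = 0
    using the standard second-order finite-difference stencil."""
    n = len(f_vals)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f_vals)

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)               # nominal data
delta = 0.1 * np.cos(np.pi * x)     # perturbation representing uncertain data

# discrete L2 norm of the error induced by the data perturbation
err = np.linalg.norm(solve_bvp(f + delta, h) - solve_bvp(f, h)) * np.sqrt(h)
```

Because the problem is linear, the solution error depends linearly on the data perturbation, which is what makes the analytical error bounds mentioned in the abstract tractable.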
SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1225-1257, December 2023. Abstract. It is common to model a deterministic response function, such as the output of a computer experiment, as a Gaussian process with a Matérn covariance kernel. The smoothness parameter of a Matérn kernel determines many important properties of the model in the large data limit, including the rate of convergence of the conditional mean to the response function. We prove that the maximum likelihood estimate of the smoothness parameter cannot asymptotically undersmooth the truth when the data are obtained on a fixed bounded subset of [math]. That is, if the data-generating response function has Sobolev smoothness [math], then the smoothness parameter estimate cannot be asymptotically less than [math]. The lower bound is sharp. Additionally, we show that maximum likelihood estimation recovers the true smoothness for a class of compactly supported self-similar functions. For cross-validation we prove an asymptotic lower bound [math], which, however, is unlikely to be sharp. The results are based on approximation theory in Sobolev spaces and some general theorems that restrict the set of values that the parameter estimators can take.
{"title":"Asymptotic Bounds for Smoothness Parameter Estimates in Gaussian Process Interpolation","authors":"Toni Karvonen","doi":"10.1137/22m149288x","DOIUrl":"https://doi.org/10.1137/22m149288x","url":null,"abstract":"SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 4, Page 1225-1257, December 2023. <br/> Abstract. It is common to model a deterministic response function, such as the output of a computer experiment, as a Gaussian process with a Matérn covariance kernel. The smoothness parameter of a Matérn kernel determines many important properties of the model in the large data limit, including the rate of convergence of the conditional mean to the response function. We prove that the maximum likelihood estimate of the smoothness parameter cannot asymptotically undersmooth the truth when the data are obtained on a fixed bounded subset of [math]. That is, if the data-generating response function has Sobolev smoothness [math], then the smoothness parameter estimate cannot be asymptotically less than [math]. The lower bound is sharp. Additionally, we show that maximum likelihood estimation recovers the true smoothness for a class of compactly supported self-similar functions. For cross-validation we prove an asymptotic lower bound [math], which, however, is unlikely to be sharp. 
The results are based on approximation theory in Sobolev spaces and some general theorems that restrict the set of values that the parameter estimators can take.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138512776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
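The setting above, a deterministic response interpolated by a Gaussian process with a Matérn kernel, is easy to reproduce in a few lines. A sketch using the ν = 3/2 member of the Matérn family (the lengthscale, grid, and test function are arbitrary illustrations; the paper's results concern estimating ν itself):

```python
import numpy as np

def matern32(r, ell):
    """Matern kernel with smoothness nu = 3/2 and lengthscale ell."""
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

def gp_conditional_mean(X, y, Xs, ell, jitter=1e-10):
    """Conditional (posterior) mean of a zero-mean GP interpolant."""
    K = matern32(np.abs(X[:, None] - X[None, :]), ell) + jitter * np.eye(len(X))
    Ks = matern32(np.abs(Xs[:, None] - X[None, :]), ell)
    return Ks @ np.linalg.solve(K, y)

X = np.linspace(0.0, 1.0, 9)
y = np.sin(2.0 * np.pi * X)         # deterministic response, as in the abstract
m = gp_conditional_mean(X, y, X, ell=0.3)
```

At the data sites the conditional mean reproduces the observations up to the jitter, which is the interpolation regime the convergence results refer to.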
This paper studies the sensitivity of mass-action systems with respect to their diffusion approximations, in particular the dependence on population size. As a continuous-time Markov chain, a mass-action system can be described by an equation driven by finitely many Poisson processes, which admits a diffusion approximation that can be matched pathwise. The magnitude of the noise in a mass-action system is proportional to the square root of the molecule count/population size, which gives a large class of mass-action systems quasi-stationary distributions (QSDs) in addition to invariant probability measures. In this paper, we modify the coupling-based technique developed in [M. Dobson, Y. Li, and J. Zhai, SIAM/ASA J. Uncertain. Quantif., 9 (2021), pp. 135–162] to estimate an upper bound on the 1-Wasserstein distance between two QSDs. Numerical sensitivity results for different population sizes are provided.
{"title":"Sensitivity Analysis of Quasi-Stationary Distributions (QSDs) of Mass-Action Systems","authors":"Yao Li, Yaping Yuan","doi":"10.1137/22m1535875","DOIUrl":"https://doi.org/10.1137/22m1535875","url":null,"abstract":"This paper studies the sensitivity analysis of mass-action systems against their diffusion approximations, particularly the dependence on population sizes. As a continuous-time Markov chain, a mass-action system can be described by an equation driven by finitely many Poisson processes, which has a diffusion approximation that can be pathwisely matched. The magnitude of noise in mass-action systems is proportional to the square root of the molecule count/population, which makes a large class of mass-action systems have quasi-stationary distributions (QSDs) besides invariant probability measures. In this paper, we modify the coupling-based technique developed in [M. Dobson, Y. Li, and J. Zhai, SIAM/ASA J. Uncertain. Quantif., 9 (2021), pp. 135–162] to estimate an upper bound of the 1-Wasserstein distance between two QSDs. Some numerical results of sensitivity with different population sizes are provided.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135616264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
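In one dimension the 1-Wasserstein distance bounded above has a simple empirical estimator: sort the two samples and average the absolute differences of the order statistics. A sketch of that estimator (a generic empirical quantity, not the paper's coupling-based bound):

```python
import numpy as np

def w1_empirical(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    the mean absolute difference of the sorted samples."""
    return float(np.mean(np.abs(np.sort(xs) - np.sort(ys))))
```

Translating a sample by a constant c moves it a 1-Wasserstein distance of exactly c, which gives a quick sanity check on the implementation.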
It is often desirable to summarize a probability measure on a space [math] in terms of a mode, or MAP estimator, i.e., a point of maximum probability. Such points can be rigorously defined using masses of metric balls in the small-radius limit. However, the theory is not entirely straightforward: the literature contains multiple notions of mode and various examples of pathological measures that have no mode in any sense. Since the masses of balls induce natural orderings on the points of [math], this article aims to shed light on some of the problems in nonparametric MAP estimation by taking an order-theoretic perspective, which appears to be a new one in the inverse problems community. This point of view opens up attractive proof strategies based upon the Cantor and Kuratowski intersection theorems; it also reveals that many of the pathologies arise from the distinction between greatest and maximal elements of an order, and from the existence of incomparable elements of [math], which we show can be dense in [math], even for an absolutely continuous measure on [math].
{"title":"An Order-Theoretic Perspective on Modes and Maximum A Posteriori Estimation in Bayesian Inverse Problems","authors":"Hefin Lambley, T. J. Sullivan","doi":"10.1137/22m154243x","DOIUrl":"https://doi.org/10.1137/22m154243x","url":null,"abstract":"It is often desirable to summarize a probability measure on a space in terms of a mode, or MAP estimator, i.e., a point of maximum probability. Such points can be rigorously defined using masses of metric balls in the small-radius limit. However, the theory is not entirely straightforward: the literature contains multiple notions of mode and various examples of pathological measures that have no mode in any sense. Since the masses of balls induce natural orderings on the points of , this article aims to shed light on some of the problems in nonparametric MAP estimation by taking an order-theoretic perspective, which appears to be a new one in the inverse problems community. This point of view opens up attractive proof strategies based upon the Cantor and Kuratowski intersection theorems; it also reveals that many of the pathologies arise from the distinction between greatest and maximal elements of an order, and from the existence of incomparable elements of , which we show can be dense in , even for an absolutely continuous measure on .","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135616164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliable Error Estimates for Optimal Control of Linear Elliptic PDEs with Random Inputs","authors":"Johannes Milz","doi":"10.1137/22m1503889","DOIUrl":"https://doi.org/10.1137/22m1503889","url":null,"abstract":"","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135884855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work we connect two notions: that of the nonparametric mode of a probability measure, defined by asymptotic small-ball probabilities, and that of the Onsager–Machlup functional, a generalized density also defined via asymptotic small-ball probabilities. We show that, in a separable Hilbert space setting and under mild conditions on the likelihood, modes of a Bayesian posterior distribution based upon a Gaussian prior exist and agree with the minimizers of its Onsager–Machlup functional, and thus also with weak posterior modes. We apply this result to inverse problems and derive conditions on the forward mapping under which this variational characterization of posterior modes holds. Our results show rigorously that in the limiting case of infinite-dimensional data corrupted by additive Gaussian or Laplacian noise, nonparametric maximum a posteriori estimation is equivalent to Tikhonov–Phillips regularization. In comparison with the work of Dashti, Law, Stuart, and Voss (2013), the assumptions on the likelihood are relaxed so that they cover, in particular, the important case of white Gaussian process noise.
{"title":"Are Minimizers of the Onsager–Machlup Functional Strong Posterior Modes?","authors":"Remo Kretschmann","doi":"10.1137/23m1546579","DOIUrl":"https://doi.org/10.1137/23m1546579","url":null,"abstract":"In this work we connect two notions: That of the nonparametric mode of a probability measure, defined by asymptotic small ball probabilities, and that of the Onsager-Machlup functional, a generalized density also defined via asymptotic small ball probabilities. We show that in a separable Hilbert space setting and under mild conditions on the likelihood, modes of a Bayesian posterior distribution based upon a Gaussian prior exist and agree with the minimizers of its Onsager-Machlup functional and thus also with weak posterior modes. We apply this result to inverse problems and derive conditions on the forward mapping under which this variational characterization of posterior modes holds. Our results show rigorously that in the limit case of infinite-dimensional data corrupted by additive Gaussian or Laplacian noise, nonparametric maximum a posteriori estimation is equivalent to Tikhonov-Phillips regularization. In comparison with the work of Dashti, Law, Stuart, and Voss (2013), the assumptions on the likelihood are relaxed so that they cover in particular the important case of white Gaussian process noise. 
We illustrate our results by applying them to a severely ill-posed linear problem with Laplacian noise, where we express the maximum a posteriori estimator analytically and study its rate of convergence in the small noise limit.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136352593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
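The equivalence stated above, between maximum a posteriori estimation under a Gaussian prior and Tikhonov–Phillips regularization, can be checked directly in finite dimensions, where both estimators reduce to the same normal equations. A sketch under illustrative assumptions (random forward matrix, identity prior covariance, noise level sigma):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))        # illustrative linear forward map
x_true = rng.normal(size=5)
sigma = 0.1
y = A @ x_true + sigma * rng.normal(size=20)

# MAP estimate under y | x ~ N(Ax, sigma^2 I) with prior x ~ N(0, I):
# solves (A^T A / sigma^2 + I) x = A^T y / sigma^2.
x_map = np.linalg.solve(A.T @ A / sigma**2 + np.eye(5), A.T @ y / sigma**2)

# Tikhonov-Phillips solution: argmin |Ax - y|^2 + sigma^2 |x|^2,
# computed as an augmented least-squares problem.
A_aug = np.vstack([A, sigma * np.eye(5)])
y_aug = np.concatenate([y, np.zeros(5)])
x_tik, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
```

Both routes produce the same vector because the MAP objective |Ax − y|²/(2σ²) + |x|²/2 is, up to a constant factor, the Tikhonov functional |Ax − y|² + σ²|x|²; the infinite-dimensional version of this identification is what the Onsager–Machlup characterization makes rigorous.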
Computer model calibration is a crucial step in building a reliable computer model. In the face of massive physical observations, fast estimation of the calibration parameters is urgently needed. To alleviate the computational burden, we design a two-step algorithm that estimates the calibration parameters by employing subsampling techniques. Compared with current state-of-the-art calibration methods, the complexity of the proposed algorithm is greatly reduced without sacrificing much accuracy. We prove the consistency and asymptotic normality of the proposed estimator. An explicit form of the estimator's variance is also presented, which provides a natural way to quantify the uncertainty in the calibration parameters. Results from two numerical simulations and two real-case studies demonstrate the advantages of the proposed method.
{"title":"Fast Calibration for Computer Models with Massive Physical Observations","authors":"Shurui Lv, Jun Yu, Yan Wang, Jiang Du","doi":"10.1137/22m153673x","DOIUrl":"https://doi.org/10.1137/22m153673x","url":null,"abstract":"Computer model calibration is a crucial step in building a reliable computer model. In the face of massive physical observations, a fast estimation of the calibration parameters is urgently needed. To alleviate the computational burden, we design a two-step algorithm to estimate the calibration parameters by employing the subsampling techniques. Compared with the current state-of-the-art calibration methods, the complexity of the proposed algorithm is greatly reduced without sacrificing too much accuracy. We prove the consistency and asymptotic normality of the proposed estimator. The form of the variance of the proposed estimation is also presented, which provides a natural way to quantify the uncertainty of the calibration parameters. The obtained results of two numerical simulations and two real-case studies demonstrate the advantages of the proposed method.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135584965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constraints are a natural choice for prior information in Bayesian inference. In various applications, the parameters of interest lie on the boundary of the constraint set. In this paper, we use a method that implicitly defines a constrained prior such that the posterior assigns positive probability to the boundary of the constraint set. We show that by projecting posterior mass onto the constraint set, we obtain a new posterior with a rich probabilistic structure on the boundary of that set. If the original posterior is Gaussian, this projection can be carried out efficiently. We apply the method to Bayesian linear inverse problems, in which case samples can be obtained by repeatedly solving constrained least squares problems, similar to a MAP estimate but with perturbations in the data. When the method is combined with a Bayesian hierarchical model and the constraint set is a polyhedral cone, we derive a Gibbs sampler to efficiently sample from the hierarchical model. To show the effect of projecting the posterior, we apply the method to deblurring and computed tomography examples.
{"title":"Bayesian Inference with Projected Densities","authors":"Jasper M. Everink, Yiqiu Dong, Martin S. Andersen","doi":"10.1137/22m150695x","DOIUrl":"https://doi.org/10.1137/22m150695x","url":null,"abstract":"Constraints are a natural choice for prior information in Bayesian inference. In various applications, the parameters of interest lie on the boundary of the constraint set. In this paper, we use a method that implicitly defines a constrained prior such that the posterior assigns positive probability to the boundary of the constraint set. We show that by projecting posterior mass onto the constraint set, we obtain a new posterior with a rich probabilistic structure on the boundary of that set. If the original posterior is a Gaussian, then such a projection can be done efficiently. We apply the method to Bayesian linear inverse problems, in which case samples can be obtained by repeatedly solving constrained least squares problems, similar to a MAP estimate, but with perturbations in the data. When combined into a Bayesian hierarchical model and the constraint set is a polyhedral cone, we can derive a Gibbs sampler to efficiently sample from the hierarchical model. To show the effect of projecting the posterior, we applied the method to deblurring and computed tomography examples.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135579646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
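The mechanism described above — perturb the data, then solve a constrained least-squares problem — can be seen in miniature when the posterior is a scalar Gaussian and the constraint is nonnegativity: the constrained solve then reduces to clipping at zero, and the projected posterior places a point mass on the boundary. A sketch of this special case (the scalar setting and trivial forward map are simplifying assumptions, not the paper's general algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = -1.0, 1.0                  # unconstrained Gaussian posterior N(mu, sigma^2)
z = rng.normal(mu, sigma, size=10000)  # samples from the unconstrained posterior
x = np.clip(z, 0.0, None)              # projection onto the constraint set x >= 0

boundary_mass = np.mean(x == 0.0)      # empirical probability of the boundary
```

Here the boundary x = 0 receives probability Φ((0 − μ)/σ) = Φ(1) ≈ 0.84, illustrating the "positive probability on the boundary" the abstract refers to.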
{"title":"Dimension Free Nonasymptotic Bounds on the Accuracy of High-Dimensional Laplace Approximation","authors":"Vladimir Spokoiny","doi":"10.1137/22m1495688","DOIUrl":"https://doi.org/10.1137/22m1495688","url":null,"abstract":"","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135537084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}