This work studies finite sample approximations of the exact and entropic regularized Wasserstein distances between centered Gaussian processes and, more generally, covariance operators of functional random processes. We first show that these distances/divergences are fully represented by reproducing kernel Hilbert space (RKHS) covariance and cross-covariance operators associated with the corresponding covariance functions. Using this representation, we show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated from the divergence between their corresponding normalized finite-dimensional covariance matrices, or alternatively, their sample covariance operators. Consequently, this leads to a consistent and efficient algorithm for estimating the Sinkhorn divergence from finite samples generated by the two processes. For a fixed regularization parameter, the convergence rates are dimension-independent and of the same order as those for the Hilbert-Schmidt distance. If at least one of the RKHSs is finite-dimensional, we obtain a dimension-dependent sample complexity for the exact Wasserstein distance between the Gaussian processes.
{"title":"Finite Sample Approximations of Exact and Entropic Wasserstein Distances Between Covariance Operators and Gaussian Processes","authors":"H. Q. Minh","doi":"10.1137/21m1410488","DOIUrl":"https://doi.org/10.1137/21m1410488","url":null,"abstract":"This work studies finite sample approximations of the exact and entropic regularized Wasserstein distances between centered Gaussian processes and, more generally, covariance operators of functional random processes. We first show that these distances/divergences are fully represented by reproducing kernel Hilbert space (RKHS) covariance and cross-covariance operators associated with the corresponding covariance functions. Using this representation, we show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated from the divergence between their corresponding normalized finite-dimensional covariance matrices, or alternatively, their sample covariance operators. Consequently, this leads to a consistent and efficient algorithm for estimating the Sinkhorn divergence from finite samples generated by the two processes. For a fixed regularization parameter, the convergence rates are {it dimension-independent} and of the same order as those for the Hilbert-Schmidt distance. 
If at least one of the RKHS is finite-dimensional, we obtain a {it dimension-dependent} sample complexity for the exact Wasserstein distance between the Gaussian processes.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82721824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
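The exact 2-Wasserstein distance that the entropic divergence regularizes has a closed form for centered Gaussians in terms of their covariance matrices (the Bures distance). A minimal numpy sketch of that finite-dimensional formula; the function names are illustrative, not from the paper:

```python
import numpy as np

def psd_sqrt(M):
    # symmetric PSD square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def w2_centered_gaussians(A, B):
    # Bures formula: W2^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})
    sA = psd_sqrt(A)
    cross = psd_sqrt(sA @ B @ sA)
    val = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return np.sqrt(max(val, 0.0))
```

Replacing A and B by sample covariance matrices gives the plug-in estimator whose sample complexity the paper analyzes.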
{"title":"Landmark-Warped Emulators for Models with Misaligned Functional Response","authors":"Devin Francom, B. Sansó, A. Kupresanin","doi":"10.1137/20m135279x","DOIUrl":"https://doi.org/10.1137/20m135279x","url":null,"abstract":"","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80476885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Generalized Kernel Method for Global Sensitivity Analysis","authors":"John Barr, H. Rabitz","doi":"10.1137/20m1354829","DOIUrl":"https://doi.org/10.1137/20m1354829","url":null,"abstract":"","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76797077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shivendra Agrawal, Hwanwoo Kim, D. Sanz-Alonso, A. Strang
Hierarchical models with gamma hyperpriors provide a flexible, sparsity-promoting framework to bridge L1 and L2 regularizations in Bayesian formulations of inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to maximum a posteriori estimation; the potential to perform uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstructions, provides meaningful uncertainty quantification, and is easy to implement. In addition, it lends itself naturally to model selection for the choice of hyperparameters. We illustrate the performance of our methodology in several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time series data.
{"title":"A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors","authors":"Shivendra Agrawal, Hwanwoo Kim, D. Sanz-Alonso, A. Strang","doi":"10.1137/21m146209x","DOIUrl":"https://doi.org/10.1137/21m146209x","url":null,"abstract":"Hierarchical models with gamma hyperpriors provide a flexible, sparse-promoting framework to bridge L1 and L2 regularizations in Bayesian formulations to inverse problems. Despite the Bayesian motivation for these models, existing methodologies are limited to maximum a posteriori estimation. The potential to perform uncertainty quantification has not yet been realized. This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors. The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement. In addition, it lends itself naturally to conduct model selection for the choice of hyperparameters. We illustrate the performance of our methodology in several computed examples, including a deconvolution problem and sparse identification of dynamical systems from time series data.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89516203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study introduces a new spline dimensional decomposition (SDD) for efficient uncertainty quantification of high-dimensional functions, including those exhibiting high nonlinearity and nonsmoothness. The decomposition creates a hierarchical expansion for an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation for a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions calculates the probabilistic characteristics of an output variable with an accuracy matching or surpassing that obtained by high-order approximations from several existing methods. Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD in solving practical problems.
{"title":"A Spline Dimensional Decomposition for Uncertainty Quantification in High Dimensions","authors":"S. Rahman, Ramin Jahanbin","doi":"10.1137/20m1364175","DOIUrl":"https://doi.org/10.1137/20m1364175","url":null,"abstract":"This study debuts a new spline dimensional decomposition (SDD) for uncertainty quantification analysis of high-dimensional functions, including those endowed with high nonlinearity and nonsmoothness, if they exist, in a proficient manner. The decomposition creates an hierarchical expansion for an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation for a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions calculates the probabilistic characteristics of an output variable with an accuracy matching or surpassing those obtained by high-order approximations from several existing methods. 
Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD in solving practical problems.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90851195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
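The building block of such expansions, orthonormalizing a basis with respect to the input measure, can be sketched generically: form the Gram matrix of the raw basis under the measure and whiten it with a Cholesky factor. A minimal numpy illustration with a monomial basis standing in for the paper's B-splines:

```python
import numpy as np

def orthonormalize(basis_vals, weights):
    """basis_vals: (n_basis, n_quad) raw basis evaluated at quadrature nodes;
    weights: quadrature weights of the input measure.
    Returns a basis whose Gram matrix under the measure is the identity."""
    G = (basis_vals * weights) @ basis_vals.T   # Gram matrix under the measure
    L = np.linalg.cholesky(G)
    return np.linalg.solve(L, basis_vals)       # L^{-1} B is measure-orthonormal

# monomials on [0, 1] under the uniform measure, midpoint quadrature
t = (np.arange(2000) + 0.5) / 2000
w = np.full_like(t, 1.0 / 2000)
B = np.vstack([t**k for k in range(4)])
Q = orthonormalize(B, w)
```

Expanding an output in such a measure-consistent basis makes second-moment properties read off directly from the expansion coefficients.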
Pub Date: 2021-09-28. DOI: 10.26226/morressier.612f6736bc98103724100846
Ö. D. Akyildiz, Connor Duffin, S. Sabanis, M. Girolami
The recent statistical finite element method (statFEM) provides a coherent statistical framework to synthesise finite element models with observed data. Through embedding uncertainty inside of the governing equations, finite element solutions are updated to give a posterior distribution which quantifies all sources of uncertainty associated with the model. However, to incorporate all sources of uncertainty, one must integrate over the uncertainty associated with the model parameters: the well-known forward problem of uncertainty quantification. In this paper, we make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA), a Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based characterisation of this otherwise intractable measure. Due to the structure of the statFEM problem, these methods are able to solve the forward problem without explicit full PDE solves, requiring only sparse matrix-vector products. ULA is also gradient-based and hence provides a scalable approach up to high degrees of freedom. Leveraging the theory behind Langevin-based samplers, we provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and the posterior, in Kullback-Leibler divergence and in Wasserstein-2 distance, with further results on the effect of preconditioning. Numerical experiments are also provided, for both the prior and the posterior, to demonstrate the efficacy of the sampler, with a Python package also included.
{"title":"Statistical Finite Elements via Langevin Dynamics","authors":"Ö. D. Akyildiz, Connor Duffin, S. Sabanis, M. Girolami","doi":"10.26226/morressier.612f6736bc98103724100846","DOIUrl":"https://doi.org/10.26226/morressier.612f6736bc98103724100846","url":null,"abstract":"The recent statistical finite element method (statFEM) provides a coherent statistical framework to synthesise finite element models with observed data. Through embedding uncertainty inside of the governing equations, finite element solutions are updated to give a posterior distribution which quantifies all sources of uncertainty associated with the model. However to incorporate all sources of uncertainty, one must integrate over the uncertainty associated with the model parameters, the known forward problem of uncertainty quantification. In this paper, we make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA), a Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based characterisation of this otherwise intractable measure. Due to the structure of the statFEM problem, these methods are able to solve the forward problem without explicit full PDE solves, requiring only sparse matrix-vector products. ULA is also gradient-based, and hence provides a scalable approach up to high degrees-of-freedom. Leveraging the theory behind Langevin-based samplers, we provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and posterior, in the Kullback-Leibler divergence, and, in Wasserstein-2, with further results on the effect of preconditioning. 
Numerical experiments are also provided, for both the prior and posterior, to demonstrate the efficacy of the sampler, with a Python package also included.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72921826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
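The sampler itself is simple to state: for a target density proportional to exp(-U(x)), iterate x_{k+1} = x_k - h grad U(x_k) + sqrt(2h) xi_k with xi_k standard normal, and apply no Metropolis correction. A minimal numpy sketch for a Gaussian target, where grad U(x) = Qx with Q the precision matrix; the sparse structure of statFEM is what makes this matrix-vector product cheap in practice:

```python
import numpy as np

def ula_gaussian(Q, h=0.01, n_steps=200_000, seed=0):
    """Unadjusted Langevin algorithm targeting N(0, Q^{-1})."""
    rng = np.random.default_rng(seed)
    d = Q.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_steps, d))
    for k in range(n_steps):
        # Euler-Maruyama step of the Langevin SDE; gradient of U is Q @ x
        x = x - h * (Q @ x) + np.sqrt(2.0 * h) * rng.standard_normal(d)
        samples[k] = x
    return samples

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # precision matrix; target covariance is Q^{-1}
S = ula_gaussian(Q)
```

The chain's stationary covariance is biased by O(h), which is the discretization bias that the paper's convergence guarantees quantify.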
This paper considers the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series. Using an experimental design of the low- and high-fidelity code levels, an original Gaussian process regression method is proposed. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion of the code output are processed by a co-kriging approach. The remaining coefficients are processed collectively by a kriging approach with covariance tensorization. The resulting surrogate model, which takes into account the uncertainty in the basis construction, is shown to have better performance in terms of prediction errors and uncertainty quantification than standard dimension reduction techniques.
{"title":"Multifidelity Surrogate Modeling for Time-Series Outputs","authors":"Baptiste Kerleguer","doi":"10.1137/20m1386694","DOIUrl":"https://doi.org/10.1137/20m1386694","url":null,"abstract":"This paper considers the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series. Using an experimental design of the low-and high-fidelity code levels, an original Gaussian process regression method is proposed. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion of the code output are processed by a co-kriging approach. The last coefficients are collectively processed by a kriging approach with covariance tensorization. The resulting surrogate model taking into account the uncertainty in the basis construction is shown to have better performance in terms of prediction errors and uncertainty quantification than standard dimension reduction techniques.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80765812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To overcome topological constraints and improve the expressiveness of normalizing flow architectures, Wu, Köhler, and Noé introduced stochastic normalizing flows, which combine deterministic, learnable flow transformations with stochastic sampling methods. In this paper, we consider stochastic normalizing flows from a Markov chain point of view. In particular, we replace transition densities by general Markov kernels and establish proofs via Radon-Nikodym derivatives, which allows us to incorporate distributions without densities in a sound way. Further, we generalize the results for sampling from posterior distributions, as required in inverse problems. The performance of the proposed conditional stochastic normalizing flow is demonstrated by numerical examples.
{"title":"Stochastic Normalizing Flows for Inverse Problems: a Markov Chains Viewpoint","authors":"Paul Hagemann, J. Hertrich, G. Steidl","doi":"10.1137/21M1450604","DOIUrl":"https://doi.org/10.1137/21M1450604","url":null,"abstract":"To overcome topological constraints and improve the expressiveness of normalizing flow architectures, Wu, K\"ohler and No'e introduced stochastic normalizing flows which combine deterministic, learnable flow transformations with stochastic sampling methods. In this paper, we consider stochastic normalizing flows from a Markov chain point of view. In particular, we replace transition densities by general Markov kernels and establish proofs via Radon-Nikodym derivatives which allows to incorporate distributions without densities in a sound way. Further, we generalize the results for sampling from posterior distributions as required in inverse problems. The performance of the proposed conditional stochastic normalizing flow is demonstrated by numerical examples.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87697904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The stochastic partial differential equation approach to Gaussian processes (GPs) represents Matérn GP priors in terms of n finite element basis functions and Gaussian coefficients with a sparse precision matrix. Such representations enhance the scalability of GP regression and classification to datasets of large size N by setting n ≈ N and exploiting sparsity. In this paper we reconsider the standard choice n ≈ N through an analysis of the estimation performance. Our theory implies that, under certain smoothness assumptions, one can reduce the computation and memory cost without hindering the estimation accuracy by setting n ≪ N in the large-N asymptotics. Numerical experiments illustrate the applicability of our theory and the effect of the prior lengthscale in the pre-asymptotic regime.
{"title":"Finite Element Representations of Gaussian Processes: Balancing Numerical and Statistical Accuracy","authors":"D. Sanz-Alonso, Ruiyi Yang","doi":"10.1137/21m144788x","DOIUrl":"https://doi.org/10.1137/21m144788x","url":null,"abstract":"The stochastic partial differential equation approach to Gaussian processes (GPs) represents Matérn GP priors in terms of 𝑛 finite element basis functions and Gaussian coefficients with sparse precision matrix. Such representations enhance the scalability of GP regression and classification to datasets of large size 𝑁 by setting 𝑛 ≈ 𝑁 and exploiting sparsity. In this paper we reconsider the standard choice 𝑛 ≈ 𝑁 through an analysis of the estimation performance. Our theory implies that, under certain smoothness assumptions, one can reduce the computation and memory cost without hindering the estimation accuracy by setting 𝑛 ≪ 𝑁 in the large 𝑁 asymptotics. Numerical experiments illustrate the applicability of our theory and the effect of the prior lengthscale in the pre-asymptotic regime.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75743783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motivated by the spurious variance loss encountered during covariance propagation in atmospheric and other large-scale data assimilation systems, we consider the problem for state dynamics governed by the continuity and related hyperbolic partial differential equations. This loss of variance is often attributed to reduced-rank representations of the covariance matrix, as in ensemble methods for example, or else to the use of dissipative numerical methods. Through a combination of analytical work and numerical experiments, we demonstrate that significant variance loss, as well as gain, typically occurs during covariance propagation, even at full rank. The cause of this unusual behavior is a discontinuous change in the continuum covariance dynamics as correlation lengths become small, for instance in the vicinity of sharp gradients in the velocity field. This discontinuity in the covariance dynamics arises from hyperbolicity: the diagonal of the kernel of the covariance operator is a characteristic surface for advective dynamics. Our numerical experiments demonstrate that standard numerical methods for evolving the state are not adequate for propagating the covariance, because they do not capture the discontinuity in the continuum covariance dynamics as correlation lengths tend to zero. Our analytical and numerical results demonstrate in the context of mass conservation that this leads to significant, spurious variance loss in regions of mass convergence and gain in regions of mass divergence. The results suggest that developing local covariance propagation methods designed specifically to capture covariance evolution near the diagonal may prove a useful alternative to current methods of covariance propagation.
{"title":"Continuum Covariance Propagation for Understanding Variance Loss in Advective Systems","authors":"Shay Gilpin, T. Matsuo, S. Cohn","doi":"10.1137/21m1442449","DOIUrl":"https://doi.org/10.1137/21m1442449","url":null,"abstract":"Motivated by the spurious variance loss encountered during covariance propagation in atmospheric and other large-scale data assimilation systems, we consider the problem for state dynamics governed by the continuity and related hyperbolic partial differential equations. This loss of variance is often attributed to reduced-rank representations of the covariance matrix, as in ensemble methods for example, or else to the use of dissipative numerical methods. Through a combination of analytical work and numerical experiments, we demonstrate that significant variance loss, as well as gain, typically occurs during covariance propagation, even at full rank. The cause of this unusual behavior is a discontinuous change in the continuum covariance dynamics as correlation lengths become small, for instance in the vicinity of sharp gradients in the velocity field. This discontinuity in the covariance dynamics arises from hyperbolicity: the diagonal of the kernel of the covariance operator is a characteristic surface for advective dynamics. Our numerical experiments demonstrate that standard numerical methods for evolving the state are not adequate for propagating the covariance, because they do not capture the discontinuity in the continuum covariance dynamics as correlations lengths tend to zero. Our analytical and numerical results demonstrate in the context of mass conservation that this leads to significant, spurious variance loss in regions of mass convergence and gain in regions of mass divergence. 
The results suggest that developing local covariance propagation methods designed specifically to capture covariance evolution near the diagonal may prove a useful alternative to current methods of covariance propagation.","PeriodicalId":56064,"journal":{"name":"Siam-Asa Journal on Uncertainty Quantification","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78940583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
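The discrete mechanism is easy to demonstrate: if the state is advected by a linear update x ← M x, the covariance propagates as P ← M P Mᵀ, and a dissipative upwind M destroys variance fastest at short correlation lengths (white-noise P). A minimal numpy sketch on a periodic grid with constant velocity; this illustrates only the numerical-dissipation variance loss, not the paper's continuum analysis:

```python
import numpy as np

def upwind_matrix(n, c=0.5):
    """One first-order upwind advection step at Courant number c on a periodic grid:
    (M x)_i = (1 - c) x_i + c x_{i-1}."""
    return (1.0 - c) * np.eye(n) + c * np.roll(np.eye(n), 1, axis=0)

n = 50
M = upwind_matrix(n)
P0 = np.eye(n)          # white-noise covariance: shortest correlation length
P1 = M @ P0 @ M.T       # one covariance propagation step
```

Each diagonal entry shrinks from 1 to (1-c)^2 + c^2 = 0.5 in a single step, even though M exactly conserves the mean state's mass, showing how severely short-correlation variance is damped at full rank.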