Title: Sparse Tensor Product Approximation for a Class of Generalized Method of Moments Estimators
Authors: Alexandros Gilch, M. Griebel, Jens Oettershagen
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-12-21
DOI: 10.1615/int.j.uncertaintyquantification.2021037549

Abstract: Generalized Method of Moments (GMM) estimators in their various forms, including the popular Maximum Likelihood (ML) estimator, are frequently applied to evaluate complex econometric models whose moment or likelihood functions cannot be computed analytically. Since the objective functions of GMM and ML estimators are themselves approximations of an integral, more precisely of an expected value over the real-world data space, the question arises whether the approximation of the moment function and the simulation of the entire objective function can be combined. Motivated by the popular Probit and Mixed Logit models, we consider double integrals with a linking function that stems from the chosen estimator, e.g., the logarithm for Maximum Likelihood, and apply a sparse tensor product quadrature to reduce the computational effort of approximating the combined integral. Given Hölder continuity of the linking function, we prove that this approach can improve the order of the convergence rate of the classical GMM and ML estimators by a factor of two, even for integrands of low regularity or high dimensionality. This result is illustrated by numerical simulations of Mixed Logit and Multinomial Probit integrals, which are estimated by ML and GMM estimators, respectively.
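The sparse tensor product construction the abstract refers to can be sketched in its simplest two-dimensional form. The following is a generic combination-technique quadrature, not the paper's estimator-specific scheme; the Gauss-Legendre rules, levels, and test integrand are our own illustrative choices.

```python
import numpy as np

def gl_rule(level):
    # 1D Gauss-Legendre rule with 2**level + 1 points, mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(2**level + 1)
    return 0.5 * (x + 1.0), 0.5 * w

def tensor_quad(f, l1, l2):
    # full tensor product of two 1D rules
    x1, w1 = gl_rule(l1)
    x2, w2 = gl_rule(l2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return np.einsum("i,j,ij->", w1, w2, f(X1, X2))

def sparse_quad(f, L):
    # 2D combination technique: add the tensor rules with level sum L,
    # subtract those with level sum L - 1; this retains near-full-grid
    # accuracy for smooth integrands at a fraction of the points
    total = sum(tensor_quad(f, l1, L - l1) for l1 in range(L + 1))
    if L > 0:
        total -= sum(tensor_quad(f, l1, L - 1 - l1) for l1 in range(L))
    return total

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2    # integral of exp(x + y) over the unit square
err = abs(sparse_quad(f, 3) - exact)
print(err)
```

For smooth integrands the combination technique keeps almost all of the full grid's accuracy; the paper's contribution is an analysis of this effect for the nested integrals and low-regularity linking functions arising in GMM/ML objective functions.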
Title: Automatic Selection of Basis-Adaptive Sparse Polynomial Chaos Expansions for Engineering Applications
Authors: Nora Luthen, S. Marelli, B. Sudret
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-09-10
DOI: 10.1615/int.j.uncertaintyquantification.2021036153

Abstract: Sparse polynomial chaos expansions (PCE) are an efficient and widely used surrogate modeling method in uncertainty quantification for engineering problems with computationally expensive models. To make use of the available information in the most efficient way, several approaches for so-called basis-adaptive sparse PCE have been proposed to determine the set of polynomial regressors ("basis") for PCE adaptively. The goal of this paper is to help practitioners identify the most suitable methods for constructing a surrogate PCE for their model. We describe three state-of-the-art basis-adaptive approaches from the recent sparse PCE literature and conduct an extensive benchmark in terms of global approximation accuracy on a large set of computational models. Investigating the synergies between sparse regression solvers and basis adaptivity schemes, we find that the choice of the proper solver and basis-adaptive scheme is very important, as it can result in more than one order of magnitude difference in performance. No single method significantly outperforms the others, but dividing the analysis into classes (regarding input dimension and experimental design size), we are able to identify specific sparse solver and basis adaptivity combinations for each class that show comparatively good performance. To further improve on these findings, we introduce a novel solver and basis adaptivity selection scheme guided by cross-validation error. We demonstrate that this automatic selection procedure provides close-to-optimal results in terms of accuracy, and significantly more robust solutions, while being more general than the case-by-case recommendations obtained by the benchmark.
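The cross-validation-guided selection can be illustrated with a toy one-dimensional degree-adaptive fit. This is a generic sketch, not the authors' benchmark code or the basis-adaptive schemes they compare: the model function, sample size, and candidate degrees are invented, and the analytic leave-one-out formula for least-squares fits stands in for their error estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def legendre_basis(x, degree):
    # Vandermonde-type matrix of Legendre polynomials up to the given degree
    return np.polynomial.legendre.legvander(x, degree)

def loo_error(A, y):
    # analytic leave-one-out error of an ordinary least-squares fit:
    # e_i = r_i / (1 - h_ii), with H = A (A^T A)^{-1} A^T the hat matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    h = np.clip(np.diag(A @ np.linalg.pinv(A)), 0.0, 1.0 - 1e-12)
    r = y - A @ coef
    return np.mean((r / (1.0 - h)) ** 2)

x = rng.uniform(-1.0, 1.0, 60)
y = np.sin(np.pi * x)  # stand-in for an expensive computational model

# adaptivity: among the candidate bases, keep the one with the smallest LOO error
errors = {p: loo_error(legendre_basis(x, p), y) for p in range(1, 12)}
best = min(errors, key=errors.get)
print(best, errors[best])
```

The same mechanism, applied across solvers and basis-adaptivity schemes rather than plain polynomial degrees, is what drives the automatic selection procedure described above.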
Title: A Comprehensive Comparison of Total-Order Estimators for Global Sensitivity Analysis
Authors: A. Puy, W. Becker, S. L. Piano, Andrea Saltelli
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-09-02
DOI: 10.1615/Int.J.UncertaintyQuantification.2021038133

Abstract: Sensitivity analysis helps identify which model inputs convey the most uncertainty to the model output. One of the most authoritative measures in global sensitivity analysis is the Sobol' total-order index, which can be computed with several different estimators. Although previous comparisons exist, it is hard to know which estimator performs best, since the results are contingent on the benchmark setting defined by the analyst (the sampling method, the distribution of the model inputs, the number of model runs, the test function or model and its dimensionality, the weight of higher-order effects, or the performance measure selected). Here we compare several total-order estimators in an eight-dimensional hypercube where these benchmark parameters are treated as random parameters. This arrangement significantly relaxes the dependency of the results on the benchmark design. We observe that the most accurate estimators are Razavi and Gupta's, Jansen's, or Janon/Monod's for factor prioritization, and Jansen's, Janon/Monod's, or Azzini and Rosati's for approaching the "true" total-order indices. The rest lag considerably behind. Our work helps analysts navigate the myriad of total-order formulae by reducing the uncertainty in the selection of the most appropriate estimator.
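Among the estimators compared, Jansen's is compact enough to sketch. This is a generic implementation of the standard formula, not the benchmark code from the paper; the additive test function and sample size are our own.

```python
import numpy as np

def jansen_total_order(f, d, N, rng):
    # Jansen's estimator: ST_i ~ mean((f(A) - f(A_B^i))**2) / (2 * Var(f)),
    # where A_B^i is the sample matrix A with column i replaced by B's column i
    A = rng.uniform(0.0, 1.0, (N, d))
    B = rng.uniform(0.0, 1.0, (N, d))
    fA = f(A)
    var = np.var(np.concatenate([fA, f(B)]))
    ST = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        ST[i] = np.mean((fA - f(ABi)) ** 2) / (2.0 * var)
    return ST

# additive toy model y = x1 + 2*x2 with x3 inert; analytically the total-order
# indices are (1/12, 4/12, 0) / (5/12) = (0.2, 0.8, 0.0)
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
ST = jansen_total_order(f, d=3, N=20000, rng=np.random.default_rng(1))
print(ST)
```

Note that for an input the model ignores, Jansen's estimator returns exactly zero, since f(A) and f(A_B^i) coincide sample by sample.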
Title: Majorisation as a Theory for Uncertainty
Authors: V. Volodina, Nikki Sonenberg, E. Wheatcroft, H. Wynn
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-07-21
DOI: 10.1615/int.j.uncertaintyquantification.2022035476

Abstract: Majorisation, also called rearrangement inequalities, yields a type of stochastic ordering in which two or more distributions can then be compared. This method provides a representation of the peakedness of probability distributions and is independent of the location of probabilities. These properties make majorisation a good candidate as a theory for uncertainty. We demonstrate that this approach is also dimension-free by obtaining univariate decreasing rearrangements from multivariate distributions, so that we can consider the ordering of two or more distributions with different support. We present operations, including inverse mixing and maximise/minimise, to combine and analyse uncertainties associated with different distribution functions. We illustrate these methods on empirical examples with applications to scenario analysis and simulations.
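The core ordering is easy to state for discrete distributions. The sketch below is our own minimal illustration of a decreasing rearrangement and the resulting majorisation check; the example masses are invented, and the two distributions are assumed here to have equal-length finite support.

```python
import numpy as np

def decreasing_rearrangement(p):
    # sort probability masses in decreasing order; the locations of the
    # probabilities are deliberately discarded
    return np.sort(np.asarray(p, dtype=float))[::-1]

def majorises(p, q):
    # p majorises q iff every partial sum of p's sorted masses dominates
    # the corresponding partial sum of q's (totals being equal)
    cp = np.cumsum(decreasing_rearrangement(p))
    cq = np.cumsum(decreasing_rearrangement(q))
    return bool(np.all(cp >= cq - 1e-12))

peaked = [0.7, 0.1, 0.1, 0.1]      # concentrated mass: less "uncertain"
flat = [0.25, 0.25, 0.25, 0.25]    # maximally spread out
print(majorises(peaked, flat))     # True: the peaked distribution majorises the flat one
print(majorises(flat, peaked))     # False
```

The more peaked distribution majorises the flatter one, which is the sense in which majorisation orders distributions by uncertainty regardless of where the mass sits.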
Title: Feedback Control for Random, Linear Hyperbolic Balance Laws
Authors: Stephan Gerster, M. Bambach, M. Herty, M. Imran
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-07-01
DOI: 10.1615/int.j.uncertaintyquantification.2021037183

Abstract: We design controls for physical systems that are subject to uncertainties. The system dynamics are described by random hyperbolic balance laws. The control aims to steer the system to a desired state under uncertainty. We propose a control based on a Lyapunov stability analysis of a suitable series expansion of the random dynamics. The control damps the impact of uncertainties exponentially fast in time. The presented approach can be applied to a large class of physical systems and random perturbations, such as Gaussian processes. We illustrate the control effect on a stochastic viscoplastic material model.
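The exponential damping claim can be illustrated on a scalar toy problem. This is not the paper's hyperbolic-balance-law setting or its Lyapunov construction; the growth rate, perturbation distribution, and gain below are invented purely to show the mechanism of a feedback gain dominating a random perturbation.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(a, k, xi, x0=1.0, dt=1e-3, T=5.0):
    # explicit Euler for dx/dt = (a + xi) x + u with linear feedback u = -k*x
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (a + xi - k) * x
    return x

a = 0.5                                   # nominal (unstable) growth rate
xi_samples = rng.normal(0.0, 0.2, 200)    # random perturbations of the rate
k = 2.0                                   # gain chosen so a + xi - k < 0 across samples
final = np.array([abs(simulate(a, k, xi)) for xi in xi_samples])
print(final.max())
```

With the feedback active, every sampled realization decays exponentially toward the target state zero, whereas the uncontrolled system would grow like exp((a + xi) t) for every sample.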
Title: Explicit Estimation of Derivatives from Data and Differential Equations by Gaussian Process Regression
Authors: Hongqiao Wang, Xiang Zhou
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-04-13
DOI: 10.1615/INT.J.UNCERTAINTYQUANTIFICATION.2021034382

Abstract: In this work, we employ the Bayesian inference framework to estimate the solution of a known differential equation, and in particular its derivatives, from noisy and scarce observations of the solution data only. To address the key issue of accuracy and robustness in derivative estimation, we use Gaussian processes to jointly model the solution, the derivatives, and the differential equation. By regarding a linear differential equation as a linear constraint, we develop a Gaussian process regression with constraint method (GPRC) to improve the accuracy of the predicted derivatives. For nonlinear differential equations, we propose a Picard-iteration-like linearization around the Gaussian process obtained from data alone, so that GPRC can still be applied iteratively. In addition, a product-of-experts method is applied to incorporate the initial or boundary condition, further enhancing the prediction accuracy of the derivatives. We present several numerical results to illustrate the advantages of our new method in comparison with standard data-driven Gaussian process regression.
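The closed-form fact behind jointly modeling a GP and its derivative is that differentiating a GP yields another Gaussian process whose cross-covariances are kernel derivatives. The sketch below predicts a derivative from function observations alone using a squared-exponential kernel; it is a minimal illustration, not the authors' GPRC method (no differential-equation constraint or product-of-experts step), and the lengthscale, noise level, and test function are our own.

```python
import numpy as np

ELL = 0.3  # assumed RBF lengthscale

def rbf(x1, x2):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ELL) ** 2)

def drbf(xs, x):
    # cross-covariance cov(f'(xs), f(x)) = d/dxs k(xs, x) for the RBF kernel
    d = xs[:, None] - x[None, :]
    return -(d / ELL**2) * np.exp(-0.5 * (d / ELL) ** 2)

rng = np.random.default_rng(3)
X = np.linspace(0.0, 1.0, 25)
y = np.sin(2 * np.pi * X) + 1e-3 * rng.normal(size=X.size)  # noisy solution data only

K = rbf(X, X) + 1e-6 * np.eye(X.size)   # nugget matching the noise variance
alpha = np.linalg.solve(K, y)

xs = np.array([0.5])
dmu = drbf(xs, X) @ alpha               # posterior mean of f'(0.5)
print(dmu[0], -2 * np.pi)               # true derivative of sin(2*pi*x) at 0.5 is -2*pi
```

The paper's point is that adding the differential equation as a (linearized) constraint on this joint model makes such derivative estimates markedly more accurate and robust than the data-only version shown here.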
Title: Stochastic Spectral Embedding
Authors: S. Marelli, Paul Wagner, C. Lataniotis, B. Sudret
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-04-09
DOI: 10.1615/int.j.uncertaintyquantification.2020034395

Abstract: Constructing approximations that can accurately mimic the behavior of complex models at reduced computational cost is an important aspect of uncertainty quantification. Despite their flexibility and efficiency, classical surrogate models such as Kriging or polynomial chaos expansions tend to struggle with highly nonlinear, localized, or nonstationary computational models. We propose a novel sequential, adaptive surrogate modeling method based on recursively embedding local spectral expansions. It relies on a disjoint recursive partitioning of the input domain, which sequentially splits the domain into smaller subdomains and constructs a simpler local spectral expansion in each, exploiting the trade-off between complexity and locality. The resulting expansion, which we refer to as "stochastic spectral embedding" (SSE), is a piecewise continuous approximation of the model response that shows promising approximation capabilities and good scaling with both the problem dimension and the size of the training set. We finally show how the method compares favorably against state-of-the-art sparse polynomial chaos expansions on a set of models with varying complexity and input dimension.
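The recursive residual-expansion idea can be caricatured in one dimension, with ordinary polynomial least-squares fits standing in for the local spectral expansions. This is a loose sketch under our own simplifications (fixed dyadic splits, fixed depth, degree-2 local fits), not the authors' SSE algorithm.

```python
import numpy as np

def sse_fit(x, y, depth, degree=2):
    # recursively fit a low-degree polynomial to the current residual on each
    # half-open subdomain [lo, hi), then split the subdomain and recurse
    patches = []
    def recurse(lo, hi, resid, level):
        mask = (x >= lo) & (x < hi)
        if mask.sum() < degree + 2:
            return
        c = np.polyfit(x[mask], resid[mask], degree)
        patches.append((lo, hi, c))
        resid = resid.copy()
        resid[mask] -= np.polyval(c, x[mask])
        if level < depth:
            mid = 0.5 * (lo + hi)
            recurse(lo, mid, resid, level + 1)
            recurse(mid, hi, resid, level + 1)
    recurse(x.min(), x.max() + 1e-12, y, 0)
    return patches

def sse_eval(patches, xq):
    # the surrogate is the sum of all local expansions covering each point
    yq = np.zeros_like(xq, dtype=float)
    for lo, hi, c in patches:
        m = (xq >= lo) & (xq < hi)
        yq[m] += np.polyval(c, xq[m])
    return yq

x = np.linspace(0.0, 1.0, 400)
y = np.abs(x - 0.35)                  # kink: hard for a single global polynomial
patches = sse_fit(x, y, depth=4)
err = np.max(np.abs(sse_eval(patches, x) - y))
print(err)
```

Because each level only has to represent what its ancestors missed, the error concentrates in the small subdomain containing the kink, which is the locality-for-complexity trade-off the abstract describes.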
Title: Variance Reduction Methods and Multilevel Monte Carlo Strategy for Estimating Densities of Solutions to Random Second-Order Linear Differential Equations
Authors: M. J. Sanz, J. C. Gregori, O. Maître, Juan Carlos Cortés López
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-01-01
DOI: 10.1615/int.j.uncertaintyquantification.2020032659

Abstract: This paper concerns the estimation of the density function of the solution to a random non-autonomous second-order linear differential equation with analytic data processes. In a recent contribution, we proposed to express the density function as an expectation, and we used a standard Monte Carlo algorithm to approximate the expectation. Although the algorithm worked satisfactorily for most test problems, numerical challenges emerged for others due to large statistical errors. In these situations, the convergence of the Monte Carlo simulation slows down severely, and noisy features plague the estimates. In this paper, we focus on computational aspects and propose several variance reduction methods to remedy these issues and speed up the convergence. First, we introduce a path-wise selection of the approximating processes, which aims at controlling the variance of the estimator. Second, we propose a hybrid method, combining Monte Carlo and deterministic quadrature rules, to estimate the expectation. Third, we exploit the series expansions of the solutions to design a multilevel Monte Carlo estimator. The proposed methods are implemented and tested on several numerical examples to highlight the theoretical discussion and demonstrate the significant improvements achieved.
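The third strategy, multilevel Monte Carlo, rests on the telescoping identity E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with coupled coarse/fine samples making the correction terms cheap to estimate accurately. The sketch below applies textbook MLMC to the mean of a toy geometric-Brownian-motion SDE; it is not the paper's density estimator, and all model parameters and sample sizes are our own.

```python
import numpy as np

rng = np.random.default_rng(4)

def coupled_euler(level, n_samples, x0=1.0, mu=0.05, sig=0.2, T=1.0):
    # Euler scheme for dX = mu X dt + sig X dW on 2**level steps; returns the
    # fine payoff and a coarse payoff driven by the same Brownian increments
    n = 2**level
    dt = T / n
    dw = rng.normal(0.0, np.sqrt(dt), (n_samples, n))
    xf = np.full(n_samples, x0)
    for i in range(n):
        xf = xf * (1 + mu * dt + sig * dw[:, i])
    if level == 0:
        return xf, np.zeros(n_samples)
    xc = np.full(n_samples, x0)
    for i in range(n // 2):
        # coarse step uses twice the step size and the summed fine increments
        xc = xc * (1 + mu * 2 * dt + sig * (dw[:, 2 * i] + dw[:, 2 * i + 1]))
    return xf, xc

def mlmc(L, N0=200000):
    # telescoping sum with geometrically decreasing sample sizes per level:
    # cheap levels absorb the variance, expensive levels only correct the bias
    est = 0.0
    for level in range(L + 1):
        n = max(N0 // 4**level, 1000)
        fine, coarse = coupled_euler(level, n)
        est += np.mean(fine - coarse)
    return est

est = mlmc(5)
print(est, np.exp(0.05))   # exact mean is x0 * exp(mu * T)
```

Because the coupled differences have small variance, the high (expensive) levels need only a few samples, which is the source of the MLMC cost savings the paper exploits for density estimation.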
Title: Inverse Uncertainty Quantification of a Cell Model Using a Gaussian Process Metamodel
Authors: K. D. Vries, A. Nikishova, B. Czaja, Gábor Závodszky, A. Hoekstra
Journal: International Journal for Uncertainty Quantification
Pub Date: 2020-01-01
DOI: 10.1615/int.j.uncertaintyquantification.2020033186