Pub Date: 2024-03-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024051489
Vincent Chabridon, Edgar Jaber, Emmanuel Remy, Michaël Baudin, Didier Lucor, Mathilde Mougeot, Bertrand Iooss
Long-term operation of nuclear steam generators can result in the occurrence of clogging, a deposition phenomenon that may increase the risk of mechanical and vibration loadings on tube bundles and internal structures, as well as potentially affecting their response to hypothetical accidental transients. To manage and prevent this issue, a robust maintenance program that requires a fine understanding of the underlying physics is essential. This study focuses on the utilization of a clogging simulation code developed by EDF R&D. This numerical tool employs specific physical models to simulate the kinetics of clogging and generates time-dependent clogging rate profiles for particular steam generators. However, certain parameters in this code are subject to uncertainties. To address these uncertainties, Monte Carlo simulations are conducted to assess the distribution of the clogging rate. Subsequently, polynomial chaos expansions are used to build a metamodel, while time-dependent Sobol' indices are computed to understand the impact of the random input parameters throughout the whole operating time. Comparisons are made with a previously published study, and additional Hilbert-Schmidt independence criterion sensitivity indices are computed. Key input-output dependencies are exhibited for the different chemical conditionings, and new behavior patterns in high-pH regimes are uncovered by the sensitivity analysis. These findings contribute to a better understanding of the clogging phenomenon, open future lines of modeling research, and help make maintenance planning more robust.
{"title":"SENSITIVITY ANALYSES OF A MULTI-PHYSICS LONG-TERM CLOGGING MODEL FOR STEAM GENERATORS","authors":"Vincent Chabridon, Edgar Jaber, Emmanuel Remy, Michaël Baudin, Didier Lucor, Mathilde Mougeot, Bertrand Iooss","doi":"10.1615/int.j.uncertaintyquantification.2024051489","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2024051489","url":null,"abstract":"Long-term operation of nuclear steam generators can result in the occurrence of clogging, a deposition phenomenon that may increase the risk of mechanical and vibration loadings on tube bundles and internal structures as well as potentially affecting their response to hypothetical accidental transients.\u0000To manage and prevent this issue, a robust maintenance program that requires a fine understanding of the underlying physics is essential. This study focuses on the utilization\u0000of a clogging simulation code developed by EDF R&D. This numerical tool employs specific physical models to simulate the kinetics of clogging and generates time dependent clogging rate profiles for particular steam generators. However,\u0000certain parameters in this code are subject to uncertainties. To address these uncertainties, Monte Carlo simulations are conducted to assess the distribution of the clogging rate. Subsequently, polynomial chaos expansions are used in\u0000order to build a metamodel while time-dependent Sobol’ indices are computed to understand the impact of the random input parameters throughout the whole operating time. Comparisons are made with a previous published study and\u0000additional Hilbert-Schmidt independence criterion sensitivity indices are computed. Key input-output dependencies are exhibited in the different chemical conditionings and new behavior patterns in high-pH regimes are uncovered by the sensitivity analysis. These findings contribute to a better understanding of the clogging phenomenon while opening\u0000future lines of modeling research and helping in robustifying maintenance planning.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"1 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140298099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024051384
Sergei Kucherenko, Dimitris Giamalakis, Nilay Shah
The design space (DS) is defined as the combination of materials and process conditions that provides assurance of quality for a pharmaceutical product. A model-based approach to identifying a probability-based DS requires costly simulations across the entire (certain) process parameter space and the uncertain model parameter space. We demonstrate that application of global sensitivity analysis (GSA) can significantly reduce model complexity and the computational time for identifying and quantifying the DS by screening out non-important uncertain parameters. The novelty of this approach is that using an indicator function, which takes only binary values, as the model function allows a straightforward GSA based on Sobol' sensitivity indices to be applied, avoiding more costly Monte Carlo filtering or GSA for constrained problems. We consider an application from the chemical industry to illustrate how this formulation results in model reduction and a dramatic reduction in the number of required model runs.
{"title":"Application of global sensitivity analysis for identification of probabilistic design spaces","authors":"Sergei Kucherenko, Dimitris Giamalakis, Nilay Shah","doi":"10.1615/int.j.uncertaintyquantification.2024051384","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2024051384","url":null,"abstract":"The design space (DS) is defined as the combination of materials and process conditions, which provides assurance of quality for a pharmaceutical product. A model-based approach to identify a probability-based DS requires costly simulations across the entire process parameter space (certain) and the uncertain model parameter space. We demonstrate that application of global sensitivity analysis (GSA) can significantly reduce model complexity and reduce computational time for identifying and quantifying DS by screening out non-important uncertain parameters. The novelty of this approach in that the usage of an indicator function which takes only binary values as a model function allows to apply a straightforward GSA based on Sobol’ sensitivity indices and to avoid using more costly Monte Carlo filtering or GSA for constrained problems. We consider an application from the chemical industry to illustrate how this formulation results in model reduction and dramatic reduction of the number of required model runs.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"141 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139946339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024050099
Roland Pulch, Olivier Sète
We consider linear first-order systems of ordinary differential equations (ODEs) in port-Hamiltonian (pH) form. Physical parameters are remodelled as random variables to conduct an uncertainty quantification. A stochastic Galerkin projection yields a larger deterministic system of ODEs, which does not exhibit a pH form in general. We apply transformations of the original systems such that the stochastic Galerkin projection becomes structure-preserving. Furthermore, we investigate the meaning and properties of the Hamiltonian function belonging to the stochastic Galerkin system. A large number of random variables implies a high-dimensional stochastic Galerkin system, which motivates the application of model order reduction (MOR) to generate a low-dimensional system of ODEs. We discuss structure preservation in projection-based MOR, where the smaller systems of ODEs feature the pH form again. Results of numerical computations are presented using two test examples.
{"title":"Stochastic Galerkin method and port-Hamiltonian form for linear first-order ordinary differential equations","authors":"Roland Pulch, Olivier Sète","doi":"10.1615/int.j.uncertaintyquantification.2024050099","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2024050099","url":null,"abstract":"We consider linear first-order systems of ordinary differential equations (ODEs) in port-Hamiltonian (pH) form. Physical parameters are remodelled as random variables to conduct an uncertainty quantification. A stochastic Galerkin projection yields a larger deterministic system of ODEs, which does not exhibit a pH form in general. We apply transformations of the original systems such that the stochastic Galerkin projection becomes structure-preserving. Furthermore, we investigate meaning and properties of the Hamiltonian function belonging to the stochastic Galerkin system. A large number of random variables implies a high-dimensional stochastic Galerkin system, which suggests itself to apply model order reduction (MOR) generating a low-dimensional system of ODEs. We discuss structure preservation in projection-based MOR, where the smaller systems of ODEs feature pH form again. Results of numerical computations are presented using two test examples.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"27 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139766497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01 | DOI: 10.1615/int.j.uncertaintyquantification.2023046480
Clement Gauchy, Cyril Feau, Josselin Garnier
Seismic fragility curves have been introduced as key components of Seismic Probabilistic Risk Assessment studies. They express the probability of failure of mechanical structures conditional on a seismic intensity measure and must take into account the inherent uncertainties in such studies: the so-called epistemic uncertainties (i.e., coming from the uncertainty on the mechanical parameters of the structure) and the aleatory uncertainties (i.e., coming from the randomness of the seismic ground motions). For simulation-based approaches, we propose a methodology to build and calibrate a Gaussian process surrogate model to estimate a family of non-parametric seismic fragility curves for a mechanical structure by propagating both the surrogate model uncertainty and the epistemic ones. Gaussian processes indeed have the key advantage of providing both a predictor and an assessment of the uncertainty of its predictions. In addition, we extend this methodology to sensitivity analysis. Global sensitivity indices such as aggregated Sobol' indices and kernel-based indices are proposed to determine how the uncertainty on the seismic fragility curves is apportioned according to each uncertain mechanical parameter. This comprehensive uncertainty quantification framework is finally applied to an industrial test case consisting of part of a piping system of a Pressurized Water Reactor.
{"title":"Uncertainty quantification and global sensitivity analysis of seismic fragility curves using kriging","authors":"Clement Gauchy, Cyril Feau, Josselin Garnier","doi":"10.1615/int.j.uncertaintyquantification.2023046480","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023046480","url":null,"abstract":"Seismic fragility curves have been introduced as key components of Seismic Probabilistic Risk Assessment studies. They express the probability of failure of mechanical structures conditional to a seismic intensity measure and must take into account the inherent uncertainties in such studies, the so-called epistemic uncertainties (i.e. coming from the uncertainty on the mechanical parameters of the structure) and the aleatory uncertainties (i.e. coming from the randomness of the seismic ground motions). For simulation-based approaches we propose a methodology to build and calibrate a Gaussian process surrogate model to estimate a family of non-parametric seismic fragility curves for a mechanical structure by propagating both the surrogate model uncertainty and the epistemic ones. Gaussian processes have indeed the main advantage to propose both a predictor and an assessment of the uncertainty of its predictions. In addition, we extend this methodology to sensitivity analysis. Global sensitivity indices such as aggregated Sobol indices and kernel-based indices are proposed to know how the uncertainty on the seismic fragility curves is apportioned according to each uncertain mechanical parameter. This comprehensive Uncertainty Quantification framework is finally applied to an industrial test case consisting in a part of a piping system of a Pressurized Water Reactor.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"27 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139083909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-05 | DOI: 10.1615/int.j.uncertaintyquantification.2023044584
Baptiste Kerleguer, C. Cannamela, J. Garnier
This paper deals with surrogate modelling of a computer code output in a hierarchical multi-fidelity context, i.e., when the output can be evaluated at different levels of accuracy and computational cost. Using observations of the output at low- and high-fidelity levels, we propose a method, called GPBNN, that combines Gaussian process (GP) regression and a Bayesian neural network (BNN). The low-fidelity output is treated as a single-fidelity code using classical GP regression. The high-fidelity output is approximated by a BNN that incorporates, in addition to the high-fidelity observations, well-chosen realisations of the low-fidelity output emulator. The predictive uncertainty of the final surrogate model is then quantified by a complete characterisation of the uncertainties of the different models and their interaction. GPBNN is compared with most of the multi-fidelity regression methods that allow quantification of the prediction uncertainty.
{"title":"A Bayesian neural network approach to Multi-fidelity surrogate modelling","authors":"Baptiste Kerleguer, C. Cannamela, J. Garnier","doi":"10.1615/int.j.uncertaintyquantification.2023044584","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023044584","url":null,"abstract":"This paper deals with surrogate modelling of a computer code output in a hierarchical multi-fidelity context, i.e., when the output can be evaluated at different levels of accuracy and computational cost. Using observations of the output at low- and high-fidelity levels, we propose a method that combines Gaussian process (GP) regression and Bayesian neural network (BNN), in a method called GPBNN. The low-fidelity output is treated as a single-fidelity code using classical GP regression. The high-fidelity output is approximated by a BNN that incorporates, in addition to the high-fidelity observations, well-chosen realisations of the low-fidelity output emulator. The predictive uncertainty of the final surrogate model is then quantified by a complete characterisation of the uncertainties of the different models and their interaction. GPBNN is compared with most of the multi-fidelity regression methods allowing to quantify the prediction uncertainty.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"1 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67531784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | DOI: 10.1615/int.j.uncertaintyquantification.2023047236
Zhihang Xu, Yingzhi Xia, Qifeng Liao
Bayesian inverse problems are often computationally challenging when the forward model is governed by complex partial differential equations (PDEs). This is typically caused by expensive forward model evaluations and high-dimensional parameterization of priors. This paper proposes a domain-decomposed variational auto-encoder Markov chain Monte Carlo (DD-VAE-MCMC) method to tackle these challenges simultaneously. By partitioning the global physical domain into small subdomains, the proposed method first constructs local deterministic generative models based on local historical data, which provide efficient local prior representations. Gaussian process models with active learning address the domain decomposition interface conditions. Inversions are then conducted on each subdomain independently, in parallel and in low-dimensional latent parameter spaces. The local inference solutions are post-processed through the Poisson image blending procedure to yield an efficient global inference result. Numerical examples are provided to demonstrate the performance of the proposed method.
{"title":"A domain-decomposed VAE method for Bayesian inverse problems","authors":"Zhihang Xu, Yingzhi Xia, Qifeng Liao","doi":"10.1615/int.j.uncertaintyquantification.2023047236","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023047236","url":null,"abstract":"Bayesian inverse problems are often computationally challenging when the forward model is governed by complex partial differential equations (PDEs). This is typically caused by expensive forward model evaluations and high-dimensional parameterization of priors. This paper proposes a domain-decomposed variational auto-encoder Markov chain Monte Carlo (DD-VAE-MCMC) method to tackle these challenges simultaneously. Through partitioning the global physical domain into small subdomains, the proposed method first constructs local deterministic generative models based on local historical data, which provide efficient local prior representations.\u0000Gaussian process models with active learning address the domain decomposition interface conditions.\u0000Then inversions are conducted on each subdomain independently in parallel and in low-dimensional latent parameter spaces. The local inference solutions are post-processed through the Poisson image blending procedure to result in an efficient global inference result. Numerical examples are provided to demonstrate the performance of the proposed method.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"20 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138546226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | DOI: 10.1615/int.j.uncertaintyquantification.2023048277
Ferdinand Uilhoorn
In data assimilation, the description of the model error uncertainty is of utmost importance, because an incorrectly defined distribution may lead to information loss about the real state of the system. In this work, we propose a novel approach that finds the optimal distribution for describing the model error uncertainty within a particle filtering framework. The method was applied to nonlinear waves in compressible flows. We investigated the influence of observation noise statistics, the resolution of the numerical model, the smoothness of the solutions, and sensor location. The results showed that in almost all situations the Pearson Type I distribution is preferred, but with different curve-shape characteristics, namely skewed, nearly symmetric, ∩-, ∪-, and J-shaped. In most cases, the distributions became ∪-shaped when the sensors were located near the discontinuities.
{"title":"MODEL ERROR ESTIMATION USING PEARSON SYSTEM WITH APPLICATION TO NONLINEAR WAVES IN COMPRESSIBLE FLOWS","authors":"Ferdinand Uilhoorn","doi":"10.1615/int.j.uncertaintyquantification.2023048277","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023048277","url":null,"abstract":"In data assimilation, the description of the model error uncertainty is utmost important because incorrectly defined may lead to information loss about the real state of the system. In this work, we proposed a novel approach that finds the optimal distribution for describing the model error uncertainty within a particle filtering framework. The method was applied to nonlinear waves in compressible flows. We investigated the influence of observation noise statistics, resolution of the numerical model, smoothness of the solutions and sensor location. The results showed that in almost all situations the Pearson Type I is preferred, but with different curve-shape characteristics, namely, skewed, nearly symmetric, ∩-, ∪- and J-shaped. The distributions became in most cases ∪-shaped when the sensors were located nearby the discontinuities.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"73 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138573882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | DOI: 10.1615/int.j.uncertaintyquantification.2023038552
Peyman Tavallali, Hamed Hamze Bajgiran, Danial Esaid, Houman Owhadi
The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution and (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known when the data is finite (and possibly corrupted), and this uncertainty must be taken into account for robust uncertainty quantification (UQ). An important case is when the test distribution comes from a modal or localized area of the finite sample distribution. We present a general decision-theoretic bootstrapping solution to this problem: (1) partition the available data into a training subset and a UQ subset; (2) take $m$ subsampled subsets of the training set and train $m$ models; (3) partition the UQ set into $n$ sorted subsets and take a random fraction of them to define $n$ corresponding empirical distributions $\mu_j$; (4) consider the adversarial game where Player I selects a model $i \in \{1, \ldots, m\}$, Player II selects the UQ distribution $\mu_j$, and Player I receives a loss defined by evaluating the model $i$ against data points sampled from $\mu_j$; (5) identify optimal mixed strategies (probability distributions over models and UQ distributions) for both players. These randomized optimal mixed strategies provide optimal model mixtures and UQ estimates given the adversarial uncertainty of the training and testing distributions represented by the game. The proposed approach provides (1) some degree of robustness to in-sample distribution localization/concentration and (2) conditional probability distributions on the output.
{"title":"Decision theoretic bootstrapping","authors":"Peyman Tavallali, Peyman Tavallali, Hamed Hamze Bajgiran, Danial Esaid, Houman Owhadi","doi":"10.1615/int.j.uncertaintyquantification.2023038552","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023038552","url":null,"abstract":"The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known when the data is finite (and possibly corrupted), and this uncertainty must be taken into account for robust Uncertainty Quantification (UQ). An important case is when the test distribution is coming from a modal or localized area of the finite sample distribution. We present a general decision-theoretic bootstrapping solution to this problem: (1) partition the available data into a training subset, and a UQ subset (2) take $m$ subsampled subsets of the training set and train $m$ models (3) partition the UQ set into $n$ sorted subsets and take a random fraction of them to define $n$ corresponding empirical distributions $mu_{j}$ (4) consider the adversarial game where Player I selects a model $iinleft{ 1,ldots,mright} $, Player II selects the UQ distribution $mu_{j}$ and Player I receives a loss defined by evaluating the model $i$ against data points sampled from $mu_{j}$ (5) identify optimal mixed strategies (probability distributions over models and UQ distributions) for both players. These randomized optimal mixed strategies provide optimal model mixtures, and UQ estimates given the adversarial uncertainty of the training and testing distributions represented by the game. The proposed approach provides (1) some degree of robustness to in-sample distribution localization/concentration (2) conditional probability distributions on the output.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"32 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139026778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | DOI: 10.1615/int.j.uncertaintyquantification.2023048049
Friedrich Menhorn, Gianluca Geraci, D. Thomas Seidl, Youssef Marzouk, Michael S. Eldred, Hans-Joachim Bungartz
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations. The cost of OUU is proportional to the cost of performing a forward uncertainty analysis at each design location visited, which makes the computational burden too high for high-fidelity simulations with significant computational cost. From a high-level standpoint, an OUU workflow typically has two main components: an inner-loop strategy for the computation of statistics of the quantity of interest, and an outer-loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner-loop statistics. In this work, we propose to alleviate the cost of the inner-loop uncertainty analysis by leveraging the so-called multilevel Monte Carlo (MLMC) method. MLMC has the potential of drastically reducing the computational cost by allocating resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several n
{"title":"MULTILEVEL MONTE CARLO ESTIMATORS FOR DERIVATIVE-FREE OPTIMIZATION UNDER UNCERTAINTY","authors":"Friedrich Menhorn, Gianluca Geraci, D. Thomas Seidl, Youssef Marzouk, Michael S. Eldred, Hans-Joachim Bungartz","doi":"10.1615/int.j.uncertaintyquantification.2023048049","DOIUrl":"https://doi.org/10.1615/int.j.uncertaintyquantification.2023048049","url":null,"abstract":"Optimization is a key tool for scientific and engineering applications, however, in the presence of models affected by\u0000uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Op-\u0000timization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at\u0000several design locations. The cost of OUU is proportional to the cost of performing a forward uncertainty analysis at\u0000each design location visited, which makes the computational burden too high for high-fidelity simulations with sig-\u0000nificant computational cost. From a high-level standpoint, an OUU workflow typically has two main components: an\u0000inner loop strategy for the computation of statistics of the quantity of interest, and an outer loop optimization strategy\u0000tasked with finding the optimal design, given a merit function based on the inner loop statistics. In this work, we\u0000propose to alleviate the cost of the inner loop uncertainty analysis by leveraging the so-called Multilevel Monte Carlo\u0000(MLMC) method. MLMC has the potential of drastically reducing the computational cost by allocating resources over\u0000multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by mini-\u0000mizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics\u0000usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider\u0000a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and\u0000released in the Dakota software toolkit. We discuss several n","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"428 8","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138506611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-05 | eCollection Date: 2023-06-01 | DOI: 10.2478/jtim-2023-0084
Xin Qiao, Yuxiao Ding, Abdullah Altawil, Yan Yin, Qiuyue Wang, Wei Wang, Jian Kang
Chronic obstructive pulmonary disease (COPD) is a chronic heterogeneous disease characterized by persistent airflow obstruction and variable clinical presentations.[1,2] A lack of understanding regarding the molecular mechanisms underlying COPD makes the identification of critical molecules involved in COPD crucial for the development of novel diagnostic measures and therapeutic strategies. In recent decades, wide-ranging profiling methods such as microarrays and next-generation sequencing have made it easier to identify RNA transcripts that do not encode proteins, referred to as noncoding RNAs (ncRNAs).[3] NcRNAs comprise a diverse range of RNA species, characterized according to their length, shape, and location. Many ncRNAs are involved in epigenetic and posttranscriptional gene regulation, including microRNAs (miRNAs), tRNA-derived small RNAs (tsRNAs) and PIWI-interacting RNAs (piRNAs).[4] Long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) can fold into complex secondary structures that facilitate their interactions with DNA, RNA, and protein.[4] Additionally, lncRNAs and circRNAs can bind to miRNAs in a competitive endogenous RNA (ceRNA) network that prevents targeted mRNA degradation.[5,6] Recent studies have shown that ncRNAs play crucial roles in multiple pathophysiological processes associated with COPD.[5,7,8] A better understanding of the role of ncRNAs in COPD could contribute to the detection of biomarkers and the identification of new therapeutic targets. Here, we summarize the current findings regarding the potential role of ncRNAs, especially miRNAs, lncRNAs, and circRNAs. Additionally, we propose considerations regarding present and future research in this area.
{"title":"Roles of noncoding RNAs in chronic obstructive pulmonary disease.","authors":"Xin Qiao, Yuxiao Ding, Abdullah Altawil, Yan Yin, Qiuyue Wang, Wei Wang, Jian Kang","doi":"10.2478/jtim-2023-0084","DOIUrl":"10.2478/jtim-2023-0084","url":null,"abstract":"Chronic obstructive pulmonary disease (COPD) is a chronic heterogeneous disease characterized by persistent airflow obstruction and variable clinical presentations.[1,2] A lack of understanding regarding the molecular mechanisms underlying COPD makes the identification of critical molecules involved in COPD crucial for the development of novel diagnostic measures and therapeutic strategies. In recent decades, wide-ranging profiling methods such as microarrays and next-generation sequencing have made it easier to identify RNA transcripts that do not encode proteins, referred to as noncoding RNAs (ncRNAs).[3] NcRNAs comprise a diverse range of RNA species, characterized according to their length, shape, and location. Many ncRNAs are involved in epigenetic and posttranscriptional gene regulation, including microRNAs (miRNAs), tRNA-derived small RNAs (tsRNAs) and PIWI-interacting RNAs (piRNAs).[4] Long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) can fold into complex secondary structures that facilitate their interactions with DNA, RNA, and protein.[4] Additionally, lncRNAs and circRNAs can bind to miRNAs in a competitive endogenous RNA (ceRNA) network that prevents targeted mRNA degradation.[5,6] Recent studies have shown that ncRNAs play crucial roles in multiple pathophysiological processes associated with COPD.[5,7,8] A better understanding of the role of ncRNAs in COPD could contribute to the detection of biomarkers and the identification of new therapeutic targets. Here, we summarize the current findings regarding the potential role of ncRNAs, especially miRNAs, lncRNAs, and circRNAs. Additionally, we propose considerations regarding present and future research in this area.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"1 1","pages":"106-110"},"PeriodicalIF":4.9,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10680378/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89646187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}