Title: Derivative-based Shapley value for global sensitivity analysis and machine learning explainability
Authors: Hui Duan, Giray Okten
Journal: International Journal for Uncertainty Quantification
Pub Date: 2024-05-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024051548

We introduce a new Shapley value approach for global sensitivity analysis and machine learning explainability. The method is based on the first-order partial derivatives of the underlying function. The computational complexity of the method is linear in dimension (number of features), as opposed to the exponential complexity of other Shapley value approaches in the literature. Examples from global sensitivity analysis and machine learning are used to compare the method numerically with activity scores, SHAP, and KernelSHAP.
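The derivative-based ingredient shared with one of the comparison baselines can be sketched as follows. This is not the authors' Shapley construction; it is a minimal Monte Carlo estimate of the activity scores nu_i = E[(df/dx_i)^2], with a hypothetical toy gradient, illustrating why derivative-based measures cost O(d) in the number of features rather than O(2^d).

```python
import numpy as np

def activity_scores(grad_f, sampler, n_samples=10_000, seed=0):
    """Monte Carlo estimate of nu_i = E[(df/dx_i)^2] for each input i.

    grad_f : callable mapping a batch of points (n, d) to gradients (n, d)
    sampler: callable (rng, n) drawing n points from the input distribution
    """
    rng = np.random.default_rng(seed)
    x = sampler(rng, n_samples)        # (n, d) input samples
    g = grad_f(x)                      # (n, d) gradients
    return (g ** 2).mean(axis=0)       # one score per input: O(d) work

# Hypothetical toy function f(x) = 3*x0 + x1 (x2 inert): constant gradient.
grad = lambda x: np.column_stack([np.full(len(x), 3.0),
                                  np.ones(len(x)),
                                  np.zeros(len(x))])
scores = activity_scores(grad, lambda rng, n: rng.uniform(0, 1, (n, 3)))
```

The linear scaling is visible in the single `(n, d)` gradient batch: no subset enumeration over features is needed.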
Title: Probabilistic Uncertainty Propagation Using Gaussian Process Surrogates
Authors: Paolo Manfredi
Pub Date: 2024-05-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024052162

This paper introduces a simple and computationally tractable probabilistic framework for forward uncertainty quantification based on Gaussian process regression, also known as Kriging. The aim is to equip uncertainty measures in the propagation of input uncertainty to simulator outputs with predictive uncertainty and confidence bounds accounting for the limited accuracy of the surrogate model, which is mainly due to using a finite amount of training data. The additional uncertainty related to the estimation of some of the prior model parameters (namely, trend coefficients and kernel variance) is further accounted for. Two different scenarios are considered. In the first one, the Gaussian process surrogate is used to emulate the actual simulator and propagate input uncertainty in the framework of a Monte Carlo analysis, i.e., as a computationally cheap replacement of the original code. In the second one, semi-analytical estimates for the statistical moments of the output quantity are obtained directly based on their integral definition. The estimates for the first scenario are more general, more tractable, and they naturally extend to inputs of higher dimensions. The impact of noise on the target function is also discussed. Our findings are demonstrated based on a simple illustrative function and validated by means of several benchmark functions and a high-dimensional test case with more than a hundred uncertain variables.
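A minimal sketch of the first scenario, under simplifying assumptions not made in the paper (zero-mean GP, fixed RBF hyperparameters, no trend or kernel-variance estimation): the surrogate replaces the simulator inside a Monte Carlo loop, and its predictive variance tags each propagated sample with a surrogate-accuracy measure.

```python
import numpy as np

def gp_fit(X, y, ell=0.3, sf2=1.0, noise=1e-6):
    """Fit a zero-mean GP with an RBF kernel; return a predictor."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))      # jitter for stable Cholesky
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    def predict(Xs):
        Ks = k(Xs, X)
        mu = Ks @ alpha                        # posterior mean
        v = np.linalg.solve(L, Ks.T)
        var = np.clip(sf2 - (v ** 2).sum(0), 0.0, None)  # posterior variance
        return mu, var
    return predict

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x[:, 0])      # hypothetical "simulator"
X_train = rng.uniform(0, 1, (30, 1))           # finite training data
predict = gp_fit(X_train, f(X_train))

X_mc = rng.uniform(0, 1, (5000, 1))            # input uncertainty samples
mu, var = predict(X_mc)
mean_est = mu.mean()                           # propagated output mean
```

Each Monte Carlo sample costs a kernel evaluation instead of a simulator run, and `var` quantifies where the surrogate, not the input, dominates the uncertainty.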
Title: PARALLEL PARTIAL EMULATION IN APPLICATIONS
Authors: Yingjie Gao, E. Bruce Pitman
Pub Date: 2024-05-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024048538

Emulators are used to approximate the output of large computer simulations. Statistical emulators are surrogates that, in addition to predicting the mean behavior of the system, provide an estimate of the error in that prediction. Classical Gaussian stochastic process emulators predict scalar outputs based on a modest number of input parameters. Making predictions across a space-time field of outputs is not feasible using classical Gaussian process methods. Parallel Partial Emulation is a new statistical emulator methodology that predicts a field of outputs at space-time locations, based on a set of input parameters of modest dimension. Parallel partial emulation is constructed as a Gaussian process in parameter space, but no correlation in space/time is assumed. Thus the computational work of parallel partial emulation scales as the cube of the number of input parameters (as in traditional Gaussian process emulation) and linearly with the size of the space-time grid. The behavior of Parallel Partial Emulation predictions in complex applications is not well understood. Scientists would like to understand how predictions depend on the separation of input parameters, across the space-time outputs. It is also of interest to study whether the emulator predictions inherit properties (e.g., conservation) from the numerical simulator. This paper studies the properties of emulator predictions in the context of scalar partial differential equations and systems of such equations.
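The cost structure described above can be sketched as follows, under the assumption (stated in the abstract) of one shared Gaussian process correlation in parameter space and no space-time correlation: a single Cholesky factorization in parameter space is reused for every grid location, so the grid enters only through a matrix of right-hand sides. The toy simulator below is hypothetical.

```python
import numpy as np

def ppe_fit(theta, Y, ell=0.5, noise=1e-6):
    """Parallel-partial-emulation sketch: one GP correlation matrix in
    parameter space, shared by every space-time output location.

    theta : (n, d) design points in parameter space
    Y     : (n, m) simulator runs, one row per run, m grid locations
    """
    d2 = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
    R = np.exp(-0.5 * d2 / ell**2) + noise * np.eye(len(theta))
    L = np.linalg.cholesky(R)                        # O(n^3), done once
    A = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # (n, m): linear in m
    def predict(theta_s):
        d2s = ((theta_s[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
        r = np.exp(-0.5 * d2s / ell**2)
        return r @ A                                 # predicted fields
    return predict

theta = np.linspace(0, 1, 8)[:, None]   # 8 training runs, 1 parameter
xgrid = np.linspace(0, 1, 50)           # 50 "space-time" locations
Y = theta * xgrid[None, :]              # toy simulator u(x; a) = a * x
predict = ppe_fit(theta, Y)
field = predict(np.array([[0.5]]))[0]   # predicted field at a = 0.5
```

Note that each grid location gets an independent interpolant through the same parameter-space weights, which is exactly why properties coupling neighboring locations (such as conservation) are not automatically inherited.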
Title: Maximizing Regional Sensitivity Analysis indices to find sensitive model behaviors
Authors: Sebastien Roux, Patrice Loisel, Samuel Buis
Pub Date: 2024-04-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024051424

We address the question of sensitivity analysis for model outputs of any dimension using Regional Sensitivity Analysis (RSA). Classical RSA computes sensitivity indices related to the impact of model input variations on the occurrence of a target region of the model output space. In this work, we take this perspective one step further by proposing to find, for a given model input, the region whose occurrence is best explained by the variations of this input. When it exists, this region can be seen as a model behavior that is particularly sensitive to the variations of the model input under study. We name this method mRSA (for maximized RSA). mRSA is formalized as an optimization problem using region-based sensitivity indices. Two formulations are studied, one theoretically and one numerically using a dedicated algorithm. Using a 2D test model and an environmental model producing time series, we show that mRSA, as a new model exploration tool, can provide interpretable insights on the sensitivity of model outputs of various dimensions.
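The classical RSA index that mRSA maximizes over regions can be illustrated with a common choice, the Kolmogorov-Smirnov distance between the conditional distributions of an input inside and outside a fixed target region (the specific index and toy model below are illustrative assumptions, not taken from the paper).

```python
import numpy as np

def rsa_index(x_i, in_region):
    """Classical RSA sketch: Kolmogorov-Smirnov distance between the
    empirical CDFs of input x_i inside vs. outside the target region."""
    a = np.sort(x_i[in_region])
    b = np.sort(x_i[~in_region])
    grid = np.sort(x_i)
    Fa = np.searchsorted(a, grid, side="right") / len(a)
    Fb = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (20_000, 2))
y = X[:, 0]                        # toy output driven entirely by input 0
region = y > 0.8                   # fixed target region of output space
s0 = rsa_index(X[:, 0], region)    # near 1: x0 explains the region
s1 = rsa_index(X[:, 1], region)    # near 0: x1 is irrelevant
```

mRSA turns this around: rather than fixing `region` a priori, it searches for the region that maximizes such an index for a chosen input.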
Title: Structure-Preserving Model Order Reduction of Random Parametric Linear Systems via Regression
Authors: Xiaolong Wang, Siqing Liu
Pub Date: 2024-04-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024048898

We investigate model order reduction (MOR) of random parametric linear systems via the regression method. By sampling the random parameters contained in the coefficient matrices of the systems via the Latin hypercube method, the iterative rational Krylov algorithm (IRKA) is used to generate reduced sample models corresponding to the sample data. We assemble the resulting reduced models by interpolating the coefficient matrices of the reduced sample models with the regression technique, where generalized polynomial chaos (gPC) is adopted to characterize the random dependence coming from the original systems. Noting the invariance of the transfer function with respect to restricted equivalence transformations, the regression is conducted on the controllable canonical form of the reduced sample models, which greatly improves the accuracy of the reduced models. We also provide an a posteriori error bound for the projection reduction method in the stochastic setting. We showcase the efficiency of the proposed approach on two large-scale systems with random parameters: a synthetic model and a mass-spring-damper system.
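The assembly step alone can be sketched as follows. IRKA and the canonical-form transformation are not reproduced; the reduced sample matrices are faked by a hypothetical parameter-dependent map, and a gPC (Legendre) regression is fit entry-wise over a Latin hypercube sample, which is the interpolation idea the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)

def lhs(n, rng):
    """1-D Latin hypercube sample on [0, 1]: one point per stratum."""
    return (rng.permutation(n) + rng.uniform(0, 1, n)) / n

def legendre_design(p, deg=3):
    """gPC basis for a uniform parameter: Legendre polynomials on [-1, 1]."""
    t = 2 * np.asarray(p) - 1
    return np.polynomial.legendre.legvander(t, deg)

# Hypothetical stand-in for IRKA output: a 2x2 reduced matrix per sample.
p = lhs(20, rng)
A_red = np.stack([np.array([[1 + 0.5 * pi, pi],
                            [0.0, 2 - pi]]) for pi in p])

# Entry-wise least-squares gPC regression over the samples.
V = legendre_design(p)
coef, *_ = np.linalg.lstsq(V, A_red.reshape(len(p), -1), rcond=None)

def A_hat(p_new):
    """Evaluate the assembled reduced matrix at a new parameter value."""
    return (legendre_design(np.atleast_1d(p_new)) @ coef).reshape(2, 2)
```

Regressing in a fixed canonical form, as the paper does, matters because entry-wise regression is only meaningful when all sample matrices share one coordinate system.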
Title: SOLVING STOCHASTIC INVERSE PROBLEMS FOR CFD USING DATA-CONSISTENT INVERSION AND AN ADAPTIVE STOCHASTIC COLLOCATION METHOD
Authors: Hector Galante, Anca Belme, Jean-Camille Chassaing, Timothy Wildey
Pub Date: 2024-04-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024049566

We present a non-intrusive adaptive stochastic collocation method coupled with a data-consistent inference framework to solve stochastic inverse problems in CFD. The purpose of the proposed data-consistent method is, given a model and an observed output probability density function (pdf), to build a new model input pdf which is consistent with both the model and the data. Solving stochastic inverse problems in CFD is, however, very costly, which is why we use a surrogate or metamodel in the data-consistent inference method. This surrogate model is built using an adaptive stochastic collocation approach based on a stochastic error estimator and simplex elements in the parameter space. The efficiency of the proposed method is evaluated on analytical test cases and two CFD configurations. The metamodel inference results are shown to be as accurate as crude Monte Carlo inferences while requiring about 10^3 fewer deterministic computations for smooth and discontinuous response surfaces. Moreover, the proposed method is shown to be able to reconstruct both an observed pdf on the data and a data-generating distribution in the uncertain parameter space under certain conditions.
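The data-consistent update itself (independent of the collocation surrogate) can be sketched via rejection sampling: samples of the input are reweighted by the ratio of the observed output density to the push-forward of the initial density. The toy map below stands in for a CFD solve; densities and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
Q = lambda lam: lam ** 2                 # toy QoI map in place of a CFD solver

# Initial input density: lambda ~ U(0, 1); push it forward by sampling.
lam0 = rng.uniform(0, 1, 50_000)
q0 = Q(lam0)

# Histogram estimate of the push-forward ("predicted") density pi_predict.
hist, edges = np.histogram(q0, bins=100, density=True)
pi_predict = lambda q: hist[np.clip(np.searchsorted(edges, q) - 1,
                                    0, len(hist) - 1)]

# "Observed" output density pi_obs: N(0.25, 0.05^2), an assumed target.
pi_obs = lambda q: (np.exp(-0.5 * ((q - 0.25) / 0.05) ** 2)
                    / (0.05 * np.sqrt(2 * np.pi)))

# Data-consistent update: accept lambda with probability proportional to
# r = pi_obs(Q(lambda)) / pi_predict(Q(lambda)).
r = pi_obs(q0) / pi_predict(q0)
keep = rng.uniform(0, r.max(), len(r)) < r
lam_post = lam0[keep]                    # samples from the updated input pdf
```

By construction, pushing `lam_post` back through `Q` reproduces the observed output pdf, which is the consistency property the abstract refers to.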
Title: Uncertainty Analysis for Drift-Diffusion Equations
Authors: Greta Marino, Jan-Frederik Pietschmann, Alois Pichler
Pub Date: 2024-04-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024039459

We study evolution equations of drift-diffusion type when various parameters are random. Motivated by applications in pedestrian dynamics, we focus on the case when the total mass is, due to boundary or reaction terms, not conserved. After establishing existence and stability results for the deterministic problem, we consider uncertainty in the data. Instead of a sensitivity analysis, we propose to measure functionals of the solution, so-called quantities of interest (QoIs), via scalarizing statistics. For these summarizing statistics we provide probabilistic continuity results.
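A toy numerical illustration of the setting (everything below is an illustrative assumption, not the paper's analysis): a 1-D drift-diffusion equation u_t = D u_xx - v u_x with absorbing boundaries, so total mass is not conserved, with the remaining mass at a final time as the QoI, summarized over a random diffusion coefficient.

```python
import numpy as np

def remaining_mass(D, v=0.5, nx=101, T=0.1):
    """Explicit finite differences for u_t = D u_xx - v u_x on [0, 1]
    with absorbing (zero-Dirichlet) boundaries; returns the mass at T."""
    x = np.linspace(0, 1, nx)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / D                 # within the explicit stability limit
    u = np.exp(-200 * (x - 0.5) ** 2)    # initial bump of mass
    t = 0.0
    while t < T:
        lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        adv = (u[2:] - u[:-2]) / (2 * dx)
        u[1:-1] += dt * (D * lap - v * adv)
        u[0] = u[-1] = 0.0               # absorbing boundary: mass leaks out
        t += dt
    return u.sum() * dx

rng = np.random.default_rng(5)
D_samples = rng.uniform(0.05, 0.2, 100)  # uncertain diffusion coefficient
qoi = np.array([remaining_mass(D) for D in D_samples])
mean_qoi, std_qoi = qoi.mean(), qoi.std()  # scalar summaries of the QoI
```

The initial mass is about 0.125, and every sample loses some of it through the boundaries, so the QoI distribution sits strictly below that value.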
Title: Analysis of the Challenges in Developing Sample-Based Multi-fidelity Estimators for Non-deterministic Models
Authors: Bryan Reuter, Gianluca Geraci, Timothy Wildey
Pub Date: 2024-03-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024050125

Multifidelity (MF) Uncertainty Quantification (UQ) seeks to leverage and fuse information from a collection of models to achieve greater statistical accuracy with respect to a single-fidelity counterpart, while maintaining an efficient use of computational resources. Despite many recent advancements in MF UQ, several challenges remain that often limit its practical impact in certain application areas. In this manuscript, we focus on the challenges introduced by non-deterministic models to sampling MF UQ estimators. Non-deterministic models produce different responses for the same inputs, which means their outputs are effectively noisy. MF UQ becomes complicated by this noise since many state-of-the-art approaches rely on statistics, e.g., the correlation among models, to optimally fuse information and allocate computational resources. We demonstrate how the statistics of the quantities of interest, which impact the design, effectiveness, and use of existing MF UQ techniques, change as functions of the noise. With this in hand, we extend the unifying Approximate Control Variate framework to account for non-determinism, providing for the first time a rigorous means of comparing the effect of non-determinism on different multifidelity estimators and analyzing their performance with respect to one another. Numerical examples are presented throughout the manuscript to illustrate and discuss the consequences of the presented theoretical results.
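The basic mechanism can be sketched with a two-model control-variate mean estimator in which both models are given intrinsic noise to mimic non-determinism (the models, noise levels, and sample sizes below are illustrative assumptions; the paper's Approximate Control Variate framework generalizes this to many models and allocations).

```python
import numpy as np

rng = np.random.default_rng(6)

def hf(x):   # "high-fidelity" model with intrinsic noise
    return np.sin(x) + 0.05 * rng.standard_normal(x.shape)

def lf(x):   # cheap, correlated "low-fidelity" model, also noisy
    return 0.9 * np.sin(x) + 0.1 * x + 0.05 * rng.standard_normal(x.shape)

x_small = rng.uniform(0, np.pi, 200)      # few expensive HF evaluations
x_big = rng.uniform(0, np.pi, 20_000)     # many cheap LF evaluations
y_h, y_l = hf(x_small), lf(x_small)

# Control-variate weight from *estimated* statistics; the intrinsic noise
# inflates var(y_l) and dilutes the correlation, degrading the weight.
alpha = np.cov(y_h, y_l)[0, 1] / np.var(y_l, ddof=1)
mu_l_big = lf(x_big).mean()               # well-resolved LF mean
est_cv = y_h.mean() + alpha * (mu_l_big - y_l.mean())
est_mc = y_h.mean()                       # plain Monte Carlo baseline
```

Increasing the noise standard deviations shrinks the usable correlation between the models, which is exactly the statistic the optimal weight and resource allocation depend on.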
Title: EXTREME LEARNING MACHINES FOR VARIANCE-BASED GLOBAL SENSITIVITY ANALYSIS
Authors: John Darges, Alen Alexanderian, Pierre Gremaud
Pub Date: 2024-03-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024049519

Variance-based global sensitivity analysis (GSA) can provide a wealth of information when applied to complex models. A well-known Achilles' heel of this approach is its computational cost, which often renders it infeasible in practice. An appealing alternative is to analyze instead the sensitivity of a surrogate model, with the goal of lowering computational costs while maintaining sufficient accuracy. Should a surrogate be "simple" enough to be amenable to analytical calculation of its Sobol' indices, the cost of GSA is essentially reduced to the construction of the surrogate. We propose a new class of sparse-weight Extreme Learning Machines (SW-ELMs) which, when considered as surrogates in the context of GSA, admit analytical formulas for their Sobol' indices and, unlike standard ELMs, yield accurate approximations of these indices. The effectiveness of this approach is illustrated through both traditional benchmarks in the field and a chemical reaction network.
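The surrogate-then-GSA workflow can be sketched as follows. Note the key simplification: the paper's point is that sparse-weight ELMs admit *analytical* Sobol' formulas, whereas this sketch fits a standard ELM (random hidden layer, least-squares output weights) and estimates first-order Sobol' indices on the cheap surrogate by a Monte Carlo pick-freeze estimator; the toy model is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda X: X[:, 0] + 2 * X[:, 1] ** 2     # toy model; x2 is inert

# Standard ELM surrogate: random hidden layer, least-squares output weights.
d, H = 3, 80
W = rng.standard_normal((d, H))
b = rng.standard_normal(H)
phi = lambda X: np.tanh(X @ W + b)           # random hidden features
X_tr = rng.uniform(0, 1, (400, d))
beta, *_ = np.linalg.lstsq(phi(X_tr), f(X_tr), rcond=None)
surrogate = lambda X: phi(X) @ beta

def first_order_sobol(g, i, n=40_000):
    """Pick-freeze (Saltelli-style) estimate of S_i on surrogate g."""
    A = rng.uniform(0, 1, (n, d))
    B = rng.uniform(0, 1, (n, d))
    Bi = B.copy()
    Bi[:, i] = A[:, i]                       # freeze column i
    yA, yB, yBi = g(A), g(B), g(Bi)
    return (yA * (yBi - yB)).mean() / np.concatenate([yA, yB]).var()

S = [first_order_sobol(surrogate, i) for i in range(d)]
```

All 120,000 evaluations hit the surrogate, not `f`; with analytical formulas, as in the paper, even these surrogate evaluations would be unnecessary.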
Title: Measuring inputs-outputs association for time-depending hazard models under safety objectives using kernels
Authors: Matieyendou Lamboni
Pub Date: 2024-03-01 | DOI: 10.1615/int.j.uncertaintyquantification.2024049119

A methodology is investigated for assessing the input-output association of time-dependent predictive models, for instance under a failure mode. Firstly, new dependency models are provided for sampling random values of uncertain inputs that comply with the safety objectives. Secondly, the asymmetric roles of outputs and inputs lead us to develop new kernel-based statistical tests of independence between the inputs and outputs using the dependency models. The associated test statistics are normalized so as to introduce new kernel-based sensitivity indices (Kb-SIs). Such first-order and total Kb-SIs allow for (i) assessing the effects of inputs on the whole dynamic output subject to safety objectives, and (ii) dealing with sensitivity functionals (SFs) that have heavy-tailed distributions or are non-stationary in time, thanks to kernel methods. Our approach is also well suited for dynamic models with prescribed copulas of inputs.
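The kernel-based independence-testing ingredient can be illustrated with the standard Hilbert-Schmidt Independence Criterion (HSIC); this is the generic building block, not the paper's dependency models or normalized Kb-SIs, and the bandwidths and toy data below are assumptions.

```python
import numpy as np

def hsic(x, y):
    """Biased HSIC estimate with Gaussian kernels and median-heuristic
    bandwidths: a kernel-based measure of dependence between x and y."""
    def gram(z):
        d2 = (z[:, None] - z[None, :]) ** 2
        s = np.sqrt(np.median(d2[d2 > 0]) / 2)   # median heuristic
        return np.exp(-d2 / (2 * s**2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gram(x), gram(y)
    return np.trace(H @ K @ H @ L) / n**2

rng = np.random.default_rng(8)
x = rng.uniform(-1, 1, 300)
h_dep = hsic(x, x ** 2)                   # nonlinear dependence: HSIC large
h_ind = hsic(x, rng.uniform(-1, 1, 300))  # independent pair: HSIC near zero
```

HSIC detects the purely nonlinear relation y = x^2, which linear correlation misses entirely; normalizing such statistics is what turns them into sensitivity indices in the paper.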