{"title":"决策理论引导","authors":"Peyman Tavallali, Peyman Tavallali, Hamed Hamze Bajgiran, Danial Esaid, Houman Owhadi","doi":"10.1615/int.j.uncertaintyquantification.2023038552","DOIUrl":null,"url":null,"abstract":"The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known when the data is finite (and possibly corrupted), and this uncertainty must be taken into account for robust Uncertainty Quantification (UQ). An important case is when the test distribution is coming from a modal or localized area of the finite sample distribution. We present a general decision-theoretic bootstrapping solution to this problem: (1) partition the available data into a training subset, and a UQ subset (2) take $m$ subsampled subsets of the training set and train $m$ models (3) partition the UQ set into $n$ sorted subsets and take a random fraction of them to define $n$ corresponding empirical distributions $\\mu_{j}$ (4) consider the adversarial game where Player I selects a model $i\\in\\left\\{ 1,\\ldots,m\\right\\} $, Player II selects the UQ distribution $\\mu_{j}$ and Player I receives a loss defined by evaluating the model $i$ against data points sampled from $\\mu_{j}$ (5) identify optimal mixed strategies (probability distributions over models and UQ distributions) for both players. These randomized optimal mixed strategies provide optimal model mixtures, and UQ estimates given the adversarial uncertainty of the training and testing distributions represented by the game. 
The proposed approach provides (1) some degree of robustness to in-sample distribution localization/concentration (2) conditional probability distributions on the output.","PeriodicalId":48814,"journal":{"name":"International Journal for Uncertainty Quantification","volume":"32 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Decision theoretic bootstrapping\",\"authors\":\"Peyman Tavallali, Peyman Tavallali, Hamed Hamze Bajgiran, Danial Esaid, Houman Owhadi\",\"doi\":\"10.1615/int.j.uncertaintyquantification.2023038552\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known when the data is finite (and possibly corrupted), and this uncertainty must be taken into account for robust Uncertainty Quantification (UQ). An important case is when the test distribution is coming from a modal or localized area of the finite sample distribution. 
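The five steps of the abstract can be sketched on a toy problem. The sketch below is an illustrative assumption, not the authors' implementation: it uses a 1D least-squares regression task, skips the "random fraction" subsampling of the UQ blocks in step (3), and approximates the optimal mixed strategies of the zero-sum game by fictitious play rather than an exact linear-programming solver.

```python
# Hypothetical sketch of decision-theoretic bootstrapping on toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise.
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + 0.1 * rng.standard_normal(200)

# (1) Partition the data into a training subset and a UQ subset.
train_idx, uq_idx = np.arange(100), np.arange(100, 200)

# (2) Train m models on subsampled subsets of the training set
# (each "model" is a least-squares slope fit on a random subsample).
m = 5
models = []
for _ in range(m):
    sub = rng.choice(train_idx, size=60, replace=False)
    models.append(np.sum(X[sub] * y[sub]) / np.sum(X[sub] ** 2))

# (3) Partition the UQ set into n sorted subsets; each block defines
# an empirical distribution mu_j (sorted by x, so blocks are localized).
n = 4
order = uq_idx[np.argsort(X[uq_idx])]
blocks = np.array_split(order, n)

# (4) Loss matrix of the adversarial game: Player I picks model i,
# Player II picks mu_j, and the loss is the MSE of model i on mu_j.
L = np.array([[np.mean((models[i] * X[b] - y[b]) ** 2) for b in blocks]
              for i in range(m)])

# (5) Approximate the optimal mixed strategies by fictitious play:
# Player I minimizes the expected loss, Player II maximizes it.
p_counts, q_counts = np.zeros(m), np.zeros(n)
p_counts[0] = q_counts[0] = 1.0
for _ in range(2000):
    q = q_counts / q_counts.sum()
    p_counts[np.argmin(L @ q)] += 1.0   # Player I's best response
    p = p_counts / p_counts.sum()
    q_counts[np.argmax(p @ L)] += 1.0   # Player II's best response

p = p_counts / p_counts.sum()           # optimal model mixture
q = q_counts / q_counts.sum()           # adversarial UQ mixture
print("model mixture p:", np.round(p, 3))
print("UQ mixture q:", np.round(q, 3))
print("game value (robust loss estimate):", float(p @ L @ q))
```

The resulting `p` is the model mixture referred to in the abstract, and the game value `p @ L @ q` is the robust loss estimate under the adversarial uncertainty the game represents.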
Journal Introduction:
The International Journal for Uncertainty Quantification disseminates information of permanent interest in the areas of analysis, modeling, design, and control of complex systems in the presence of uncertainty. The journal seeks to emphasize methods that cross stochastic analysis, statistical modeling, and scientific computing. Systems of interest are governed by differential equations, possibly with multiscale features. Topics of particular interest include representation of uncertainty, propagation of uncertainty across scales, resolving the curse of dimensionality, long-time integration for stochastic PDEs, data-driven approaches for constructing stochastic models, validation, verification, and uncertainty quantification for predictive computational science, and visualization of uncertainty in high-dimensional spaces. Bayesian computation and machine learning techniques are also of interest, for example in the context of stochastic multiscale systems, for model selection/classification, and for decision making. Reports addressing the dynamic coupling of modern experiments and modeling approaches towards predictive science are particularly encouraged. Applications of uncertainty quantification in all areas of the physical and biological sciences are appropriate.