{"title":"Estimating Uncertainty with Implicit Quantile Network","authors":"Yi Hung Lim","doi":"arxiv-2408.14525","DOIUrl":null,"url":null,"abstract":"Uncertainty quantification is an important part of many performance critical\napplications. This paper provides a simple alternative to existing approaches\nsuch as ensemble learning and bayesian neural networks. By directly modeling\nthe loss distribution with an Implicit Quantile Network, we get an estimate of\nhow uncertain the model is of its predictions. For experiments with MNIST and\nCIFAR datasets, the mean of the estimated loss distribution is 2x higher for\nincorrect predictions. When data with high estimated uncertainty is removed\nfrom the test dataset, the accuracy of the model goes up as much as 10%. This\nmethod is simple to implement while offering important information to\napplications where the user has to know when the model could be wrong (e.g.\ndeep learning for healthcare).","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"20 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.14525","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Uncertainty quantification is an important part of many performance-critical applications. This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks. By directly modeling the loss distribution with an Implicit Quantile Network, we obtain an estimate of how uncertain the model is about its predictions. In experiments on the MNIST and CIFAR datasets, the mean of the estimated loss distribution is about 2x higher for incorrect predictions than for correct ones. When data with high estimated uncertainty is removed from the test dataset, the model's accuracy improves by as much as 10%. The method is simple to implement while offering important information to applications where the user has to know when the model could be wrong (e.g., deep learning for healthcare).
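
To make the idea concrete, below is a minimal PyTorch sketch of one plausible implementation, not the paper's actual code: an IQN head regresses quantiles of the base classifier's per-sample loss, using the cosine tau embedding from the original IQN paper (Dabney et al., 2018) and the quantile (pinball) loss, and the mean of the predicted loss distribution, approximated by averaging over sampled tau, serves as the uncertainty score. All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch: an IQN that models the distribution of a base
# classifier's per-sample loss. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LossIQN(nn.Module):
    def __init__(self, feature_dim: int, n_cos: int = 64, hidden: int = 128):
        super().__init__()
        # Frequencies for the cosine embedding of the quantile level tau
        # (as in Dabney et al., 2018).
        self.register_buffer("freqs", torch.arange(1, n_cos + 1).float() * torch.pi)
        self.tau_embed = nn.Linear(n_cos, feature_dim)
        self.head = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, features: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        # features: (B, D) penultimate-layer features of the base model
        # tau:      (B, 1) quantile levels in (0, 1)
        cos = torch.cos(tau * self.freqs)             # (B, n_cos)
        phi = F.relu(self.tau_embed(cos))             # (B, D)
        # Multiplicative interaction between features and tau embedding,
        # then predict the tau-quantile of the loss.
        return self.head(features * phi).squeeze(-1)  # (B,)

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    # Quantile-regression (pinball) loss: asymmetric absolute error that
    # makes pred converge to the tau-quantile of target.
    err = target - pred  # target: observed per-sample loss of the base model
    t = tau.squeeze(-1)
    return torch.mean(torch.maximum(t * err, (t - 1.0) * err))

@torch.no_grad()
def estimate_mean_loss(iqn: LossIQN, features: torch.Tensor, n_samples: int = 32) -> torch.Tensor:
    # Approximate the mean of the predicted loss distribution by averaging
    # quantile predictions over uniformly sampled tau; this scalar per input
    # is the uncertainty score used to flag or filter predictions.
    taus = torch.rand(n_samples, features.size(0), 1, device=features.device)
    preds = torch.stack([iqn(features, t) for t in taus])  # (n_samples, B)
    return preds.mean(dim=0)
```

Under this reading, filtering is straightforward: score the test set with estimate_mean_loss and drop (or defer to a human) the samples above a chosen uncertainty threshold, which is the procedure behind the reported accuracy gain.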