Self-Assessment and Robust Anomaly Detection with Bayesian Deep Learning

Giuseppina Carannante, Dimah Dera, Orune Aminul, N. Bouaynaya, G. Rasool
{"title":"Self-Assessment and Robust Anomaly Detection with Bayesian Deep Learning","authors":"Giuseppina Carannante, Dimah Dera, Orune Aminul, N. Bouaynaya, G. Rasool","doi":"10.23919/fusion49751.2022.9841358","DOIUrl":null,"url":null,"abstract":"Deep Learning (DL) models have achieved or even surpassed human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-art performance of DL models has raised the interest in using them in real-world applications, such as disease diagnosis and clinical decision support systems. However, the challenge remains the lack of trustworthiness and reliability of these DL models. The detection of incorrect decisions or flagging suspicious input samples is essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and labels shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on average uncertainty of a model evaluated on the clean test data, and 2) a statistical threshold based on the significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies or distributional shifts. For example, when considering corrupted versions of MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice as compared to Monte-Carlo-based approaches.","PeriodicalId":176447,"journal":{"name":"2022 25th International Conference on Information Fusion (FUSION)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th International Conference on Information Fusion (FUSION)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/fusion49751.2022.9841358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Deep Learning (DL) models have achieved, and in some cases surpassed, human-level accuracy in several areas, including computer vision and pattern recognition. The state-of-the-art performance of DL models has raised interest in deploying them in real-world applications, such as disease diagnosis and clinical decision support systems. However, a key challenge remains: these DL models lack trustworthiness and reliability. Detecting incorrect decisions and flagging suspicious input samples are essential for the reliability of machine learning models. Uncertainty estimation in the output decision is a key component in establishing the trustworthiness and reliability of these models. In this work, we use Bayesian techniques to estimate the uncertainty in the model's output and use this uncertainty to detect distributional shifts linked to both input perturbations and label shifts. We use the learned uncertainty information (i.e., the variance of the predictive distribution) in two different ways to detect anomalous input samples: 1) a static threshold based on the average uncertainty of a model evaluated on clean test data, and 2) a statistical threshold based on a significant increase in the average uncertainty of the model evaluated on corrupted (anomalous) samples. Our extensive experiments demonstrate that both approaches can detect anomalous samples. We observe that the proposed thresholding techniques can distinguish misclassified examples in the presence of noise, adversarial attacks, anomalies, or distributional shifts. For example, on corrupted versions of the MNIST and CIFAR-10 datasets, the rate of detecting misclassified samples is almost twice that of Monte-Carlo-based approaches.
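The abstract describes two ways of turning the predictive variance into an anomaly detector. The sketch below is a minimal illustration of that idea, not the authors' implementation: it uses Monte Carlo dropout as a stand-in uncertainty source (the paper itself is compared against Monte-Carlo-based approaches, and its own Bayesian variance estimates would plug in the same way), and `model`, `k`, and `alpha` are illustrative assumptions rather than the paper's settings.

```python
# Hypothetical sketch of the two thresholding schemes, assuming a
# Keras-style `model` whose dropout stays active when training=True.
import numpy as np
from scipy import stats

def predictive_variance(model, x, n_samples=20):
    """Per-input uncertainty: variance across stochastic forward passes."""
    # Each pass with dropout active is one draw from the approximate
    # posterior predictive distribution.
    preds = np.stack([np.asarray(model(x, training=True))
                      for _ in range(n_samples)])
    # Variance over the MC draws, averaged over output classes.
    return preds.var(axis=0).mean(axis=-1)  # shape: (batch,)

# 1) Static threshold: derived once from clean, held-out test data.
def static_threshold(clean_uncertainty, k=2.0):
    # Flag inputs whose uncertainty exceeds the clean-data mean by
    # k clean-data standard deviations (k is an assumed choice).
    return clean_uncertainty.mean() + k * clean_uncertainty.std()

# 2) Statistical threshold: test whether a batch shows a significant
#    *increase* in average uncertainty relative to clean data.
def is_anomalous_batch(clean_uncertainty, batch_uncertainty, alpha=0.05):
    # One-sided Welch t-test (an assumed choice of significance test).
    t_stat, p_two_sided = stats.ttest_ind(
        batch_uncertainty, clean_uncertainty, equal_var=False)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1.0 - p_two_sided / 2
    return p_one_sided < alpha

# Usage: u_clean = predictive_variance(model, x_test_clean)
#        u_new   = predictive_variance(model, x_incoming)
#        flags   = u_new > static_threshold(u_clean)       # per-sample
#        shifted = is_anomalous_batch(u_clean, u_new)      # per-batch
```

The static rule gives a cheap per-sample decision fixed at deployment time, while the statistical rule detects a distribution-level shift across an incoming batch; the abstract reports that both variants detect anomalous samples.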