Adversarial training and attribution methods enable evaluation of robustness and interpretability of deep learning models for image classification.

Physical Review E | IF 2.4 | JCR Q1 (Mathematics) | CAS Tier 3 (Physics & Astronomy) | Pub Date: 2024-11-01 | DOI: 10.1103/PhysRevE.110.054310
Flávio A O Santos, Cleber Zanchettin, Weihua Lei, Luís A Nunes Amaral
{"title":"通过对抗性训练和归因方法,可以评估用于图像分类的深度学习模型的鲁棒性和可解释性。","authors":"Flávio A O Santos, Cleber Zanchettin, Weihua Lei, Luís A Nunes Amaral","doi":"10.1103/PhysRevE.110.054310","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning models have achieved high performance in a wide range of applications. Recently, however, there have been increasing concerns about the fragility of many of those models to adversarial approaches and out-of-distribution inputs. A way to investigate and potentially address model fragility is to develop the ability to provide interpretability to model predictions. To this end, input attribution approaches such as Grad-CAM and integrated gradients have been introduced to address model interpretability. Here, we combine adversarial and input attribution approaches in order to achieve two goals. The first is to investigate the impact of adversarial approaches on input attribution. The second is to benchmark competing input attribution approaches. In the context of the image classification task, we find that models trained with adversarial approaches yield dramatically different input attribution matrices from those obtained using standard techniques for all considered input attribution approaches. Additionally, by evaluating the signal-(typical input attribution of the foreground)-to-noise (typical input attribution of the background) ratio and correlating it to model confidence, we are able to identify the most reliable input attribution approaches and demonstrate that adversarial training does increase prediction robustness. Our approach can be easily extended to contexts other than the image classification task and enables users to increase their confidence in the reliability of deep learning models.</p>","PeriodicalId":20085,"journal":{"name":"Physical review. E","volume":"110 5-1","pages":"054310"},"PeriodicalIF":2.4000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial training and attribution methods enable evaluation of robustness and interpretability of deep learning models for image classification.\",\"authors\":\"Flávio A O Santos, Cleber Zanchettin, Weihua Lei, Luís A Nunes Amaral\",\"doi\":\"10.1103/PhysRevE.110.054310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning models have achieved high performance in a wide range of applications. Recently, however, there have been increasing concerns about the fragility of many of those models to adversarial approaches and out-of-distribution inputs. A way to investigate and potentially address model fragility is to develop the ability to provide interpretability to model predictions. To this end, input attribution approaches such as Grad-CAM and integrated gradients have been introduced to address model interpretability. Here, we combine adversarial and input attribution approaches in order to achieve two goals. The first is to investigate the impact of adversarial approaches on input attribution. The second is to benchmark competing input attribution approaches. In the context of the image classification task, we find that models trained with adversarial approaches yield dramatically different input attribution matrices from those obtained using standard techniques for all considered input attribution approaches. 
Additionally, by evaluating the signal-(typical input attribution of the foreground)-to-noise (typical input attribution of the background) ratio and correlating it to model confidence, we are able to identify the most reliable input attribution approaches and demonstrate that adversarial training does increase prediction robustness. Our approach can be easily extended to contexts other than the image classification task and enables users to increase their confidence in the reliability of deep learning models.</p>\",\"PeriodicalId\":20085,\"journal\":{\"name\":\"Physical review. E\",\"volume\":\"110 5-1\",\"pages\":\"054310\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Physical review. E\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://doi.org/10.1103/PhysRevE.110.054310\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physical review. E","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1103/PhysRevE.110.054310","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Mathematics","Score":null,"Total":0}
Citations: 0

Abstract


Deep learning models have achieved high performance in a wide range of applications. Recently, however, there have been increasing concerns about the fragility of many of those models to adversarial approaches and out-of-distribution inputs. A way to investigate and potentially address model fragility is to develop the ability to provide interpretability to model predictions. To this end, input attribution approaches such as Grad-CAM and integrated gradients have been introduced to address model interpretability. Here, we combine adversarial and input attribution approaches in order to achieve two goals. The first is to investigate the impact of adversarial approaches on input attribution. The second is to benchmark competing input attribution approaches. In the context of the image classification task, we find that models trained with adversarial approaches yield dramatically different input attribution matrices from those obtained using standard techniques for all considered input attribution approaches. Additionally, by evaluating the signal-(typical input attribution of the foreground)-to-noise (typical input attribution of the background) ratio and correlating it to model confidence, we are able to identify the most reliable input attribution approaches and demonstrate that adversarial training does increase prediction robustness. Our approach can be easily extended to contexts other than the image classification task and enables users to increase their confidence in the reliability of deep learning models.
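As a rough illustration of one attribution method named in the abstract, the sketch below approximates integrated gradients for a single image in PyTorch. It is not the authors' code: the zero (black-image) baseline, the step count, and the model interface are assumptions.

import torch

# Minimal integrated-gradients sketch: accumulate gradients of the target
# class score along a straight-line path from a baseline to the input.
def integrated_gradients(model, x, target_class, steps=50):
    baseline = torch.zeros_like(x)                 # assumed black-image baseline
    alphas = torch.linspace(0, 1, steps + 1)[1:]   # interpolation coefficients in (0, 1]
    total_grads = torch.zeros_like(x)
    for alpha in alphas:
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        total_grads += torch.autograd.grad(score, point)[0]
    # Riemann-sum approximation of the path integral.
    return (x - baseline) * total_grads / steps

The result has the same shape as the input image, so it can be read as the "input attribution matrix" the abstract refers to.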
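Adversarial training, the other ingredient of the study, trains the model on perturbed inputs rather than (or in addition to) clean ones. A common variant is training on fast gradient sign method (FGSM) examples; the sketch below is illustrative only, with the perturbation budget epsilon and the surrounding training loop assumed, and may differ from the authors' actual procedure.

import torch
import torch.nn.functional as F

# One illustrative adversarial-training step using FGSM: perturb the batch
# along the sign of the loss gradient, then train on the perturbed batch.
def adversarial_step(model, optimizer, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach().clamp(0.0, 1.0)
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)  # loss on the adversarial batch
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()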
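Finally, the signal-to-noise ratio described in the abstract compares typical attribution on the foreground (the object) against typical attribution on the background. A minimal sketch, assuming a binary foreground mask is available and taking the mean absolute attribution as the "typical" value (the abstract specifies neither):

import torch

# Signal = mean |attribution| over foreground pixels;
# noise = mean |attribution| over background pixels.
def attribution_snr(attribution, foreground_mask):
    attr = attribution.abs()
    signal = attr[foreground_mask].mean()
    noise = attr[~foreground_mask].mean()
    return (signal / noise).item()

A high ratio means the model bases its prediction mostly on the object rather than on the background, which is how the authors relate attribution quality to model confidence.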

Source journal: Physical Review E (Physics: Fluids & Plasmas)
CiteScore: 4.60
Self-citation rate: 16.70%
Review time: 3.3 months
About the journal: Physical Review E (PRE), broad and interdisciplinary in scope, focuses on collective phenomena of many-body systems, with statistical physics and nonlinear dynamics as the central themes of the journal. Physical Review E publishes recent developments in biological and soft matter physics including granular materials, colloids, complex fluids, liquid crystals, and polymers. The journal covers fluid dynamics and plasma physics and includes sections on computational and interdisciplinary physics, for example, complex networks.
Latest articles from this journal:
- Energy exchange statistics and fluctuation theorem for nonthermal asymptotic states.
- Ergodicity breaking and restoration in models of heat transport with microscopic reversibility.
- Random search for a partially reactive target by multiple diffusive searchers.
- Random walk with horizontal and cyclic currents.
- Noise-induced transitions from contractile to extensile active stress in isotropic fluids.