Hardening Interpretable Deep Learning Systems: Investigating Adversarial Threats and Defenses

IEEE Transactions on Dependable and Secure Computing · Pub Date: 2024-07-01 · DOI: 10.1109/TDSC.2023.3341090
Eldor Abdukhamidov, Mohammad Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
Citations: 5

Abstract

Deep learning methods have gained increasing attention in various applications due to their outstanding performance. To explore how this high performance relates to the proper use of data artifacts and the accurate formulation of a given task, interpretation models have become a crucial component in developing deep learning-based systems. Interpretation models enable the understanding of the inner workings of deep learning models and offer a sense of security in detecting the misuse of artifacts in the input data. Like prediction models, interpretation models are also susceptible to adversarial inputs. This work introduces two attacks, AdvEdge and AdvEdge+, which deceive both the target deep learning model and the coupled interpretation model. We assess the effectiveness of the proposed attacks against four deep learning model architectures coupled with four interpretation models that represent different categories of interpretation models. Our experiments include the implementation of the attacks using various attack frameworks. We also explore the resilience of the attacks against three general defense mechanisms and potential countermeasures. Our analysis shows the effectiveness of our attacks in deceiving the deep learning models and their interpreters, and highlights insights for improving and circumventing the attacks.
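The abstract does not detail how AdvEdge constrains its perturbation, but the name suggests concentrating the adversarial noise on edge regions of the input so it blends with existing high-frequency structure. Below is a minimal, illustrative sketch of that general idea (not the authors' implementation): an FGSM-style signed-gradient step weighted by a simple gradient-magnitude edge map. The function names, the surrogate loss gradient, and the edge-map construction are all assumptions for illustration.

```python
import numpy as np

def edge_map(img):
    # Crude gradient-magnitude edge map (central differences),
    # normalized to [0, 1]; stands in for a proper edge detector.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-12)

def edge_weighted_step(img, loss_grad, eps=0.03):
    # One FGSM-like step whose magnitude is scaled by the edge map,
    # so flat regions receive little perturbation (AdvEdge-style idea).
    w = edge_map(img)
    adv = img + eps * w * np.sign(loss_grad)
    return np.clip(adv, 0.0, 1.0)

# Toy usage: synthetic image and a random stand-in for the loss gradient.
rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)
grad = rng.standard_normal((32, 32))
adv = edge_weighted_step(img, grad, eps=0.03)
print(float(np.abs(adv - img).max()))  # bounded above by eps = 0.03
```

Because the edge weight lies in [0, 1], the per-pixel change never exceeds the budget eps, and it shrinks toward zero in smooth regions where perturbation would be most visible.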