A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks

Impact Factor 2.8 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Hardware & Architecture) · ACM Transactions on Embedded Computing Systems · Publication date: 2024-07-09 · DOI: 10.1145/3677318
Xingbin Wang, Boyan Zhao, Yulan Su, Sisi Zhang, Fengkai Yuan, Jun Zhang, Dan Meng, Rui Hou
{"title":"A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks","authors":"Xingbin Wang, Boyan Zhao, Yulan Su, Sisi Zhang, Fengkai Yuan, Jun Zhang, Dan Meng, Rui Hou","doi":"10.1145/3677318","DOIUrl":null,"url":null,"abstract":"\n Understanding how to defend against adversarial attacks is crucial for ensuring the safety and reliability of these systems in real-world applications. Various adversarial defense methods are proposed, which aim to improve the robustness of neural networks against adversarial attacks by changing the model structure, adding detection networks, and adversarial purification network. However, deploying adversarial defense methods in existing DNN accelerators or defensive accelerators leads to many key issues. To address these challenges, this paper proposes\n sDNNGuard\n , an elastic heterogeneous DNN accelerator architecture that can efficiently orchestrate the simultaneous execution of original (\n target\n ) DNN networks and the\n detect\n algorithm or network. It not only supports for dense DNN detect algorithms, but also allows for sparse DNN defense methods and other mixed dense-sparse (e.g., dense-dense and sparse-dense) workloads to fully exploit the benefits of sparsity. sDNNGuard with a CPU core also supports the non-DNN computing and allows the special layer of the neural network, and used for the conversion for sparse storage format for weights and activation values. To reduce off-chip traffic and improve resources utilization, a new hardware abstraction with elastic on-chip buffer/computing resource management is proposed to achieve dynamical resource scheduling mechanism. We propose an\n extended AI instruction set\n for neural networks synchronization, task scheduling and efficient data interaction. Experiment results show that sDNNGuard can effectively validate the legitimacy of the input samples in parallel with the target DNN model, achieving an average 1.42 × speedup compared with the state-of-the-art accelerators.\n","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3677318","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Understanding how to defend against adversarial example attacks is crucial for ensuring the safety and reliability of DNN-based systems in real-world applications. Various adversarial defense methods have been proposed that aim to improve the robustness of neural networks against adversarial attacks by changing the model structure, adding detection networks, or adding adversarial purification networks. However, deploying these defense methods on existing DNN accelerators or dedicated defensive accelerators raises several key issues. To address these challenges, this paper proposes sDNNGuard, an elastic heterogeneous DNN accelerator architecture that efficiently orchestrates the simultaneous execution of the original (target) DNN and the detection algorithm or network. It supports not only dense DNN detection algorithms but also sparse DNN defense methods and other mixed dense-sparse workloads (e.g., dense-dense and sparse-dense) to fully exploit the benefits of sparsity. sDNNGuard also integrates a CPU core that supports non-DNN computation, handles special neural-network layers, and converts weights and activation values into a sparse storage format. To reduce off-chip traffic and improve resource utilization, a new hardware abstraction with elastic on-chip buffer/computing resource management is proposed to enable dynamic resource scheduling. We also propose an extended AI instruction set for neural network synchronization, task scheduling, and efficient data interaction. Experimental results show that sDNNGuard can effectively validate the legitimacy of input samples in parallel with the target DNN model, achieving an average 1.42× speedup over state-of-the-art accelerators.
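The abstract states that the CPU core converts weights and activation values into a sparse storage format before the sparse compute path consumes them. As a rough illustration of that conversion step only (the paper's on-chip format and interface are not described here; the CSR layout, the function name dense_to_csr, and the threshold parameter are assumptions made for this sketch), a pruned dense weight matrix can be compressed as follows:

```python
import numpy as np

def dense_to_csr(weights: np.ndarray, threshold: float = 0.0):
    """Convert a dense 2-D weight matrix to a simple CSR representation.

    Illustrative only: the accelerator described in the paper uses its own
    sparse storage format; CSR is used here just to show the kind of
    dense-to-sparse conversion a CPU core could perform.
    """
    rows, cols = weights.shape
    values, col_idx, row_ptr = [], [], [0]
    for r in range(rows):
        for c in range(cols):
            w = weights[r, c]
            if abs(w) > threshold:      # keep only significant (non-zero) weights
                values.append(w)
                col_idx.append(c)
        row_ptr.append(len(values))     # running count marks each row boundary
    return np.array(values), np.array(col_idx), np.array(row_ptr)

# Example: a pruned 3x4 weight matrix with 75% zeros
w = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, -1.2, 0.0],
              [0.3, 0.0, 0.0, 0.0]])
vals, cols, rptr = dense_to_csr(w)
print(vals)   # [ 0.5 -1.2  0.3]
print(cols)   # [1 2 0]
print(rptr)   # [0 1 2 3]
```

Storing only the non-zero values with their column indices and row offsets is what lets a sparse compute path skip multiplications by zero, which is the benefit of sparsity the abstract refers to.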
Source journal
ACM Transactions on Embedded Computing Systems (Engineering & Technology, Computer Science: Software Engineering)
CiteScore: 3.70
Self-citation rate: 0.00%
Articles published: 138
Review time: 6 months
Journal description: The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.
Latest articles in this journal
Optimizing Dilithium Implementation with AVX2/-512
Transient Fault Detection in Tensor Cores for Modern GPUs
High Performance and Predictable Shared Last-level Cache for Safety-Critical Systems
APB-tree: An Adaptive Pre-built Tree Indexing Scheme for NVM-based IoT Systems