Xingbin Wang, Boyan Zhao, Yulan Su, Sisi Zhang, Fengkai Yuan, Jun Zhang, Dan Meng, Rui Hou
{"title":"针对对抗性示例攻击的稀疏-密集混合防御 DNN 加速体系结构","authors":"Xingbin Wang, Boyan Zhao, Yulan Su, Sisi Zhang, Fengkai Yuan, Jun Zhang, Dan Meng, Rui Hou","doi":"10.1145/3677318","DOIUrl":null,"url":null,"abstract":"\n Understanding how to defend against adversarial attacks is crucial for ensuring the safety and reliability of these systems in real-world applications. Various adversarial defense methods are proposed, which aim to improve the robustness of neural networks against adversarial attacks by changing the model structure, adding detection networks, and adversarial purification network. However, deploying adversarial defense methods in existing DNN accelerators or defensive accelerators leads to many key issues. To address these challenges, this paper proposes\n sDNNGuard\n , an elastic heterogeneous DNN accelerator architecture that can efficiently orchestrate the simultaneous execution of original (\n target\n ) DNN networks and the\n detect\n algorithm or network. It not only supports for dense DNN detect algorithms, but also allows for sparse DNN defense methods and other mixed dense-sparse (e.g., dense-dense and sparse-dense) workloads to fully exploit the benefits of sparsity. sDNNGuard with a CPU core also supports the non-DNN computing and allows the special layer of the neural network, and used for the conversion for sparse storage format for weights and activation values. To reduce off-chip traffic and improve resources utilization, a new hardware abstraction with elastic on-chip buffer/computing resource management is proposed to achieve dynamical resource scheduling mechanism. We propose an\n extended AI instruction set\n for neural networks synchronization, task scheduling and efficient data interaction. Experiment results show that sDNNGuard can effectively validate the legitimacy of the input samples in parallel with the target DNN model, achieving an average 1.42 × speedup compared with the state-of-the-art accelerators.\n","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks\",\"authors\":\"Xingbin Wang, Boyan Zhao, Yulan Su, Sisi Zhang, Fengkai Yuan, Jun Zhang, Dan Meng, Rui Hou\",\"doi\":\"10.1145/3677318\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Understanding how to defend against adversarial attacks is crucial for ensuring the safety and reliability of these systems in real-world applications. Various adversarial defense methods are proposed, which aim to improve the robustness of neural networks against adversarial attacks by changing the model structure, adding detection networks, and adversarial purification network. However, deploying adversarial defense methods in existing DNN accelerators or defensive accelerators leads to many key issues. To address these challenges, this paper proposes\\n sDNNGuard\\n , an elastic heterogeneous DNN accelerator architecture that can efficiently orchestrate the simultaneous execution of original (\\n target\\n ) DNN networks and the\\n detect\\n algorithm or network. 
It not only supports for dense DNN detect algorithms, but also allows for sparse DNN defense methods and other mixed dense-sparse (e.g., dense-dense and sparse-dense) workloads to fully exploit the benefits of sparsity. sDNNGuard with a CPU core also supports the non-DNN computing and allows the special layer of the neural network, and used for the conversion for sparse storage format for weights and activation values. To reduce off-chip traffic and improve resources utilization, a new hardware abstraction with elastic on-chip buffer/computing resource management is proposed to achieve dynamical resource scheduling mechanism. We propose an\\n extended AI instruction set\\n for neural networks synchronization, task scheduling and efficient data interaction. Experiment results show that sDNNGuard can effectively validate the legitimacy of the input samples in parallel with the target DNN model, achieving an average 1.42 × speedup compared with the state-of-the-art accelerators.\\n\",\"PeriodicalId\":50914,\"journal\":{\"name\":\"ACM Transactions on Embedded Computing Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Embedded Computing Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3677318\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3677318","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
A Hybrid Sparse-dense Defensive DNN Accelerator Architecture against Adversarial Example Attacks
Understanding how to defend against adversarial attacks is crucial for ensuring the safety and reliability of deep neural network (DNN) systems in real-world applications. Various adversarial defense methods have been proposed that aim to improve the robustness of neural networks against adversarial attacks by changing the model structure, adding detection networks, or employing adversarial purification networks. However, deploying these defense methods on existing DNN accelerators or dedicated defensive accelerators raises several key issues. To address these challenges, this paper proposes sDNNGuard, an elastic heterogeneous DNN accelerator architecture that efficiently orchestrates the simultaneous execution of the original (target) DNN network and the detection algorithm or network. It supports not only dense DNN detection algorithms but also sparse DNN defense methods and other mixed workloads (e.g., dense-dense and sparse-dense) to fully exploit the benefits of sparsity. sDNNGuard also integrates a CPU core that supports non-DNN computation, handles special layers of the neural network, and converts weights and activation values into a sparse storage format. To reduce off-chip traffic and improve resource utilization, a new hardware abstraction with elastic on-chip buffer and computing resource management is proposed to enable dynamic resource scheduling. We further propose an extended AI instruction set for neural network synchronization, task scheduling, and efficient data interaction. Experimental results show that sDNNGuard can effectively validate the legitimacy of input samples in parallel with the target DNN model, achieving an average 1.42× speedup compared with state-of-the-art accelerators.
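The abstract states that the CPU core converts weights and activation values into a sparse storage format, but it does not name the format sDNNGuard actually uses. As a purely illustrative sketch, the Python snippet below converts a dense weight tile into CSR (compressed sparse row) triplets, one common sparse format for exploiting the sparsity of pruned defense networks; the function name dense_to_csr and the threshold parameter are assumptions for illustration only.

# Illustrative sketch only: CSR is shown as one common sparse storage format,
# not necessarily the one used by sDNNGuard.
import numpy as np

def dense_to_csr(matrix, threshold=0.0):
    """Convert a dense 2-D array into CSR triplets (values, column indices, row pointers)."""
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for col, v in enumerate(row):
            if abs(v) > threshold:       # keep only non-zero (or significant) entries
                values.append(v)
                col_indices.append(col)
        row_ptr.append(len(values))      # running count of stored entries after each row
    return np.array(values), np.array(col_indices), np.array(row_ptr)

# Example: a small sparse weight tile, such as one produced by a pruned defense network.
weights = np.array([[0.0, 1.5, 0.0],
                    [0.0, 0.0, 0.0],
                    [2.0, 0.0, 3.0]])
vals, cols, ptrs = dense_to_csr(weights)
print(vals, cols, ptrs)   # [1.5 2. 3.] [1 0 2] [0 1 1 3]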
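The abstract also describes elastic on-chip buffer management that is dynamically shared between the target network and the detect network to reduce off-chip traffic. The toy model below is not the sDNNGuard mechanism; it merely illustrates the underlying idea of re-partitioning a fixed buffer budget according to the per-layer demands of two co-running networks. All names and numbers are hypothetical.

# Toy illustration of elastic buffer sharing between two co-running networks.
def partition_buffer(total_kb, target_layer_kb, detect_layer_kb):
    """Split a fixed on-chip buffer budget between the target and detect layers."""
    demand = target_layer_kb + detect_layer_kb
    if demand <= total_kb:
        # Both layers fit on chip; give each what it asked for and leave the rest free.
        return target_layer_kb, detect_layer_kb
    # Otherwise allocate proportionally; data that does not fit would spill off chip.
    share = total_kb / demand
    return int(target_layer_kb * share), int(detect_layer_kb * share)

print(partition_buffer(total_kb=512, target_layer_kb=400, detect_layer_kb=300))  # (292, 219)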
Journal introduction:
The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.