Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation

IF 2.4 · Q3 (Computer Science, Information Systems) · Information (Switzerland) · Pub Date: 2023-09-19 · DOI: 10.3390/info14090516
Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Tauhidur Rahman
{"title":"Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation","authors":"Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Tauhidur Rahman","doi":"10.3390/info14090516","DOIUrl":null,"url":null,"abstract":"Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead the trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, the existing techniques use additional neural networks to detect the existence of these noises at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass these existing countermeasures by augmenting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"93 1","pages":"0"},"PeriodicalIF":2.4000,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information (Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14090516","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into wrong predictions. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, existing defense techniques use additional neural networks to detect the presence of this noise at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, a trojan), can bypass these existing countermeasures by augmenting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.
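To make the attack idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of injecting a fixed, image-agnostic perturbation inside the Conv2D computation itself, so that detectors operating at the input image source never see the perturbed image. All names here (conv2d_with_uap, trigger, uap) are hypothetical and chosen for illustration only.

```python
# Hypothetical sketch: a universal adversarial perturbation (UAP) folded into a
# naive Conv2D kernel, mimicking a rogue modification at the accelerator stage.
import numpy as np

def conv2d_with_uap(image, kernel, uap, trigger=False):
    """Naive single-channel Conv2D. If `trigger` is set (e.g., by malware),
    a fixed image-agnostic perturbation `uap` is added to the input feature
    map before the multiply-accumulate loop."""
    H, W = image.shape
    kH, kW = kernel.shape
    x = image + uap if trigger else image  # perturbation applied inside the kernel
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * kernel)
    return out

# Usage: the same UAP is reused for any input image (image-agnostic).
rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = rng.random((3, 3))
uap = 0.05 * rng.standard_normal((8, 8))  # small, fixed perturbation
clean = conv2d_with_uap(image, kernel, uap, trigger=False)
attacked = conv2d_with_uap(image, kernel, uap, trigger=True)
print(np.max(np.abs(attacked - clean)))  # the perturbation shifts the activations
```

In the paper's setting, the analogous injection happens in the hardware accelerator (modeled in Verilog RTL and co-simulated with the Conv2D software kernel under FuseSoC), which is why input-side noise detectors cannot intercept it.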
Source journal: Information (Switzerland)
Category: Computer Science, Information Systems
CiteScore: 6.90
Self-citation rate: 0.00%
Articles published: 515
Review time: 11 weeks