Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface

IF 0.4 | CAS Tier 4 (Computer Science) | Q4 (Computer Science, Hardware & Architecture) | IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | Pub Date: 2023-01-01 | DOI: 10.1587/transfun.2023cip0025
Tatsuya OYAMA, Kota YOSHIDA, Shunsuke OKURA, Takeshi FUJINO
{"title":"基于故障注入攻击的图像传感器接口对抗实例","authors":"Tatsuya OYAMA, Kota YOSHIDA, Shunsuke OKURA, Takeshi FUJINO","doi":"10.1587/transfun.2023cip0025","DOIUrl":null,"url":null,"abstract":"Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image-classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported, which are a threat to traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack method for generating a noise area on images by superimposing an electrical signal on the mobile industry processor interface and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. Therefore, we propose a misclassification attack method n DNNs by creating AEs that include small perturbations to multiple places on the image by the fault injection. The perturbation position for AEs is pre-calculated in advance against the target traffic-sign image, which will be captured on future driving. With 5.2% to 5.5% of a specific image on the simulation, the perturbation that induces misclassification to the target label was calculated. As the experimental results, we confirmed that the traffic-sign-recognition DNN on a Raspberry Pi was successfully misclassified when the target traffic sign was captured with. In addition, we created robust AEs that cause misclassification of images with varying positions and size by adding a common perturbation. We propose a method to reduce the amount of robust AEs perturbation. Our results demonstrated successful misclassification of the captured image with a high attack success rate even if the position and size of the captured image are slightly changed.","PeriodicalId":55003,"journal":{"name":"Ieice Transactions on Fundamentals of Electronics Communications and Computer Sciences","volume":null,"pages":null},"PeriodicalIF":0.4000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial Examples created by Fault Injection Attack on Image Sensor Interface\",\"authors\":\"Tatsuya OYAMA, Kota YOSHIDA, Shunsuke OKURA, Takeshi FUJINO\",\"doi\":\"10.1587/transfun.2023cip0025\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image-classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported, which are a threat to traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack method for generating a noise area on images by superimposing an electrical signal on the mobile industry processor interface and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. Therefore, we propose a misclassification attack method n DNNs by creating AEs that include small perturbations to multiple places on the image by the fault injection. The perturbation position for AEs is pre-calculated in advance against the target traffic-sign image, which will be captured on future driving. 
With 5.2% to 5.5% of a specific image on the simulation, the perturbation that induces misclassification to the target label was calculated. As the experimental results, we confirmed that the traffic-sign-recognition DNN on a Raspberry Pi was successfully misclassified when the target traffic sign was captured with. In addition, we created robust AEs that cause misclassification of images with varying positions and size by adding a common perturbation. We propose a method to reduce the amount of robust AEs perturbation. Our results demonstrated successful misclassification of the captured image with a high attack success rate even if the position and size of the captured image are slightly changed.\",\"PeriodicalId\":55003,\"journal\":{\"name\":\"Ieice Transactions on Fundamentals of Electronics Communications and Computer Sciences\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.4000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ieice Transactions on Fundamentals of Electronics Communications and Computer Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1587/transfun.2023cip0025\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ieice Transactions on Fundamentals of Electronics Communications and Computer Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1587/transfun.2023cip0025","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack on image-classification systems based on deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported and pose a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack that generates a noise area on captured images by superimposing an electrical signal on the Mobile Industry Processor Interface (MIPI) and showed that it can produce a single adversarial mark that triggers a backdoor attack on the input image. Building on this, we propose a misclassification attack on DNNs that uses fault injection to create AEs consisting of small perturbations at multiple positions in the image. The perturbation positions are pre-calculated against the target traffic-sign image that will be captured during future driving. In simulation, a perturbation covering 5.2% to 5.5% of a specific image was calculated that induces misclassification to the target label. Experimentally, we confirmed that a traffic-sign-recognition DNN running on a Raspberry Pi misclassified the target traffic sign when it was captured with the injected perturbation. In addition, we created robust AEs that cause misclassification of images captured at varying positions and sizes by adding a common perturbation, and we propose a method to reduce the amount of perturbation required for these robust AEs. Our results demonstrate misclassification of the captured image with a high attack success rate even when the position and size of the captured sign change slightly.
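As a rough illustration of the simulation step described above, the sketch below optimizes a targeted perturbation that is confined to a fixed set of pixel positions covering roughly 5% of the image. This is a minimal PyTorch sketch under assumed inputs, not the authors' exact algorithm: `model`, `image`, `mask`, and `target_label` stand for a traffic-sign classifier, a captured sign image, a binary mask of the positions the fault injection may disturb, and the attacker's chosen class.

```python
# Minimal sketch (assumed PyTorch setup, not the paper's exact algorithm):
# optimize a targeted perturbation restricted to a fixed set of pixel
# positions covering roughly 5% of the image.
import torch
import torch.nn.functional as F

def masked_targeted_ae(model, image, mask, target_label, steps=200, lr=0.05):
    """image: (1, 3, H, W) tensor in [0, 1]; mask: (1, 1, H, W) binary tensor
    marking the pixels the fault injection is allowed to disturb."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_label])
    model.eval()
    for _ in range(steps):
        adv = torch.clamp(image + delta * mask, 0.0, 1.0)
        # Push the prediction toward the attacker's target class.
        loss = F.cross_entropy(model(adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(image + delta.detach() * mask, 0.0, 1.0)
```

The non-zero positions of the resulting perturbation are what the fault-injection hardware would then have to reproduce on the sensor interface; how that mapping to the MIPI signal is done is specific to the paper's setup and is not reflected here.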
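For the robust AEs mentioned above, the common perturbation must survive small changes in where and how large the sign appears in the captured frame. One standard way to approximate this in simulation, in the spirit of expectation over transformation and not necessarily the paper's exact procedure, is to optimize the shared perturbation over randomly shifted and rescaled views of the image:

```python
# Sketch of hardening the masked perturbation against small shifts and scale
# changes of the captured sign (EOT-style; the transformation ranges below
# are illustrative assumptions, not values from the paper).
import random
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def robust_masked_ae(model, image, mask, target_label, steps=300, lr=0.05):
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_label])
    model.eval()
    for _ in range(steps):
        adv = torch.clamp(image + delta * mask, 0.0, 1.0)
        # Mimic a slightly different capture: small translation and rescale.
        dx, dy = random.randint(-4, 4), random.randint(-4, 4)
        scale = random.uniform(0.9, 1.1)
        view = TF.affine(adv, angle=0.0, translate=[dx, dy],
                         scale=scale, shear=0.0)
        loss = F.cross_entropy(model(view), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach() * mask
```

Averaging the loss over these random views encourages a single perturbation that keeps working when the capture geometry drifts, at the cost of a larger or stronger perturbation than the single-view case.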
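The abstract does not say how the amount of perturbation is reduced for the robust AEs, so the following is only a speculative illustration of one naive possibility: greedily zero out the weakest perturbed pixels as long as the targeted misclassification still holds.

```python
# Speculative illustration only; this is not the paper's reduction method.
# Greedily drop the weakest perturbed pixels while the attack still works.
# Note: this runs one forward pass per perturbed pixel, so it is slow.
import torch

@torch.no_grad()
def prune_perturbation(model, image, delta, target_label):
    delta = delta.clone()
    n, c, h, w = delta.shape
    strength = delta.abs().sum(dim=1).flatten()   # per-pixel magnitude
    for idx in torch.argsort(strength):           # weakest pixels first
        if strength[idx] == 0:
            continue
        flat = delta.view(n, c, -1)
        saved = flat[..., idx].clone()
        flat[..., idx] = 0.0                      # try removing this pixel
        adv = torch.clamp(image + delta, 0.0, 1.0)
        if model(adv).argmax(dim=1).item() != target_label:
            flat[..., idx] = saved                # still needed; keep it
    return delta
```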
Source Journal
CiteScore: 1.10
Self-citation rate: 20.00%
Articles published: 137
Review time: 3.9 months
Journal Description
Includes reports on research, developments, and examinations performed by the Society's members in the specific fields shown in the category list, the contents of which may advance the development of science and industry:
(1) Reports on new theories, experiments with new contents, or extensions of and supplements to conventional theories and experiments.
(2) Reports on the development of measurement technology and various applied technologies.
(3) Reports on the planning, design, manufacture, testing, or operation of facilities, machinery, parts, materials, etc.
(4) Presentation of new methods, suggestion of new angles, ideas, systematization, software, or any new facts regarding the above.
Latest Articles in This Journal
Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles
Detection of False Data Injection Attacks in Distributed State Estimation of Power Networks
An Accuracy Reconfigurable Vector Accelerator based on Approximate Logarithmic Multipliers for Energy-Efficient Computing
Solving the Problem of Blockwise Isomorphism of Polynomials with Circulant Matrices
Short DL-based Blacklistable Ring Signatures from DualRing