Physical Adversarial Attack Scheme on Object Detectors using 3D Adversarial Object

Abeer Toheed, M. Yousaf, Rabnawaz, A. Javed
DOI: 10.1109/ICoDT255437.2022.9787422
Published in: 2022 2nd International Conference on Digital Futures and Transformative Technologies (ICoDT2), 2022-05-24
Citations: 1

Abstract

Adversarial attacks are frequently used to exploit machine learning models, including deep neural networks (DNNs), during either the training or the testing stage. A DNN under such an attack makes false predictions. Digital adversarial attacks, however, do not transfer directly to the physical world, and attacking object detection is more difficult than attacking image classification. This paper presents a physical adversarial attack on object detectors using 3D adversarial objects. The proposed methodology overcomes a key constraint of 2D adversarial patches, which work only from certain viewpoints. We map an adversarial texture onto a mesh to create a 3D adversarial object; these objects come in various shapes and sizes and, unlike adversarial patches, can be moved from one place to another. Experimental results show that our 3D adversarial objects are free from the viewpoint constraint and mount a successful attack on object detection. We used the ShapeNet dataset for different vehicle models, created the 3D objects in Blender 2.93 [1], and incorporated different HDR images to create a virtual physical environment. We targeted Faster R-CNN and YOLO models pre-trained on the COCO dataset as the target DNNs. Experimental results demonstrate that the proposed approach successfully fooled these object detectors.
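The paper itself includes no code, but the core idea it describes can be sketched abstractly: optimize texture parameters so that a detector's expected confidence, averaged over many viewpoints, goes down (the reason a textured 3D object generalizes where a 2D patch does not). The sketch below is a toy illustration under stated assumptions, not the authors' pipeline: the "detector" is a made-up linear-plus-sigmoid stand-in for the real render-then-detect step (Blender rendering followed by Faster R-CNN/YOLO scoring), and all function and variable names are hypothetical.

```python
import math
import random

def confidence(texture, view):
    # Toy stand-in for render(mesh, texture, view) -> detector confidence.
    # A real pipeline would render the textured mesh from this viewpoint
    # and read the detector's objectness/class score.
    s = sum(w * t for w, t in zip(view, texture))
    return 1.0 / (1.0 + math.exp(-s))

def attack(texture, views, steps=200, lr=0.5):
    # Minimize the MEAN confidence over many sampled viewpoints, so the
    # adversarial texture works from any angle (Expectation-over-
    # Transformation style), unlike a viewpoint-bound 2D patch.
    for _ in range(steps):
        grads = [0.0] * len(texture)
        for view in views:
            c = confidence(texture, view)
            # d sigmoid(s)/d t_i = c * (1 - c) * view_i
            for i, v in enumerate(view):
                grads[i] += c * (1.0 - c) * v / len(views)
        texture = [t - lr * g for t, g in zip(texture, grads)]
    return texture

random.seed(0)
views = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
tex = [0.5] * 4                      # initial (benign) texture parameters
adv = attack(tex, views)             # optimized adversarial texture
before = sum(confidence(tex, v) for v in views) / len(views)
after = sum(confidence(adv, v) for v in views) / len(views)
print(f"mean detection confidence: before={before:.3f} after={after:.3f}")
```

The averaging over `views` is the key design choice: optimizing against a single viewpoint reproduces the 2D-patch limitation, whereas averaging over sampled viewpoints forces the texture to suppress detection from all of them.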