A Novel Adversarial Detection Method for UAV Vision Systems via Attribution Maps

Drones · IF 4.4 · JCR Q1 (Remote Sensing) · CAS Region 2 (Earth Science) · Pub Date: 2023-12-07 · DOI: 10.3390/drones7120697
Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan
Citations: 0

Abstract

With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoTs), UAV-assisted IoTs has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoTs systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks like C&W and others. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their unique attribution maps and then training a classifier on these maps. Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing the state-of-the-art methods.
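The pipeline the abstract describes — extract an attribution map for each input, then train a classifier to separate clean maps from adversarial ones — can be illustrated with a toy sketch. This is a minimal stand-in, not the paper's implementation: the linear scorer, the gradient-times-input attribution, the FGSM-style perturbation, and the logistic-regression detector are all hypothetical simplifications (the paper targets deep UAV vision models and evaluates on ImageNet).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a vision model: a linear scorer f(x) = w . x
# (hypothetical; the paper uses deep vision models).
d = 64
w = rng.normal(size=d)

def attribution_map(x, w):
    # Gradient-times-input attribution: for the linear scorer the
    # gradient of f w.r.t. x is w, so the map is w * x elementwise.
    return w * x

# Clean inputs and FGSM-style adversarial counterparts: the
# perturbation steps along the sign of the gradient (here, sign(w)).
X_clean = rng.normal(size=(200, d))
eps = 0.5
X_adv = X_clean + eps * np.sign(w)

# The perturbation shifts every attribution map by eps * |w|,
# which is the signal the detector learns to pick up.
A_clean = np.array([attribution_map(x, w) for x in X_clean])
A_adv = np.array([attribution_map(x, w) for x in X_adv])

# Detector: logistic regression on flattened attribution maps
# (label 0 = clean, 1 = adversarial).
X = np.vstack([A_clean, A_adv])
y = np.concatenate([np.zeros(len(A_clean)), np.ones(len(A_adv))])

def train_logreg(X, y, lr=0.1, steps=500):
    wd = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ wd + b)))  # sigmoid
        g = p - y                                 # dLoss/dlogit
        wd -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return wd, b

wd, b = train_logreg(X, y)
pred = (X @ wd + b) > 0
acc = (pred == y).mean()
```

Because the adversarial perturbation leaves a consistent signature in the attribution maps, even this crude detector separates the two classes well on the toy data; the paper's contribution is showing that a classifier trained on attribution maps of a real vision model achieves this against strong attacks such as C&W.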