Ensemble Learning Methods of Adversarial Attacks and Defenses in Computer Vision: Recent Progress

Zhiping Lu, Hongchao Hu, Shumin Huo, Shuyi Li
DOI: 10.1109/IEEECONF52377.2022.10013347
Venue: 2021 International Conference on Advanced Computing and Endogenous Security
Published: 2022-04-21
Citations: 2

Abstract

Artificial intelligence (AI) has developed rapidly in recent decades and is widely used in many fields, such as natural language processing, speech recognition, and especially computer vision (CV). However, the endogenous security problems of AI models themselves have led to the emergence of adversarial examples (AEs), which can fool AI models and seriously degrade classification. In recent years, research has shown that ensemble learning methods are effective in both generating and detecting AEs. By generating AEs against an ensemble of models, attackers can mount stronger attacks with better transferability to the target models. On the other hand, ensemble learning methods can also be used in defenses to improve robustness against AEs. In this paper, we focus on ensemble learning methods in the CV field. We first introduce classic adversarial attack and defense technologies. Then, we survey ensemble learning methods in the adversarial setting and divide them into three types of frameworks (parallel, sequential, and hybrid). To the best of our knowledge, we are the first to analyze recently proposed attacks and defenses from the perspective of these ensemble frameworks. Additionally, we summarize the advantages and disadvantages of these ensemble methods and frameworks. Finally, we give some suggestions for using ensemble frameworks and put forward several opinions on future research directions in this field from the aspects of attacks, defenses, and evaluation.
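The paper itself gives no code, but the ensemble-attack idea it summarizes (averaging gradients over several surrogate models so the resulting AE transfers better) can be sketched minimally in numpy. Everything below is illustrative, not from the paper: the linear softmax surrogates, the `ensemble_fgsm` helper, and the step size `eps` are all hypothetical choices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_grad(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # for a linear softmax model with weight matrix W (classes x features).
    p = softmax(W @ x)
    onehot = np.zeros(W.shape[0])
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def ensemble_fgsm(models, x, y, eps):
    # Average the input gradients of all surrogate models, then take one
    # signed step -- the core idea behind ensemble-based transfer attacks.
    g = np.mean([input_grad(W, x, y) for W in models], axis=0)
    return x + eps * np.sign(g)

rng = np.random.default_rng(0)
surrogates = [rng.normal(size=(3, 4)) for _ in range(3)]  # 3 toy surrogate models
x = rng.normal(size=4)                                    # clean input
x_adv = ensemble_fgsm(surrogates, x, y=0, eps=0.1)        # perturbed input
```

Because the perturbation is `eps * sign(gradient)`, each coordinate of `x_adv` moves by at most `eps`, which is what makes the attack imperceptible under an L-infinity budget while still fooling models that resemble the surrogate ensemble.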
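On the defense side, the parallel framework the survey describes can likewise be illustrated with a minimal sketch: several base models classify the input independently and the final label is the majority vote, so an AE must transfer to most ensemble members at once. The toy 2-class linear models and the input below are hypothetical, not taken from the paper.

```python
import numpy as np
from collections import Counter

def majority_vote(ensemble, x):
    # Parallel ensemble framework: every base model predicts independently,
    # and the final decision is the most common label among the members.
    votes = [int(np.argmax(W @ x)) for W in ensemble]
    return Counter(votes).most_common(1)[0][0]

# Toy 2-class linear models; two of the three agree on class 1 for this input.
W_a = np.array([[2.0, 0.0], [0.0, 1.0]])
W_b = np.eye(2)
W_c = np.array([[0.0, 5.0], [0.0, 1.0]])
x = np.array([0.0, 1.0])
label = majority_vote([W_a, W_b, W_c], x)  # -> 1 (W_c is outvoted)
```

The robustness gain comes from diversity: if the members fail in different ways, a perturbation crafted against one of them is unlikely to flip the majority.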