A Comparative Study on Adversarial Attacks and Defense Mechanisms

Bhavana Kumbar, Ankita Mane, Varsha Chalageri, Shashidhara B. Vyakaranal, S. Meena, Sunil V. Gurlahosur, Uday Kulkarni
{"title":"对抗性攻击与防御机制的比较研究","authors":"Bhavana Kumbar, Ankita Mane, Varsha Chalageri, Shashidhara B. Vyakaranal, S. Meena, Sunil V. Gurlahosur, Uday Kulkarni","doi":"10.1109/CONIT55038.2022.9848088","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) have exemplified exceptional success in solving various complicated tasks that were difficult to solve in the past using conventional machine learning methods. Deep learning has become an inevitable part of several applications in the present scenarios. However., the latest works have found that the DNNs are unfortified against the prevailing adversarial attacks. The addition of imperceptible perturbations to the inputs causes the neural networks to fail and predict incorrect outputs. In practice., adversarial attacks create a significant challenge to the success of deep learning as they aim to deteriorate the performance of the classifiers by fooling the deep learning algorithms. This paper provides a comprehensive comparative study on the common adversarial attacks and countermeasures against them and also analyzes their behavior on standard datasets such as MNIST and CIFAR10 and also on a custom dataset that spans over 1000 images consisting of 5 classes. To mitigate the adversarial effects on deep learning models., we provide solutions against the conventional adversarial attacks that reduce 70% accuracy. It results in making the deep learning models more resilient against adversaries.","PeriodicalId":270445,"journal":{"name":"2022 2nd International Conference on Intelligent Technologies (CONIT)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparative Study on Adversarial Attacks and Defense Mechanisms\",\"authors\":\"Bhavana Kumbar, Ankita Mane, Varsha Chalageri, Shashidhara B. Vyakaranal, S. Meena, Sunil V. Gurlahosur, Uday Kulkarni\",\"doi\":\"10.1109/CONIT55038.2022.9848088\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Networks (DNNs) have exemplified exceptional success in solving various complicated tasks that were difficult to solve in the past using conventional machine learning methods. Deep learning has become an inevitable part of several applications in the present scenarios. However., the latest works have found that the DNNs are unfortified against the prevailing adversarial attacks. The addition of imperceptible perturbations to the inputs causes the neural networks to fail and predict incorrect outputs. In practice., adversarial attacks create a significant challenge to the success of deep learning as they aim to deteriorate the performance of the classifiers by fooling the deep learning algorithms. This paper provides a comprehensive comparative study on the common adversarial attacks and countermeasures against them and also analyzes their behavior on standard datasets such as MNIST and CIFAR10 and also on a custom dataset that spans over 1000 images consisting of 5 classes. To mitigate the adversarial effects on deep learning models., we provide solutions against the conventional adversarial attacks that reduce 70% accuracy. 
It results in making the deep learning models more resilient against adversaries.\",\"PeriodicalId\":270445,\"journal\":{\"name\":\"2022 2nd International Conference on Intelligent Technologies (CONIT)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 2nd International Conference on Intelligent Technologies (CONIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CONIT55038.2022.9848088\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Intelligent Technologies (CONIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CONIT55038.2022.9848088","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep Neural Networks (DNNs) have shown exceptional success in solving various complicated tasks that were difficult to solve in the past with conventional machine learning methods, and deep learning has become an integral part of many present-day applications. However, recent work has found that DNNs are vulnerable to prevailing adversarial attacks: adding imperceptible perturbations to the inputs causes neural networks to fail and predict incorrect outputs. In practice, adversarial attacks pose a significant challenge to the success of deep learning, as they aim to degrade classifier performance by fooling the deep learning algorithms. This paper provides a comprehensive comparative study of common adversarial attacks and countermeasures against them, and analyzes their behavior on standard datasets such as MNIST and CIFAR10, as well as on a custom dataset of over 1000 images across 5 classes. To mitigate adversarial effects on deep learning models, we provide solutions against conventional adversarial attacks, which reduce accuracy by 70%, making the deep learning models more resilient against adversaries.
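As an illustration of the perturbation-based attacks the abstract refers to, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the most common adversarial attacks. The abstract does not name the specific attacks studied, so FGSM is an assumption for illustration; `model`, `x`, `y`, and the `epsilon` budget are placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    x: input batch (pixel values assumed in [0, 1]); y: true labels;
    epsilon: perturbation budget (hypothetical, e.g. 0.1 for MNIST).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; taking only the sign
    # bounds each pixel's change by epsilon, keeping it imperceptible.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```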
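The accuracy degradation the paper reports (a 70% drop) can be measured by evaluating the same classifier on clean and perturbed test batches. A minimal sketch under that assumption, reusing the `fgsm_attack` helper above; the data loader and model are assumed, and this is not the authors' evaluation code:

```python
def accuracy_under_attack(model, loader, epsilon):
    """Compare clean vs. adversarial accuracy on a test set (illustrative)."""
    clean_correct, adv_correct, total = 0, 0, 0
    model.eval()
    for x, y in loader:
        # The attack needs gradients, so it runs outside torch.no_grad().
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, adv_correct / total
```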
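The abstract does not specify which countermeasures are proposed; adversarial training is the most conventional defense, so the sketch below assumes it for illustration. Each batch is augmented with adversarial examples crafted on the fly against the current model, which typically restores much of the lost accuracy:

```python
def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One training step on a mix of clean and adversarial inputs (illustrative)."""
    model.train()
    # Craft adversarial examples against the current parameters.
    x_adv = fgsm_attack(model, x, y, epsilon)
    # Clear gradients accumulated while crafting the attack.
    optimizer.zero_grad()
    # Equal weighting of clean and adversarial losses is a common choice,
    # not one confirmed by the paper.
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```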