{"title":"Exploring Adversarial Attacks and Defenses in Deep Learning","authors":"Arjun Thangaraju, Cory E. Merkel","doi":"10.1109/CONECCT55679.2022.9865841","DOIUrl":null,"url":null,"abstract":"The paper aims to take a deep dive into one of the emerging fields in Deep Learning namely, Adversarial attacks and defenses. We will first see what we mean when we talk of Adversarial examples and learn why they are important? After this, we will explore different types of Adversarial attacks and defenses. Here, we specifically tackle the cases associated with Image Classification. This is done by delving into their respective concepts along with understanding the tools and frameworks required to execute them. The implementation of the FGSM (Fast Gradient Signed Method) attack and the effectiveness of the Adversarial training defense to combat it are discussed. This is done by first analyzing the drop in accuracy from performing the FGSM attack on a MNIST CNN (Convolutional Neural Network) classifier followed by an improvement in the same accuracy metric by defending against the attack using the Adversarial training defense.","PeriodicalId":380005,"journal":{"name":"2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CONECCT55679.2022.9865841","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
This paper takes a deep dive into an emerging field in deep learning: adversarial attacks and defenses. We first explain what adversarial examples are and why they are important. We then explore different types of adversarial attacks and defenses, focusing specifically on image classification, by delving into their underlying concepts and the tools and frameworks required to execute them. We discuss the implementation of the FGSM (Fast Gradient Sign Method) attack and the effectiveness of adversarial training as a defense against it. This is done by first analyzing the drop in accuracy caused by performing the FGSM attack on an MNIST CNN (Convolutional Neural Network) classifier, and then showing the improvement in the same accuracy metric when the attack is countered with adversarial training.
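The FGSM recipe the abstract describes (perturb the input by epsilon times the sign of the loss gradient with respect to the input) can be sketched independently of any framework. Below is a minimal NumPy illustration on a toy logistic-regression classifier with hypothetical weights; the paper applies the same step to an MNIST CNN, where the gradient would come from backpropagation rather than a closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Return the FGSM adversarial example x + epsilon * sign(dL/dx).

    For a logistic model with binary cross-entropy loss L, the gradient
    of L with respect to the input x is (sigmoid(w.x + b) - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)   # model's predicted probability
    grad_x = (p - y) * w            # dL/dx in closed form
    return x + epsilon * np.sign(grad_x)

# Toy example (hypothetical weights): a confidently classified point
# is pushed toward the decision boundary to increase the loss.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # true label 1; score w.x + b = 1.5
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, epsilon=0.3)
print(x_adv)                        # → [0.7 0.8]; new score 0.6 < 1.5
```

Each coordinate moves by exactly ±epsilon, which is what makes FGSM a single-step, L-infinity-bounded attack; adversarial training, discussed next in the paper, simply mixes such perturbed examples into the training set.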