On the Robustness of Deep Learning Models to Universal Adversarial Attack
Rezaul Karim, Md. Amirul Islam, N. Mohammed, Neil D. B. Bruce
2018 15th Conference on Computer and Robot Vision (CRV), May 2018. DOI: 10.1109/CRV.2018.00018
Citations: 4
Abstract
In recent years, deep learning has made significant advances on high-level vision tasks (e.g., image classification, object detection, and semantic segmentation) and has met with a great deal of success. State-of-the-art methods that show impressive results on recognition tasks typically share a common structure: stage-wise encoding of the image followed by a generic classifier. However, these architectures have been shown to be vulnerable to adversarial perturbations, which may undermine the security of systems built on deep neural networks. In this work, we first present a rigorous evaluation of adversarial attacks on recent deep learning models for two different high-level tasks (image classification and semantic segmentation). We then propose a model- and dataset-independent approach to generating adversarial perturbations and demonstrate the transferability of these perturbations across different datasets and tasks. Moreover, we analyze the effect of different network architectures, which will aid future efforts to understand and defend against adversarial perturbations. We perform comprehensive experiments on several standard image classification and segmentation datasets to demonstrate the effectiveness of our proposed approach.
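As a point of reference for the kind of adversarial perturbation evaluated in this work, the sketch below shows a minimal single-image gradient-sign attack (FGSM) in PyTorch. This is only an illustrative baseline, not the paper's proposed universal, model- and dataset-independent attack; the ResNet-18 model, the epsilon value, and the random input are placeholder assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_perturbation(model, image, label, epsilon=8.0 / 255.0):
    # Return an adversarially perturbed copy of `image` (pixels assumed in [0, 1]).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Placeholder usage: an untrained ResNet-18 and a random image stand in for a real setup.
model = models.resnet18().eval()
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)  # treat the model's own prediction as the label
x_adv = fgsm_perturbation(model, x, y)
print("prediction changed:", bool((model(x_adv).argmax(dim=1) != y).item()))

Unlike this per-image attack, a universal perturbation is a single noise pattern computed once and added to many unseen images, which is what makes its transferability across datasets and tasks a meaningful property to study.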