{"title":"针对深度神经网络视觉解释器的黑盒对抗性攻击","authors":"Yudai Hirose, Satoshi Ono","doi":"10.23919/MVA57639.2023.10215758","DOIUrl":null,"url":null,"abstract":"With the rapid development of deep neural networks (DNNs), eXplainable AI, which provides a basis for prediction on inputs, has become increasingly important. In addition, DNNs have a vulnerability called an Adversarial Example (AE), which can cause incorrect output by applying special perturbations to inputs. Potential vulnerabilities can also exist in image interpreters such as GradCAM, necessitating their investigation, as these vulnerabilities could potentially result in misdiagnosis within medical imaging. Therefore, this study proposes a black-box adversarial attack method that misleads the image interpreter using Sep-CMA-ES. The proposed method deceptively shifts the focus area of the image interpreter to a different location from that of the original image while maintaining the same predictive labels.","PeriodicalId":338734,"journal":{"name":"2023 18th International Conference on Machine Vision and Applications (MVA)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks\",\"authors\":\"Yudai Hirose, Satoshi Ono\",\"doi\":\"10.23919/MVA57639.2023.10215758\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the rapid development of deep neural networks (DNNs), eXplainable AI, which provides a basis for prediction on inputs, has become increasingly important. In addition, DNNs have a vulnerability called an Adversarial Example (AE), which can cause incorrect output by applying special perturbations to inputs. Potential vulnerabilities can also exist in image interpreters such as GradCAM, necessitating their investigation, as these vulnerabilities could potentially result in misdiagnosis within medical imaging. Therefore, this study proposes a black-box adversarial attack method that misleads the image interpreter using Sep-CMA-ES. 
The proposed method deceptively shifts the focus area of the image interpreter to a different location from that of the original image while maintaining the same predictive labels.\",\"PeriodicalId\":338734,\"journal\":{\"name\":\"2023 18th International Conference on Machine Vision and Applications (MVA)\",\"volume\":\"161 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 18th International Conference on Machine Vision and Applications (MVA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/MVA57639.2023.10215758\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 18th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA57639.2023.10215758","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
With the rapid development of deep neural networks (DNNs), eXplainable AI, which provides the rationale behind a model's predictions, has become increasingly important. DNNs also suffer from a vulnerability known as the Adversarial Example (AE): an input with carefully crafted perturbations that causes incorrect output. Similar vulnerabilities may exist in image interpreters such as GradCAM and warrant investigation, since in domains such as medical imaging they could lead to misdiagnosis. This study therefore proposes a black-box adversarial attack that misleads the image interpreter using Sep-CMA-ES (separable covariance matrix adaptation evolution strategy). The proposed method shifts the interpreter's focus area to a location different from that of the original image while leaving the predicted label unchanged.
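The abstract describes the attack only at a high level. A minimal sketch of the general idea, assuming the pycma library's separable CMA-ES mode (`'CMA_diagonal': True`) and hypothetical `predict_label` / `saliency_map` hooks standing in for the black-box classifier and the interpreter, might look like the following; this illustrates the setup, not the authors' implementation.

```python
import numpy as np
import cma  # pip install cma

H = W = 32  # toy image size (flattened to a 1024-dim search space)

def predict_label(x):
    # Hypothetical black-box classifier hook: only a class index is observable.
    return int(x.sum() > H * W / 2)

def saliency_map(x):
    # Hypothetical interpreter hook (stands in for GradCAM): a heatmap over the image.
    return np.abs(x - 0.5).reshape(H, W)

def fitness(delta, x, orig_label, orig_peak, eps=8 / 255):
    # Bound the perturbation and keep pixel values valid.
    x_adv = np.clip(x + np.clip(delta, -eps, eps), 0.0, 1.0)
    # Hard constraint: the predicted label must not change.
    if predict_label(x_adv) != orig_label:
        return 1e6
    # Minimize the negative distance between the original and adversarial
    # saliency peaks, i.e. push the focus area as far away as possible.
    peak = np.unravel_index(saliency_map(x_adv).argmax(), (H, W))
    return -np.hypot(peak[0] - orig_peak[0], peak[1] - orig_peak[1])

rng = np.random.default_rng(0)
x = rng.random(H * W)
orig_label = predict_label(x)
orig_peak = np.unravel_index(saliency_map(x).argmax(), (H, W))

# 'CMA_diagonal': True restricts the covariance matrix to its diagonal
# (separable CMA-ES), keeping per-iteration cost linear in the dimension.
es = cma.CMAEvolutionStrategy(
    np.zeros(H * W), 0.01,
    {'CMA_diagonal': True, 'maxiter': 50, 'verbose': -9},
)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(np.asarray(c), x, orig_label, orig_peak)
                         for c in candidates])
adv_image = np.clip(x + np.clip(es.result.xbest, -8 / 255, 8 / 255), 0.0, 1.0)
```

The separable (diagonal-covariance) variant is the usual choice at this scale: full-covariance CMA-ES costs quadratic time and memory in the search dimension, which is prohibitive for pixel-level perturbations, while the attack itself needs only label queries and interpreter heatmaps, consistent with the black-box setting the abstract describes.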