{"title":"图像分类黑盒对抗攻击综述","authors":"","doi":"10.1016/j.neucom.2024.128512","DOIUrl":null,"url":null,"abstract":"<div><p>In recent years, deep learning-based image classification models have been extensively studied in academia and widely applied in industry. However, deep learning is inherently vulnerable to adversarial attacks, posing security threats to image classification models in security sensitive field, such as face recognition, medical image diagnosis and traffic sign recognition. Especially for black-box adversarial attacks, which can be carried out even without remote model information, the security issues facing deep learning are even more serious. Despite more and more attentions on this issue, existing reviews always analyze black-box adversarial attack only from one perspective, focus on only a certain application field. This paper systematically reviews and discusses existing progress, demonstrating black-box adversarial attacks from multiple perspectives and systematically classifying existing methods. Besides, we also sort out and categorize the application of current black-box adversarial attacks and identify several promising directions for future research.</p></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A review of black-box adversarial attacks on image classification\",\"authors\":\"\",\"doi\":\"10.1016/j.neucom.2024.128512\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In recent years, deep learning-based image classification models have been extensively studied in academia and widely applied in industry. However, deep learning is inherently vulnerable to adversarial attacks, posing security threats to image classification models in security sensitive field, such as face recognition, medical image diagnosis and traffic sign recognition. Especially for black-box adversarial attacks, which can be carried out even without remote model information, the security issues facing deep learning are even more serious. Despite more and more attentions on this issue, existing reviews always analyze black-box adversarial attack only from one perspective, focus on only a certain application field. This paper systematically reviews and discusses existing progress, demonstrating black-box adversarial attacks from multiple perspectives and systematically classifying existing methods. 
Besides, we also sort out and categorize the application of current black-box adversarial attacks and identify several promising directions for future research.</p></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224012839\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224012839","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A review of black-box adversarial attacks on image classification
In recent years, deep learning-based image classification models have been extensively studied in academia and widely applied in industry. However, deep learning is inherently vulnerable to adversarial attacks, which poses security threats to image classification models in security-sensitive fields such as face recognition, medical image diagnosis, and traffic sign recognition. The threat is especially serious for black-box adversarial attacks, which can be carried out even without access to the remote model's internals. Despite growing attention to this issue, existing reviews tend to analyze black-box adversarial attacks from a single perspective or to focus on a single application field. This paper systematically reviews and discusses existing progress, examining black-box adversarial attacks from multiple perspectives and systematically classifying existing methods. In addition, we organize and categorize the applications of current black-box adversarial attacks and identify several promising directions for future research.
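To make the black-box threat model concrete, the sketch below shows a minimal score-based attack in the spirit of SimBA-style methods typically covered by such surveys: the attacker only queries the deployed model for class probabilities and greedily keeps single-pixel perturbations that lower the confidence of the true label. The `query_model` callable and all hyperparameters here are illustrative assumptions, not an algorithm taken from this paper.

```python
import numpy as np

def simba_like_attack(x, true_label, query_model,
                      epsilon=0.05, max_steps=1000, seed=0):
    """Minimal score-based black-box attack sketch (hypothetical).

    Assumes `query_model(image)` returns a probability vector over
    classes; gradients and model internals are never accessed.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    probs = query_model(x_adv)
    # Visit pixels in a random order; each step spends up to two queries.
    coords = rng.permutation(x.size)
    for coord in coords[:max_steps]:
        delta = np.zeros(x.size)
        delta[coord] = epsilon
        delta = delta.reshape(x.shape)
        for sign in (1.0, -1.0):  # try adding, then subtracting the perturbation
            candidate = np.clip(x_adv + sign * delta, 0.0, 1.0)
            cand_probs = query_model(candidate)
            if cand_probs[true_label] < probs[true_label]:
                x_adv, probs = candidate, cand_probs  # keep a useful perturbation
                break
        if probs.argmax() != true_label:  # model fooled: stop early
            break
    return x_adv
```

Decision-based attacks, which observe only the predicted label, and transfer-based attacks, which craft perturbations on a local surrogate model, operate under the same black-box constraint with even less feedback from the target.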
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.