Authors: Kenneth H. Chan, Betty H. C. Cheng
DOI: 10.1007/s10515-024-00470-9
Journal: Automated Software Engineering, vol. 32, no. 1
Published: 2024-11-06 (Journal Article)
URL: https://link.springer.com/article/10.1007/s10515-024-00470-9
EvoAttack: suppressive adversarial attacks against object detection models using evolutionary search
State-of-the-art deep neural networks are increasingly used in image classification, recognition, and detection tasks for a range of real-world applications. Moreover, many of these applications are safety-critical, where the failure of the system may cause serious harm, injuries, or even deaths. Adversarial examples are inputs that have been maliciously modified, in ways difficult to detect, such that machine learning models fail to classify them correctly. While a number of evolutionary search-based approaches have been developed to generate adversarial examples for image classification problems, evolutionary search-based attacks against object detection algorithms remain largely unexplored. This paper describes EvoAttack, which demonstrates how evolutionary search-based techniques can be used as a black-box, model- and data-agnostic approach to attack state-of-the-art object detection algorithms (e.g., RetinaNet, Faster R-CNN, and YoloV5). A proof-of-concept implementation is provided to demonstrate how evolutionary search can generate adversarial examples that existing models fail to correctly process, which can be used to assess model robustness against such attacks. In contrast to other adversarial example approaches that cause misclassification or incorrect labeling of objects, EvoAttack applies minor perturbations to generate adversarial examples that suppress the ability of object detection algorithms to detect objects. We applied EvoAttack to popular benchmark datasets for autonomous terrestrial and aerial vehicles.
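To make the idea of a black-box, model-agnostic evolutionary attack concrete, the sketch below shows a minimal genetic loop that evolves small, bounded per-pixel perturbations to minimize a detector's confidence score. This is only an illustration of the general technique, not the paper's actual EvoAttack algorithm; the `detect_score` callable is a hypothetical stand-in for the summed detection confidence returned by a real object detector (lower means more suppressed), and the image is flattened to a list of pixel intensities for simplicity.

```python
import random

def evo_attack(image, detect_score, pop_size=20, generations=50,
               epsilon=8.0, mutation_rate=0.05, seed=0):
    """Evolve a bounded perturbation (|delta| <= epsilon per pixel) that
    minimizes the black-box score detect_score(perturbed_image)."""
    rng = random.Random(seed)
    n = len(image)

    def apply(pert):
        # Add the perturbation and clamp pixels to the valid [0, 255] range.
        return [min(255.0, max(0.0, x + d)) for x, d in zip(image, pert)]

    def mutate(pert):
        # Resample a few genes; resampling keeps each delta within [-eps, eps].
        return [rng.uniform(-epsilon, epsilon) if rng.random() < mutation_rate else d
                for d in pert]

    def crossover(a, b):
        # Uniform crossover: each gene comes from either parent.
        return [da if rng.random() < 0.5 else db for da, db in zip(a, b)]

    # Initial population: small random perturbations.
    pop = [[rng.uniform(-epsilon, epsilon) for _ in range(n)]
           for _ in range(pop_size)]

    for _ in range(generations):
        # Rank by detector confidence on the perturbed image (ascending).
        pop.sort(key=lambda p: detect_score(apply(p)))
        elite = pop[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children

    best = min(pop, key=lambda p: detect_score(apply(p)))
    return apply(best)
```

In a real attack, `detect_score` would wrap inference on a trained model (e.g., summing per-box confidences from a RetinaNet or YoloV5 forward pass), and the loop terminates once all objects fall below the detection threshold.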
Journal introduction:
This journal details research, tutorial papers, surveys, and accounts of significant industrial experience in the foundations, techniques, tools, and applications of automated software engineering technology. This includes the study of techniques for constructing, understanding, adapting, and modeling software artifacts and processes.
Coverage in Automated Software Engineering examines both automatic systems and collaborative systems as well as computational models of human software engineering activities. In addition, it presents knowledge representations and artificial intelligence techniques applicable to automated software engineering, and formal techniques that support or provide theoretical foundations. The journal also includes reviews of books, software, conferences and workshops.