{"title":"When AI Fails to See: The Challenge of Adversarial Patches","authors":"Michał Zimoń, Rafał Kasprzyk","doi":"10.5604/01.3001.0054.0092","DOIUrl":null,"url":null,"abstract":"Object detection, a key application of machine learning in image processing, has achieved significant success thanks to advances in deep learning (Girshick et al. 2014). In this paper, we focus on analysing the vulnerability of one of the leading object detection models, YOLOv5x (Redmon et al. 2016), to adversarial attacks using specially designed interference known as “adversarial patches” (Brown et al. 2017). These disturbances, while often visible, have the ability to confuse the model, which can have serious consequences in real world applications. We present a methodology for generating these interferences using various techniques and algorithms, and we analyse their effectiveness in various conditions. In addition, we discuss potential defences against these types of attacks and emphasise the importance of security research in the context of the growing popularity of ML technology (Papernot et al. 2016). Our results indicate the need for further research in this area, bearing in mind the evolution of adversarial attacks and their impact on the future of ML technology.","PeriodicalId":240434,"journal":{"name":"Computer Science and Mathematical Modelling","volume":"29 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science and Mathematical Modelling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5604/01.3001.0054.0092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Object detection, a key application of machine learning in image processing, has achieved significant success thanks to advances in deep learning (Girshick et al. 2014). In this paper, we analyse the vulnerability of one of the leading object detection models, YOLOv5x (Redmon et al. 2016), to adversarial attacks that use specially crafted perturbations known as “adversarial patches” (Brown et al. 2017). These perturbations, although often clearly visible, can confuse the model, which may have serious consequences in real-world applications. We present a methodology for generating such patches using various techniques and algorithms, and we analyse their effectiveness under a range of conditions. In addition, we discuss potential defences against this type of attack and emphasise the importance of security research in the context of the growing popularity of ML technology (Papernot et al. 2016). Our results indicate the need for further research in this area, bearing in mind the evolution of adversarial attacks and their impact on the future of ML technology.
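To make the general idea of patch generation concrete, the sketch below shows one common gradient-based approach in PyTorch: a trainable patch is pasted onto input images and optimized to suppress the detector's confidence scores. This is only an illustration, not the authors' implementation; the `detector` interface, the `apply_patch` placement, and the loss are assumptions, and real attacks typically also randomize the patch's position, scale, and rotation (expectation over transformations) to make it robust in the physical world.

```python
import torch

def apply_patch(images: torch.Tensor, patch: torch.Tensor,
                top: int = 50, left: int = 50) -> torch.Tensor:
    """Paste the trainable patch onto every image at a fixed location.
    Gradients flow through the pasted region back to the patch."""
    patched = images.clone()
    _, ph, pw = patch.shape
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched

def optimize_patch(detector, loader, patch_size=64, steps=500, lr=0.03, device="cuda"):
    """Optimize a patch so that detections overlapping it are suppressed.
    Assumption: detector(images) returns a differentiable tensor of
    per-prediction confidence (objectness) scores of shape (batch, num_preds)."""
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        for images, _ in loader:                         # labels are not needed for a suppression attack
            images = images.to(device)
            patched = apply_patch(images, patch)
            confidences = detector(patched)              # assumed differentiable confidence scores
            loss = confidences.max(dim=1).values.mean()  # push the strongest detections down
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            patch.data.clamp_(0.0, 1.0)                  # keep the patch a valid, printable image
    return patch.detach()
```

In practice the loss can also be targeted (forcing a specific class) rather than purely suppressive, and printability constraints are often added so the optimized patch survives printing and camera capture.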