Adversarial Image Perturbation with a Genetic Algorithm
Rok Kukovec, Špela Pečnik, Iztok Fister Jr., S. Karakatič
Proceedings of the 2021 7th Student Computer Science Research Conference (StuCoSReC), 13 September 2021. DOI: 10.18690/978-961-286-516-0.6
Abstract
The quality of image recognition with neural network models relies heavily on filters and parameters optimized during training. These filters differ from how humans see and recognize the objects around them, and the difference between machine and human recognition yields a noticeable gap that is prone to exploitation. The workings of these algorithms can be compromised with adversarial perturbations of images, where images are modified seemingly imperceptibly, such that humans see little to no difference, but the neural network classifies the motif incorrectly. This paper explores adversarial image modification with an evolutionary algorithm, so that the AlexNet convolutional neural network can no longer recognize previously clear motifs while the image remains perceptually unchanged to humans. The experiment was implemented in Python and tested on the ILSVRC dataset. Original images and their recreated counterparts were compared and contrasted using visual assessment and statistical metrics. The findings suggest that the human eye, without prior knowledge, will hardly spot the difference compared to the original images.
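The abstract describes the approach only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation: a genetic algorithm evolves additive noise masks against a pretrained torchvision AlexNet, rewarding masks that lower the model's confidence in the original class while staying small. All hyperparameters (population size, mutation strength, the fitness weighting) and the helper names are assumptions for illustration, and ImageNet preprocessing is elided.

```python
# Hypothetical sketch of a GA-based adversarial perturbation against AlexNet.
# Not the paper's code: hyperparameters and fitness weighting are assumed.
import random
import torch
import torchvision.models as models

# Pretrained AlexNet as the target classifier
# (weights="DEFAULT" assumes torchvision >= 0.13; older versions use pretrained=True).
model = models.alexnet(weights="DEFAULT").eval()

POP_SIZE, GENERATIONS, SIGMA, N_ELITE = 20, 50, 0.05, 4  # illustrative values

def fitness(image, noise, true_class):
    # Higher is better: low confidence in the original class (attack succeeds),
    # small noise norm (the change stays imperceptible to humans).
    with torch.no_grad():
        probs = torch.softmax(model((image + noise).clamp(0, 1)), dim=1)
    return -probs[0, true_class].item() - 0.1 * noise.norm().item()

def evolve_perturbation(image, true_class):
    # image: a 1x3x224x224 tensor in [0, 1]; returns the best noise mask found.
    population = [torch.randn_like(image) * SIGMA for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda n: fitness(image, n, true_class), reverse=True)
        elites = population[:N_ELITE]          # elitism: keep the best masks
        offspring = []
        while len(offspring) < POP_SIZE - N_ELITE:
            a, b = random.sample(elites, 2)
            mask = torch.rand_like(a) < 0.5    # uniform crossover of two parents
            child = torch.where(mask, a, b)
            child = child + torch.randn_like(child) * SIGMA * 0.1  # Gaussian mutation
            offspring.append(child)
        population = elites + offspring
    return population[0]

# Hypothetical usage: adversarial image = (img + evolve_perturbation(img, label)).clamp(0, 1)
```

The two-term fitness mirrors the paper's twin goals: the first term drives the classifier's confidence in the true label down, while the penalty on the noise norm keeps the recreated image close enough to the original that, as the abstract reports, a viewer without prior knowledge will hardly spot the difference.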