DCVAE-adv: A Universal Adversarial Example Generation Method for White and Black Box Attacks

Lei Xu, Junhai Zhai
Tsinghua Science and Technology, vol. 29, no. 2, pp. 430-446, published 2023-09-22.
DOI: 10.26599/TST.2023.9010004
Deep neural networks (DNNs) have strong representation learning ability, but they are vulnerable and easily fooled by adversarial examples. Many methods have been proposed to address this vulnerability. The general idea of existing methods is to reduce the chance of DNN models being fooled by exposing them to designed adversarial examples, which are generated by adding perturbations to the original images. In this paper, we propose a novel adversarial example generation method called DCVAE-adv. Unlike existing methods, DCVAE-adv constructs adversarial examples by mixing explicit and implicit perturbations without using the original images. Furthermore, the proposed method can be applied to both white-box and black-box attacks. In addition, at the inference stage, adversarial examples can be generated without loading the original images into memory, which greatly reduces memory overhead. We compared DCVAE-adv with three state-of-the-art adversarial attack algorithms: FGSM, AdvGAN, and AdvGAN++. The experimental results demonstrate that DCVAE-adv is superior to these methods in attack success rate and transferability for targeted attacks. Our code is available at https://github.com/xzforeverlove/DCVAE-adv.
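To make the baseline concrete: FGSM, one of the attacks the paper compares against, perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x L). The sketch below is not the paper's DCVAE-adv; it is a minimal, self-contained illustration of FGSM on a toy logistic-regression "model" (weights `w`, `b`, input `x`, and ε are all made-up values), where the gradient of the binary cross-entropy loss with respect to the input can be written in closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a logistic-regression model.

    For binary cross-entropy loss, the input gradient is
    dL/dx = (sigmoid(w @ x + b) - y) * w, so no autodiff is needed.
    The result is clipped to the valid pixel range [0, 1].
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy model and input (hypothetical values for illustration).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.4])      # clean logit: w @ x + b = 0.8 > 0 (class 1)

x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)
print(w @ x_adv + b)          # perturbed logit is pushed below 0: class flips
```

The same one-line update underlies FGSM on deep networks, except the gradient comes from backpropagation rather than a closed form; DCVAE-adv differs in that it generates perturbations from a latent model instead of the original image's gradient.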