{"title":"使用单步对抗训练来捍卫迭代对抗示例","authors":"Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah","doi":"10.1145/3422337.3447841","DOIUrl":null,"url":null,"abstract":"Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. Adversarial examples are inputs manipulated with perturbations insignificant to humans while being able to fool machine learning models. Researchers achieve great progress in utilizing adversarial training as a defense. However, the overwhelming computational cost degrades its applicability, and little has been done to overcome this issue. Single-Step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several different state-of-the-art (SOTA) defenses against adversarial examples. Then, based on observations from experiments, we propose a novel single-step adversarial training method that can defend against both single-step and iterative adversarial examples. Through extensive evaluations, we demonstrate that our proposed method successfully combines the advantages of both single-step (low training overhead) and iterative (high robustness) adversarial training defenses. Compared with ATDA on the CIFAR-10 dataset, for example, our proposed method achieves a 35.67% enhancement in test accuracy and a 19.14% reduction in training time. When compared with methods that use BIM or Madry examples (iterative methods) on the CIFAR-10 dataset, our proposed method saves up to 76.03% in training time, with less than 3.78% degeneration in test accuracy. 
Finally, our experiments with the ImageNet dataset clearly show the scalability of our approach and its performance advantages over SOTA single-step approaches.","PeriodicalId":187272,"journal":{"name":"Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples\",\"authors\":\"Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah\",\"doi\":\"10.1145/3422337.3447841\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. Adversarial examples are inputs manipulated with perturbations insignificant to humans while being able to fool machine learning models. Researchers achieve great progress in utilizing adversarial training as a defense. However, the overwhelming computational cost degrades its applicability, and little has been done to overcome this issue. Single-Step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several different state-of-the-art (SOTA) defenses against adversarial examples. Then, based on observations from experiments, we propose a novel single-step adversarial training method that can defend against both single-step and iterative adversarial examples. Through extensive evaluations, we demonstrate that our proposed method successfully combines the advantages of both single-step (low training overhead) and iterative (high robustness) adversarial training defenses. 
Compared with ATDA on the CIFAR-10 dataset, for example, our proposed method achieves a 35.67% enhancement in test accuracy and a 19.14% reduction in training time. When compared with methods that use BIM or Madry examples (iterative methods) on the CIFAR-10 dataset, our proposed method saves up to 76.03% in training time, with less than 3.78% degeneration in test accuracy. Finally, our experiments with the ImageNet dataset clearly show the scalability of our approach and its performance advantages over SOTA single-step approaches.\",\"PeriodicalId\":187272,\"journal\":{\"name\":\"Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3422337.3447841\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3422337.3447841","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
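The abstract's central contrast is between single-step attacks (one gradient step, cheap to generate during training) and iterative attacks such as BIM or Madry-style PGD (many small steps projected back into the same perturbation ball, typically much stronger). The paper's own training method is not reproduced here; the following is only a minimal NumPy sketch of that attack contrast, assuming a toy logistic-regression classifier — the function names (`grad_loss`, `fgsm`, `bim`) and all parameter values are illustrative, not from the paper:

```python
import numpy as np

def grad_loss(w, x, y):
    """Gradient of the logistic loss w.r.t. the INPUT x,
    for a linear classifier with weights w and label y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # sigmoid prediction
    return (p - y) * w                       # d(loss)/dx

def fgsm(w, x, y, eps):
    """Single-step attack (FGSM): one signed-gradient step of size eps."""
    return x + eps * np.sign(grad_loss(w, x, y))

def bim(w, x, y, eps, alpha, steps):
    """Iterative attack (BIM): repeated small signed steps of size alpha,
    each followed by projection back into the L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the eps-ball
    return x_adv
```

Note that `bim` with `steps=1` and `alpha=eps` reduces to `fgsm`: both attacks live in the same ε-ball, but the iterative variant re-evaluates the gradient at each step and so usually finds stronger perturbations — which is why training only on single-step examples often fails against iterative ones, the gap this paper targets.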