{"title":"自动驾驶汽车DNN对抗性攻击的回收","authors":"Hyunjun Mun, Seonggwan Seo, J. Yun","doi":"10.1109/ICOIN50884.2021.9333975","DOIUrl":null,"url":null,"abstract":"There are several DNN-driven autonomous cars being developed in the world. However, despite their splendid progress, DNNs frequently demonstrate incorrect behaviors which can lead to fatal damages. For example, an adversarial example generated by adding a small perturbation to an image causes a misclassification of the DNN. Numerous techniques have been studied so far in order to research those adversarial examples and the results are remarkable. However, the results are not good on the huge and complex ImageNet dataset. In this paper, we propose the recycling of adversarial attacks, which shows a high success rate of the ImageNet attack. Our method is highly successful and relatively fast by recycling adversarial examples which failed once. We also compare our method with the state-of-the-art techniques and prove that our method is more effective to generate adversarial examples of the ImageNet dataset through experiments.","PeriodicalId":6741,"journal":{"name":"2021 International Conference on Information Networking (ICOIN)","volume":"35 1","pages":"814-817"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Recycling of Adversarial Attacks on the DNN of Autonomous Cars\",\"authors\":\"Hyunjun Mun, Seonggwan Seo, J. Yun\",\"doi\":\"10.1109/ICOIN50884.2021.9333975\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are several DNN-driven autonomous cars being developed in the world. However, despite their splendid progress, DNNs frequently demonstrate incorrect behaviors which can lead to fatal damages. For example, an adversarial example generated by adding a small perturbation to an image causes a misclassification of the DNN. Numerous techniques have been studied so far in order to research those adversarial examples and the results are remarkable. However, the results are not good on the huge and complex ImageNet dataset. In this paper, we propose the recycling of adversarial attacks, which shows a high success rate of the ImageNet attack. Our method is highly successful and relatively fast by recycling adversarial examples which failed once. 
We also compare our method with the state-of-the-art techniques and prove that our method is more effective to generate adversarial examples of the ImageNet dataset through experiments.\",\"PeriodicalId\":6741,\"journal\":{\"name\":\"2021 International Conference on Information Networking (ICOIN)\",\"volume\":\"35 1\",\"pages\":\"814-817\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Information Networking (ICOIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICOIN50884.2021.9333975\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Information Networking (ICOIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOIN50884.2021.9333975","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Several DNN-driven autonomous cars are being developed around the world. Despite this impressive progress, DNNs frequently exhibit incorrect behaviors that can lead to fatal accidents. For example, an adversarial example, generated by adding a small perturbation to an image, can cause a DNN to misclassify it. Numerous techniques for generating such adversarial examples have been studied, with remarkable results; however, they perform poorly on the large and complex ImageNet dataset. In this paper, we propose the recycling of adversarial attacks, which achieves a high attack success rate on ImageNet. By recycling adversarial examples that failed once, our method is highly successful and relatively fast. We also compare our method with state-of-the-art techniques and show through experiments that it is more effective at generating adversarial examples for the ImageNet dataset.
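The abstract does not spell out the recycling procedure, so the sketch below is only one plausible reading of it, not the authors' implementation: a standard iterative FGSM (PGD) attack in PyTorch in which an adversarial example that failed to fool the model is reused as the warm start for the next attack round, here under a slightly widened perturbation budget. The function names, parameters, and retry schedule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_step(model, x_adv, y, x_orig, eps, alpha):
    # One projected gradient step, keeping x_adv inside the eps-ball around x_orig.
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + alpha * x_adv.grad.sign()                 # move up the loss surface
    x_adv = x_orig + torch.clamp(x_adv - x_orig, -eps, eps)   # project back into the budget
    return torch.clamp(x_adv, 0.0, 1.0).detach()              # keep a valid image

def recycled_attack(model, x_orig, y, eps=8/255, alpha=2/255, steps=10, rounds=3):
    # Hypothetical "recycling": a failed adversarial example is not discarded
    # but reused as the starting point of the next round, under a slightly
    # widened budget (the widening schedule is an assumption, not the paper's).
    x_adv = x_orig.clone()
    for _ in range(rounds):
        for _ in range(steps):
            x_adv = pgd_step(model, x_adv, y, x_orig, eps, alpha)
        if model(x_adv).argmax(dim=1).ne(y).all():            # every image misclassified
            break                                             # attack succeeded, stop early
        eps *= 1.5                                            # recycle with a larger budget
    return x_adv
```

The intuition behind such a warm start is that a failed adversarial example already lies near the decision boundary, so restarting from it should need fewer additional iterations than restarting from the clean image, which is consistent with the speed claim in the abstract.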