{"title":"展示和讲述模式中的闪避攻击","authors":"Dongseop Lee, Hyunjin Kim, Jaecheol Ryou","doi":"10.23919/ICACT48636.2020.9061558","DOIUrl":null,"url":null,"abstract":"Recently, deep learning technology has been applied to various fields with high performance and various services. Image recognition is also used in various fields with high performance by incorporating deep learning technology. However, deep learning technology is vulnerable to evasion attacks that cause the model to be misclassified by modulating the original image. In this paper, we generate an adversarial example using the forward-backward-splitting iterative procedure. Then perform an evasion attack on the show and tell model.","PeriodicalId":296763,"journal":{"name":"2020 22nd International Conference on Advanced Communication Technology (ICACT)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evasion Attack in Show and Tell Model\",\"authors\":\"Dongseop Lee, Hyunjin Kim, Jaecheol Ryou\",\"doi\":\"10.23919/ICACT48636.2020.9061558\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, deep learning technology has been applied to various fields with high performance and various services. Image recognition is also used in various fields with high performance by incorporating deep learning technology. However, deep learning technology is vulnerable to evasion attacks that cause the model to be misclassified by modulating the original image. In this paper, we generate an adversarial example using the forward-backward-splitting iterative procedure. Then perform an evasion attack on the show and tell model.\",\"PeriodicalId\":296763,\"journal\":{\"name\":\"2020 22nd International Conference on Advanced Communication Technology (ICACT)\",\"volume\":\"59 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 22nd International Conference on Advanced Communication Technology (ICACT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ICACT48636.2020.9061558\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 22nd International Conference on Advanced Communication Technology (ICACT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICACT48636.2020.9061558","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep learning has recently been applied to a wide range of fields and services with strong performance, and image recognition in particular has benefited from incorporating deep learning models. However, these models are vulnerable to evasion attacks, in which an attacker perturbs the original image so that the model misclassifies it. In this paper, we generate adversarial examples using a forward-backward splitting iterative procedure and then use them to perform an evasion attack on the Show and Tell image captioning model.
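The forward-backward splitting procedure alternates a gradient (forward) step on the attack loss with a proximal (backward) step that shrinks the perturbation back toward the original image. The sketch below illustrates this idea in NumPy; it is a minimal illustration, not the paper's implementation. The function names (fbs_adversarial, grad_fn) and the hyperparameters are assumptions, and the gradient of the captioning loss for the Show and Tell model would have to be supplied by the caller.

```python
import numpy as np

def fbs_adversarial(x_orig, grad_fn, steps=100, lr=0.01, lam=0.001):
    """Forward-backward splitting (ISTA-style) sketch for crafting an
    adversarial example.

    grad_fn(x) must return the gradient of the attack loss w.r.t. the
    image x (for a captioning attack this would be the gradient of the
    loss toward a target caption -- hypothetical, supplied by the caller).
    """
    x = x_orig.copy()
    for _ in range(steps):
        # Forward (gradient) step: move x to reduce the attack loss.
        z = x - lr * grad_fn(x)
        # Backward (proximal) step: soft-threshold the perturbation
        # relative to the original image (L1 prox), keeping it small.
        delta = z - x_orig
        delta = np.sign(delta) * np.maximum(np.abs(delta) - lr * lam, 0.0)
        # Project back into the valid pixel range [0, 1].
        x = np.clip(x_orig + delta, 0.0, 1.0)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.random((3, 32, 32)).astype(np.float32)      # toy "image"
    target = rng.random((3, 32, 32)).astype(np.float32)  # toy attack target
    # Toy stand-in for the attack-loss gradient: pull x toward `target`.
    grad_fn = lambda x: x - target
    x_adv = fbs_adversarial(x0, grad_fn, steps=50, lr=0.1, lam=0.01)
    print("mean |perturbation|:", float(np.abs(x_adv - x0).mean()))
```

The soft-thresholding step is what distinguishes this from a plain gradient attack: it acts as the proximal operator of an L1 penalty on the perturbation, so pixels whose change falls below the threshold are reset to their original values, keeping the modification sparse and less perceptible.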