Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method
In-pyo Hong, Gyu-ho Choi, Pan-koo Kim, Chang Choi
Applied Computing Review, published 2023-03-27
DOI: 10.1145/3555776.3577731
Citations: 1
Abstract
Recently, research on knowledge distillation in artificial intelligence (AI) has been actively conducted. In particular, the data-efficient image transformer (DeiT) is a representative transformer model that uses knowledge distillation for image classification. However, DeiT's safety against patch-level adversarial attacks has not been verified, and existing DeiT research has not demonstrated security robustness against adversarial attacks. To verify this vulnerability, we attacked a knowledge-distillation-based DeiT model using the fast gradient sign method (FGSM). In our experiment, DeiT achieved 93.99% accuracy on normal data (CIFAR-10); in contrast, when evaluated on FGSM-generated abnormal data (adversarial examples), accuracy dropped by 83.49 percentage points to 10.50%. By analyzing the vulnerability patterns associated with these adversarial attacks, we confirmed that FGSM achieves successful attack performance by exploiting DeiT's weights. Moreover, we verified that DeiT has security limitations for practical application.
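The FGSM attack described above perturbs each input pixel by a fixed step in the direction of the sign of the loss gradient with respect to the input, x_adv = x + ε · sign(∇ₓL). The paper applies this to a DeiT model on CIFAR-10; as a minimal, self-contained illustration of the same mechanism, the sketch below applies FGSM to a toy logistic-regression classifier, where the input gradient of the cross-entropy loss has the closed form (p − y)·w. The classifier, weights, and ε value here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).

    For binary cross-entropy on a logistic model, the gradient of the
    loss with respect to the input x is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier and a correctly classified input (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # logit = 1.5 > 0, so predicted class is 1
y = 1.0                    # true label

x_adv = fgsm(x, y, w, b, eps=1.0)
# The perturbation moves each coordinate one step against the model:
# the adversarial logit becomes negative, so the prediction flips to 0.
```

On a deep model such as DeiT the gradient is obtained by backpropagation rather than a closed form, but the single signed step is identical, which is why FGSM is cheap (one gradient evaluation per image) and effective at the accuracy collapse the abstract reports.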