An adversarial example generation scheme for gait recognition
Changbao Li, Wenmin Li, J. Ding
Neural Networks, Information and Communication Engineering, published 2022-06-30
DOI: 10.1117/12.2639315
Citations: 1
Abstract
Gait recognition is widely used because it works at long distances and does not require the active participation of the person being recognized. In recent years, many gait recognition models based on deep neural networks have achieved relatively high accuracy. However, many studies have shown that deep neural networks are vulnerable to adversarial attacks: adding small perturbations to the input samples can cause a deep neural network to misclassify them. It is therefore important to examine the robustness of neural networks used for gait recognition. Since the structure and parameters of a gait recognition model are often difficult to obtain in practical applications, this paper proposes a semi-white-box adversarial attack method based on a GAN. The adversarial examples generated by this method are difficult to distinguish from the original examples with the naked eye. Experiments show that feeding adversarial examples to the gait recognition model has a large impact on its output. To ensure that the generated adversarial perturbations can be easily realized in the physical environment, we modified the model structure and augmented the input data, yielding a second method in which the GAN's generator produces adversarial perturbations of specific shapes. The experimental results show that even without knowledge of the target model's network structure and parameters, the accuracy of the gait recognition model drops when faced with adversarial examples, indicating that even very advanced gait recognition models have robustness problems.
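The abstract does not include implementation details, but the two constraints it describes on the generator's output (imperceptibility, and confinement to a specific physical shape) can be sketched in a minimal, illustrative form. Assumptions here: an L-infinity bound `eps` on the perturbation and a binary mask for the "specific shape" are stand-ins chosen for illustration, and a random array stands in for the GAN generator's raw output; none of these names or values come from the paper.

```python
import numpy as np

def apply_shaped_perturbation(x, delta, mask, eps=0.05):
    """Constrain a raw perturbation `delta` to an L-infinity ball of
    radius `eps`, confine it to a binary `mask` region (the 'specific
    shape'), and add it to the input silhouette `x` (pixels in [0, 1])."""
    delta = np.clip(delta, -eps, eps)     # keep the perturbation imperceptible
    delta = delta * mask                  # zero it outside the allowed shape
    x_adv = np.clip(x + delta, 0.0, 1.0)  # stay in the valid pixel range
    return x_adv

# Toy usage: a 4x4 "silhouette" with a patch-shaped mask in one corner.
rng = np.random.default_rng(0)
x = rng.random((4, 4))                    # stand-in for a gait silhouette frame
delta = rng.standard_normal((4, 4))       # stand-in for a GAN generator's output
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0                        # perturbation allowed only here
x_adv = apply_shaped_perturbation(x, delta, mask, eps=0.05)
```

In an actual attack of this kind, `delta` would come from a trained generator and the composite `x_adv` would be fed to the target gait recognition model; the mask is what makes the perturbation plausible to realize physically (e.g. as a patch) rather than spread over the whole image.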