An adversarial example generation scheme for gait recognition

Changbao Li, Wenmin Li, J. Ding
DOI: 10.1117/12.2639315 (https://doi.org/10.1117/12.2639315)
Journal: Neural Networks, Information and Communication Engineering
Published: 2022-06-30
Citations: 1

Abstract

Gait recognition is widely used because it works at long range and does not require the active cooperation of the subject. In recent years, many gait recognition models based on deep neural networks have achieved relatively high accuracy. However, many studies have shown that deep neural networks are vulnerable to adversarial attacks: adding small perturbations to the input samples can cause a deep neural network to misclassify them. It is therefore important to study the robustness of neural networks used for gait recognition. Because the structure and parameters of a gait recognition model are often difficult to obtain in practical applications, this paper proposes a semi-white-box adversarial attack method based on a GAN. The adversarial examples generated by this method are nearly indistinguishable from the original examples to the naked eye. Experiments show that feeding adversarial examples to the gait recognition model substantially changes its output. To ensure that the generated adversarial perturbations can be easily realized in a physical environment, we modified the model structure and augmented the input data, yielding a second method in which the GAN's generator produces adversarial perturbations in specific shapes. The experimental results show that, even without knowledge of the target model's network structure and parameters, the accuracy of the gait recognition model drops when it faces adversarial examples, indicating that even highly advanced gait recognition models have robustness problems.
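The two constraints described above — keeping the perturbation small enough to be imperceptible, and confining it to a specific shape so it could be realized physically — can be illustrated with a minimal NumPy sketch. This is not the paper's GAN-based method; the epsilon budget, frame size, and rectangular mask are illustrative assumptions standing in for a generator's output and a physically realizable patch shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_perturbation(delta, eps):
    # Clip a raw perturbation to an L-infinity ball of radius eps,
    # so the adversarial example stays visually close to the original.
    return np.clip(delta, -eps, eps)

def apply_masked_perturbation(x, delta, mask, eps):
    # Restrict the perturbation to a fixed spatial mask (a "specific
    # shape"), then clip back to the valid pixel range [0, 1].
    adv = x + bounded_perturbation(delta, eps) * mask
    return np.clip(adv, 0.0, 1.0)

# Toy 64x64 gait silhouette frame with values in [0, 1] (illustrative).
x = rng.random((64, 64))

# Stand-in for a generator's raw output perturbation.
raw_delta = rng.normal(0.0, 0.1, size=(64, 64))

# Rectangular mask standing in for a physically realizable patch shape.
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0

eps = 8 / 255  # a common imperceptibility budget for image attacks
adv = apply_masked_perturbation(x, raw_delta, mask, eps)

# The perturbation is bounded by eps and confined to the mask.
print(np.abs(adv - x).max() <= eps + 1e-9)          # True
print(np.abs((adv - x) * (1 - mask)).max() == 0.0)  # True
```

In the paper's setting, `raw_delta` would instead come from the trained GAN generator, and the target model's lowered accuracy on `adv` versus `x` is what demonstrates the attack's effectiveness.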