Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation

Impact Factor: 1.8 · CAS Tier 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · IET Biometrics · Pub Date: 2022-06-27 · DOI: 10.1049/bme2.12082
Authors: Zohra Rezgui, Amina Bassit, Raymond Veldhuis
Published in IET Biometrics, vol. 11, no. 5, pp. 407-419. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12082
Citations: 0

Abstract



Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture were never considered in the transferability scenarios presented in the literature. In this paper, this phenomenon was analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and contrast a defence method as a countermeasure. Then, using adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and different magnitude of the perturbation are compared. The authors’ results indicate transferability in the fixed perturbation setting for a Fast Gradient Sign Method attack and non-transferability in a pixel-guided denoiser attack setting. The interpretation of this non-transferability can support the use of fast and train-free adversarial attacks targeting soft biometric classifiers as means to achieve soft biometric privacy protection while maintaining facial identity as utility.
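The abstract references the Fast Gradient Sign Method (FGSM), whose core step is to push each input pixel by a fixed amount ε in the direction of the sign of the loss gradient. The sketch below illustrates that step on a toy logistic-regression classifier; the model, weights, and inputs are invented for illustration only, since the paper itself attacks VGG16- and ResNet50-based deep classifiers:

```python
import math

def sigmoid(z):
    """Logistic function mapping a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def grad_wrt_input(x, w, b, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x.

    For logistic regression p = sigmoid(w.x + b), the gradient has the
    closed form dL/dx_i = (p - y) * w_i, so no autodiff is needed here.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [(p - y) * wi for wi in w]

def fgsm(x, w, b, y, eps):
    """Fixed-perturbation FGSM step: x' = x + eps * sign(dL/dx)."""
    g = grad_wrt_input(x, w, b, y)
    sign = lambda v: (v > 0) - (v < 0)  # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

# Toy "classifier" and input (all values hypothetical).
w, b = [0.8, -0.5, 0.3], 0.1
x, y = [0.2, 0.4, 0.6], 1          # true label y = 1
x_adv = fgsm(x, w, b, y, eps=0.05)  # every feature shifts by +/- eps
```

Because every component moves by exactly ±ε, the perturbation magnitude is fixed regardless of the gradient's size, which is what the paper's "fixed perturbation" verification setting compares against the variable-magnitude setting.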

Source journal: IET Biometrics (Computer Science, Artificial Intelligence)
CiteScore: 5.90
Self-citation rate: 0.00%
Articles per year: 46
Review time: 33 weeks
Aims and scope: The field of biometric recognition - automated recognition of individuals based on their behavioural and biological characteristics - has now reached a level of maturity where viable practical applications are both possible and increasingly available. The biometrics field is characterised especially by its interdisciplinarity since, while focused primarily around a strong technological base, effective system design and implementation often requires a broad range of skills encompassing, for example, human factors, data security and database technologies, psychological and physiological awareness, and so on. Also, the technology focus itself embraces diversity, since the engineering of effective biometric systems requires integration of image analysis, pattern recognition, sensor technology, database engineering, security design and many other strands of understanding. The scope of the journal is intentionally relatively wide. While focusing on core technological issues, it is recognised that these may be inherently diverse and in many cases may cross traditional disciplinary boundaries. The scope of the journal will therefore include any topics where it can be shown that a paper can increase our understanding of biometric systems, signal future developments and applications for biometrics, or promote greater practical uptake for relevant technologies:
- Development and enhancement of individual biometric modalities including the established and traditional modalities (e.g. face, fingerprint, iris, signature and handwriting recognition) and also newer or emerging modalities (gait, ear-shape, neurological patterns, etc.)
- Multibiometrics, theoretical and practical issues, implementation of practical systems, multiclassifier and multimodal approaches
- Soft biometrics and information fusion for identification, verification and trait prediction
- Human factors and the human-computer interface issues for biometric systems, exception handling strategies
- Template construction and template management, ageing factors and their impact on biometric systems
- Usability and user-oriented design, psychological and physiological principles and system integration
- Sensors and sensor technologies for biometric processing
- Database technologies to support biometric systems
- Implementation of biometric systems, security engineering implications, smartcard and associated technologies in implementation, implementation platforms, system design and performance evaluation
- Trust and privacy issues, security of biometric systems and supporting technological solutions, biometric template protection
- Biometric cryptosystems, security and biometrics-linked encryption
- Links with forensic processing and cross-disciplinary commonalities
- Core underpinning technologies (e.g. image analysis, pattern recognition, computer vision, signal processing, etc.), where the specific relevance to biometric processing can be demonstrated
- Applications and application-led considerations
- Position papers on technology or on the industrial context of biometric system development
- Adoption and promotion of standards in biometrics, improving technology acceptance, deployment and interoperability, avoiding cross-cultural and cross-sector restrictions
- Relevant ethical and social issues
Latest articles in this journal:
- A Multimodal Biometric Recognition Method Based on Federated Learning
- Deep and Shallow Feature Fusion in Feature Score Level for Palmprint Recognition
- Research on TCN Model Based on SSARF Feature Selection in the Field of Human Behavior Recognition
- A Finger Vein Recognition Algorithm Based on the Histogram of Variable Curvature Directional Binary Statistics
- A Survey on Automatic Face Recognition Using Side-View Face Images