IFAST: Weakly Supervised Interpretable Face Anti-Spoofing From Single-Shot Binocular NIR Images

IF 6.3 | CAS Tier 1, Computer Science | JCR Q1, COMPUTER SCIENCE, THEORY & METHODS | IEEE Transactions on Information Forensics and Security, vol. 19, pp. 9270-9284 | Pub Date: 2024-09-23 | DOI: 10.1109/TIFS.2024.3465930
Jiancheng Huang;Donghao Zhou;Jianzhuang Liu;Linxiao Shi;Shifeng Chen
{"title":"IFAST: Weakly Supervised Interpretable Face Anti-Spoofing From Single-Shot Binocular NIR Images","authors":"Jiancheng Huang;Donghao Zhou;Jianzhuang Liu;Linxiao Shi;Shifeng Chen","doi":"10.1109/TIFS.2024.3465930","DOIUrl":null,"url":null,"abstract":"Single-shot face anti-spoofing (FAS) is a key technique for securing face recognition systems, relying solely on static images as input. However, single-shot FAS remains a challenging and under-explored problem due to two reasons: 1) On the data side, learning FAS from RGB images is largely context-dependent, and single-shot images without additional annotations contain limited semantic information. 2) On the model side, existing single-shot FAS models struggle to provide proper evidence for their decisions, and FAS methods based on depth estimation require expensive per-pixel annotations. To address these issues, we construct and release a large binocular NIR image dataset named BNI-FAS, which contains more than 300,000 real face and plane attack images, and propose an Interpretable FAS Transformer (IFAST) that requires only weak supervision to produce interpretable predictions. Our IFAST generates pixel-wise disparity maps using the proposed disparity estimation Transformer with Dynamic Matching Attention (DMA) blocks. Besides, we design a confidence map generator to work in tandem with a dual-teacher distillation module to obtain the final discriminant results. Comprehensive experiments show that our IFAST achieves state-of-the-art performance on BNI-FAS, verifying its effectiveness of single-shot FAS on binocular NIR images. The project page is available at \n<uri>https://ifast-bni.github.io/</uri>\n.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9270-9284"},"PeriodicalIF":6.3000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10685520/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Single-shot face anti-spoofing (FAS) is a key technique for securing face recognition systems, relying solely on static images as input. However, single-shot FAS remains a challenging and under-explored problem for two reasons: 1) on the data side, learning FAS from RGB images is largely context-dependent, and single-shot images without additional annotations contain limited semantic information; 2) on the model side, existing single-shot FAS models struggle to provide proper evidence for their decisions, and FAS methods based on depth estimation require expensive per-pixel annotations. To address these issues, we construct and release a large binocular NIR image dataset named BNI-FAS, which contains more than 300,000 real-face and plane-attack images, and propose an Interpretable FAS Transformer (IFAST) that requires only weak supervision to produce interpretable predictions. Our IFAST generates pixel-wise disparity maps using the proposed disparity estimation Transformer with Dynamic Matching Attention (DMA) blocks. In addition, we design a confidence map generator that works in tandem with a dual-teacher distillation module to obtain the final discriminant results. Comprehensive experiments show that IFAST achieves state-of-the-art performance on BNI-FAS, verifying the effectiveness of single-shot FAS on binocular NIR images. The project page is available at https://ifast-bni.github.io/.
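The abstract outlines a three-stage pipeline: a disparity estimation Transformer with DMA blocks turns a binocular NIR pair into a pixel-wise disparity map, a confidence map generator weights that map, and a dual-teacher distillation module yields the final real-vs-attack decision. The sketch below (PyTorch) is a minimal, hypothetical rendering of that data flow, not the authors' implementation: every class and function name here is invented for illustration, plain self-attention stands in for the DMA blocks, and the loss is one plausible reading of "dual-teacher distillation" (averaged KL divergence against two teachers); the paper defines the actual architecture and objective.

```python
# Hypothetical sketch of the pipeline the abstract describes; all names are
# illustrative assumptions, not the authors' API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisparityTransformer(nn.Module):
    """Toy stand-in for the disparity estimation Transformer; plain
    self-attention replaces the paper's DMA blocks."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Conv2d(2, dim, 3, padding=1)  # stacked left/right NIR views (1 channel each)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Conv2d(dim, 1, 1)              # per-pixel disparity

    def forward(self, left, right):
        x = self.embed(torch.cat([left, right], dim=1))   # (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)                  # (B, HW, C) tokens
        t, _ = self.attn(t, t, t)
        x = t.transpose(1, 2).reshape(b, c, h, w)
        return self.head(x)                               # (B, 1, H, W) disparity map

class ConfidenceMapGenerator(nn.Module):
    """Maps a disparity map to a per-pixel confidence map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1))

    def forward(self, disparity):
        return torch.sigmoid(self.net(disparity))

def dual_teacher_distillation_loss(student_logits, teacher1_logits, teacher2_logits, T=2.0):
    """Averaged KL divergence against two frozen teachers -- one plausible
    reading of 'dual-teacher distillation'."""
    s = F.log_softmax(student_logits / T, dim=1)
    loss = sum(F.kl_div(s, F.softmax(t / T, dim=1), reduction="batchmean")
               for t in (teacher1_logits, teacher2_logits))
    return loss / 2 * T * T

# Usage on dummy single-shot binocular NIR inputs:
left, right = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
disp = DisparityTransformer()(left, right)
conf = ConfidenceMapGenerator()(disp)
live_score = (conf * disp).mean(dim=(1, 2, 3))  # crude confidence-weighted statistic
student, t1, t2 = torch.randn(2, 2), torch.randn(2, 2), torch.randn(2, 2)
print(disp.shape, conf.shape, live_score.shape,
      dual_teacher_distillation_loss(student, t1, t2).item())
```

The key design point the abstract emphasizes is that the disparity map serves as visual evidence for each prediction, so the model's output can be inspected rather than taken on faith; the weak supervision claim means this is learned without per-pixel depth annotations.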
Source Journal

IEEE Transactions on Information Forensics and Security (Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal

On the Efficient Design of Stacked Intelligent Metasurfaces for Secure SISO Transmission
Attackers Are Not the Same! Unveiling the Impact of Feature Distribution on Label Inference Attacks
Backdoor Online Tracing With Evolving Graphs
LHADRO: A Robust Control Framework for Autonomous Vehicles Under Cyber-Physical Attacks
Towards Mobile Palmprint Recognition via Multi-view Hierarchical Graph Learning