{"title":"IFAST: Weakly Supervised Interpretable Face Anti-Spoofing From Single-Shot Binocular NIR Images","authors":"Jiancheng Huang;Donghao Zhou;Jianzhuang Liu;Linxiao Shi;Shifeng Chen","doi":"10.1109/TIFS.2024.3465930","DOIUrl":null,"url":null,"abstract":"Single-shot face anti-spoofing (FAS) is a key technique for securing face recognition systems, relying solely on static images as input. However, single-shot FAS remains a challenging and under-explored problem due to two reasons: 1) On the data side, learning FAS from RGB images is largely context-dependent, and single-shot images without additional annotations contain limited semantic information. 2) On the model side, existing single-shot FAS models struggle to provide proper evidence for their decisions, and FAS methods based on depth estimation require expensive per-pixel annotations. To address these issues, we construct and release a large binocular NIR image dataset named BNI-FAS, which contains more than 300,000 real face and plane attack images, and propose an Interpretable FAS Transformer (IFAST) that requires only weak supervision to produce interpretable predictions. Our IFAST generates pixel-wise disparity maps using the proposed disparity estimation Transformer with Dynamic Matching Attention (DMA) blocks. Besides, we design a confidence map generator to work in tandem with a dual-teacher distillation module to obtain the final discriminant results. Comprehensive experiments show that our IFAST achieves state-of-the-art performance on BNI-FAS, verifying its effectiveness of single-shot FAS on binocular NIR images. 
The project page is available at \n<uri>https://ifast-bni.github.io/</uri>\n.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9270-9284"},"PeriodicalIF":6.3000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10685520/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Single-shot face anti-spoofing (FAS) is a key technique for securing face recognition systems, relying solely on static images as input. However, single-shot FAS remains a challenging and under-explored problem for two reasons: 1) on the data side, learning FAS from RGB images is largely context-dependent, and single-shot images without additional annotations contain limited semantic information; 2) on the model side, existing single-shot FAS models struggle to provide proper evidence for their decisions, and FAS methods based on depth estimation require expensive per-pixel annotations. To address these issues, we construct and release a large binocular NIR image dataset named BNI-FAS, which contains more than 300,000 real-face and plane-attack images, and propose an Interpretable FAS Transformer (IFAST) that requires only weak supervision to produce interpretable predictions. IFAST generates pixel-wise disparity maps using the proposed disparity estimation Transformer with Dynamic Matching Attention (DMA) blocks. In addition, we design a confidence map generator that works in tandem with a dual-teacher distillation module to obtain the final discriminant results. Comprehensive experiments show that IFAST achieves state-of-the-art performance on BNI-FAS, verifying its effectiveness for single-shot FAS on binocular NIR images. The project page is available at https://ifast-bni.github.io/.
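The core liveness cue the abstract describes — a pixel-wise disparity map from a binocular pair, weighted by a confidence map to reach a real-vs-attack decision — can be illustrated with a toy sketch. Everything below is an illustrative assumption: the paper uses a disparity estimation Transformer with DMA blocks and dual-teacher distillation, whereas this sketch substitutes naive per-pixel block matching and a hand-crafted confidence heuristic; all function names are hypothetical.

```python
import numpy as np

def estimate_disparity(left, right, max_disp=16):
    # Toy stand-in for IFAST's disparity estimation Transformer (hypothetical):
    # for each left pixel, pick the horizontal shift d whose right-image pixel
    # matches best (minimum absolute intensity difference).
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # cost of matching left pixel (y, x) against right pixel (y, x - d)
        costs[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return costs.argmin(axis=0).astype(np.float32)

def confidence_map(disparity):
    # Hand-crafted confidence heuristic (not the paper's learned generator):
    # down-weight pixels whose disparity strays far from the global median.
    deviation = np.abs(disparity - np.median(disparity))
    return np.exp(-deviation)

def liveness_score(disparity, confidence):
    # Confidence-weighted disparity variance as a toy liveness cue:
    # a flat plane attack yields near-uniform disparity (score near 0),
    # while a real 3-D face shows spatial disparity variation.
    mean = (confidence * disparity).sum() / confidence.sum()
    return (confidence * (disparity - mean) ** 2).sum() / confidence.sum()
```

The intuition this sketch captures is why binocular input helps: a printed or displayed face is planar, so its disparity map is nearly constant, whereas a genuine face produces depth-dependent disparity — and the disparity map itself serves as visual evidence for the decision.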
Journal Introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.