
Latest publications from the 2019 International Conference on Biometrics (ICB)

Likelihood Ratio based Loss to finetune CNNs for Very Low Resolution Face Verification
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987249
Dan Zeng, R. Veldhuis, L. Spreeuwers, Qijun Zhao
In this paper, we propose a likelihood ratio based loss for very low-resolution face verification. Existing loss functions either improve the softmax loss to learn large-margin facial features or impose Euclidean margin constraints between image pairs. These methods have been shown to outperform the traditional softmax loss, but they do not guarantee the most discriminative features. We therefore propose a loss function based on the likelihood ratio classifier, which is optimal in the Neyman-Pearson sense: it gives the highest verification rate at a given false accept rate, making it well suited to biometric verification. To verify the efficacy of the proposed loss function, we apply it to the very low-resolution face recognition problem. We conduct extensive experiments on the challenging SCface dataset, where the resolution of the faces to be recognized is below 16 × 16. The results show that the proposed approach outperforms state-of-the-art methods.
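The core idea, classifying by thresholding a likelihood ratio, can be illustrated outside the CNN setting. Below is a minimal NumPy sketch (not the paper's trained loss) that scores a pair of embeddings with a per-dimension Gaussian likelihood ratio; the assumption that genuine-pair differences have small variance `var_genuine` and impostor-pair differences a larger `var_impostor`, and the variance values themselves, are illustrative.

```python
import numpy as np

def llr_score(x1, x2, var_genuine, var_impostor):
    """Gaussian log-likelihood ratio between the hypotheses that
    (x1, x2) is a genuine pair vs. an impostor pair, scored on the
    embedding difference. Higher scores favour 'genuine'."""
    d = x1 - x2

    def loglik(diff, var):
        # log N(diff; 0, var * I), summed over embedding dimensions
        return -0.5 * np.sum(diff ** 2 / var + np.log(2 * np.pi * var))

    return loglik(d, var_genuine) - loglik(d, var_impostor)
```

By the Neyman-Pearson lemma, thresholding such a ratio maximizes the verification rate at any fixed false accept rate, which is the optimality property the proposed loss is built around.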
Citations: 1
OU-ISIR Wearable Sensor-based Gait Challenge: Age and Gender
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987235
T. N. Thanh, Yuichi Hattori, Md. Atiqur Rahman Ahad, Anindya Das Antar, Masud Ahmed, D. Muramatsu, Yasushi Makihara, Y. Yagi, Sozo Inoue, Tahera Hossain
Recently, wearable computing resources such as smartphones have been developing rapidly, owing to advances in technology and the great support they provide to human life. People use smartphones for communication, work, entertainment, business, travel, and browsing information. However, health-care applications remain very limited due to many challenges. We would like to break through these limitations and boost research that supports human health. One important step for a health-care system is to infer the age and gender of the user wearing the sensor from gait. Gait is chosen because it is the most dominant daily activity and is considered to carry not only identity but also physical and medical conditions. To this end, we organize a challenging competition on gender and age prediction using wearable sensors. The evaluation is mainly based on the published OU-ISIR inertial dataset, currently the world's largest inertial gait dataset.
Citations: 13
A novel scheme to address the fusion uncertainty in multi-modal continuous authentication schemes on mobile devices
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987390
Max Smith-Creasey, M. Rajarajan
Interest in continuous mobile authentication schemes has increased in recent years. These schemes use sensors on mobile devices to collect biometric data about a user, and the use of multiple sensors in a multi-modal scheme has been shown to improve accuracy. However, sensor scores are often combined using simplistic techniques such as averaging, and to date the effect of uncertainty on score fusion has not been explored. In this paper, we present a novel Dempster-Shafer based score fusion approach for continuous authentication schemes. Our approach combines sensor scores while factoring in the uncertainty of each sensor. We propose and evaluate five techniques for computing this uncertainty. Our proof-of-concept system is tested on three state-of-the-art datasets and compared with common fusion techniques. We find that the proposed approach yields the highest accuracies of the compared fusion techniques and achieves equal error rates as low as 8.05%.
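Dempster's rule of combination, the mechanism underlying such a fusion, can be sketched for two sensors whose scores have been converted into mass functions over {genuine, impostor} plus an uncertainty mass on the whole frame. The dictionary keys and mass values below are illustrative; the paper's five uncertainty-estimation techniques are not reproduced here.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {G, I}, with the
    uncertainty mass assigned to the full set 'GI' (Theta), using
    Dempster's rule of combination."""
    # Conflicting mass: one sensor says genuine, the other impostor
    K = m1['G'] * m2['I'] + m1['I'] * m2['G']
    if K >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    norm = 1.0 - K
    g = (m1['G'] * m2['G'] + m1['G'] * m2['GI'] + m1['GI'] * m2['G']) / norm
    i = (m1['I'] * m2['I'] + m1['I'] * m2['GI'] + m1['GI'] * m2['I']) / norm
    theta = (m1['GI'] * m2['GI']) / norm
    return {'G': g, 'I': i, 'GI': theta}
```

An unreliable sensor simply shifts mass from 'G'/'I' to 'GI', so it pulls the fused decision less strongly than a confident sensor, unlike plain score averaging.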
Citations: 6
FLDet: A CPU Real-time Joint Face and Landmark Detector
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987289
Chubin Zhuang, Shifeng Zhang, Xiangyu Zhu, Zhen Lei, Jinqiao Wang, S. Li
Face detection and alignment are treated as two independent tasks and conducted sequentially in most face applications. However, the two tasks are highly related and can be integrated into a single model. In this paper, we propose a novel single-shot detector for joint face detection and alignment, namely FLDet, with remarkable performance in both speed and accuracy. Specifically, FLDet consists of three main modules: the Rapidly Digested Backbone (RDB), the Lightweight Feature Pyramid Network (LFPN), and the Multi-task Detection Module (MDM). The RDB quickly shrinks the spatial size of feature maps to guarantee real-time speed on a CPU. The LFPN integrates different detection layers in a top-down fashion to enrich the features of low-level layers with little extra time overhead. The MDM jointly performs face and landmark detection over different layers to handle faces of various scales. In addition, we introduce a new data augmentation strategy to make full use of the face alignment dataset. As a result, the proposed FLDet can run at 20 FPS on a single CPU core and at 120 FPS on a GPU for VGA-resolution images. Notably, FLDet can be trained end-to-end, and its inference time is invariant to the number of faces. We achieve competitive results on both face detection and face alignment benchmark datasets, including AFW, PASCAL FACE, FDDB and AFLW.
Citations: 7
Suppressing Gender and Age in Face Templates Using Incremental Variable Elimination
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987331
Philipp Terhörst, N. Damer, Florian Kirchbuchner, Arjan Kuijper
Recent research on soft biometrics has shown that more information than just a person's identity can be deduced from biometric data. From face templates alone, information about gender, age, ethnicity, health state, and even sexual orientation can be obtained automatically. Since in most applications these templates are expected to be used for recognition purposes only, this raises major privacy issues. Previous work addressed this problem purely at the image level and considered function-creep attackers without knowledge of the system's privacy mechanism. In this work, we propose a soft-biometric privacy-enhancing approach that reduces a given biometric template by eliminating the variables that are most important for predicting soft-biometric attributes. Training a decision tree ensemble yields a variable importance measure, which is used to incrementally eliminate the variables that allow sensitive attributes to be predicted. Unlike previous work, we consider a scenario with function-creep attackers who have explicit knowledge of the privacy mechanism, and we evaluate our approach on a publicly available database against eight baseline solutions. The results show that in many cases IVE is able to suppress gender and age to a high degree with a negligible loss of the templates' recognition ability. Contrary to previous work, which is limited to suppressing binary (gender) attributes, IVE is able, by design, to suppress binary, categorical, and continuous attributes.
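The elimination loop itself is simple to sketch. The toy version below is a stand-in, not the paper's implementation: it uses absolute correlation with the sensitive label as the importance measure (an assumption for brevity; the paper derives importances from a decision tree ensemble) and incrementally drops the template dimensions most predictive of that label.

```python
import numpy as np

def incremental_variable_elimination(X, y_sensitive, n_eliminate, step=1):
    """Toy IVE: repeatedly re-score the remaining variables and drop the
    `step` most predictive of the sensitive label. Importance here is the
    absolute correlation with the label -- a simplified proxy for the
    decision-tree-ensemble importance used in the paper."""
    keep = list(range(X.shape[1]))
    while len(keep) > X.shape[1] - n_eliminate:
        Xk = X[:, keep]
        imp = np.abs([np.corrcoef(Xk[:, j], y_sensitive)[0, 1]
                      for j in range(Xk.shape[1])])
        worst = set(np.argsort(imp)[-step:])      # most predictive variables
        keep = [k for idx, k in enumerate(keep) if idx not in worst]
    return keep  # indices of the retained template variables
```

After elimination, the retained indices would be applied to both enrollment and probe templates, so recognition proceeds on the reduced template.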
Citations: 28
Face Sketch Colorization via Supervised GANs
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987296
S. RamyaY., Soumyadeep Ghosh, Mayank Vatsa, Richa Singh
Face sketch recognition is one of the most challenging heterogeneous face recognition problems. The large domain gap between hand-drawn sketches and color photos, along with the subjectivity and variation introduced by eye-witness descriptions and the skill of sketch artists, makes the problem demanding. Therefore, despite several research attempts, sketch-to-photo matching is still considered an arduous problem. In this research, we propose to transform a hand-drawn sketch into a color photo using an end-to-end two-stage generative adversarial model, followed by learning a discriminative classifier for matching the transformed images with color photos. The proposed image-to-image transformation model reduces the modality gap between sketch images and color photos, resulting in higher identification accuracies and images with better visual quality than the ground-truth sketch images.
Citations: 2
SEFD: A Simple and Effective Single Stage Face Detector
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987231
Lei Shi, Xiang Xu, I. Kakadiaris
Recent state-of-the-art face detectors extend a backbone network with additional feature fusion and context extractor layers to localize multi-scale faces, and consequently struggle to balance computational efficiency against detection performance. In this paper, we introduce a simple and effective face detector (SEFD). SEFD leverages a computationally lightweight Feature Aggregation Module (FAM) to perform feature fusion and context enhancement efficiently. In addition, an aggregation loss is introduced to mitigate the imbalance in feature-representation power between the classification and regression tasks, which arises because the backbone network is initialized from a pre-trained model that focuses on classification alone rather than on both classification and regression. SEFD achieves state-of-the-art performance on the UFDD dataset, and mAPs of 95.3%, 94.1%, 88.3% and 94.9%, 94.0%, 88.2% on the easy, medium, and hard subsets of the WIDER Face validation and testing datasets, respectively.
Citations: 2
Deep Contactless Fingerprint Unwarping
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987292
Ali Dabouei, Sobhan Soleymani, J. Dawson, N. Nasrabadi
Contactless fingerprints have emerged as a convenient, inexpensive, and hygienic way of capturing fingerprint samples. However, cross-matching contactless fingerprints against legacy contact-based fingerprints is a challenging task due to the elastic and perspective distortion between the two modalities. Current cross-matching methods merely rectify the elastic distortion of the contact-based samples to reduce the geometric mismatch, ignoring the perspective distortion of contactless fingerprints. Adapting classical deformation correction techniques to compensate for the perspective distortion would require a large number of minutiae-annotated contactless fingerprints, yet annotating minutiae in contactless samples is a labor-intensive and inaccurate task, especially in regions severely distorted by the perspective projection. In this study, we propose a deep model that rectifies the perspective distortion of contactless fingerprints by combining a rectification network with a ridge enhancement network. The ridge enhancement network provides indirect supervision for training the rectification network and removes the need for ground-truth values of the estimated warp parameters. Comprehensive experiments on two public datasets of contactless fingerprints show that the proposed unwarping approach results, on average, in a 17% increase in the number of detectable minutiae in contactless fingerprints. Consequently, the proposed model achieves an equal error rate of 7.71% and a Rank-1 accuracy of 61.01% on the challenging '2D/3D' fingerprint dataset.
Citations: 13
Does Generative Face Completion Help Face Recognition?
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987388
Joe Mathai, I. Masi, Wael AbdAlmageed
Face occlusions, covering either the majority or discriminative parts of the face, can break facial perception and produce a drastic loss of information. Biometric systems such as recent deep face recognition models are not immune to obstructions or other objects covering parts of the face. While most current face recognition methods are not optimized to handle occlusions, there have been a few attempts to improve robustness directly in the training stage. Unlike those, we propose to study the effect of generative face completion on recognition. We offer a face completion encoder-decoder, based on a convolutional operator with a gating mechanism, trained with an ample set of face occlusions. To systematically evaluate the impact of realistic occlusions on recognition, we propose to play the occlusion game: we render 3D objects onto different face parts, providing precious knowledge of the impact of effectively removing those occlusions. Extensive experiments on Labeled Faces in the Wild (LFW), and its more difficult variant LFW-BLUFR, testify that face completion is able to partially restore face perception in machine vision systems for improved recognition.
引用次数: 25
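The "convolutional operator with a gating mechanism" mentioned in the abstract can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' implementation: a single-channel, valid-padding convolution where a sigmoid gate modulates the feature path, which is the general idea behind gated convolutions for image completion.

```python
import numpy as np

def gated_conv2d(x, w_feat, w_gate):
    """Toy gated convolution: feature path times a sigmoid gate.
    x: (H, W) input; w_feat, w_gate: (k, k) kernels; 'valid' padding."""
    k = w_feat.shape[0]
    # All k-by-k patches of x, shape (H-k+1, W-k+1, k, k)
    patches = np.lib.stride_tricks.sliding_window_view(x, (k, k))
    feat = np.einsum("ijkl,kl->ij", patches, w_feat)               # feature path
    gate = 1.0 / (1.0 + np.exp(-np.einsum("ijkl,kl->ij", patches, w_gate)))
    # The gate lies in (0, 1) and can learn to down-weight unreliable
    # (e.g. occluded) regions instead of treating all pixels as valid.
    return feat * gate

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w_feat = rng.standard_normal((3, 3))
w_gate = np.zeros((3, 3))  # zero gate kernel -> sigmoid(0) = 0.5 everywhere
y = gated_conv2d(x, w_feat, w_gate)
```

With a zero gate kernel the gate is uniformly 0.5, so the output is simply half the plain convolution; in a trained network the gate kernel is learned jointly with the feature kernel.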
On the Effectiveness of Laser Speckle Contrast Imaging and Deep Neural Networks for Detecting Known and Unknown Fingerprint Presentation Attacks
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987428
H. Mirzaalian, Mohamed E. Hussein, W. Abd-Almageed
Fingerprint presentation attack detection (FPAD) is becoming an increasingly challenging problem due to the continuous advancement of attack techniques, which generate "realistic-looking" fake fingerprint presentations. Recently, laser speckle contrast imaging (LSCI) has been introduced as a new sensing modality for FPAD. LSCI has the interesting characteristic of capturing the blood flow under the skin surface. To study the importance and effectiveness of LSCI for FPAD, we conduct a comprehensive study using different patch-based deep neural network architectures. Our studied architectures include 2D and 3D convolutional networks as well as a recurrent network using long short-term memory (LSTM) units. The study demonstrates that strong FPAD performance can be achieved using LSCI. We evaluate the different models over a new large dataset. The dataset consists of 3743 bona fide samples, collected from 335 unique subjects, and 218 presentation attack samples, including six different types of attacks. To examine the effect of changing the training and testing sets, we conduct a 3-fold cross-validation evaluation. To examine the effect of the presence of an unseen attack, we apply a leave-one-attack-out strategy. The FPAD classification results of the networks, which are separately optimized and tuned for the temporal and spatial patch sizes, indicate that the best performance is achieved by the LSTM.
Cited: 10
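The leave-one-attack-out protocol described in this abstract can be sketched in a few lines: for each attack type, train on every other attack type and test on the held-out one, so the tested attack is "unknown" at training time. The attack-type names below are hypothetical (the paper's six types are not listed in the abstract), and bona fide samples would be partitioned separately in practice.

```python
def leave_one_attack_out_splits(attack_samples, attack_types):
    """Return (held_out, train, test) triples, one per attack type.
    train: all attack samples whose type differs from held_out;
    test:  all samples of the held-out type (unseen during training)."""
    splits = []
    for held_out in attack_types:
        train = [s for s in attack_samples if s["type"] != held_out]
        test = [s for s in attack_samples if s["type"] == held_out]
        splits.append((held_out, train, test))
    return splits

# Toy example with three hypothetical attack types, four samples each
types = ["silicone", "gelatin", "print"]
samples = [{"id": i, "type": t} for i, t in enumerate(types * 4)]
splits = leave_one_attack_out_splits(samples, types)
```

Each split measures generalization to an attack the detector never saw, which is the "unknown presentation attack" scenario in the paper's title.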