
Latest publications from the 2017 IEEE International Joint Conference on Biometrics (IJCB)

Score normalization in stratified biometric systems
Pub Date: 2017-10-01 DOI: 10.1109/BTAS.2017.8272712
S. Tulyakov, Nishant Sankaran, S. Setlur, V. Govindaraju
A stratified biometric system can be defined as a system in which the subjects, their templates, or their matching scores can be separated into two or more categories, or strata, and the matching decisions can be made separately for each stratum. In this paper we investigate the properties of the stratified biometric system and, in particular, possible strata creation strategies, score normalization and acceptance decisions, and the expected performance improvements due to stratification. We perform our experiments on face recognition matching scores from the IARPA Janus CS2 dataset.
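The per-stratum normalization and acceptance step lends itself to a short illustration. Below is a minimal sketch, assuming simple z-normalization within each stratum followed by a single global threshold; the statistics used and the threshold value are assumptions, not necessarily the normalization the authors evaluate.

    import numpy as np

    def fit_stratum_stats(scores, strata):
        """Estimate per-stratum mean and std of matching scores."""
        stats = {}
        for s in np.unique(strata):
            sel = scores[strata == s]
            stats[s] = (sel.mean(), sel.std() + 1e-8)  # avoid divide-by-zero
        return stats

    def znorm_per_stratum(scores, strata, stats):
        """Z-normalize each score with the statistics of its own stratum,
        so one global threshold becomes comparable across strata."""
        return np.array([(x - stats[s][0]) / stats[s][1]
                         for x, s in zip(scores, strata)])

    # Accept a match when its stratum-normalized score clears one threshold.
    scores = np.array([0.62, 0.55, 0.91])
    strata = np.array([0, 1, 1])
    stats = fit_stratum_stats(scores, strata)
    decisions = znorm_per_stratum(scores, strata, stats) > 1.5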
Citations: 0
Demography-based facial retouching detection using subclass supervised sparse autoencoder
Pub Date: 2017-09-22 DOI: 10.1109/BTAS.2017.8272732
Aparna Bharati, Mayank Vatsa, Richa Singh, K. Bowyer, Xin Tong
Digital retouching of face images is becoming more widespread due to the introduction of software packages that automate the task. Several researchers have introduced algorithms to detect whether a face image is original or retouched. However, previous work on this topic has not considered whether or how accuracy of retouching detection varies with the demography of face images. In this paper, we introduce a new Multi-Demographic Retouched Faces (MDRF) dataset, which contains images belonging to two genders, male and female, and three ethnicities, Indian, Chinese, and Caucasian. Further, retouched images are created using two different retouching software packages. The second major contribution of this research is a novel semi-supervised autoencoder incorporating “sub-class” information to improve classification. The proposed approach outperforms existing state-of-the-art detection algorithms for the task of generalized retouching detection. Experiments conducted with multiple combinations of ethnicities show that accuracy of retouching detection can vary greatly based on the demographics of the training and testing images.
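To make the subclass supervision concrete, here is a minimal PyTorch sketch of an autoencoder whose hidden code is trained with reconstruction, sparsity, and subclass classification terms. The single-layer architecture, the L1 sparsity surrogate, and the loss weights are assumptions rather than the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SubclassSparseAE(nn.Module):
        """Autoencoder with a subclass-supervised classification head (sketch)."""
        def __init__(self, in_dim, hid_dim, n_subclasses):
            super().__init__()
            self.enc = nn.Linear(in_dim, hid_dim)
            self.dec = nn.Linear(hid_dim, in_dim)
            self.cls = nn.Linear(hid_dim, n_subclasses)  # subclass supervision head

        def forward(self, x):
            h = torch.sigmoid(self.enc(x))   # hidden code
            return self.dec(h), self.cls(h), h

    def subclass_sparse_loss(x, x_hat, logits, h, subclass_labels,
                             w_sparse=1e-3, w_cls=1.0):
        recon = F.mse_loss(x_hat, x)                    # reconstruction term
        sparse = h.abs().mean()                         # L1 sparsity surrogate
        cls = F.cross_entropy(logits, subclass_labels)  # subclass term
        return recon + w_sparse * sparse + w_cls * cls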
Citations: 23
FingerNet: An unified deep network for fingerprint minutiae extraction
Pub Date: 2017-09-07 DOI: 10.1109/BTAS.2017.8272688
Yao Tang, Fei Gao, Jufu Feng, Yuhang Liu
Minutiae extraction is of critical importance in automated fingerprint recognition. Previous works on rolled/slap fingerprints failed on latent fingerprints due to noisy ridge patterns and complex background noise. In this paper, we propose a new way to design a deep convolutional network that combines domain knowledge with the representation ability of deep learning. Several typical traditional methods for orientation estimation, segmentation, enhancement, and minutiae extraction that perform well on rolled/slap fingerprints are recast in convolutional form and integrated into a unified plain network. We demonstrate that this pipeline is equivalent to a shallow network with fixed weights. The network is then expanded to enhance its representation ability, and the weights are released to learn complex background variance from data while preserving end-to-end differentiability. Experimental results on the NIST SD27 latent database and the FVC 2004 slap database demonstrate that the proposed algorithm outperforms the state-of-the-art minutiae extraction algorithms. Code is made publicly available at: https://github.com/felixTY/FingerNet.
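The core idea, recasting a hand-crafted step as a convolutional layer with fixed weights that can later be released for training, can be sketched as follows. A Gabor filter bank for ridge enhancement serves as the example; the kernel parameters and layer shape are assumptions, not taken from the paper.

    import math
    import torch
    import torch.nn as nn

    def gabor_kernel(ksize, theta, sigma=3.0, lambd=8.0):
        """Real part of a Gabor kernel, the classical ridge-enhancement filter."""
        half = ksize // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij")
        xr = xs * math.cos(theta) + ys * math.sin(theta)
        yr = -xs * math.sin(theta) + ys * math.cos(theta)
        return torch.exp(-(xr**2 + yr**2) / (2 * sigma**2)) \
               * torch.cos(2 * math.pi * xr / lambd)

    # A bank of oriented Gabor filters packed into one conv layer with fixed
    # weights reproduces the traditional enhancement step; releasing the
    # weights turns the same layer into a trainable one.
    n_orient, ksize = 8, 9
    thetas = [i * math.pi / n_orient for i in range(n_orient)]
    weights = torch.stack([gabor_kernel(ksize, t) for t in thetas]).unsqueeze(1)
    enhance = nn.Conv2d(1, n_orient, ksize, padding=ksize // 2, bias=False)
    enhance.weight.data.copy_(weights)
    enhance.weight.requires_grad = False   # fixed: the hand-crafted pipeline
    # enhance.weight.requires_grad = True  # released: learn variance from data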
Citations: 126
Facial 3D model registration under occlusions with sensiblepoints-based reinforced hypothesis refinement
Pub Date: 2017-09-02 DOI: 10.1109/BTAS.2017.8272734
Yuhang Wu, I. Kakadiaris
Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual cues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experiments demonstrate that the proposed approach is very promising in solving the facial model registration problem under occlusion.
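The reward can be read as a measure of agreement between the projected model and the network-generated confidence maps. A minimal sketch follows, assuming the reward is the mean confidence sampled at the projected point locations; the paper's actual reward combines several cues, so this single-map form is an assumption.

    import numpy as np

    def alignment_reward(points_2d, confidence_map):
        """Mean confidence sampled at the projected 2D locations of the model."""
        h, w = confidence_map.shape
        xs = np.clip(np.round(points_2d[:, 0]).astype(int), 0, w - 1)
        ys = np.clip(np.round(points_2d[:, 1]).astype(int), 0, h - 1)
        return confidence_map[ys, xs].mean()

    def best_hypothesis(hypotheses, confidence_map):
        """Pick the pose hypothesis whose projection scores highest."""
        return max(hypotheses,
                   key=lambda pts: alignment_reward(pts, confidence_map))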
Citations: 2
Subspace selection to suppress confounding source domain information in AAM transfer learning
Pub Date: 2017-08-28 DOI: 10.1109/BTAS.2017.8272730
Azin Asgarian, A. Ashraf, David J. Fleet, B. Taati
Active appearance models (AAMs) have seen tremendous success in face analysis. However, model learning depends on the availability of detailed annotation of canonical landmark points. As a result, when accurate AAM fitting is required on a different set of variations (expression, pose, identity), a new dataset is collected and annotated. To overcome the need for time-consuming data collection and annotation, transfer learning approaches have recently received attention. The goal is to transfer knowledge from previously available datasets (source) to a new dataset (target). We propose a subspace transfer learning method, in which we select a subspace from the source that best describes the target space. We propose a metric to compute the directional similarity between the source eigenvectors and the target subspace. We show an equivalence between this metric and the variance of target data when projected onto source eigenvectors. Using this equivalence, we select a subset of source principal directions that capture the variance in target data. To define our model, we augment the selected source subspace with the target subspace learned from a handful of target examples. In experiments on six public datasets, we show that our approach outperforms the state of the art in terms of the RMS fitting error as well as the percentage of test examples for which AAM fitting converges to the ground truth.
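The selection criterion, ranking source principal directions by the variance of projected target data, fits in a few lines. The top-k cutoff below is an assumption standing in for whatever criterion the authors use to size the subspace.

    import numpy as np

    def select_source_directions(source_eigvecs, target_data, k):
        """Rank source principal directions by the variance of target data
        projected onto them, and keep the top k.

        source_eigvecs: (d, m) array whose columns are source PCA eigenvectors
        target_data:    (n, d) array of centered target examples
        """
        proj = target_data @ source_eigvecs   # (n, m) projection coefficients
        var = proj.var(axis=0)                # variance along each direction
        order = np.argsort(var)[::-1]         # most target variance first
        return source_eigvecs[:, order[:k]]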
Citations: 5
The unconstrained ear recognition challenge
Pub Date: 2017-08-23 DOI: 10.1109/BTAS.2017.8272761
Ž. Emeršič, Dejan Štepec, V. Štruc, P. Peer, Anjith George, Adil Ahmad, E. Omar, T. Boult, Reza Safdari, Yuxiang Zhou, S. Zafeiriou, Dogucan Yaman, Fevziye Irem Eyiokur, H. K. Ekenel
In this paper we present the results of the Unconstrained Ear Recognition Challenge (UERC), a group benchmarking effort centered on the problem of person recognition from ear images captured in uncontrolled conditions. The goal of the challenge was to assess the performance of existing ear recognition techniques on a challenging large-scale dataset and to identify open problems that need to be addressed in the future. Five groups from three continents participated in the challenge and contributed six ear recognition techniques for the evaluation, while multiple baselines were made available for the challenge by the UERC organizers. A comprehensive analysis was conducted with all participating approaches, addressing essential research questions pertaining to the sensitivity of the technology to head rotation, flipping, gallery size, large-scale recognition, and others. The top performer of the UERC delivered robust performance on a smaller part of the dataset (180 subjects) regardless of image characteristics, but still exhibited a significant performance drop when the entire dataset of 3,704 subjects was used for testing.
Citations: 62
FaceBoxes: A CPU real-time face detector with high accuracy
Pub Date: 2017-08-17 DOI: 10.1109/BTAS.2017.8272675
Shifeng Zhang, Xiangyu Zhu, Zhen Lei, Hailin Shi, Xiaobo Wang, S. Li
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU while maintaining high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. In addition, we propose a new anchor densification strategy that gives different types of anchors the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and at 125 FPS on a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including AFW, PASCAL Face, and FDDB.
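The anchor densification strategy can be sketched as tiling a small anchor into a grid of shifted copies so that it matches the spatial density of larger anchors. The offsets and densification factors below are assumptions consistent with the description above.

    import numpy as np

    def densify_anchor(cx, cy, size, n):
        """Tile one anchor centered at (cx, cy) into an n x n grid of shifted
        copies, raising its spatial density by a factor of n * n."""
        step = size / n
        offsets = (np.arange(n) - (n - 1) / 2) * step
        return [(cx + dx, cy + dy, size, size)
                for dy in offsets for dx in offsets]

    # e.g. densify the 32-pixel anchor 4x and the 64-pixel anchor 2x at one
    # receptive-field center, leaving larger anchors untouched.
    anchors = densify_anchor(16, 16, 32, 4) + densify_anchor(16, 16, 64, 2)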
Citations: 230
Continuous user authentication via unlabeled phone movement patterns
Pub Date: 2017-08-15 DOI: 10.1109/BTAS.2017.8272696
R. Kumar, P. P. Kundu, Diksha Shukla, V. Phoha
In this paper, we propose a novel continuous authentication system for smartphone users. The proposed system relies entirely on unlabeled phone movement patterns collected through the smartphone accelerometer. The data were collected in a completely unconstrained environment over five to twelve days. The contexts of phone usage were identified using k-means clustering. Multiple profiles, one for each context, were created for every user. Five machine learning algorithms were employed to classify genuine users and impostors. The performance of the system was evaluated over a diverse population of 57 users. The mean equal error rates achieved by Logistic Regression, Neural Network, kNN, SVM, and Random Forest were 13.7%, 13.5%, 12.1%, 10.7%, and 5.6%, respectively. A series of statistical tests was conducted to compare the performance of the classifiers. The suitability of the proposed system for different types of users was also investigated using a failure-to-enroll policy.
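A minimal sketch of the pipeline, clustering samples into usage contexts with k-means and training one genuine-vs-impostor classifier per context, is shown below. Feature extraction and the EER computation are omitted, and the feature shapes, cluster count, and classifier settings are assumptions.

    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def train_context_models(features, labels, n_contexts=4):
        """features: (n, d) accelerometer feature vectors;
        labels: 1 for genuine, 0 for impostor. Assumes every discovered
        context contains samples of both classes."""
        kmeans = KMeans(n_clusters=n_contexts, n_init=10).fit(features)
        models = {}
        for c in range(n_contexts):
            sel = kmeans.labels_ == c
            clf = RandomForestClassifier(n_estimators=100)
            clf.fit(features[sel], labels[sel])  # per-context classifier
            models[c] = clf
        return kmeans, models

    def genuine_score(sample, kmeans, models):
        """Route a new sample to its context, then score it there."""
        c = int(kmeans.predict(sample.reshape(1, -1))[0])
        return models[c].predict_proba(sample.reshape(1, -1))[0, 1]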
Citations: 16
Generative adversarial network-based synthesis of visible faces from polarimetric thermal faces
Pub Date: 2017-08-08 DOI: 10.1109/BTAS.2017.8272687
He Zhang, Vishal M. Patel, B. Riggan, Shuowen Hu
The large domain discrepancy between faces captured in the polarimetric (or conventional) thermal domain and the visible domain makes cross-domain face recognition quite a challenging problem for both human examiners and computer vision algorithms. Previous approaches utilize a two-step procedure (visible feature estimation and visible image reconstruction) to synthesize the visible image from the corresponding polarimetric thermal image. However, these are treated as two disjoint steps, which may hinder the performance of visible face reconstruction. We argue that joint optimization is a better way to reconstruct more photo-realistic images for both computer vision algorithms and human examiners. To this end, this paper proposes a Generative Adversarial Network-based Visible Face Synthesis (GAN-VFS) method to synthesize more photo-realistic visible face images from their corresponding polarimetric images. To ensure that the encoded visible features contain more semantically meaningful information for reconstructing the visible face image, a guidance sub-network is incorporated into the training procedure. To achieve photo-realism while preserving discriminative characteristics in the reconstructed outputs, an identity loss combined with a perceptual loss is optimized in the framework. Multiple experiments under different experimental protocols demonstrate that the proposed method achieves state-of-the-art performance.
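The identity-plus-perceptual objective can be sketched as follows. The names perc_net and id_net are hypothetical stand-ins for a pretrained perceptual network and a face-identity network, and the loss weights and the omitted adversarial term are assumptions.

    import torch.nn.functional as F

    def synthesis_loss(fake_vis, real_vis, perc_net, id_net,
                       w_perc=1.0, w_id=0.5):
        """Pixel + perceptual + identity terms for the generator (sketch)."""
        pixel = F.l1_loss(fake_vis, real_vis)                      # pixel term
        perc = F.mse_loss(perc_net(fake_vis), perc_net(real_vis))  # perceptual
        ident = F.mse_loss(id_net(fake_vis), id_net(real_vis))    # identity
        return pixel + w_perc * perc + w_id * ident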
Citations: 53
Unconstrained Face Detection and Open-Set Face Recognition Challenge
Pub Date: 2017-08-08 DOI: 10.1109/BTAS.2017.8272759
Manuel Günther, Peiyun Hu, C. Herrmann, Chi-Ho Chan, Min Jiang, Shufan Yang, A. Dhamija, Deva Ramanan, J. Beyerer, J. Kittler, Mohamad Al Jazaery, Mohammad Iqbal Nouyed, G. Guo, Cezary Stankiewicz, T. Boult
Face detection and recognition benchmarks have shifted toward more difficult environments. The challenge presented in this paper addresses the next step in this direction: automatic detection and identification of people in footage from outdoor surveillance cameras. While face detection has shown remarkable success on images collected from the web, surveillance footage exhibits more diverse occlusions, poses, weather conditions, and image blur. Although face verification and closed-set face identification have surpassed human capabilities on some datasets, open-set identification is much more complex, as it needs to reject both unknown identities and false accepts from the face detector. We show that unconstrained face detection can approach high detection rates, albeit with moderate false accept rates. By contrast, open-set face recognition is currently weak and requires much more attention.
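The open-set decision rule, accept the best gallery match only if it clears a threshold and otherwise reject the probe as unknown, can be sketched in a few lines; cosine similarity is an assumption.

    import numpy as np

    def open_set_identify(probe_feat, gallery_feats, gallery_ids, threshold):
        """Return the best-matching gallery identity if its similarity clears
        the threshold; otherwise reject the probe as an unknown subject."""
        g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
        p = probe_feat / np.linalg.norm(probe_feat)
        sims = g @ p                       # cosine similarities to the gallery
        best = int(np.argmax(sims))
        return gallery_ids[best] if sims[best] >= threshold else "unknown"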
Citations: 40