
Latest publications — 2017 IEEE International Joint Conference on Biometrics (IJCB)

Learning-based local-patch resolution reconstruction of iris smart-phone images
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272771
F. Alonso-Fernandez, R. Farrugia, J. Bigün
Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smart-phone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is performed on local overlapped image patches, with up-scaling functions modelled separately for each patch, which better preserves local details. The experimental setup is complemented with a database of 560 images captured with two different smart-phones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolation at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using the individual comparators, which is further pushed down to 4–6% after fusion of the two systems.
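The coupled-dictionary idea can be illustrated with a heavily simplified sketch: below, the mapping from low- to high-resolution patch vectors is approximated by a single ridge-regression matrix on synthetic data. All sizes and data here are illustrative assumptions, not the paper's setup, which learns coupled dictionaries per patch position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training pairs of low-res (9-dim) and high-res (36-dim) patch vectors.
# The paper learns coupled dictionaries; here the LR->HR mapping is
# approximated by one ridge-regression matrix W.
n, lr_dim, hr_dim = 500, 9, 36
hr_patches = rng.normal(size=(n, hr_dim))
# Crude synthetic "down-sampling": subsample coordinates and add noise.
lr_patches = hr_patches[:, ::4] + 0.01 * rng.normal(size=(n, lr_dim))

lam = 1e-3
# W minimises ||LR @ W - HR||^2 + lam * ||W||^2 (closed form).
W = np.linalg.solve(lr_patches.T @ lr_patches + lam * np.eye(lr_dim),
                    lr_patches.T @ hr_patches)

test_lr = lr_patches[:5]
reconstructed = test_lr @ W    # up-scaled patch vectors
print(reconstructed.shape)     # (5, 36)
```

In the actual method, each overlapped patch position gets its own learned up-scaling function, and the reconstructed patches are blended back into the full image.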
Citations: 4
Iris and periocular recognition in arabian race horses using deep convolutional neural networks
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272736
Mateusz Trokielewicz, M. Szadkowski
This paper presents a study devoted to recognizing horses by means of their iris and periocular features using deep convolutional neural networks (DCNNs). Identification of race horses is crucial for animal identity confirmation prior to racing. As this is usually done shortly before a race, fast and reliable methods that are friendly and inflict no harm upon animals are important. Iris recognition has been shown to work with horse irides, provided that the algorithms deployed for the task are fine-tuned for horse irides and the input data is of very high quality. In our work, we examine the possibility of utilizing deep convolutional neural networks for a fusion of both iris and periocular region features. With such a methodology, ocular biometrics in horses could perform well without employing complicated algorithms that require a great deal of off-line tuning and prior knowledge of the input image, while at the same time being invariant to rotation, translation and, to some extent, image quality. We were able to achieve promising results, with EER = 9.5% using two network architectures with score-level fusion.
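The score-level fusion of the two network architectures can be sketched generically; the min-max normalisation and equal weighting below are common defaults for this kind of fusion, not necessarily the paper's exact choice.

```python
import numpy as np

def minmax(scores):
    """Min-max normalise one comparator's scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(scores_a, scores_b, w=0.5):
    """Weighted-sum score-level fusion of two comparators,
    each normalised to a common range first."""
    return w * minmax(scores_a) + (1 - w) * minmax(scores_b)

# Two networks scoring the same three comparisons on different scales:
print(fuse([10, 20, 30], [0.1, 0.5, 0.9]))
```

Normalising before the weighted sum matters: without it, the comparator with the larger numeric range dominates the fused score.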
Citations: 9
Fingerprint pose estimation based on faster R-CNN
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272707
J. Ouyang, Jianjiang Feng, Jiwen Lu, Zhenhua Guo, Jie Zhou
Fingerprint pose estimation is one of the bottlenecks of indexing in large-scale databases. Existing pose estimation methods are based on manually appointed features (e.g. special points, ridges, orientation field). In this paper, we propose a method based on deep learning to achieve accurate pose estimation. Faster R-CNN is adopted to detect the center point and rough direction, followed by intra-class and inter-class combination to calculate the precise direction. Extensive experiments on NIST-14 show that (1) the predicted poses are close to manual annotations even when the fingerprints are incomplete or noisy, (2) the estimated poses for matching fingerprint pairs are very consistent, and (3) by registering fingerprints using the estimated pose, the accuracy of a state-of-the-art fingerprint indexing system is further improved.
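Any combination of candidate directions has to respect the circularity of angles; a minimal circular-mean sketch (a generic technique, not the paper's exact intra-/inter-class combination rule) looks like:

```python
import math

def mean_direction(angles_deg):
    """Circular mean of candidate directions, in degrees [0, 360).
    Averaging unit vectors avoids the 350°/10° -> 180° pitfall of a
    naive arithmetic mean."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Two rough direction candidates straddling 0°:
d = mean_direction([350, 10])   # close to 0, not 180
```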
Citations: 8
Deep expectation for estimation of fingerprint orientation fields
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272697
Patrick Schuch, Simon-Daniel Schulz, C. Busch
Estimation of the orientation field is one of the key challenges during biometric feature extraction from a fingerprint sample. Many important processing steps rely on an accurate and reliable estimation. This is especially challenging for samples of low quality, for which, in turn, accurate preprocessing is essential. Regressional Convolutional Neural Networks have shown their superiority on bad-quality samples in the independent benchmark framework FVC-ongoing. This work proposes to incorporate Deep Expectation. Options for further improvement are evaluated in this challenging environment of low-quality images and a small amount of training data. These findings inform the new algorithm, DEX-OF. Incorporating Deep Expectation, improved regularization, and slight model changes, DEX-OF achieves an RMSE of 7.52° on the bad-quality dataset and 4.89° on the good-quality dataset of FVC-ongoing. These are the best error rates reported so far.
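The Deep Expectation idea — predicting a continuous orientation as the expectation of discretised bins under the network's softmax output, rather than an argmax class pick — can be sketched as follows. The bin layout and logits here are illustrative assumptions.

```python
import numpy as np

def deep_expectation(logits, bin_centers_deg):
    """DEX-style regression: the predicted angle is the expectation of the
    bin centers under the softmax distribution over the bins."""
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    return float(np.dot(p, bin_centers_deg))

bins = np.arange(0, 180, 10)            # assumed 18 orientation bins
logits = np.zeros(18)
logits[9] = 5.0                         # strong peak near the 90° bin
angle = deep_expectation(logits, bins)  # a little under 90°
```

The expectation pulls the estimate smoothly between neighbouring bins, which is why DEX-style heads tend to beat hard classification for angle regression. (Note that fingerprint orientations are circular; a production implementation would handle wrap-around, which this sketch does not.)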
Citations: 11
Single 2D pressure footprint based person identification
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272725
Xinnian Wang, Huiyu Wang, Qi-Chang Cheng, Namusisi Linda Nankabirwa, Zhang Tao
Footprints carry many important human characteristics, such as anatomical structures of the foot, skin texture of the foot sole, standing or walking habits, and so on. They play vital roles in forensic investigations as an alternative biometric. In this paper, we propose an automatic footprint-based person identification method using a single bare or socked footprint, which differs from the existing bare-footprint-based methods. An area rank filter is put forward to remove dust noise. A pressure-distribution prior over the hind footprint is proposed to estimate the footprint direction. Both a Geometrical Shape Spectrum Representation and a Pressure Radial Gradient Map are proposed to represent a footprint in terms of geometric shape, anatomical structure and one's standing or walking habits; both are rotation and translation invariant. We also put forward a regional-confidence-value-based method to compute the similarity between two footprints. Additionally, we have constructed an evaluation dataset composed of 480 subjects and 19200 bare or socked footprints. Experimental results show that the proposed algorithm outperforms state-of-the-art algorithms, and its recognition rate reaches 98.75%.
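The area rank filter can be sketched at the level of labelled connected components: rank regions by pixel area and discard the small "dust" ones. The dict-based interface below is a hypothetical simplification of what would normally come out of a connected-component labelling step.

```python
def area_rank_filter(regions, keep=1):
    """Keep only the `keep` largest regions by area rank; the rest are
    treated as dust noise. `regions` maps a region label to its pixel
    area (assumed interface for this sketch)."""
    ranked = sorted(regions, key=regions.get, reverse=True)
    return set(ranked[:keep])

# One real footprint region plus two tiny dust specks:
kept = area_rank_filter({"foot": 5000, "dust1": 3, "dust2": 7}, keep=1)
print(kept)  # {'foot'}
```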
Citations: 7
Boosting cross-age face verification via generative age normalization
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272698
G. Antipov, M. Baccouche, J. Dugelay
Despite the tremendous progress in face verification performance as a result of Deep Learning, sensitivity to human age variations remains an Achilles' heel of the majority of contemporary face verification software. A promising solution to this problem consists in synthetic aging/rejuvenation of the input face images to some predefined age categories prior to face verification. We recently proposed [3] the Age-cGAN aging/rejuvenation method, based on generative adversarial neural networks, which synthesizes more plausible and realistic faces than alternative non-generative methods. However, in this work, we show that Age-cGAN cannot be directly used for improving face verification due to its slightly imperfect preservation of the original identities in aged/rejuvenated faces. We therefore propose the Local Manifold Adaptation (LMA) approach, which resolves the stated issue of Age-cGAN, resulting in the novel Age-cGAN+LMA aging/rejuvenation method. Based on Age-cGAN+LMA, we design an age normalization algorithm which boosts the accuracy of an off-the-shelf face verification software in the cross-age evaluation scenario.
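The resulting age-normalization pipeline reduces to a thin wrapper around an off-the-shelf verifier: both images are synthetically moved to one target age category before scoring. `normalize` and `verify` below are assumed callables standing in for Age-cGAN+LMA and the verification software; the target category is illustrative.

```python
def age_normalized_verify(img_a, img_b, normalize, verify, target_age="30-40"):
    """Cross-age verification sketch: map both the probe and the reference
    image to the same predefined age category, then let the unmodified
    verifier compare the normalized pair."""
    return verify(normalize(img_a, target_age), normalize(img_b, target_age))
```

The point of the design is that the verifier itself stays untouched; only its inputs are normalized, so any off-the-shelf system can benefit.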
Citations: 21
Cross-pose landmark localization using multi-dropout framework
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272722
G. Hsu, Cheng-Hua Hsieh
We propose the Multiple Dropout Framework (MDF) for facial landmark localization across large poses. Unlike most landmark detectors, which only work for poses of less than 45 degrees in yaw, the proposed MDF works for poses as large as 90 degrees, i.e., full profile. In the proposed MDF, the Single Shot Multibox Detector (SSD) [10] is tailored for fast and precise face detection. Given an SSD-detected face, a Multiple Dropout Network (MDN) is proposed to classify the face into either a frontal or profile pose, and for each pose another MDN is configured for detecting pose-oriented landmarks. As the MDF framework contains one MDN (pose) classifier and two MDN (landmark) regressors, this study aims to determine the MDN structures and settings appropriate for handling classification and regression tasks. The MDN framework demonstrates the following advantages and observations. (1) Landmark detection across poses is better approached by incorporating a pose classifier with pose-oriented landmark regressors. (2) Multiple dropouts are required to stabilize the training of the regressor networks. (3) Additional hand-crafted features, such as the Local Binary Pattern (LBP), can improve the accuracy of landmark localization. (4) Face profiling is a powerful tool for producing a large cross-pose training set. A comparison study on benchmark databases shows that the MDN delivers a performance competitive with state-of-the-art approaches for face alignment across large poses.
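The two-stage routing of MDF — classify the pose first, then run the pose-oriented landmark regressor for that branch — can be sketched as a dispatcher; the classifier and regressors below are assumed callables, not the paper's networks.

```python
def localize_landmarks(face, pose_clf, regressors):
    """MDF-style routing sketch: a pose classifier picks the branch
    ('frontal' or 'profile'), and the matching pose-oriented regressor
    produces the landmarks for that branch only."""
    pose = pose_clf(face)
    return pose, regressors[pose](face)
```

Routing like this lets each regressor specialize on a narrower pose range instead of forcing one model to cover 0–90 degrees of yaw.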
Citations: 8
A decision-level fusion strategy for multimodal ocular biometric in visible spectrum based on posterior probability
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272772
Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein
In this work, we propose a posterior-probability-based decision-level fusion strategy for multimodal ocular biometrics in the visible spectrum employing the iris, sclera and peri-ocular traits. To the best of our knowledge, this is the first attempt to design a multimodal ocular biometric system using all three ocular traits. Employing all these traits in combination can help increase the reliability and universality of the system. For instance, in some scenarios the sclera and iris can be highly occluded, and in a completely-closed-eyes scenario the decision can rely on the peri-ocular trait. The proposed system is constituted of the three independent traits and their combinations. The classification output of the trait that produces the highest posterior probability is taken as the final decision. Appreciable reliability and universal applicability of the ocular traits are achieved in experiments conducted with the proposed scheme.
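The max-posterior decision rule described above is compact enough to state directly; the trait names match the paper, while the labels and posterior values below are illustrative.

```python
def fuse_by_posterior(posteriors):
    """Decision-level fusion: each trait reports (predicted_label, posterior);
    the trait with the highest posterior probability decides."""
    best_trait = max(posteriors, key=lambda t: posteriors[t][1])
    return posteriors[best_trait][0]

decision = fuse_by_posterior({
    "iris":       ("subject_7", 0.62),
    "sclera":     ("subject_3", 0.48),
    "periocular": ("subject_7", 0.91),   # most confident trait wins
})
print(decision)  # subject_7
```

Because the rule only needs each trait's own posterior, a heavily occluded trait (low posterior) is sidelined automatically rather than corrupting the fused decision.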
Citations: 3
Optimizing resources on smartphone gait recognition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272679
Pablo Fernández López, Jorge Sanchez-Casanova, Paloma Tirado-Martin, J. Liu-Jimenez
Inertial gait recognition is a biometric modality attracting increasing interest. Gait recognition on smartphones could become one of the most user-friendly recognition systems. Some state-of-the-art algorithms need to perform cross-comparisons of gait cycles to obtain a comparison result. In this contribution, two factors are studied in order to reduce the computational cost: the influence of using representative gait cycles, and the length of the gait signals. The results show that cross-comparisons can be performed with representative gait cycles, reducing computational cost without heavily penalizing accuracy, and that representative gait cycles selected from the end of the signal perform better than ones from the beginning.
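The cost saving is easy to see by sketching the cross-comparison: restricting both signals to k representative cycles taken from the end reduces the number of distance evaluations from |probe|·|gallery| to k·k. `dist` is an assumed cycle-distance callable (e.g. a DTW or Euclidean distance in a real system).

```python
import itertools

def cross_compare(probe_cycles, gallery_cycles, dist):
    """Exhaustive cycle-by-cycle comparison; the final score is the
    minimum pairwise distance."""
    return min(dist(p, g)
               for p, g in itertools.product(probe_cycles, gallery_cycles))

def representative(cycles, k=2):
    """Take k representative cycles from the END of the recording,
    which the study reports works better than taking them from the start."""
    return cycles[-k:]
```

For a probe with 20 cycles and a gallery entry with 20 cycles, full cross-comparison costs 400 `dist` calls; with k = 2 representatives per side it costs 4.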
Citations: 18
Gender and ethnicity classification of Iris images using deep class-encoder
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272755
Maneet Singh, Shruti Nagpal, Mayank Vatsa, Richa Singh, A. Noore, A. Majumdar
Soft biometric modalities have shown their utility in different applications including reducing the search space significantly. This leads to improved recognition performance, reduced computation time, and faster processing of test samples. Some common soft biometric modalities are ethnicity, gender, age, hair color, iris color, presence of facial hair or moles, and markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised auto-encoder based approach, Deep Class-Encoder, which uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.
Citations: 26
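The class-encoding idea in the abstract above — learn a latent representation that both reconstructs the input and maps to its class label — can be illustrated with a toy joint objective. This is a minimal sketch under assumed names (`W_enc`, `W_dec`, `W_cls` are hypothetical single-layer weights), not the paper's actual Deep Class-Encoder formulation.

```python
import numpy as np

def class_encoder_loss(x, y_onehot, W_enc, W_dec, W_cls, lam=1.0):
    """Illustrative supervised auto-encoder objective: reconstruction
    error plus a cross-entropy term that forces the latent code to
    predict the class label (e.g. gender or ethnicity)."""
    h = np.tanh(x @ W_enc)                      # latent representation
    x_hat = h @ W_dec                           # decoder reconstruction
    rec = np.mean((x - x_hat) ** 2)             # reconstruction term
    logits = h @ W_cls                          # map latent code to label
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)           # softmax over classes
    ce = -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))
    return rec + lam * ce                       # lam balances the two terms
```

Minimizing a joint objective of this shape is what makes the learned representation discriminative for the label while still describing the input, which is the intuition behind using class labels during encoding.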
Journal
2017 IEEE International Joint Conference on Biometrics (IJCB)