
Latest publications: 2015 International Conference on Biometrics (ICB)

Latent fingerprints segmentation based on Rearranged Fourier Subbands
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139063
Phumpat Ruangsakul, V. Areekul, Krisada Phromsuthirak, Arucha Rungchokanun
In this work, we present a latent fingerprint segmentation algorithm based on spatial-frequency domain analysis. The algorithm arranges the overlapped block-based Fourier coefficients into groups of frequency and orientation subbands, called Rearranged Fourier Subbands (RFS). The RFS reveals latent fingerprint spectra in only a limited number of subbands. The algorithm then boosts, sorts, and extracts the latent fingerprint spectra in the RFS subbands from complex background and noise. Several experiments evaluate the algorithm on ground-truth comparison, feature extraction, and latent matching on the NIST SD27 latent database. Our experimental results show that the proposed algorithm achieves better accuracy than previously published automatic segmentation algorithms.
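As a rough illustration of the subband-grouping idea, the following numpy sketch computes overlapped block FFTs and accumulates spectral energy into frequency/orientation subbands. The block size, step, subband counts, and the energy-accumulation rule are assumptions for illustration, not the paper's exact RFS construction.

```python
import numpy as np

def rearranged_fourier_subbands(img, block=32, step=16, n_freq=4, n_orient=6):
    """Group overlapped block-based Fourier coefficients into
    frequency/orientation subbands (all parameters are assumptions)."""
    h, w = img.shape
    # radial frequency and orientation of each FFT bin (shared by all blocks)
    f = np.fft.fftshift(np.fft.fftfreq(block))
    FX, FY = np.meshgrid(f, f)
    radius = np.hypot(FX, FY)                    # 0 .. ~0.707 cycles/pixel
    theta = np.mod(np.arctan2(FY, FX), np.pi)    # orientation folded to [0, pi)
    f_bin = np.minimum((radius / 0.5 * n_freq).astype(int), n_freq - 1)
    o_bin = np.minimum((theta / np.pi * n_orient).astype(int), n_orient - 1)

    # accumulate spectral magnitude per (frequency, orientation) subband
    subbands = np.zeros((n_freq, n_orient))
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(img[y:y + block, x:x + block])))
            np.add.at(subbands, (f_bin, o_bin), spec)
    return subbands
```

A ridge-like (sinusoidal) patch concentrates its energy in a few such subbands, which is the property the segmentation exploits.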
Citations: 8
Touchless multiview fingerprint quality assessment: rotational bad-positioning detection using Artificial Neural Networks
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139101
Caue Zaghetto, A. Zaghetto, F. Vidal, Luiz H. M. Aguiar
This paper presents a method based on an Artificial Neural Network that evaluates the rotational bad-positioning of fingers on touchless multiview fingerprinting devices. The objective is to determine whether the finger is rotated, since proper positioning of the finger is mandatory for high fingerprint matching rates. A test set of 9000 acquired images has been used to train, validate and test the proposed multilayer Artificial Neural Network classifier. To our knowledge, no definitive method has addressed the problem of fingerprint quality on touchless multiview scanners. The finger rotation detection proposed here is one of the steps that must be taken into account if a future automatic image quality assessment method is to be considered. Average results show that: (a) our classifier correctly identifies bad-positioning in approximately 94% of cases; and (b) if bad-positioning is detected, the rotation angle is correctly estimated in 90% of evaluations.
Citations: 4
Annotating Unconstrained Face Imagery: A scalable approach
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139094
Emma Taborsky, Kristen C. Allen, Austin Blanton, Anil K. Jain, Brendan Klare
As unconstrained face recognition datasets progress from containing faces that can be automatically detected by commodity face detectors to face imagery with full pose variations that must instead be manually localized, a significant amount of annotation effort is required for developing benchmark datasets. In this work we describe a systematic approach for annotating fully unconstrained face imagery using crowdsourced labor. For such data preparation, a cascade of crowdsourced tasks is performed, which begins with bounding box annotations on all faces contained in images and videos, followed by identification of the labelled person of interest in such imagery, and, finally, landmark annotation of key facial fiducial points. In order to allow such annotations to scale to large volumes of imagery, a software system architecture is provided which achieves a sustained rate of 30,000 annotations per hour (or 500 manual annotations per minute). While previous crowdsourcing guidance described in the literature generally involved multiple choice questions or text input, our tasks required annotators to provide geometric primitives (rectangles and points) in images. As such, algorithms are provided for combining multiple annotations of an image into a single result, and automatically measuring the quality of a given annotation. Finally, other guidance is provided for improving the accuracy and scalability of crowdsourced image annotation for face detection and recognition.
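The combination and quality-measurement steps can be illustrated with a minimal sketch: a per-coordinate median of crowdsourced boxes, and distance-to-consensus as a quality score. Both are plausible stand-ins, not necessarily the authors' exact algorithms.

```python
import numpy as np

def consensus_box(boxes):
    """Combine several crowdsourced bounding-box annotations of one face
    into a single box via the per-coordinate median (one simple instance
    of the combination step described above)."""
    boxes = np.asarray(boxes, dtype=float)   # rows: (x, y, w, h)
    return np.median(boxes, axis=0)

def annotation_quality(box, consensus, img_diag):
    """Assumed quality measure: distance of one annotation from the
    consensus, normalised by the image diagonal (smaller is better)."""
    return float(np.linalg.norm(np.asarray(box, dtype=float) - consensus) / img_diag)
```

The median is robust to a single careless annotator, which matters when each image receives only a handful of crowdsourced boxes.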
Citations: 9
One-handed Keystroke Biometric Identification Competition
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139076
John V. Monaco, G. Perez, C. Tappert, Patrick A. H. Bours, Soumik Mondal, S. Rajkumar, A. Morales, Julian Fierrez, J. Ortega-Garcia
This work presents the results of the One-handed Keystroke Biometric Identification Competition (OhKBIC), an official competition of the 8th IAPR International Conference on Biometrics (ICB). A unique keystroke biometric dataset was collected that includes freely-typed long-text samples from 64 subjects. Samples were collected to simulate normal typing behavior and the severe handicap of only being able to type with one hand. Competition participants designed classification models trained on the normally-typed samples in an attempt to classify an unlabeled dataset that consists of normally-typed and one-handed samples. Participants competed against each other to obtain the highest classification accuracies and submitted classification results through an online system similar to Kaggle. The classification results and top performing strategies are described.
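Timing features of the kind typically extracted from such freely-typed samples can be sketched as follows; the dwell/flight decomposition is a common choice in keystroke biometrics, not a description of any competitor's model.

```python
def keystroke_features(events):
    """Extract dwell and flight times from (key, press_t, release_t)
    tuples, ordered by press time. Dwell = hold duration of each key;
    flight = gap between releasing one key and pressing the next."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```

One-handed typing tends to stretch flight times in particular, which is why classifiers trained only on normal samples struggle on the handicapped subset.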
Citations: 22
Security analysis of Bloom filter-based iris biometric template protection
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139069
J. Bringer, Constance Morel, C. Rathgeb
This paper analyses the unlinkability and the irreversibility of the iris biometric template protection system based on Bloom filters introduced at ICB 2013. At BIOSIG 2014, Hermans et al. presented an attack on the unlinkability of these templates. In the worst case, their attack succeeds with probability of at least 96%. However, their attack assumes protected templates generated from the same iriscode. In this paper, we analyze unlinkability of protected templates generated from two different iriscodes coming from the same iris, and we moreover introduce an irreversibility analysis that exploits non-uniformity of the data. Our experiments thus practically demonstrate new vulnerabilities of the scheme.
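The analysed ICB 2013 transform reads each column of an iriscode block as an integer and sets the corresponding bit of that block's Bloom filter; a minimal sketch of that construction follows (block dimensions are assumptions). The many-to-one column mapping is exactly what the irreversibility analysis probes.

```python
import numpy as np

def bloom_filter_template(iriscode, block_w=8, block_h=8):
    """Bloom filter-based iris template (after Rathgeb et al., ICB 2013):
    each column of a block_h x block_w iriscode block is one codeword
    that sets a single bit in that block's 2**block_h-bit filter."""
    rows, cols = iriscode.shape
    weights = 1 << np.arange(block_h - 1, -1, -1)   # column bits -> integer
    templates = []
    for r in range(0, rows - block_h + 1, block_h):
        for c in range(0, cols - block_w + 1, block_w):
            block = iriscode[r:r + block_h, c:c + block_w]
            bf = np.zeros(2 ** block_h, dtype=np.uint8)
            for col in block.T:                     # one column = one codeword
                bf[int(col @ weights)] = 1
            templates.append(bf)
    return np.array(templates)
```

Because colliding columns set the same bit, each filter has at most block_w set bits and discards column order, which is the source of both the intended irreversibility and the non-uniformity the paper exploits.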
Citations: 48
Audio-visual twins database
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139115
Jing Li, Li Zhang, Dong Guo, Shaojie Zhuo, T. Sim
Identical twins pose an interesting challenge for recognition systems due to their similar appearance. Although various biometrics have been proposed for the problem, existing works are quite limited due to the difficulty of obtaining a twins database. To encourage the methods for twins recognition and make a fair comparison of them by using the same database, we collected an audio-visual twins database at the Sixth Mojiang International Twins Festival held on 1 May 2010, China. Our database contains 39 pairs of twins in total, including Chinese, American and Russian subjects. This database contains several face images, facial motion videos and audio records for each subject. In this paper, we describe the collection procedure, organization of the database, and usage method of the database. We also show our experiments on face verification, facial motion verification and speaker verification for twins to provide usage examples of the database.
Citations: 5
Grid structured morphological pattern spectrum for off-line signature verification
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139106
B H Shekar, R. Bharathi, J. Kittler, Y. Vizilter, Leonid Mestestskiy
In this paper, we present a grid structured morphological pattern spectrum based approach for off-line signature verification. The proposed approach has three major phases: preprocessing, feature extraction and verification. In the feature extraction phase, the signature image is partitioned into eight equally sized vertical grids, and a grid structured morphological pattern spectrum is obtained for each grid. Each grid's morphological spectrum is represented as a 10-bin histogram and normalised to overcome the problem of scaling. The eighty-dimensional feature vector is obtained by concatenating the eight normalised morphological-spectrum histograms. For verification purposes, we have considered two well-known classifiers, namely SVM and MLP, and conducted experiments on standard signature datasets, namely CEDAR, GPDS-160 and MUKOS, a regional-language (Kannada) dataset. A comparative study with well-known approaches is also provided to exhibit the performance of the proposed approach.
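The feature-extraction phase can be sketched as follows: a per-grid morphological pattern spectrum (mass removed by openings with growing structuring elements), binned into a normalised 10-bin histogram and concatenated over 8 vertical grids into an 80-dimensional vector. The growing-square structuring elements and zero-padded border handling are assumptions, not necessarily the paper's exact morphology.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _filt(a, size, fn):
    """Min/max filter with a size x size square SE (zero padding)."""
    p = size // 2
    ap = np.pad(a, p, constant_values=0)
    return fn(sliding_window_view(ap, (size, size)), axis=(2, 3))

def grid_pattern_spectrum(sig_img, n_grids=8, n_bins=10):
    """80-dim descriptor: 10-bin normalised pattern-spectrum histogram
    per vertical grid, concatenated over 8 grids."""
    h, w = sig_img.shape
    feats = []
    for g in range(n_grids):
        grid = (sig_img[:, g * w // n_grids:(g + 1) * w // n_grids] > 0).astype(np.uint8)
        prev_area = int(grid.sum())
        hist = np.zeros(n_bins)
        for k in range(1, n_bins + 1):
            size = 2 * k + 1
            opened = _filt(_filt(grid, size, np.min), size, np.max)  # opening
            area = int(opened.sum())
            hist[k - 1] = prev_area - area   # mass removed at scale k
            prev_area = area
        total = hist.sum()
        feats.append(hist / total if total > 0 else hist)
    return np.concatenate(feats)
```

Because openings with nested square SEs form a granulometry, each histogram bin is non-negative and the spectrum summarises stroke widths per grid.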
Citations: 17
3D face recognition with asymptotic cones based principal curvatures
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139111
Yinhang Tang, Xiang Sun, Di Huang, J. Morvan, Yunhong Wang, Liming Chen
The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated with any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel-subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.
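For the smooth case that the first two discrete curvatures reduce to, the classical principal curvatures of a depth map z(x, y) can be computed from the first and second fundamental forms of the Monge patch; the numpy sketch below shows that baseline (it is not the paper's asymptotic-cone construction).

```python
import numpy as np

def principal_curvatures(z, spacing=1.0):
    """Classical principal curvatures k1 >= k2 of a depth map z(x, y),
    via mean (H) and Gaussian (K) curvature of the Monge patch."""
    zy, zx = np.gradient(z, spacing)          # first derivatives
    zyy, zyx = np.gradient(zy, spacing)       # second derivatives
    zxy, zxx = np.gradient(zx, spacing)
    E, F, G = 1 + zx**2, zx * zy, 1 + zy**2   # first fundamental form
    n = np.sqrt(1 + zx**2 + zy**2)
    L, M, N = zxx / n, zxy / n, zyy / n       # second fundamental form
    denom = E * G - F**2
    H = (E * N + G * L - 2 * F * M) / (2 * denom)   # mean curvature
    K = (L * N - M**2) / denom                      # Gaussian curvature
    disc = np.sqrt(np.maximum(H**2 - K, 0))
    return H + disc, H - disc
```

On a sphere patch of radius R both principal curvatures have magnitude 1/R, which gives a quick sanity check of the implementation.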
Citations: 16
Single sensor-based multi-quality multi-modal biometric score database and its performance evaluation
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139068
Takuhiro Kimura, Yasushi Makihara, D. Muramatsu, Y. Yagi
We constructed a large-scale multi-quality multi-modal biometric score database to advance studies on quality-dependent score-level fusion. In particular, we focused on single sensor-based multi-modal biometrics because of their advantages of simple system construction, low cost, and wide availability in real situations such as CCTV footage-based criminal investigation, unlike conventional individual sensor-based multi-modal biometrics that require multiple sensors. As for the modalities of multiple biometrics, we extracted gait, head, and the height biometrics from a single walking image sequence, and considered spatial resolution (SR) and temporal resolution (TR) as quality measures that simultaneously affect the scores of individual modalities. We then computed biometric scores of 1912 subjects under a total of 130 combinations of the quality measures, i.e., 13 SRs and 10 TRs, and constructed a very large-scale biometric score database composed of 1,814,488 genuine scores and 3,467,486,568 imposter scores. We finally provide performance evaluation results both for quality-independent and quality-dependent score-level fusion approaches using two protocols that will be beneficial to the score-level fusion research community.
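Quality-dependent score-level fusion can be as simple as a quality-weighted sum of modality scores; the sketch below is a baseline stand-in for the protocols evaluated in the paper, with the proportional weighting rule as an assumption.

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Fuse per-modality match scores (e.g. gait, head, height) with
    weights proportional to each modality's quality measure, such as
    spatial or temporal resolution."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()                 # normalise quality weights to sum to 1
    return float(np.dot(w, scores))
```

A quality-independent baseline is recovered by passing equal qualities, which makes the contrast between the two protocol families easy to reproduce.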
Citations: 8
k-Nearest Neighborhood Structure (k-NNS) based alignment-free method for fingerprint template protection
Pub Date : 2015-05-19 DOI: 10.1109/ICB.2015.7139100
M. Sandhya, M. Prasad
In this paper we focus on constructing a k-Nearest Neighborhood Structure (k-NNS) for minutiae points in a fingerprint image. For each minutiae point in a fingerprint, a k-NNS is constructed from the local and global features of minutiae points. This structure is quantized and mapped onto a 2D array to generate a fixed-length 1D bit-string. This bit-string is then transformed with a DFT to generate a complex vector. Finally, the complex vector is multiplied by a user-specific random matrix to generate the cancelable template. We tested our proposed method on the FVC2002 database, and experimental results demonstrate the validity of the proposed method in terms of the requirements of cancelable biometrics, namely diversity, accuracy, irreversibility and revocability.
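The final protection steps (bit-string, then DFT, then user-specific random projection) can be sketched as below; the output dimension and the Gaussian random matrix are assumptions for illustration.

```python
import numpy as np

def cancelable_template(bit_string, user_key, out_dim=50):
    """Transform a minutiae-derived bit-string into a cancelable
    template: DFT, then projection by a random matrix seeded from a
    user-specific key. Revocation = issuing a new key."""
    spectrum = np.fft.fft(np.asarray(bit_string, dtype=float))  # complex vector
    rng = np.random.default_rng(user_key)                       # key-seeded matrix
    R = rng.standard_normal((out_dim, spectrum.size))
    return R @ spectrum
```

Two keys yield unlinkable templates from the same finger (diversity), while the dimension-reducing projection makes inverting back to the bit-string hard (irreversibility), matching the requirements listed in the abstract.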
{"title":"k-Nearest Neighborhood Structure (k-NNS) based alignment-free method for fingerprint template protection","authors":"M. Sandhya, M. Prasad","doi":"10.1109/ICB.2015.7139100","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139100","url":null,"abstract":"In this paper we focus on constructing k-Nearest Neighborhood Structure(k - NNS) for minutiae points in a fingerprint image. For each minutiae point in a fingerprint, a k - NNS is constructed taking the local and global features of minutiae points. This structure is quantized and mapped onto a 2D array to generate a fixed length 1D bit-string. Further this bit string is applied with a DFT to generate a complex vector. Finally the complex vector is multiplied by a user specific random matrix to generate the cancelable template. We tested our proposed method on database FV C2002 and experimental results depicts the validity of the proposed method in terms of requirements of cancelable biometrics namely diversity, accuracy, irreversibility and revocability.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123575256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 57