
Latest publications: 2011 International Joint Conference on Biometrics (IJCB)

Contourlet appearance model for facial age estimation
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117601
Khoa Luu, Keshav Seshadri, M. Savvides, T. D. Bui, C. Suen
In this paper we propose a novel Contourlet Appearance Model (CAM) that is more accurate and faster at localizing facial landmarks than Active Appearance Models (AAMs). Our CAM not only extracts holistic texture information, as AAMs do, but also extracts local texture information using the Nonsubsampled Contourlet Transform (NSCT). We demonstrate the efficiency of our method by applying it to the problem of facial age estimation. Compared to previously published age estimation techniques, our approach yields more accurate results when tested on various face aging databases.
Citations: 81
NFRAD: Near-Infrared Face Recognition at a Distance
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117486
Hyun-ju Maeng, Hyun-Cheol Choi, U. Park, Seong-Whan Lee, Anil K. Jain
Face recognition at a distance is gaining wide attention as a way to augment surveillance systems with face recognition capability. However, face recognition at a distance at nighttime has not yet received adequate attention, despite the increased security threats at night. We introduce a new face image database, the Near-Infrared Face Recognition at a Distance Database (NFRAD-DB). Images in NFRAD-DB are collected from 50 different subjects at distances of up to 60 meters using a near-infrared camera, a telescope, and a near-infrared illuminator. We report face recognition performance using FaceVACS, DoG-SIFT, and DoG-MLBP representations. The recognition test used NIR images of these 50 subjects at 60 meters as the probe set, and visible-light images at 1 meter, augmented with mug shot images of 10,000 additional subjects, as the gallery. The proposed method achieved a rank-1 identification accuracy of 28 percent, compared to 18 percent for FaceVACS, a state-of-the-art face recognition system. These results are encouraging given how challenging this matching problem is due to the illumination pattern and insufficient brightness in NFRAD images.
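The rank-1 identification rate quoted above is simply the fraction of probes whose single best-scoring gallery entry carries the correct identity. A minimal sketch of that computation (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def rank1_accuracy(score_matrix, probe_ids, gallery_ids):
    """Fraction of probes whose top-scoring gallery entry has the same
    identity. score_matrix[i, j] is the similarity between probe i and
    gallery entry j (higher = more similar)."""
    best = np.argmax(score_matrix, axis=1)  # index of top match per probe
    hits = sum(probe_ids[i] == gallery_ids[j] for i, j in enumerate(best))
    return hits / len(probe_ids)

# Toy example: 3 probes against a 4-entry gallery.
scores = np.array([[0.9, 0.1, 0.2, 0.3],
                   [0.2, 0.8, 0.1, 0.4],
                   [0.3, 0.2, 0.1, 0.7]])
print(rank1_accuracy(scores, ["a", "b", "c"], ["a", "b", "x", "y"]))  # 2 of 3 probes correct
```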
Citations: 42
Person-specific face representation for recognition
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117478
G. Chiachia, A. Falcão, A. Rocha
Most face recognition methods rely on a common feature space to represent faces, one that emphasizes the face aspects that best distinguish among all persons. This strategy may be inadequate for representing the most relevant aspects of a specific person's face, since some aspects are good at distinguishing only a given person from the others. Based on this idea, and supported by findings in the human perception of faces, we propose a face recognition framework that associates a feature space with each person we intend to recognize. These feature spaces are conceived to emphasize the discriminating face aspects of the persons they represent. To recognize a probe, we match it to the gallery in all the feature spaces and fuse the results to establish the identity. With the help of an algorithm we devised, Discriminant Patch Selection, we carried out experiments to intuitively compare the traditional approaches with the person-specific representation. In these experiments, the person-specific face representation consistently yielded better identification of the faces.
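The core matching loop described above — score the probe in every enrolled person's own feature space, then fuse by taking the best-scoring identity — can be sketched as follows. The projection-plus-template form and all names here are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def identify(probe, person_models):
    """Score a probe in each enrolled person's own feature space and fuse
    by taking the best-scoring identity.

    person_models maps identity -> (W, t): the probe is projected by W
    (that person's feature space) and compared to the stored template t
    by negative Euclidean distance (higher score = closer match)."""
    scores = {}
    for pid, (W, t) in person_models.items():
        z = W @ probe                      # project into pid's feature space
        scores[pid] = -np.linalg.norm(z - t)
    return max(scores, key=scores.get), scores

# Toy gallery: two identities with 2D -> 1D person-specific projections.
models = {
    "alice": (np.array([[1.0, 0.0]]), np.array([2.0])),
    "bob":   (np.array([[0.0, 1.0]]), np.array([2.0])),
}
probe = np.array([2.0, 0.1])        # closely matches alice's template
print(identify(probe, models)[0])   # alice
```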
Citations: 3
Towards incremental and large scale face recognition
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117583
Junjie Yan, Zhen Lei, Dong Yi, S. Li
Linear discriminant analysis with a nearest neighbor classifier (LDA + NN) has been commonly used in face recognition, but it often confronts two problems in real applications: (1) it cannot incrementally incorporate the information of new training instances; (2) it cannot achieve fast search against a large-scale gallery set. In this paper, we use incremental LDA (ILDA) and a hashing-based search method to deal with these two problems. First, two incremental LDA algorithms are proposed under the spectral regression framework, namely exact incremental spectral regression discriminant analysis (EI-SRDA) and approximate incremental spectral regression discriminant analysis (AI-SRDA). Second, we propose a similarity hashing algorithm of sub-linear complexity to achieve quick recognition against a large gallery set. Experiments on FRGC and a self-collected database of 100,000 faces show the effectiveness of our methods.
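The similarity-hashing idea — trading exact comparison for cheap code comparison so a large gallery can be pre-filtered — can be illustrated with a generic random-hyperplane (SimHash-style) sketch; this is a stand-in, not the paper's own hashing scheme:

```python
import numpy as np

def make_hasher(dim, n_bits=16, seed=0):
    """Random-hyperplane hasher: vectors that are close in cosine
    similarity tend to receive binary codes with small Hamming distance,
    so candidates can be short-listed by comparing codes alone."""
    planes = np.random.default_rng(seed).standard_normal((n_bits, dim))
    return lambda v: (planes @ v > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
gallery = rng.standard_normal((100, 64))   # toy gallery of face feature vectors
hasher = make_hasher(dim=64)
codes = [hasher(g) for g in gallery]

probe = gallery[42]                        # re-query an enrolled vector
dists = [hamming(hasher(probe), c) for c in codes]
print(dists[42])                           # 0: its own gallery entry has an identical code
```

In a real index the codes would be bucketed in a hash table so only entries sharing a bucket are compared, which is where the sub-linear behavior comes from.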
Citations: 14
Palmprint indexing based on ridge features
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117505
X. Yang, Jianjiang Feng, Jie Zhou
In recent years, law enforcement agencies have increasingly used palmprints to identify criminals. For law enforcement palmprint identification systems, efficiency is a very important but challenging problem because of large database sizes and poor image quality. Existing palmprint identification systems are not sufficiently fast for practical applications. To solve this problem, a novel palmprint indexing algorithm based on ridge features is proposed in this paper. A palmprint is pre-aligned by registering its orientation field with respect to a set of reference orientation fields, obtained by clustering training palmprint orientation fields. Indexing is based on comparing ridge orientation fields and ridge density maps, which is much faster than minutiae matching. The proposed algorithm achieved an error rate of 1% at a penetration rate of 2.25% on a palmprint database of 13,416 palmprints. Searching a query palmprint over the whole database takes only 0.22 seconds.
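Comparing ridge orientation fields is cheap because an orientation field is just a small grid of angles. One subtlety is that ridge orientations are undirected (mod 180°), so differences must be wrapped. A minimal sketch of such a comparison, with illustrative values (the paper's exact distance measure is not specified here):

```python
import numpy as np

def orientation_distance(of1, of2):
    """Mean angular difference between two ridge orientation fields
    given as grids of angles in degrees. Ridge orientations are
    undirected (5 deg equals 185 deg), so angles live on [0, 180)
    and each difference is wrapped into [0, 90]."""
    d = np.abs(of1 - of2) % 180.0
    d = np.minimum(d, 180.0 - d)    # wrap: 179 deg vs 1 deg differ by 2 deg
    return float(d.mean())

# Two toy 2x2 orientation fields (degrees).
a = np.array([[10.0, 170.0], [45.0, 90.0]])
b = np.array([[12.0,   2.0], [40.0, 95.0]])
print(orientation_distance(a, b))   # (2 + 12 + 5 + 5) / 4 = 6.0
```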
Citations: 13
Face verification using large feature sets and one shot similarity
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117498
Huimin Guo, W. R. Schwartz, L. Davis
We present a method for face verification that combines Partial Least Squares (PLS) and the One-Shot similarity model [28]. First, a large feature set combining shape, texture, and color information is used to describe a face. PLS is then applied to reduce the dimensionality of the feature set with multi-channel feature weighting, providing a discriminative facial descriptor. PLS regression is used to compute the similarity score of an image pair by One-Shot learning. Given two feature vectors representing face images, the One-Shot algorithm learns discriminative models exclusively for the vectors being compared. A small set of unlabeled images, containing no images of the people being compared, is used as a reference (negative) set. The approach is evaluated on the Labeled Faces in the Wild (LFW) benchmark and shows results very comparable to state-of-the-art methods (achieving 86.12% classification accuracy) while maintaining simplicity and good generalization ability.
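The One-Shot similarity of a pair can be sketched as: train a model to separate one vector (positive) from the negative reference set, score the other vector with it, repeat with the roles swapped, and average. In this sketch plain ridge regression stands in for the PLS regression used in the paper, and the toy vectors are invented:

```python
import numpy as np

def one_shot_similarity(x, y, negatives, lam=1e-3):
    """One-Shot similarity of (x, y) against a reference negative set:
    fit a linear model separating x (+1) from the negatives (-1), score y,
    do the same the other way round, and average the two scores."""
    def score(pos, probe):
        X = np.vstack([pos, negatives])
        t = np.concatenate([[1.0], -np.ones(len(negatives))])
        # Ridge-regularized least squares: w = (X^T X + lam I)^-1 X^T t
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)
        return float(probe @ w)
    return 0.5 * (score(x, y) + score(y, x))

# Symmetric 2-D negative set; "same" is close to x, "diff" is orthogonal.
negatives = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
x = np.array([2.0, 0.0])
same = np.array([2.0, 0.1])
diff = np.array([0.0, 2.0])
print(one_shot_similarity(x, same, negatives) > one_shot_similarity(x, diff, negatives))  # True
```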
Citations: 19
Fusion of multiple clues for photo-attack detection in face recognition systems
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117522
Roberto Tronci, Daniele Muntoni, Gianluca Fadda, Maurizio Pili, Nicola Sirena, G. Murgia, Marco Ristori, Sardegna Ricerche, F. Roli
We address the problem of detecting 2-D face spoofing attacks performed by placing a printed photo of a real user in front of the camera. For this type of attack it is not possible to rely solely on face movements as a clue of vitality, both because the attacker can easily simulate such movements and because real users often show a “low vitality” during the authentication session. In this paper, we perform both video and static analysis in order to employ complementary information about motion, texture, and liveness, and consequently to obtain a more robust classification.
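The fusion step can be illustrated with a generic weighted sum rule over per-clue scores; the paper does not specify this particular rule, and all scores and weights below are hypothetical:

```python
def fuse_scores(scores, weights):
    """Weighted sum-rule fusion: each clue (motion, texture, liveness)
    yields a score in [0, 1], higher meaning "more likely a live face";
    the fused score is their weighted average."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical clue scores for one authentication attempt.
clue_scores = [0.2, 0.3, 0.1]    # motion, texture, liveness: all photo-like
clue_weights = [1.0, 2.0, 1.0]   # texture trusted most in this sketch
fused = fuse_scores(clue_scores, clue_weights)
print(fused < 0.5)   # True: attempt rejected as a likely photo attack
```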
Citations: 82
On the evidential value of fingerprints
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117550
Hee-seung Choi, Abhishek Nagar, Anil K. Jain
Fingerprint evidence is routinely used by forensics and law enforcement agencies worldwide to apprehend and convict criminals, a practice in use for over 100 years. The use of fingerprints has been accepted as an infallible proof of identity based on two premises: (i) permanence or persistence, and (ii) uniqueness or individuality. However, in the absence of any theoretical results that establish the uniqueness or individuality of fingerprints, the use of fingerprints in various court proceedings is being questioned. This has raised awareness in the forensics community of the need to quantify the evidential value of fingerprint matching. The few studies of this problem estimate the evidential value in one of two ways: (i) feature modeling, where a statistical (generative) model of fingerprint features, primarily minutiae, is developed and then used to estimate the matching error, and (ii) match score modeling, where a set of match scores obtained over a database is used to estimate the matching error rates. Our focus here is on match score modeling, and we develop metrics to evaluate the effectiveness and reliability of the proposed evidential measure. Compared to previous approaches, the proposed measure allows explicit utilization of prior odds. Further, we also incorporate fingerprint image quality to improve the reliability of the estimated evidential value.
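Match score modeling is commonly cast as a likelihood ratio: how much more probable the observed score is under the same-source (genuine) model than under the different-source (impostor) model. A generic sketch under Gaussian score models — the paper's specific measure and parameters are not reproduced here:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, gen_params, imp_params):
    """Evidential value of a match score: ratio of its density under the
    genuine-score model to its density under the impostor-score model.
    Values > 1 support the same-source hypothesis; with prior odds, the
    posterior odds are simply prior_odds * likelihood_ratio."""
    return gaussian_pdf(score, *gen_params) / gaussian_pdf(score, *imp_params)

# Hypothetical score models: genuine ~ N(0.8, 0.1), impostor ~ N(0.3, 0.1).
lr = likelihood_ratio(0.75, (0.8, 0.1), (0.3, 0.1))
print(lr > 1)   # True: a score of 0.75 supports the same-source hypothesis
```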
Citations: 27
Palm vein recognition with Local Binary Patterns and Local Derivative Patterns
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117804
Leila Mirmohamadsadeghi, A. Drygajlo
Palm vein feature extraction from near-infrared images is a challenging problem in hand pattern recognition. In this paper, a promising new approach based on local texture patterns is proposed. First, operators and histograms of multi-scale Local Binary Patterns (LBPs) are investigated in order to identify new efficient descriptors for palm vein patterns. Novel higher-order local pattern descriptors based on Local Derivative Pattern (LDP) histograms are then investigated for palm vein description. Both feature extraction methods are compared and evaluated in the framework of verification and identification tasks. Extensive experiments on the CASIA Multi-Spectral Palmprint Image Database V1.0 (CASIA database) identify the LBP and LDP descriptors that are best adapted to palm vein texture. Tests on the CASIA datasets also show that the best-adapted LDP descriptors consistently outperform their LBP counterparts in both palm vein verification and identification.
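The basic LBP operator underlying these descriptors thresholds each pixel's 8 neighbors against the pixel itself to produce an 8-bit code; the texture descriptor is then a histogram of codes. A minimal sketch of the 3x3 operator (bit ordering is a convention chosen here, not mandated by the paper):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    thresholding its 8 neighbours against it, clockwise from the top-left.
    A neighbour >= centre contributes a 1 bit."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

img = np.array([[5, 4, 3],
                [6, 4, 2],
                [7, 8, 1]], dtype=np.uint8)
codes = lbp_image(img)          # single interior pixel -> one code
hist = np.bincount(codes.ravel(), minlength=256)  # the LBP descriptor
print(codes)                    # [[227]]
```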
Citations: 97
Generic 3D face pose estimation using facial shapes
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117472
J. Heo, M. Savvides
Generic 3D face pose estimation from a single 2D facial image is a crucial requirement for face-related research areas. To address the remaining challenges in face pose estimation suggested by Murphy-Chutorian et al. [13], we believe the first step is to create a large corpus of 3D facial shapes in which the statistical relationship between projected 2D shapes and the corresponding pose parameters can be easily observed. Because facial geometry provides the most essential information about facial pose, understanding the effect of pose parameters on 2D facial shapes is a key step toward solving the remaining challenges. In this paper, we present the tasks necessary to reconstruct 3D facial shapes from multiple 2D images and then explain how to generate projected 2D shapes at any rotation interval. To deal with self-occlusions, a novel hidden points removal (HPR) algorithm is also proposed. By flexibly changing the number of points in the 2D shapes, we evaluate the performance of two different approaches to generic 3D pose estimation at both coarse and fine levels, and analyze the importance of facial shapes for generic 3D pose estimation.
Citations: 6