
Latest publications from the 2011 International Joint Conference on Biometrics (IJCB)

Face and eye detection on hard datasets
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117593
Jonathan Parris, Michael J. Wilber, B. Heflin, H. Rara, Ahmed El-Barkouky, A. Farag, J. Movellan, anonymous, M. C. Santana, J. Lorenzo-Navarro, Mohammad Nayeem Teli, S. Marcel, Cosmin Atanasoaei, T. Boult
Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low-light and long-distance images which possess some of the problems encountered by face and eye detectors solving real-world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola-Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.
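The abstract parameterizes detector performance by per-image brightness and contrast. A minimal sketch of one common way to compute those two quantities, mean intensity for brightness and RMS (standard-deviation) contrast; the paper does not publish its exact formulas, and the function name is ours:

```python
def brightness_and_contrast(pixels):
    """Per-image brightness (mean intensity) and RMS contrast
    (standard deviation of intensity) for a grayscale image.

    `pixels` is a flat sequence of intensity values, e.g. in [0, 255].
    Hypothetical helper: one conventional definition, not necessarily
    the one used in the paper.
    """
    n = len(pixels)
    mean = sum(pixels) / n                        # brightness
    var = sum((p - mean) ** 2 for p in pixels) / n
    contrast = var ** 0.5                         # RMS contrast
    return mean, contrast
```

A uniformly dark image yields low brightness and zero contrast, while a checkerboard of pure black and white maximizes contrast.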
Citations: 33
Long range iris acquisition system for stationary and mobile subjects
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117484
Shreyas Venugopalan, U. Prasad, Khalid Harun, Kyle Neblett, D. Toomey, Joseph Heyman, M. Savvides
Most iris-based biometric systems require a lot of cooperation from the users so that iris images of acceptable quality may be acquired. Features from these may then be used for recognition purposes. Relatively few works in the literature address less cooperative iris acquisition systems that reduce the constraints on users. In this paper, we describe our ongoing work in designing and developing such a system. It is capable of capturing images of the iris at distances of up to 8 meters with a resolution of 200 pixels across the iris diameter. If the resolution requirement is decreased to 150 pixels, the same system may be used to capture images from up to 12 meters. We have incorporated velocity estimation and focus tracking modules so that images may be acquired from subjects on the move as well. We describe the various components that make up the system, including the lenses used, the imaging sensor, our auto-focus function, and the velocity estimation module. All the hardware components are Commercial Off The Shelf (COTS), with little or no modification. We also present preliminary iris acquisition results using our system for both stationary and mobile subjects.
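The resolution-versus-distance trade-off the abstract describes (200 px at 8 m, 150 px at 12 m) follows from basic imaging geometry. A pinhole-model sketch, with every parameter name and the 12 mm nominal iris diameter our own assumptions rather than the paper's published optics:

```python
def pixels_across_iris(focal_length_mm, distance_m, pixel_pitch_um,
                       iris_diameter_mm=12.0):
    """Approximate pixel count across the iris diameter under a
    pinhole camera model: image size = f * object size / distance,
    then divided by the sensor pixel pitch.

    Illustrative only; the paper does not specify its optics at this
    level of detail.
    """
    image_size_mm = focal_length_mm * iris_diameter_mm / (distance_m * 1000.0)
    return image_size_mm / (pixel_pitch_um / 1000.0)
```

Under these assumptions, hitting 200 pixels at 8 m with a 5 um pixel pitch would require roughly a 667 mm focal length, and the same optics would resolve about twice as many pixels at half the distance.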
Citations: 32
3D face sketch modeling and assessment for component based face recognition
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117501
Shaun J. Canavan, Xing Zhang, L. Yin, Yong Zhang
3D facial representations have been widely used for face recognition. There has been intensive research on geometric matching and similarity measurement on 3D range data and 3D geometric meshes of individual faces. However, little investigation has been done on geometric measurement for 3D sketch models. In this paper, we study 3D face recognition from 3D face sketches derived from both hand-drawn and machine-generated sketches. First, we developed a 3D sketch modeling approach to create 3D facial sketch models from 2D facial sketch images. Second, we compared the 3D sketches to the existing 3D scans. Third, 3D face similarity is measured between 3D sketches and 3D scans, and between pairs of 3D sketches, based on spatial Hidden Markov Model (HMM) classification. Experiments are conducted on both the BU-4DFE database and the YSU face sketch database, resulting in an average recognition rate of around 92%.
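HMM classification of the kind mentioned above scores a candidate identity by the likelihood of an observation sequence under that identity's model. A generic forward-algorithm sketch for a discrete HMM; the paper's spatial HMM over 3D face components is more elaborate than this:

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm.

    obs   : sequence of observation symbol indices
    start : start[s], prior probability of state s
    trans : trans[p][s], transition probability p -> s
    emit  : emit[s][o], probability of emitting symbol o in state s
    """
    n = len(start)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return math.log(sum(alpha))  # no scaling: fine for short sequences
```

In a classification setting, one such model is trained per class and the class with the highest forward likelihood wins.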
Citations: 2
Robust head pose estimation via semi-supervised manifold learning with ℓ1-graph regularization
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117529
Hao Ji, Fei Su, Yujia Zhu
In this paper, a new ℓ1-graph regularized semi-supervised manifold learning (LRSML) method is proposed for the robust human head pose estimation problem. The manifold is constructed under the Biased Manifold Embedding (BME) framework, which computes a biased neighborhood of each point in the feature space with ℓ1-graph regularization. The construction of the ℓ1-graph is unsupervised, harnessing no data label information, and uncovers the underlying ℓ1-norm-driven sparse reconstruction relationship of each sample. LRSML is more robust to noise and has the potential to convey more discriminative information than conventional manifold learning methods. Furthermore, utilizing both labeled and unlabeled information improves the pose estimation accuracy and generalization capability. Numerous experiments show the superiority of our method over several state-of-the-art methods on a publicly available dataset.
Citations: 4
Fusing with context: A Bayesian approach to combining descriptive attributes
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117490
W. Scheirer, Neeraj Kumar, K. Ricanek, P. Belhumeur, T. Boult
For identity related problems, descriptive attributes can take the form of any information that helps represent an individual, including age data, describable visual attributes, and contextual data. With a rich set of descriptive attributes, it is possible to enhance the base matching accuracy of a traditional face identification system through intelligent score weighting. If we can factor any attribute differences between people into our match score calculation, we can deemphasize incorrect results, and ideally lift the correct matching record to a higher rank position. Naturally, the presence of all descriptive attributes during a match instance cannot be expected, especially when considering non-biometric context. Thus, in this paper, we examine the application of Bayesian Attribute Networks to combine descriptive attributes and produce accurate weighting factors to apply to match scores from face recognition systems based on incomplete observations made at match time. We also examine the pragmatic concerns of attribute network creation, and introduce a Noisy-OR formulation for streamlined truth value assignment and more accurate weighting. Experimental results show that incorporating descriptive attributes into the matching process significantly enhances face identification over the baseline by up to 32.8%.
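The Noisy-OR formulation mentioned in the abstract has a standard closed form: the child variable fires unless every active parent's influence independently fails. A generic sketch of that combination rule, not a reproduction of the paper's attribute network:

```python
def noisy_or(activation_probs):
    """Noisy-OR combination of independent parent influences.

    Each entry of `activation_probs` is the probability that one parent,
    on its own, activates the child. The child stays off only if every
    parent's influence fails, so
        P(child) = 1 - prod(1 - p_i).
    Standard textbook form; the paper's full network is not shown here.
    """
    p_all_fail = 1.0
    for p in activation_probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail
```

A practical appeal of Noisy-OR is that it needs only one parameter per parent instead of a full conditional probability table over all parent combinations.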
Citations: 53
Fusion of directional transitional features for off-line signature verification
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117515
Konstantinos Tselios, E. Zois, A. Nassiopoulos, G. Economou
In this work, a feature extraction method for off-line signature recognition and verification is proposed, described, and validated. This approach is based on the exploitation of the relative pixel distribution over predetermined two- and three-step paths along the signature trace. The proposed procedure can be regarded as a model for estimating the transitional probabilities of the signature stroke, arcs, and angles. Partitioning the signature image with respect to its center of gravity is applied in the two-step part of the feature extraction algorithm, while an enhanced three-step algorithm utilizes the entire signature image. Fusion at the feature level generates a multidimensional vector which encodes the spatial details of each writer. The classifier model combines a first-stage similarity score with a continuous SVM output. Results based on estimation of the EER on domestic signature datasets and well-known international corpora demonstrate the high efficiency of the proposed methodology.
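The idea of transitional features over two-step paths can be sketched as counting, for every ink pixel, how often ink persists along each pair of directional steps, then normalizing to relative frequencies. This is our illustrative reading of the approach, not the paper's exact feature definition:

```python
def two_step_transition_histogram(ink, directions):
    """Relative frequency of ink presence along two-step directional
    paths starting from every ink pixel.

    ink        : set of (row, col) foreground pixels of a binarized
                 signature image
    directions : list of (dr, dc) unit steps; both steps of a path are
                 drawn from this set (an assumption of this sketch)
    Returns a dict mapping (step1, step2) -> estimated transition
    probability.
    """
    counts = {}
    total = 0
    for (r, c) in ink:
        for d1 in directions:
            for d2 in directions:
                p1 = (r + d1[0], c + d1[1])
                p2 = (p1[0] + d2[0], p1[1] + d2[1])
                if p1 in ink and p2 in ink:
                    counts[(d1, d2)] = counts.get((d1, d2), 0) + 1
                    total += 1
    return {k: v / total for k, v in counts.items()} if total else {}
```

On a purely horizontal three-pixel stroke, all of the probability mass lands on the (right, right) path, as one would expect.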
Citations: 8
Cross-spectral face recognition in heterogeneous environments: A case study on matching visible to short-wave infrared imagery
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117586
N. Kalka, T. Bourlai, B. Cukic, L. Hornak
In this paper we study the problem of cross spectral face recognition in heterogeneous environments. Specifically we investigate the advantages and limitations of matching short wave infrared (SWIR) face images to visible images under controlled or uncontrolled conditions. The contributions of this work are three-fold. First, three different databases are considered, which represent three different data collection conditions, i.e., images acquired in fully controlled (indoors), semi-controlled (indoors at standoff distances ≥ 50m), and uncontrolled (outdoor operational conditions) environments. Second, we demonstrate the possibility of SWIR cross-spectral matching under controlled and challenging scenarios. Third, we illustrate how photometric normalization and our proposed cross-photometric score level fusion rule can be utilized to improve cross-spectral matching performance across all scenarios. We utilized both commercial and academic (texture-based) face matchers and performed a set of experiments indicating that SWIR images can be matched to visible images with encouraging results. Our experiments also indicate that the level of improvement in recognition performance is scenario dependent.
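Photometric normalization and score-level fusion, the two ingredients named above, can each be sketched in a few lines. Zero-mean/unit-variance normalization is only one common photometric choice, and the simple min-max sum rule below is a generic stand-in, not the paper's cross-photometric fusion rule:

```python
def zscore_normalize(pixels):
    """Zero-mean, unit-variance photometric normalization of a flat
    list of pixel intensities (one common choice among several)."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0
    return [(p - mean) / std for p in pixels]

def fuse_scores(scores):
    """Generic score-level fusion: min-max normalize the matcher
    scores, then average them (simple sum rule)."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return sum((s - lo) / span for s in scores) / len(scores)
```

Normalizing before fusion matters because commercial and texture-based matchers emit scores on incomparable scales.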
Citations: 56
Face synthesis from near-infrared to visual light via sparse representation
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117534
Zeda Zhang, Yunhong Wang, Zhaoxiang Zhang
This paper presents a novel method for synthesizing artificial visual light (VIS) face images from near-infrared (NIR) inputs. Active NIR imaging is now widely employed because it is unobtrusive, invariant to environmental illumination, and can penetrate glasses and sweat. Unfortunately, NIR imaging exhibits discrepant photic properties compared with VIS imaging. Based on recent results in compressive sensing research, natural images can be compressed and recovered with an overcomplete dictionary via sparse representation coefficients. In our approach, a pairwise dictionary is trained from randomly sampled coupled face patches; it contains sparse-coded base functions used to reconstruct representation coefficients via ℓ1-minimization. We demonstrate that this method is robust to moderate pose and expression variations and is computationally efficient. Comparative experiments are conducted with state-of-the-art algorithms.
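The ℓ1-minimization at the heart of sparse-representation synthesis can be illustrated with a tiny iterative shrinkage-thresholding (ISTA) solver. This is our sketch of the generic optimization, not the paper's solver; in the actual method the dictionary would be learned from coupled NIR/VIS patches:

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(A, b, lam=0.1, step=0.1, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by ISTA.

    A    : dictionary as a list of rows
    b    : target signal (e.g. a flattened NIR patch)
    step : gradient step size; must be below 1/L, where L is the
           largest eigenvalue of A^T A
    Returns the sparse coefficient vector x.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b, gradient g = A^T r, then shrink
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

In the synthesis setting, the coefficients recovered against the NIR half of a coupled dictionary are reused with the VIS half to reconstruct the visible-light patch.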
Citations: 23
Counter-measures to photo attacks in face recognition: A public database and a baseline
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117503
André Anjos, S. Marcel
A common technique to bypass 2-D face recognition systems is to use photographs of spoofed identities. Unfortunately, research on counter-measures to this type of attack has not kept up: even though such threats have been known for nearly a decade, there seems to be no consensus on best practices, techniques, or protocols for developing and testing spoofing detectors for face recognition. We attribute this delay, in part, to the unavailability of public databases and protocols with which to study solutions and compare results. To this purpose we introduce the publicly available PRINT-ATTACK database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between the person's head movements and the scene context. The results are to be used as a basis for comparison with other counter-measure techniques. The PRINT-ATTACK database contains 200 videos of real accesses and 200 videos of spoof attempts using printed photographs of 50 different identities.
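The motion-based cue above rests on a simple observation: when a printed photograph is waved at the camera, the "head" and the scene background move together, so their motion signals correlate strongly. A minimal sketch of that idea using Pearson correlation between two per-frame motion magnitudes; this is our illustration, not the PRINT-ATTACK baseline algorithm itself:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length motion signals,
    e.g. per-frame head-region motion vs. background motion. Values
    near +1 (head and scene moving in lockstep) would be suspicious
    under a photo-attack hypothesis."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0
```

A live subject's head moves largely independently of a static background, driving the correlation toward zero.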
Citations: 342
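The motion-based baseline described in the abstract can be sketched as follows. This is only a minimal illustration of the underlying idea — correlating frame-to-frame motion inside the face region with motion in the rest of the scene (a rigid printed photograph moves as one piece, so the two signals correlate strongly) — and not the authors' actual algorithm; the function name, thresholds, and degenerate-case handling are all assumptions.

```python
import numpy as np

def motion_correlation_score(frames, face_box, window=20):
    """Liveness cue sketch: correlate frame-difference motion inside the
    face region with motion in the surrounding scene.  For a printed-photo
    attack the whole sheet moves rigidly, so the two signals are highly
    correlated; a live face tends to move independently of the background.
    `frames` is a sequence of grayscale images (2-D NumPy arrays);
    `face_box` is (x, y, w, h) from any face detector."""
    x, y, w, h = face_box
    face_motion, scene_motion = [], []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        face = diff[y:y + h, x:x + w]
        # Scene motion = mean absolute difference outside the face region.
        total = diff.sum()
        face_motion.append(float(face.mean()))
        scene_motion.append(float((total - face.sum()) /
                                  max(diff.size - face.size, 1)))
    face_motion = np.array(face_motion[-window:])
    scene_motion = np.array(scene_motion[-window:])
    if face_motion.std() < 1e-8 and scene_motion.std() < 1e-8:
        return 1.0  # nothing moves at all: suspicious for liveness
    if face_motion.std() < 1e-8 or scene_motion.std() < 1e-8:
        return 0.0  # one region moves independently of the other: live-like
    # Pearson correlation in [-1, 1]; values near 1 suggest a rigid photo.
    return float(np.corrcoef(face_motion, scene_motion)[0, 1])
```

A score near 1 over a sliding window would flag a probable print attack; a proper detector would of course be tuned and evaluated on the database's companion protocol.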
An adaptive resolution voxelization framework for 3D ear recognition 三维人耳识别的自适应分辨率体素化框架
Pub Date : 2011-10-11 DOI: 10.1109/IJCB.2011.6117598
S. Cadavid, Sherin Fathy, Jindan Zhou, M. Abdel-Mottaleb
We present a novel voxelization framework for holistic Three-Dimensional (3D) object representation that accounts for distinct surface features. A voxelization of an object is performed by encoding an attribute or set of attributes of the surface region contained within each voxel occupying the space that the object resides in. To our knowledge, the voxel structures employed in previous methods consist of uniformly-sized voxels. The proposed framework, in contrast, generates structures consisting of variable-sized voxels that are adaptively distributed in higher concentration near distinct surface features. The primary advantage of the proposed method over its fixed resolution counterparts is that it yields a significantly more concise feature representation that is demonstrated to achieve a superior recognition performance. An evaluation of the method is conducted on a 3D ear recognition task. The ear provides a challenging case study because of its high degree of inter-subject similarity.
{"title":"An adaptive resolution voxelization framework for 3D ear recognition","authors":"S. Cadavid, Sherin Fathy, Jindan Zhou, M. Abdel-Mottaleb","doi":"10.1109/IJCB.2011.6117598","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117598","url":null,"abstract":"We present a novel voxelization framework for holistic Three-Dimensional (3D) object representation that accounts for distinct surface features. A voxelization of an object is performed by encoding an attribute or set of attributes of the surface region contained within each voxel occupying the space that the object resides in. To our knowledge, the voxel structures employed in previous methods consist of uniformly-sized voxels. The proposed framework, in contrast, generates structures consisting of variable-sized voxels that are adaptively distributed in higher concentration near distinct surface features. The primary advantage of the proposed method over its fixed resolution counterparts is that it yields a significantly more concise feature representation that is demonstrated to achieve a superior recognition performance. An evaluation of the method is conducted on a 3D ear recognition task. The ear provides a challenging case study because of its high degree of inter-subject similarity.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130668136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
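The adaptive-resolution idea above can be sketched as an octree-style subdivision driven by local surface variation: start from one cube bounding the surface and recursively split any voxel whose contents look feature-rich, so small voxels concentrate near ridges and valleys while flat regions stay coarse. This is an illustrative reading of the abstract, not the paper's method; the normal-variance splitting criterion and all thresholds are assumptions.

```python
import numpy as np

def adaptive_voxelize(points, normals, min_size=0.05,
                      var_thresh=0.1, max_depth=6):
    """Sketch of adaptive-resolution voxelization: recursively split any
    voxel whose contained surface normals vary strongly (a proxy for
    'distinct surface features').  Returns leaf voxels as
    (origin, size, mean_normal) tuples; empty voxels are discarded."""
    lo = points.min(axis=0).astype(float)
    # Pad the bounding cube slightly so boundary points fall inside a child.
    size = float((points.max(axis=0) - lo).max()) + 1e-9
    voxels = []

    def split(origin, size, idx, depth):
        if idx.size == 0:
            return
        n = normals[idx]
        # Total variance of normals inside the voxel: high -> curved region.
        var = float(n.var(axis=0).sum())
        if depth >= max_depth or size <= min_size or var <= var_thresh:
            voxels.append((origin, size, n.mean(axis=0)))
            return
        half = size / 2.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    o = origin + half * np.array([dx, dy, dz], dtype=float)
                    inside = np.all((points[idx] >= o) &
                                    (points[idx] < o + half), axis=1)
                    split(o, half, idx[inside], depth + 1)

    split(lo, size, np.arange(len(points)), 0)
    return voxels
```

On a surface that is flat on one side and bumpy on the other, the bumpy side ends up covered by many small voxels and the flat side by a few large ones, which is the concise, feature-concentrated representation the abstract argues for.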