
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference (Latest Publications)

A Predictive Model for Gait Recognition
S. Enokida, R. Shimomoto, T. Wada, T. Ejima
Gait recognition has attracted attention as a non-contact and unobtrusive biometric method. The magnitude and phase spectra of the horizontal and vertical movement of the ankles during a normal walk are effective and efficient signatures for gait recognition. However, the gait recognition rate degrades significantly due to variance caused by covariates such as clothing, surface, or time lapse. In this paper, a predictive model is proposed to improve the gait recognition rate across a variety of footwear. The predictive model estimates slipper gait from shoe gait. By using the predicted slipper gait, a much higher recognition rate is achieved for slipper gait over a time lapse than without the predictive model. The predictive model designed in this paper succeeds in separating the variance due to the footwear covariate from the variance due to the time covariate.
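As a rough illustration of the signatures and the predictive mapping the abstract describes, here is a minimal sketch in Python/NumPy; the function names, the number of harmonics, and the least-squares linear map are assumptions made for illustration, not the authors' implementation.

import numpy as np

def gait_signature(ankle_xy, n_harmonics=5):
    # ankle_xy: (T, 2) horizontal and vertical ankle positions over a walking sequence
    feats = []
    for axis in range(2):
        spectrum = np.fft.rfft(ankle_xy[:, axis] - ankle_xy[:, axis].mean())
        feats.append(np.abs(spectrum[1:n_harmonics + 1]))    # magnitude spectrum
        feats.append(np.angle(spectrum[1:n_harmonics + 1]))  # phase spectrum
    return np.concatenate(feats)

def fit_predictive_map(shoe_features, slipper_features):
    # least-squares linear map W so that slipper_features is approximated by shoe_features @ W
    W, *_ = np.linalg.lstsq(shoe_features, slipper_features, rcond=None)
    return W

# Usage idea: predict slipper-gait signatures from enrolled shoe-gait signatures,
# then match probes against the predicted signatures instead of the raw shoe ones.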
DOI: 10.1109/BCC.2006.4341630 (published 2006-09-01)
Citations: 8
Individual Tensorface Subspaces for Efficient and Robust Face Recognition that do not Require Factorization
Sung W. Park, M. Savvides
Facial images change appearance due to multiple factors such as pose, lighting variation, and facial expression. The tensor approach, an extension of the conventional 2D matrix, is well suited to analyzing these facial factors, since tensors make it possible to construct multilinear models over multiple factor structures. However, tensor algebra presents some difficulties in practical use. First, it is difficult to decompose the multiple factors (e.g. pose, illumination, expression) of a test image, especially when the factor parameters are unknown or not in the training set. Second, for face recognition, as the number of factors grows it becomes harder to construct reliable multilinear models, and building a global model requires more memory and computation. In this paper, we propose novel Individual TensorFaces that do not require tensor factorization, a step that was necessary in previous tensorface research for face recognition. Another advantage of this individual-subspace approach is that it makes the face recognition task computationally and analytically simpler. Through various experiments, we demonstrate that the proposed Individual TensorFaces provide better discriminant power for classification.
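A minimal sketch of the individual-subspace idea, assuming a per-person PCA subspace scored by reconstruction error; this is a generic stand-in written in Python/NumPy, not the paper's TensorFaces construction, and the class name, component count, and rejection threshold are hypothetical.

import numpy as np

class IndividualSubspace:
    def __init__(self, images, n_components=10):
        # images: array of one person's face images under varying pose/illumination/expression
        X = images.reshape(len(images), -1).astype(float)
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[:n_components]            # principal directions of this person's variation

    def residual(self, image):
        x = image.reshape(-1).astype(float) - self.mean
        projection = self.basis.T @ (self.basis @ x)
        return np.linalg.norm(x - projection)      # small residual suggests this person

def identify(image, subspaces, reject_threshold):
    # subspaces: dict mapping identity name to IndividualSubspace; unknown faces are rejected
    scores = {name: s.residual(image) for name, s in subspaces.items()}
    name, best = min(scores.items(), key=lambda kv: kv[1])
    return name if best < reject_threshold else None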
DOI: 10.1109/BCC.2006.4341637 (published 2006-09-01)
Citations: 0
Robust Fake Iris Detection Based on Variation of the Reflectance Ratio Between the IRIS and the Sclera
Sung Joo Lee, K. Park, Jaihie Kim
In this paper, we propose a new fake iris detection method based on changes in the reflectance ratio between the iris and the sclera. The proposed method has four advantages over previous work. First, it detects fake iris images with high accuracy. Second, it does not inconvenience users, since it detects fake iris images very quickly. Third, it provides a theoretical grounding for using the variation of the reflectance ratio between the iris and the sclera. To compare fake iris images with live ones, three types of fake iris were produced: a printed iris, an artificial eye, and a fake contact lens. In the experiments, we show that the proposed fake iris detection method achieves high performance in distinguishing between live and fake irises.
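A hedged sketch of the reflectance-ratio check in Python/NumPy: compute the iris-to-sclera intensity ratio in two captures (for example under different illumination) and flag a sample whose ratio barely changes. The region masks, the threshold, and the decision direction (live eyes varying more than fakes) are assumptions for illustration only.

import numpy as np

def reflectance_ratio(image, iris_mask, sclera_mask):
    # image: grayscale eye image; masks: boolean arrays selecting iris and sclera pixels
    return image[iris_mask].mean() / image[sclera_mask].mean()

def looks_fake(image_a, image_b, iris_mask, sclera_mask, min_variation=0.05):
    r_a = reflectance_ratio(image_a, iris_mask, sclera_mask)
    r_b = reflectance_ratio(image_b, iris_mask, sclera_mask)
    variation = abs(r_a - r_b) / max(r_a, r_b)
    return variation < min_variation    # assumed rule: a too-stable ratio is suspicious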
DOI: 10.1109/BCC.2006.4341624 (published 2006-09-01)
Citations: 45
Mouse Curve Biometrics
Douglas A. Schulz
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
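A minimal sketch, assuming curve classes defined by coarse direction and length and a histogram-intersection score (none of which are taken from the paper), of how per-curve classes can be accumulated into a classification histogram and compared between enrollment and a live session.

import numpy as np

def curve_class(curve_xy, n_dir_bins=8, n_len_bins=4, max_len=2000.0):
    # curve_xy: (N, 2) cursor positions of one pause-delimited mouse curve
    d = curve_xy[-1] - curve_xy[0]
    direction = int((np.arctan2(d[1], d[0]) + np.pi) / (2 * np.pi) * n_dir_bins) % n_dir_bins
    length = np.sum(np.linalg.norm(np.diff(curve_xy, axis=0), axis=1))
    len_bin = min(int(length / max_len * n_len_bins), n_len_bins - 1)
    return direction * n_len_bins + len_bin

def classification_histogram(curves, n_classes=32):
    hist = np.zeros(n_classes)
    for curve in curves:
        hist[curve_class(curve)] += 1
    return hist / max(hist.sum(), 1)               # normalised histogram of curve classes

def same_user(hist_enrolled, hist_session, threshold=0.7):
    overlap = np.minimum(hist_enrolled, hist_session).sum()   # histogram intersection
    return overlap > threshold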
DOI: 10.1109/BCC.2006.4341626 (published 2006-09-01)
Citations: 53
Robust Feature-Level Multibiometric Classification
A. Rattani, D. Kisku, M. Bicego, M. Tistarelli
This paper proposes a robust feature-level fusion classifier for face and fingerprint biometrics. The proposed system fuses the two traits at the feature extraction level by first making the feature sets compatible for concatenation, then reducing the feature sets to address the 'curse of dimensionality'; finally, the concatenated feature vectors are matched. The system is tested on a database of 50 chimeric users with five samples per trait per person. The results are compared with the monomodal ones and with fusion at the matching score level using the popular sum rule technique. The system reports an accuracy of 97.41% with a FAR and FRR of 1.98% and 3.18% respectively, outperforming the single modalities and score-level fusion.
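A compact sketch of a feature-level fusion pipeline in Python/NumPy under assumptions (z-score normalisation to make the feature sets compatible, PCA for dimensionality reduction, Euclidean matching); the paper's exact normalisation, reduction, and matching choices may differ, and the dimension k and all names are illustrative.

import numpy as np

def zscore(X):
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

def pca_reduce(X, k):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # project onto the top-k principal directions

def fuse(face_feats, finger_feats, k=20):
    # face_feats: (n_samples, d1), finger_feats: (n_samples, d2)
    face_r = pca_reduce(zscore(face_feats), k)
    finger_r = pca_reduce(zscore(finger_feats), k)
    return np.hstack([face_r, finger_r])           # concatenated multibiometric templates

def match_score(template_a, template_b):
    return -np.linalg.norm(template_a - template_b)   # higher (less negative) means closer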
DOI: 10.1109/BCC.2006.4341631 (published 2006-09-01)
Citations: 30
Toward A Human-Like Similarity Measure for Face Recognition
S. Krawczyk, E. Lawson, R. Stanchak, B. Kamgar-Parsi
We propose an approach for capturing a human similarity measure (within an artificial neural network, SVM, or other classifiers) for face recognition. That is, the following important and long-desired goal appears achievable: "The similarity measure used in a face recognition system should be designed so that humans' ability to perform face recognition and recall are imitated as closely as possible by the machine". For each person of interest, a dedicated classifier is developed. Within the classifier we effectively capture a human classification functionality. This is done by automatically generating and labeling two arbitrarily large sets of morphed images (typically tens of thousands). One set is composed of images with reduced resemblance to the imaged person, yet recognizable by humans as that person (positive exemplars); the second set consists of look-alikes, i.e. "others" who look almost like the imaged person (negative exemplars). Humans, unlike most face recognition systems, do not rank images as a precursor to recognition. Like humans, our system does not rank images: it can reject images of previously unseen faces (or faces which are not of interest) by simply examining them, while recognizing the faces it is trained to identify. We demonstrate this capability in the presented experiments, where a large set of impostor images that were not provided during training is consistently rejected by the system.
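As a rough sketch of the per-person classifier idea, the snippet below uses scikit-learn's SVC (one of the classifier families the abstract mentions); the morphed exemplar generation, the feature extraction, and the acceptance threshold are assumed and hypothetical.

import numpy as np
from sklearn.svm import SVC

def train_person_classifier(positive_feats, negative_feats):
    # positive_feats: morphs still recognizable as the person; negative_feats: look-alikes
    X = np.vstack([positive_feats, negative_feats])
    y = np.concatenate([np.ones(len(positive_feats)), np.zeros(len(negative_feats))])
    return SVC(kernel="rbf", probability=True).fit(X, y)

def recognize(face_feat, classifiers, accept_prob=0.9):
    # classifiers: dict of identity name -> trained SVC; unseen faces are rejected, not ranked
    for name, clf in classifiers.items():
        if clf.predict_proba(face_feat.reshape(1, -1))[0, 1] >= accept_prob:
            return name
    return None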
DOI: 10.1109/BCC.2006.4341614 (published 2006-09-01)
Citations: 3
A Robust IRIS Segmentation Procedure for Unconstrained Subject Presentation
Jinyu Zuo, N. Kalka, N. Schmid
The iris, as a biometric, is the most reliable with respect to performance. However, this reliability is a function of the ideality of the data; therefore, a robust segmentation algorithm is required to handle non-ideal data. In this paper, a segmentation methodology is proposed that utilizes shape, intensity, and location information intrinsic to the pupil and iris. The virtue of this methodology lies in its capability to reliably segment non-ideal imagery affected simultaneously by factors such as specular reflection, blur, lighting variation, and off-angle presentation. We demonstrate the robustness of our segmentation methodology by evaluating ideal and non-ideal datasets, namely CASIA, the Iris Challenge Evaluation (ICE) data, WVU, and WVU Off-angle. Furthermore, we compare our performance to that of the Camus and Wildes and Libor Masek algorithms. We demonstrate increases in segmentation performance of 7.02%, 8.16%, 20.84%, and 26.61% over the aforementioned algorithms when evaluating these datasets, respectively.
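A toy illustration (Python/NumPy) of combining intensity and shape cues for pupil and iris localisation on a reasonably clean 8-bit grayscale eye image; the threshold and radius limits are arbitrary assumptions, and this is far simpler than the paper's procedure for non-ideal imagery.

import numpy as np

def segment_iris(gray, pupil_thresh=50, max_iris_radius=120):
    ys, xs = np.nonzero(gray < pupil_thresh)          # dark pixels as pupil candidates
    cy, cx = ys.mean(), xs.mean()                     # pupil centre from the dark blob
    pupil_r = np.sqrt(len(ys) / np.pi)                # equivalent circular radius
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    radii = np.arange(int(pupil_r) + 2, max_iris_radius)
    ring_means = np.array([gray[(dist >= r) & (dist < r + 1)].mean() for r in radii])
    iris_r = radii[np.argmax(np.diff(ring_means))]    # largest ring-to-ring brightness jump ~ limbus
    return cy, cx, float(pupil_r), float(iris_r)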
DOI: 10.1109/BCC.2006.4341623 (published 2006-09-01)
Citations: 82
Changeable Biometrics for Appearance Based Face Recognition
MinYi Jeong, Chulhan Lee, Jongsun Kim, Jeung-Yoon Choi, K. Toh, Jaihie Kim
To enhance security and privacy in biometrics, changeable (or cancelable) biometrics have recently been introduced. The idea is to transform a biometric signal or feature into a new one for enrollment and matching. In this paper, we propose changeable biometrics for face recognition using an appearance-based approach. PCA and ICA coefficient vectors extracted from an input face image are normalized using their norms. The two normalized vectors are scrambled randomly, and a new transformed face coefficient vector (the transformed template) is generated by adding the two vectors. When a transformed template is compromised, it is replaced using a new scrambling rule. Because the transformed template is generated by the addition of two vectors, the original PCA and ICA coefficients cannot be recovered from the transformed coefficients. In our experiments, we compare performance between the case where the original PCA and ICA coefficient vectors are used for verification and the case where the transformed coefficient vectors are used.
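A minimal sketch of the transformed-template construction as the abstract describes it (norm-normalise the PCA and ICA coefficient vectors, scramble them randomly, and add them); using a seeded permutation as the replaceable 'scrambling rule' and a correlation-style matcher are illustrative assumptions.

import numpy as np

def make_changeable_template(pca_coeffs, ica_coeffs, seed):
    # assumes the two coefficient vectors have the same length, since they are added
    p = pca_coeffs / np.linalg.norm(pca_coeffs)
    q = ica_coeffs / np.linalg.norm(ica_coeffs)
    rng = np.random.default_rng(seed)
    p_scrambled = p[rng.permutation(len(p))]
    q_scrambled = q[rng.permutation(len(q))]
    return p_scrambled + q_scrambled               # addition hides the original coefficients

def match(template_a, template_b):
    # cosine-style score between two templates built with the same scrambling seed
    return float(template_a @ template_b /
                 (np.linalg.norm(template_a) * np.linalg.norm(template_b)))

Re-enrolling with a new seed yields a different template, playing the role of the new scrambling rule mentioned in the abstract.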
DOI: 10.1109/BCC.2006.4341629 (published 2006-09-01)
Citations: 49
High Magnification and Long Distance Face Recognition: Database Acquisition, Evaluation, and Enhancement
Yi Yao, B. Abidi, N. Kalka, N. Schmid, M. Abidi
In this paper, we describe a face video database obtained from long distances and with high magnifications, IRIS-LDHM. Both indoor and outdoor sequences are collected under uncontrolled surveillance conditions. The significance of this database lies in the fact that it is the first database to provide face images from long distances (indoor: 10 m~20 m; outdoor: 50 m~300 m). The corresponding system magnification ranges from less than 3x up to 20x indoors and up to 375x outdoors. The database has applications in experimentation with human identification and authentication in long-range surveillance and wide-area monitoring. The database will be made public to the research community for long-range face-related research. Degradations unique to high-magnification and long-range face images are investigated in terms of face recognition rates. Magnification blur is shown to be an additional major degradation source, which can be alleviated via blur assessment and deblurring algorithms. Experimental results validate a relative improvement of up to 25% in recognition rates after assessment and enhancement of the degradations.
DOI: 10.1109/BCC.2006.4341635 (published 2006-09-01)
Citations: 16
A Multimodal Approach for 3D Face Modeling and Recognition Using Deformable Mesh Model
A. Ansari, M. Abdel-Mottaleb, M. Mahoor
We present a multimodal approach for 3D face modeling and recognition from two frontal stereo images and one profile view of the face. Once the images are captured, the algorithm starts by extracting selected 2D facial features from one of the frontal views and computes a dense disparity map from the two frontal images. We then align a low-resolution mesh model to the selected features, adjust its vertices at the selected features and along the profile line using the profile view, increase its vertices to a higher resolution, and re-project them back onto the frontal image. Using the coordinates of the re-projected vertices and their corresponding disparities, we capture and compute the 3D facial shape variations using triangulation. The final result is a deformed 3D model specific to a given subject's face. Application of the model to 3D face recognition validates the algorithm with a high recognition rate.
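A small sketch (Python/NumPy) of the triangulation step only, assuming a rectified frontal stereo pair with hypothetical calibration values (focal length, baseline, principal point); the feature extraction, disparity computation, and mesh alignment/deformation stages are not shown.

import numpy as np

def triangulate(vertices_px, disparities, f=800.0, baseline=0.06, cx=320.0, cy=240.0):
    # vertices_px: (N, 2) pixel coordinates (u, v) of re-projected mesh vertices
    # disparities: (N,) disparity values in pixels at those vertices
    d = np.maximum(disparities, 1e-6)     # guard against division by zero
    Z = f * baseline / d                  # depth from disparity
    X = (vertices_px[:, 0] - cx) * Z / f
    Y = (vertices_px[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)    # 3D vertex positions used to deform the mesh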
DOI: 10.1109/BCC.2006.4341633 (published 2006-09-01)
Citations: 3