
Latest Publications: 2018 International Conference on Biometrics (ICB)

Detection of Glasses in Near-Infrared Ocular Images
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00039
P. Drozdowski, F. Struck, C. Rathgeb, C. Busch
Eyeglasses change the appearance and visual perception of facial images. Moreover, under objective metrics, glasses generally deteriorate the sample quality of near-infrared ocular images and can consequently worsen the biometric performance of iris recognition systems. Automatic detection of glasses is therefore one of the prerequisites for an interactive sample acquisition process of sufficient quality in an automatic iris recognition system. In this paper, three approaches for automatic detection of glasses in near-infrared iris images are presented: a statistical method, a deep-learning-based method, and an algorithmic method based on detection of edges and reflections. These approaches are evaluated using cross-validation on the CASIA-IrisV4-Thousand dataset, which contains 20,000 images from 1,000 subjects. Individually, they correctly classify 95–98% of images, while a majority-vote fusion of the three approaches achieves a correct classification rate (CCR) of 99.54%.
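The majority-vote fusion of the three detectors can be sketched in a few lines. This is a minimal illustration only; the three boolean outputs below are hypothetical placeholders, not the paper's actual detector implementations:

```python
def majority_vote(predictions):
    """Fuse binary glasses/no-glasses decisions from several detectors.

    `predictions` is a list of booleans, one per detector; the fused
    decision is the class chosen by the majority.
    """
    votes = sum(1 for p in predictions if p)
    return votes * 2 > len(predictions)

# Hypothetical outputs of the three detectors described in the paper
# (statistical, deep-learning based, edge/reflection based).
statistical, deep, algorithmic = True, False, True
fused = majority_vote([statistical, deep, algorithmic])
```

With three detectors individually correct 95–98% of the time, a majority vote only errs when at least two detectors err on the same image, which is how the fused CCR can exceed any single detector's.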
Citations: 15
IARPA Janus Benchmark - C: Face Dataset and Protocol
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00033
Brianna Maze, Jocelyn C. Adams, James A. Duncan, N. Kalka, Tim Miller, C. Otto, Anil K. Jain, W. T. Niggel, Janet Anderson, J. Cheney, P. Grother
Although considerable work has been done in recent years to drive the state of the art in facial recognition towards operation on fully unconstrained imagery, research has always been restricted by a lack of datasets in the public domain. In addition, traditional biometrics experiments such as single image verification and closed set recognition do not adequately evaluate the ways in which unconstrained face recognition systems are used in practice. The IARPA Janus Benchmark–C (IJB-C) face dataset advances the goal of robust unconstrained face recognition, improving upon the previous public domain IJB-B dataset, by increasing dataset size and variability, and by introducing end-to-end protocols that more closely model operational face recognition use cases. IJB-C adds 1,661 new subjects to the 1,870 subjects released in IJB-B, with increased emphasis on occlusion and diversity of subject occupation and geographic origin with the goal of improving representation of the global population. Annotations on IJB-C imagery have been expanded to allow for further covariate analysis, including a spatial occlusion grid to standardize analysis of occlusion. Due to these enhancements, the IJB-C dataset is significantly more challenging than other datasets in the public domain and will advance the state of the art in unconstrained face recognition.
Citations: 452
The Impact of Age and Threshold Variation on Facial Recognition Algorithm Performance Using Images of Children
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00041
Dana Michalski, Sau Yee Yiu, C. Malec
Facial recognition across ageing, and in particular with images of children, remains a challenging problem in a wide range of operational settings. Yet research examining algorithm performance with images of children is limited, with minimal understanding of how age and age variation (i.e., the age difference between the images being compared) impact performance. Operationally, a fixed threshold based on images of adults may be used without considering that this could impact performance with children. Threshold variation based on age and age variation may be a better approach when comparing images of children. This paper evaluates the performance of a commercial off-the-shelf (COTS) facial recognition algorithm to determine the impact that age (0–17 years) and age variation (0–10 years) have on a controlled operational dataset of facial images, using both a fixed-threshold and a threshold-variation approach. This evaluation shows that performance for children differs considerably across age and age variation, and that in some operational settings threshold variation may be beneficial for conducting facial recognition with children.
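The fixed-threshold versus threshold-variation decision rule can be sketched as follows. The age bands and threshold values here are purely illustrative assumptions, not the paper's operational numbers:

```python
def threshold_for_age(age_years, fixed=None):
    """Return a decision threshold for a face comparison score.

    If `fixed` is given, the same threshold applies to all ages (the
    operational default the paper questions); otherwise the threshold
    is relaxed for younger age bands. All values are illustrative.
    """
    if fixed is not None:
        return fixed
    if age_years < 5:
        return 0.45   # infants/toddlers: most lenient
    if age_years < 12:
        return 0.55   # children
    return 0.65       # adolescents and adults

def accept(score, age_years, fixed=None):
    """Match decision: accept when the score meets the threshold."""
    return score >= threshold_for_age(age_years, fixed)
```

Under a fixed adult threshold, a score of 0.5 from a 3-year-old would be rejected; an age-aware threshold accepts it, which is the trade-off the paper evaluates.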
Citations: 22
Fingerprint Synthesis: Evaluating Fingerprint Search at Scale
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00016
Kai Cao, Anil K. Jain
A database of a large number of fingerprint images is highly desirable for designing and evaluating large-scale fingerprint search algorithms. Compared to collecting a large number of real fingerprints, which is very costly in terms of time, effort, and expense, and also involves stringent privacy issues, synthetic fingerprints can be generated at low cost and raise no privacy concerns. However, it is essential to show that the characteristics and appearance of real and synthetic fingerprint images are sufficiently similar. We propose a Generative Adversarial Network (GAN) to generate 512×512 rolled fingerprint images. Our generative model for rolled fingerprints is highly efficient (12 ms/image), with characteristics of synthetic rolled prints close to real rolled images. Experimental results show that our model captures the properties of real rolled fingerprints in terms of (i) fingerprint image quality, (ii) distinctiveness, and (iii) minutiae configuration. Our synthetic fingerprint images are more realistic than those of other approaches.
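The motivating use case is that cheap synthetic data lets one benchmark search at gallery sizes that would be impractical to collect. A toy sketch of such a benchmark, with random feature vectors standing in for GAN-generated fingerprint templates (this is not the paper's GAN or matcher):

```python
import random

def synth_gallery(n, dim=16, seed=0):
    """Generate n synthetic templates as random feature vectors,
    a cheap stand-in for generated fingerprints (illustration only)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(dim)] for _ in range(n)]

def search(gallery, probe):
    """Brute-force search: return the index of the most similar
    template (smallest squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(gallery)), key=lambda i: dist(gallery[i], probe))

gallery = synth_gallery(1000)
```

Scaling `n` up (the paper's point) exposes how search accuracy and latency degrade with gallery size, without collecting a single real print.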
Citations: 26
Style Signatures to Combat Biometric Menagerie in Stylometry
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00047
Kalaivani Sundararajan, T. Neal, D. Woodard
In this paper, we investigate the challenges of using a person's writing style as a cognitive biometric modality by applying Doddington's idea of the biometric menagerie. To the best of our knowledge, this is the first time a biometric menagerie analysis has been performed on a cognitive biometric modality. The presence of goats, wolves, and lambs in this modality is demonstrated using two publicly available datasets, Blogs and IMDB1M. To combat this challenging problem, we further propose using person-specific features referred to as "Style signatures", which may be better at distinguishing different individuals. Experimental results show that using person-specific Style signatures improves verification performance by 3.6–5.5% on both datasets.
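Doddington's animal labels can be assigned from per-user mean comparison scores. A minimal sketch, assuming higher scores mean better matches; the threshold values and the simple mean-score criterion are illustrative assumptions, not the paper's analysis:

```python
def menagerie_labels(genuine_mean, impostor_as_probe_mean,
                     impostor_as_gallery_mean,
                     goat_thr=0.4, wolf_thr=0.6, lamb_thr=0.6):
    """Assign Doddington-style animal labels to one user from mean
    comparison scores. Thresholds and score model are illustrative."""
    labels = set()
    if genuine_mean < goat_thr:
        labels.add("goat")   # matches poorly against own samples
    if impostor_as_probe_mean > wolf_thr:
        labels.add("wolf")   # imitates others unusually well
    if impostor_as_gallery_mean > lamb_thr:
        labels.add("lamb")   # unusually easy to imitate
    return labels
```

Users with an empty label set are the well-behaved "sheep"; the paper's Style signatures aim to shrink the goat/wolf/lamb populations.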
Citations: 1
Filter Design Based on Spectral Dictionary for Latent Fingerprint Pre-enhancement
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00015
Watcharapong Chaidee, K. Horapong, V. Areekul
We introduce a pre-enhancement algorithm to improve the efficiency of automatic fingerprint identification systems (AFIS) for latent fingerprint search. The proposed algorithm employs learning to construct a spectral dictionary from the spectral responses of a Gabor filter bank in the frequency domain. Given an input latent fingerprint, the spectral dictionary yields a set of appropriate filters for each partitioning window of the entire latent fingerprint image. The proposed set of spectral filters helps improve and preserve highly curved ridges in the region around the singular point, where other methods fail. The proposed method outperforms state-of-the-art algorithms in identification accuracy on the good and bad cases of the NIST SD27 latent fingerprint database.
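The building block of such a filter bank is the standard Gabor kernel. A minimal spatial-domain sketch of the generic textbook filter (the paper learns its dictionary from the frequency responses of a bank of these; parameter choices here are arbitrary):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, psi=0.0, gamma=1.0):
    """Build a real (cosine-carrier) Gabor kernel as a nested list.

    size: odd kernel width/height; wavelength: carrier period in px;
    theta: orientation in radians; sigma: Gaussian envelope width;
    psi: phase offset; gamma: spatial aspect ratio.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

A bank is obtained by sweeping `theta` and `wavelength` over the expected ridge orientations and frequencies; per-window filter selection from a learned dictionary is the paper's contribution on top of this.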
Citations: 11
Metadata-Based Feature Aggregation Network for Face Recognition
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00028
Nishant Sankaran, S. Tulyakov, S. Setlur, V. Govindaraju
This paper presents a novel approach to feature aggregation for template/set-based face recognition by incorporating metadata about face images to evaluate the representativeness of a feature in the template. We propose using orthogonal data such as yaw, pitch, and face size to augment the capacity of deep neural networks to find stronger correlations between the relative quality of a face image in the set and match performance. The presented approach employs a siamese architecture for training on features and metadata generated using other state-of-the-art CNNs, and learns an effective feature fusion strategy for producing optimal face verification performance. We obtain substantial improvements in TAR of over 1.5% at 10^-4 FAR compared to traditional pooling approaches, and illustrate the efficacy of the quality assessment made by the network on the two challenging datasets IJB-A and IARPA Janus CS4.
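The core idea, pooling per-image features with weights derived from metadata instead of a plain average, can be sketched with a hand-crafted weighting. The paper learns the weighting with a network; the `quality_weight` heuristic below (frontal, larger faces count more) is an assumption for illustration:

```python
import math

def quality_weight(yaw_deg, face_size_px):
    """Map per-image metadata to an unnormalized quality weight:
    frontal, larger faces get more weight. Purely illustrative."""
    frontalness = math.cos(math.radians(max(-89.0, min(89.0, yaw_deg))))
    size_factor = min(face_size_px / 112.0, 1.0)
    return frontalness * size_factor

def aggregate(features, metadata):
    """Weighted average of per-image feature vectors, with weights
    derived from (yaw_deg, face_size_px) metadata tuples."""
    weights = [quality_weight(yaw, size) for yaw, size in metadata]
    total = sum(weights)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) / total
            for i in range(dim)]
```

With equal-quality images this reduces to average pooling; a profile shot (large yaw) contributes proportionally less to the template.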
Citations: 12
Multi-spectral Iris Segmentation in Visible Wavelengths
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00037
Torsten Schlett, C. Rathgeb, C. Busch
While traditional iris recognition systems operate on near-infrared images, visible-wavelength approaches have gained attention in recent years for a variety of reasons, such as the deployment of iris recognition in consumer-grade mobile devices. Iris segmentation, the process of localizing the iris part of an image, is a vital step in iris recognition. Segmentation of the iris usually involves detection of the inner and outer iris boundaries, detection of eyelids, exclusion of eyelashes and contact lens rings, and scrubbing of specular reflections. This work presents a comprehensive multi-spectral analysis to improve iris segmentation accuracy in visible wavelengths by transforming iris images before their segmentation, which is done by extracting spectral components in the form of RGB color channels. The procedure is evaluated using the MobBIO dataset, open-source iris segmentation tools, and the NICE.I error measures. Additionally, a segmentation-level fusion procedure based on existing work is performed; an eye color analysis is examined, with no clear connection to the multi-spectral procedure found; and a further analysis highlights potential improvement by assuming perfect selection within various multi-spectral segmentation result sets.
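The pre-segmentation transform described here, extracting spectral components as RGB color channels, amounts to splitting the image into three single-channel images that are then segmented independently. A minimal sketch on a nested-list image representation (the actual work uses real image I/O, which is omitted here):

```python
def split_channels(image):
    """Split an RGB image (nested list of rows of (r, g, b) tuples)
    into three single-channel images, one per spectral component."""
    red, green, blue = [], [], []
    for row in image:
        red.append([px[0] for px in row])
        green.append([px[1] for px in row])
        blue.append([px[2] for px in row])
    return red, green, blue
```

Each returned channel image can be fed to an unmodified grayscale iris segmenter, and the per-channel results compared or fused.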
Citations: 5
Multi-sample Compression of Iris Images Using High Efficiency Video Coding
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00051
C. Rathgeb, Torsten Schlett, Nicolas Buchmann, Harald Baier, C. Busch
When multiple image samples of a single eye are captured during enrolment, the accuracy of iris recognition systems can be substantially improved. However, the storage requirement increases significantly if the system stores multiple iris images per enrolled eye. We consider this practical scenario and provide a comparative study of the usefulness of relevant image compression algorithms, i.e. JPEG, JPEG 2000, and the more recently introduced Better Portable Graphics (BPG) algorithm, which is based on a subset of the High Efficiency Video Coding (HEVC) standard. We propose a HEVC-based multi-sample compression which takes advantage of inter-frame prediction to achieve more compact storage of iris images. Experiments on cropped iris images of the IITDv1 and CASIAv4-Interval datasets confirm the usefulness of the presented approach. Compared to separate storage of multiple BPG-encoded images of 2 to 3 KB each, the required storage space can be reduced by at least 30% if images are acquired in a single session. Similarly, at constant file sizes a relative enhancement of image quality of at least 5% in terms of PSNR is achieved. Compared to the widely recommended JPEG 2000 compression, the obtained performance gains become even more pronounced. Gains with respect to image quality are also reflected in experiments on recognition performance.
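The quality metric quoted throughout this comparison is PSNR, which is fully determined by the mean squared error between original and decoded images. A minimal reference implementation for 8-bit grayscale images as nested lists:

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equally sized
    grayscale images; higher is better, identical images give inf."""
    diffs = [(o - c) ** 2
             for row_o, row_c in zip(original, compressed)
             for o, c in zip(row_o, row_c)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

A "5% relative enhancement in PSNR" at a constant file size thus means the multi-sample codec's decoded images sit measurably closer to the originals than separately coded ones.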
Citations: 2
The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics
Pub Date : 2018-02-01 DOI: 10.1109/ICB2018.2018.00036
Timotheos Samartzidis, Dirk Siegmund, Michael Gödde, N. Damer, Andreas Braun, Arjan Kuijper
Facial recognition in the visible spectrum is a widely used application, but it also remains a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality for extending classical face biometrics. Melanin pigmentation consists of sun-damaged cells that appear as revealed and/or unrevealed patterns on human skin. MFP can be found on the faces of some people when using ultraviolet (UV) imaging. To prove the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both the visible and the UV spectrum. We show a method for extracting MFP features from the UV images using the well-known SURF features and compare it with other techniques. To prove its benefits, we use weighted score-level fusion and evaluate the performance in a one-against-all comparison. As a result, we observed a significant performance gain when traditional face recognition in the visible spectrum is extended with MFP from UV images. We conclude with a perspective on the use of these features in future research and discuss observed issues and limitations.
{"title":"The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics","authors":"Timotheos Samartzidis, Dirk Siegmund, Michael Gödde, N. Damer, Andreas Braun, Arjan Kuijper","doi":"10.1109/ICB2018.2018.00036","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00036","url":null,"abstract":"Facial recognition in the visible spectrum is a widely used application but it is also still a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality to be used to extend classical face biometrics. Melanin pigmentation are sun-damaged cells that occur as revealed and/or unrevealed pattern on human skin. Most MFP can be found in the faces of some people when using ultraviolet (UV) imaging. To proof the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both, the visible and the UV spectrum. We show a method to extract the MFP features from the UV images, using the well known SURF features and compare it with other techniques. In order to proof its benefits, we use weighted score-level fusion and evaluate the performance in an one against all comparison. As a result we observed a significant amplification of performance where traditional face recognition in the visible spectrum is extended with MFP from UV images. 
We conclude with a future perspective about the use of these features for future research and discuss observed issues and limitations.","PeriodicalId":130957,"journal":{"name":"2018 International Conference on Biometrics (ICB)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114784876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
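The UV paper above combines the visible-spectrum face matcher with the MFP matcher via weighted score-level fusion. A minimal sketch of that fusion step, assuming both matchers already emit similarity scores normalized to [0, 1]; the weight 0.7 and the score values are illustrative assumptions, not values from the paper:

```python
def weighted_score_fusion(face_score: float, mfp_score: float, w_face: float = 0.7) -> float:
    """Weighted sum of two normalized comparison scores, each in [0, 1]."""
    if not 0.0 <= w_face <= 1.0:
        raise ValueError("w_face must lie in [0, 1]")
    return w_face * face_score + (1.0 - w_face) * mfp_score

# hypothetical normalized similarity scores from the two matchers
fused = weighted_score_fusion(0.82, 0.64, w_face=0.7)
print(round(fused, 3))  # 0.766
```

The fused score is then thresholded like any single-matcher score; tuning `w_face` on a development set is the usual way to balance the stronger visible-spectrum matcher against the complementary MFP cue.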
Journal
2018 International Conference on Biometrics (ICB)