
2019 International Conference on Biometrics (ICB): Latest Publications

Obtaining Stable Iris Codes Exploiting Low-Rank Tensor Space and Spatial Structure Aware Refinement for Better Iris Recognition
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987266
K. B. Raja, Ramachandra Raghavendra, C. Busch
The strength of iris recognition in terms of optimal biometric performance has been challenged by inevitable operational conditions in unconstrained scenarios. In this work we present a new approach for extracting stable iris weight maps to account for the noisy iris representation resulting from capture conditions and ineluctable segmentation errors. Traditional approaches to extracting stable bits often ignore inter-code relations in the presence of multiple enrolment samples. Unlike previous works, we formulate the stable code extraction using a tensor representation to exactly recover the low-rank, non-noisy iris information from the multiple enrolment samples. Further, the proposed approach produces stable class-specific (user-specific) iris weight maps by eliminating the error bits due to sub-optimal segmentation or pupil dilation effects, using spatial correspondence in a patch-wise manner. Through a set of experiments on two publicly available iris databases acquired under semi-constrained and unconstrained settings, we demonstrate superior identification and verification performance over current state-of-the-art algorithms. A Rank-1 identification rate of 93.3% is achieved on the CASIAv4 distance database, along with a verification accuracy of 80% Genuine Match Rate (GMR) at a False Match Rate (FMR) of 0.0001, indicating the applicability of the proposed approach in operational scenarios.
Citations: 3
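The core idea above, recovering a low-rank, noise-free code from several enrolment samples and deriving a per-bit weight map, can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' tensor-based implementation; stacking codes as a matrix, the chosen rank, and the 0.5 binarisation threshold are all assumptions.

```python
import numpy as np

def stable_code_lowrank(codes, rank=1):
    """Recover a stable iris code from multiple noisy enrolment samples.

    codes: (n_samples, n_bits) binary array. A truncated SVD keeps only the
    top singular components, suppressing sample-specific noise; the result
    is re-binarised into one stable code plus a per-bit weight map.
    """
    X = codes.astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                       # keep only the low-rank part
    L = U @ np.diag(s) @ Vt              # denoised, real-valued codes
    mean_bit = L.mean(axis=0)
    stable = (mean_bit >= 0.5).astype(np.uint8)
    # weight = 1 when all samples agree with the stable bit, 0 at full disagreement
    weight = 1.0 - (np.abs(mean_bit - stable).clip(0, 1) * 2)
    return stable, weight
```

With identical enrolment samples the matrix is exactly rank 1, so the recovered code matches the input and every bit gets full weight; noisy bits receive proportionally lower weight.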
A Feasibility Study on Utilizing Toe Prints for Biometric Verification of Children
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987273
David Yambay, Morgan Johnson, Keivan Bahmani, S. Schuckers
Biometric recognition allows a person to be identified by comparing feature vectors derived from the person's physiological characteristics. Recognition depends on the permanence of the biometric characteristics over long periods of time. There has been limited work evaluating the footprint as a potential biometric. This paper presents a longitudinal study of toe prints in children to understand whether this biometric modality can be used reliably as a child grows. Data was collected and analyzed for children ages 4-13 years over five visits, spaced approximately six months apart, giving two years of data. This is the first footprint collection spanning this broad age range in children. Footprints were segmented into separate toe prints to examine whether current fingerprint recognition technology can provide accurate results on toe prints. Data was analyzed using two available fingerprint matchers, Verifinger and Bozorth3 from the NIST Biometric Image Software (NBIS). Verifinger provides the best verification match scores using the toe prints, especially when using the hallux, the large toe. With Verifinger, the hallux provides verification rates of 0% FAR and FRR for images collected on the same day, and an FRR of 6.44% at a 1% FAR after two years have passed between collections. Additional longitudinal data is being collected to further these results.
Citations: 5
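The FAR/FRR operating points quoted above come from thresholding raw comparison scores. A minimal sketch of how such rates are computed from genuine and impostor score lists (hypothetical scores, assuming a higher score means a better match):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute FAR/FRR at a decision threshold.

    FAR: fraction of impostor comparisons accepted (score >= threshold).
    FRR: fraction of genuine comparisons rejected (score <  threshold).
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr
```

Sweeping the threshold over all observed scores traces the full FAR/FRR trade-off curve, from which operating points such as "FRR at 1% FAR" are read off.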
In Defense of Color Names for Small-Scale Person Re-Identification
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987338
Yang Yang, Zhen Lei, Jinqiao Wang, S. Li
In this paper, we propose an efficient image representation strategy for addressing the task of small-scale person re-identification. Taking advantage of its compactness and intuitive interpretability, we adopt the color names descriptor (CND) as our color feature. To address the inaccuracy of comparing color names with image pixels in Euclidean space, we propose a new approach, soft Gaussian mapping (SGM), which uses a Gaussian model to bridge their semantic gap. We further present a cross-view coupling learning method to build a common subspace in which the learned features can contain the transition information among different cameras. Experiments on challenging small-scale public benchmark datasets demonstrate the effectiveness of our proposed method.
Citations: 5
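The soft-assignment idea behind SGM, replacing a hard nearest-color-name label with Gaussian-weighted memberships, can be sketched as follows. The RGB prototypes and the sigma value are illustrative assumptions; the paper's actual color-name model is learned, not hand-set.

```python
import numpy as np

# Hypothetical RGB prototypes standing in for learned color-name centres.
PROTOTYPES = np.array([
    [255, 0, 0],      # red
    [0, 255, 0],      # green
    [0, 0, 255],      # blue
    [0, 0, 0],        # black
    [255, 255, 255],  # white
], dtype=float)

def soft_gaussian_mapping(pixels, prototypes=PROTOTYPES, sigma=60.0):
    """Soft assignment of pixels to color names.

    Each pixel gets a membership vector: a Gaussian of its squared Euclidean
    distance to every prototype, normalised to sum to 1. Averaging over all
    pixels yields a soft color-name histogram for the image.
    """
    px = np.asarray(pixels, dtype=float).reshape(-1, 3)
    d2 = ((px[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return w.mean(axis=0)
```

Unlike a hard assignment, a pixel between two prototypes contributes partial weight to both, which smooths over the semantic gap the abstract mentions.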
Sclera Segmentation Benchmarking Competition in Cross-resolution Environment
Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987414
Abhijit Das, U. Pal, M. Blumenstein, Caiyong Wang, Yong He, Yuhao Zhu, Zhenan Sun
This paper summarizes the results of the Sclera Segmentation Benchmarking Competition (SSBC 2019). It was organized in the context of the 12th IAPR International Conference on Biometrics (ICB 2019). The aim of this competition was to record the developments in sclera segmentation in a cross-resolution environment (the sclera trait captured using multiple acquisition sensors with different image resolutions). Additionally, the competition aimed to draw researchers' attention to this research topic. For the purpose of benchmarking, we employed two datasets of sclera images captured using different sensors. The first dataset, the Multi-Angle Sclera Dataset (MASD version 1), was collected using a DSLR camera; the second, the Mobile Sclera Dataset (MSD), was collected using an 8-megapixel mobile phone rear camera. Baseline manual segmentation masks of the sclera images were developed for both datasets. Precision- and recall-based measures were employed to evaluate the effectiveness and ranking of the submitted segmentation techniques. Four algorithms were submitted to address the segmentation task. In this paper we analyze the results produced by these algorithms/systems, and we define a way forward for this problem. Both datasets, along with some of the accompanying ground-truth/baseline masks, will be freely available for research purposes.
Citations: 14
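Precision- and recall-based scoring of a predicted mask against a manual ground-truth mask reduces to counting pixel-wise true/false positives and negatives. A minimal sketch (the competition's exact protocol may aggregate differently; F1 is included here as the usual summary):

```python
import numpy as np

def mask_precision_recall(pred, truth):
    """Pixel-wise precision, recall and F1 of a binary segmentation mask
    against the manual ground-truth mask."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```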
The Nipple-Areola Complex for Criminal Identification
Pub Date : 2019-05-28 DOI: 10.1109/ICB45273.2019.8987341
Wojciech Michal Matkowski, Krzysztof Matkowski, A. Kong, C. Hall
In digital and multimedia forensics, identification of child sexual offenders based on digital evidence images is highly challenging because the offender's face or other obvious characteristics such as tattoos are occluded, covered, or not visible at all. Nevertheless, other naked body parts, e.g., the chest, are still visible. Some researchers have proposed skin marks, skin texture, vein or androgenic hair patterns for criminal and victim identification. There are no available studies of the nipple-areola complex (NAC) for offender identification. In this paper, we present a study of offender identification based on the NAC, and we present the NTU-Nipple-v1 dataset, which contains 2732 images of 428 different male nipple-areolae. Popular deep learning and hand-crafted recognition methods are evaluated on the provided dataset. The results indicate that the NAC can be a useful characteristic for offender identification.
Citations: 0
Some Research Problems in Biometrics: The Future Beckons
Pub Date : 2019-05-12 DOI: 10.1109/ICB45273.2019.8987307
A. Ross, Sudipta Banerjee, Cunjian Chen, Anurag Chowdhury, Vahid Mirjalili, Renu Sharma, Thomas Swearingen, Shivangi Yadav
The need for reliably determining the identity of a person is critical in a number of different domains ranging from personal smartphones to border security; from autonomous vehicles to e-voting; from tracking child vaccinations to preventing human trafficking; from crime scene investigation to personalization of customer service. Biometrics, which entails the use of biological attributes such as face, fingerprints and voice for recognizing a person, is being increasingly used in several such applications. While biometric technology has made rapid strides over the past decade, there are several fundamental issues that are yet to be satisfactorily resolved. In this article, we will discuss some of these issues and enumerate some of the exciting challenges in this field.
Citations: 42
LivDet in Action - Fingerprint Liveness Detection Competition 2019
Pub Date : 2019-05-02 DOI: 10.1109/ICB45273.2019.8987281
G. Orrú, Roberto Casula, Pierluigi Tuveri, C. Bazzoni, Giovanna Dessalvi, Marco Micheletto, Luca Ghiani, G. Marcialis
The International Fingerprint Liveness Detection Competition (LivDet) is an open and well-acknowledged meeting point of academia and private companies that deal with the problem of distinguishing images of fingerprint reproductions made of artificial materials from images of real fingerprints. In this edition of LivDet we invited the competitors to propose algorithms integrated with matching systems. The goal was to investigate to what extent this integration impacts overall performance. Twelve algorithms were submitted to the competition, eight of which worked on integrated systems.
Citations: 50
Gender Classification from Iris Texture Images Using a New Set of Binary Statistical Image Features
Pub Date : 2019-05-01 DOI: 10.1109/ICB45273.2019.8987245
Juan E. Tapia, Claudia Arellano
Soft biometric information such as gender can contribute to many applications, such as identification and security. This paper explores the use of a Binary Statistical Features (BSIF) algorithm for classifying gender from iris texture images captured with NIR sensors. It uses the same pipeline as iris recognition systems, consisting of iris segmentation, normalisation and then classification. Experiments show that applying BSIF is not straightforward, since it can create artificial textures that cause misclassification. To overcome this limitation, a new set of filters was trained from eye images, and filters of different sizes with padding bands were tested on a subject-disjoint database. A Modified-BSIF (MBSIF) method was implemented, achieving better gender classification results (94.6% and 91.33% for the left and right eye, respectively). These results are competitive with the state of the art in gender classification. As an additional contribution, a novel gender-labelled database was created; it will be available upon request.
Citations: 5
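The BSIF descriptor pipeline, filter the image with a bank of learned filters, threshold each response at zero to get one bit per filter, pack the bits into an integer code, and histogram the codes, can be sketched as below. The toy gradient-like filters here are stand-ins; in BSIF the filters are learned via ICA from natural (or, as above, eye) image patches.

```python
import numpy as np

def bsif_code(image, filters):
    """BSIF-style binary code: filter responses thresholded at zero.

    image: 2-D grayscale array; filters: (k, h, w) filter bank. Each filter
    contributes one bit per pixel (valid-mode correlation, no padding); the
    k bits are packed into an integer code, then histogrammed as the feature.
    """
    img = np.asarray(image, dtype=float)
    k, fh, fw = np.shape(filters)
    H, W = img.shape[0] - fh + 1, img.shape[1] - fw + 1
    code = np.zeros((H, W), dtype=np.int64)
    for b, f in enumerate(filters):
        resp = np.zeros((H, W))
        for i in range(fh):                      # unrolled 2-D correlation
            for j in range(fw):
                resp += f[i, j] * img[i:i + H, j:j + W]
        code |= (resp > 0).astype(np.int64) << b  # one bit per filter
    hist = np.bincount(code.ravel(), minlength=2 ** k)
    return hist / hist.sum()                      # normalised histogram
```

The paper's point about padding bands maps to how the borders are handled here: valid-mode correlation simply shrinks the output, whereas padded filtering can introduce the artificial border textures the abstract warns about.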
Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis
Pub Date : 2019-04-15 DOI: 10.1109/ICB45273.2019.8987329
Xing Di, B. Riggan, Shuowen Hu, Nathan J. Short, Vishal M. Patel
Polarimetric thermal to visible face verification entails matching two images that contain significant domain differences. Several recent approaches have attempted to synthesize visible faces from thermal images for cross-modal matching. In this paper, we take a different approach: rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template, which is then used for verification. The proposed synthesis network is based on the self-attention generative adversarial network (SAGAN), which essentially allows efficient attention-guided image synthesis. Extensive experiments on the ARL polarimetric thermal face dataset demonstrate that the proposed method achieves state-of-the-art performance.
Citations: 33
Generalized Presentation Attack Detection: a face anti-spoofing evaluation proposal
Pub Date : 2019-04-12 DOI: 10.1109/ICB45273.2019.8987290
Artur Costa-Pazo, David Jiménez-Cabello, Esteban Vázquez-Fernández, J. Alba-Castro, R. López-Sastre
Over the past few years, Presentation Attack Detection (PAD) has become a fundamental part of facial recognition systems. Although much effort has been devoted to anti-spoofing research, generalization in real scenarios remains a challenge. In this paper we present a new open-source evaluation framework to study the generalization capacity of face PAD methods, coined here as face-GPAD. This framework facilitates the creation of new protocols focused on the generalization problem establishing fair procedures of evaluation and comparison between PAD solutions. We also introduce a large aggregated and categorized dataset to address the problem of incompatibility between publicly available datasets. Finally, we propose a benchmark adding two novel evaluation protocols: one for measuring the effect introduced by the variations in face resolution, and the second for evaluating the influence of adversarial operating conditions.
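Fair comparison between PAD solutions, as the abstract proposes, typically rests on standardized error rates. The sketch below computes APCER, BPCER, and their average ACER from toy scores; the score convention (higher = more likely bona fide), the threshold, and the toy data are assumptions for illustration, not part of the face-GPAD framework's API.

```python
import numpy as np

def pad_error_rates(bona_fide_scores, attack_scores, threshold):
    # APCER: fraction of attack presentations wrongly accepted as bona fide.
    # BPCER: fraction of bona fide presentations wrongly rejected.
    # Assumed convention: higher score means more likely bona fide.
    apcer = float(np.mean(np.asarray(attack_scores) >= threshold))
    bpcer = float(np.mean(np.asarray(bona_fide_scores) < threshold))
    acer = (apcer + bpcer) / 2.0  # average classification error rate
    return apcer, bpcer, acer

# Toy detector scores: bona fide cluster high, attacks cluster low,
# with one error on each side of the 0.5 threshold.
bona_fide = [0.9, 0.8, 0.85, 0.4]
attacks = [0.1, 0.2, 0.6, 0.15]
apcer, bpcer, acer = pad_error_rates(bona_fide, attacks, threshold=0.5)
```

Evaluating these rates at a threshold fixed on one dataset and reported on another is one common way to expose the cross-dataset generalization gap the paper targets.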
Citations: 16