
Latest Publications: 2017 IEEE International Joint Conference on Biometrics (IJCB)

On the vulnerability of ECG verification to online presentation attacks
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272692
Nima Karimian, D. Woodard, Domenic Forte
Electrocardiogram (ECG) has long been regarded as a biometric modality which is impractical to copy, clone, or spoof. However, it was recently shown that an ECG signal can be replayed from arbitrary waveform generators, computer sound cards, or off-the-shelf audio players. In this paper, we develop a novel presentation attack where a short template of the victim's ECG is captured by an attacker and used to map the attacker's ECG into the victim's, which can then be provided to the sensor using one of the above sources. Our approach involves exploiting ECG models, characterizing the differences between ECG signals, and developing mapping functions that transform any ECG into one that closely matches an authentic user's ECG. Our proposed approach, which can operate online or on-the-fly, is compared with a more ideal offline scenario where the attacker has more time and resources. In our experiments, the offline approach achieves average success rates of 97.43% and 94.17% for non-fiducial- and fiducial-based ECG authentication, respectively. In the online scenario, performance is degraded by 5.65% for non-fiducial-based authentication, but is nearly unaffected for fiducial-based authentication.
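The paper's actual mapping functions are not reproduced in this abstract; as a rough, hypothetical sketch of the idea, the NumPy snippet below fits a least-squares linear map from an attacker's resampled heartbeats onto a victim's short stolen template. The linear-map choice and all names are our assumptions for illustration, not the authors' method.

```python
import numpy as np

def fit_beat_mapping(attacker_beats, victim_beats):
    """Fit W so that attacker_beats @ W approximates victim_beats.

    A hypothetical linear stand-in for the paper's mapping functions;
    each row is one heartbeat resampled to a fixed length.
    """
    W, *_ = np.linalg.lstsq(attacker_beats, victim_beats, rcond=None)
    return W

# Toy usage: 30 beats of 200 samples each (synthetic placeholders).
rng = np.random.default_rng(0)
attacker_beats = rng.standard_normal((30, 200))
victim_beats = rng.standard_normal((30, 200))   # the short captured template
W = fit_beat_mapping(attacker_beats, victim_beats)
spoofed = attacker_beats @ W   # mapped waveform, replayable to the sensor
```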
Citations: 22
Handwriting watcher: A mechanism for smartwatch-driven handwriting authentication
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272701
Isaac Griswold-Steiner, Richard Matovu, Abdul Serwadda
Despite decades of research on automated handwriting authentication, no automated handwriting authentication application has yet broken into the mainstream. In this paper, we argue that the burgeoning wearables market holds the key to a practical handwriting authentication app. With potential applications in online education, standardized testing, and mobile banking, we present Handwriting Watcher, a mechanism which leverages a wrist-worn, sensor-enabled device to authenticate a user's free handwriting. Through experiments capturing a wide range of writing scenarios, we show Handwriting Watcher attains mean error rates as low as 6.56% across the population. Our work represents a promising step towards a market-ready, generalized handwriting authentication system.
Citations: 26
Full 3D touchless fingerprint recognition: Sensor, database and baseline performance
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272702
Javier Galbally, G. Boström, Laurent Beslay
One of the fields that even today remains largely unexplored in biometrics is 3D fingerprint recognition. This gap is mainly explained by the lack of scanners capable of acquiring accurate 3D spatial models of fingerprints in a touchless, fast, reliable, and repeatable way. As such, full 3D fingerprint data with which to conduct research and advance this field is almost nonexistent. If such an acquisition process were possible, it could mark the beginning of a real paradigm shift in the way fingerprint recognition is performed. The present paper is a first promising step towards the fascinating challenge of 3D fingerprint acquisition and recognition. It presents a new full 3D touchless fingerprint scanner, a new database with 1,000 3D fingerprint models, a new segmentation method based on the additional spatial information provided by the models, and initial baseline verification results.
Citations: 12
Cross-eyed 2017: Cross-spectral iris/periocular recognition competition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272762
Ana F. Sequeira, Lulu Chen, J. Ferryman, Peter Wild, F. Alonso-Fernandez, J. Bigün, K. Raja, Ramachandra Raghavendra, C. Busch, Tiago de Freitas Pereira, S. Marcel, S. S. Behera, Mahesh Gour, Vivek Kanhangad
This work presents the 2nd Cross-Spectrum Iris/Periocular Recognition Competition (Cross-Eyed2017). The main goal of the competition is to promote and evaluate advances in cross-spectrum iris and periocular recognition. This second edition registered increased participation from both academia and industry: five teams submitted twelve methods for the periocular task and five for the iris task. The benchmark dataset is an enlarged version of the dual-spectrum database containing both iris and periocular images synchronously captured from a distance within a realistic indoor environment. The evaluation was performed on an undisclosed test set. The methodology, tested algorithms, and obtained results are reported in this paper, identifying the remaining challenges on the path forward.
Citations: 29
Multimodal biometric recognition for toddlers and pre-school children
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272750
P. Basak, Saurabh De, Mallika Agarwal, Aakarsh Malhotra, Mayank Vatsa, Richa Singh
In many applications, such as law enforcement, attendance systems, and medical services, biometrics is utilized for identifying individuals. However, current systems generally do not enroll all possible age groups, particularly toddlers and pre-school children. This research is the first of its kind attempting to prepare a multimodal biometric database for such potential users of biometric systems. In the proposed database, face, fingerprint, and iris modalities of over 100 children (age range of 18 months to 4 years) are captured in two different sessions, months apart. We also perform a benchmarking evaluation of existing tools and algorithms to establish baseline results for different unimodal and multimodal scenarios. Our experience and results suggest that while iris is highly accurate, it requires constant adult supervision to attain cooperation from children. On the other hand, face is the easiest modality to capture but yields very low verification performance. We assert that the availability of this database can stimulate research on this important problem.
Citations: 35
Fast multi-view face alignment via multi-task auto-encoders
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272740
Qi Li, Zhenan Sun, R. He
Face alignment is an important problem in computer vision. It remains an open problem due to variations in facial attributes (e.g., head pose, facial expression, illumination). Many studies have shown that face alignment and facial attribute analysis are often correlated. This paper develops a two-stage multi-task auto-encoder framework for fast face alignment that incorporates head pose information to handle large view variations. In the first and second stages, multi-task auto-encoders are used to roughly locate and then further refine facial landmark locations using the related pose information. In addition, a shape constraint is naturally encoded into our two-stage face alignment framework to preserve facial structures, and a coarse-to-fine strategy is adopted to refine the facial landmark results under this constraint. Furthermore, the computational cost of our method is much lower than that of its deep learning competitors. Experimental results on various challenging datasets show the effectiveness of the proposed method.
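The abstract describes a shared representation serving both landmark localization and pose estimation; as a minimal sketch of that shared-encoder, multi-head idea (hypothetical dimensions and names, not the authors' exact architecture), the PyTorch snippet below pairs a reconstruction decoder with a landmark-regression head and a head-pose head trained jointly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskAE(nn.Module):
    """Toy multi-task auto-encoder: a shared encoder feeding a reconstruction
    decoder, a landmark-regression head, and a head-pose head. Dimensions
    are illustrative assumptions, not the paper's architecture."""
    def __init__(self, in_dim=64 * 64, hidden=256, n_landmarks=68, n_poses=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)             # image reconstruction
        self.landmarks = nn.Linear(hidden, n_landmarks * 2)  # (x, y) per landmark
        self.pose = nn.Linear(hidden, n_poses)               # coarse pose classes

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.landmarks(h), self.pose(h)

model = MultiTaskAE()
x = torch.randn(8, 64 * 64)                     # batch of flattened face crops
recon, lm_pred, pose_logits = model(x)
lm_target = torch.zeros_like(lm_pred)           # placeholder landmark targets
pose_target = torch.zeros(8, dtype=torch.long)  # placeholder pose labels
loss = (F.mse_loss(recon, x) + F.mse_loss(lm_pred, lm_target)
        + F.cross_entropy(pose_logits, pose_target))
loss.backward()
```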
Citations: 4
Liveness detection on touchless fingerprint devices using texture descriptors and artificial neural networks
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272724
Caue Zaghetto, Mateus Mendelson, A. Zaghetto, F. Vidal
This paper presents a liveness detection method based on texture descriptors and artificial neural networks, whose objective is to identify potential spoofing attacks against touchless fingerprinting devices. First, a database was created. It comprises a set of 400 images, of which 200 represent real fingers and 200 represent fake fingers made of beeswax, corn flour play dough, latex, silicone, and wood glue (40 samples each). The artificial neural network classifier is trained and tested in 7 different scenarios. In Scenario 1, there are only two classes, "real finger" and "fake finger". From Scenarios 2 to 6, six classes are used, but classification considers the "real finger" class against each of the five "fake finger" classes separately. Finally, in Scenario 7, six classes are used and the classifier must indicate to which of the six classes the acquired sample belongs. Results show that the proposed method achieves its goal, since it correctly detects liveness in almost 100% of cases.
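The abstract names texture descriptors and an ANN but not a specific pipeline; the sketch below is one plausible, hypothetical instantiation of Scenario 1 (real vs. fake), using uniform-LBP histograms fed to a small scikit-learn MLP, with synthetic arrays standing in for the 400 images.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(img, P=8, R=1):
    """Uniform-LBP histogram as a texture descriptor (one plausible choice;
    the abstract does not pin the descriptor down to this)."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Synthetic stand-ins for the 200 real / 200 fake finger images.
rng = np.random.default_rng(1)
real = rng.random((200, 64, 64))
fake = rng.random((200, 64, 64))
X = np.array([lbp_histogram(im) for im in np.concatenate([real, fake])])
y = np.array([1] * 200 + [0] * 200)   # Scenario 1: real (1) vs. fake (0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```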
Citations: 7
SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272764
Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein, Dejan Štepec, Peter Rot, Ž. Emeršič, P. Peer, V. Štruc, S. V. A. Kumar, B. Harish
This paper summarises the results of the Sclera Segmentation and Eye Recognition Benchmarking Competition (SSERBC 2017). It was organised in the context of the International Joint Conference on Biometrics (IJCB 2017). The aim of this competition was to record the recent developments in sclera segmentation and eye recognition in the visible spectrum (using iris, sclera, and peri-ocular regions, and their fusion), and also to draw researchers' attention to this subject. To this end, we used the Multi-Angle Sclera Dataset (MASD version 1), which comprises 2,624 images taken from both eyes of 82 identities, i.e., images of 164 (82×2) eyes. A manual segmentation mask of these images was created to baseline both tasks. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the segmentation and to rank the segmentation task, while recognition accuracy was used to evaluate the recognition task. Manually segmented sclera, iris, and peri-ocular regions were used in the recognition task. Sixteen teams registered for the competition; among them, six teams submitted algorithms or systems for the segmentation task and two submitted recognition algorithms or systems. The results produced by these algorithms or systems reflect current developments in the sclera segmentation and eye recognition literature, employing cutting-edge techniques. The MASD version 1 dataset, with some of the ground truth, will be freely available for research purposes. The success of the competition also demonstrates the recent interest of researchers from academia as well as industry in this subject.
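For reference, the pixel-wise precision and recall used to score segmentation entries against the manual masks can be computed as below; the formulas are the standard definitions, while the function and variable names are our own.

```python
import numpy as np

def precision_recall(pred_mask, gt_mask):
    """Pixel-wise precision and recall of a binary sclera mask against the
    manual ground-truth mask (standard definitions)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # true-positive pixels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    return precision, recall

# Toy example on random 64x64 masks.
rng = np.random.default_rng(2)
pred = rng.random((64, 64)) > 0.5
gt = rng.random((64, 64)) > 0.5
print(precision_recall(pred, gt))
```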
Citations: 35
Gender classification from multispectral periocular images
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272774
Juan E. Tapia, Ignacio A. Viedma
Gender classification from multispectral periocular and iris images is a new topic in soft-biometrics research. The features extracted from RGB (VIS) images and near-infrared (NIR) images provide complementary information, independent of the spectrum of the images. This paper shows that we can fuse this information to improve the accuracy of gender classification. Most gender classification methods reported in the literature have used images from face databases and all the features for classification purposes. Experimental results suggest: (a) features extracted at different scales can perform better than a single feature at a single scale; (b) periocular images performed better than iris images in both VIS and NIR; (c) fusing features from NIR and VIS spectral images improves accuracy; (d) feature selection applied to NIR and VIS selects the relevant features; and (e) our accuracy of 90% is competitive with the state of the art.
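As a hypothetical illustration of the fusion-plus-selection step (the abstract does not specify the exact features or selector), the snippet below concatenates per-spectrum feature vectors and applies a scikit-learn SelectKBest filter before a linear SVM; all dimensions and labels are toy assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-ins: 300 subjects, 128-D features per spectrum.
rng = np.random.default_rng(3)
vis_feats = rng.standard_normal((300, 128))
nir_feats = rng.standard_normal((300, 128))
gender = rng.integers(0, 2, size=300)    # 0 = female, 1 = male (toy labels)

X = np.hstack([vis_feats, nir_feats])    # feature-level VIS+NIR fusion

# Keep the 64 most discriminative dimensions, then classify.
clf = make_pipeline(SelectKBest(f_classif, k=64), SVC(kernel="linear"))
clf.fit(X, gender)
print(clf.score(X, gender))
```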
Citations: 13
Linking face images captured from the optical phenomenon in the wild for forensic science
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272770
Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein
This paper discusses the possibility of using challenging face images captured via optical phenomena in the wild for forensic individual identification. In surveillance scenarios, occluded or covered faces can be collected from their reflections on surrounding glass or on a smooth wall within the surveillance camera's coverage, and such face images can be linked for forensic purposes. Another similar scenario that can also be used for forensics is the face image of an individual standing behind a transparent glass wall. This study was conducted to investigate the capability of such images for personal identification. We examined different types of features employed in the literature to establish individual identification from such degraded face images; among them, local region-based features worked best. To achieve higher accuracy and better facial features, face images were cropped manually along their tight bounding boxes, and noise removal (reflections, etc.) was performed. For the experiments, we developed a database covering the above scenarios, which will be publicly available for academic research. This initial investigation substantiates the possibility of using such face images for forensic purposes.
Citations: 1