Large Margin Coupled Feature Learning for cross-modal face recognition

Yi Jin, Jiwen Lu, Q. Ruan
DOI: 10.1109/ICB.2015.7139097
Published in: 2015 International Conference on Biometrics (ICB), May 19, 2015
Citations: 13

Abstract

This paper presents a Large Margin Coupled Feature Learning (LMCFL) method for cross-modal face recognition, which recognizes persons from facial images captured from different modalities. Most previous cross-modal face recognition methods utilize hand-crafted feature descriptors for face representation, which require strong prior knowledge to engineer and cannot exploit data-adaptive characteristics in feature extraction. In this work, we propose a new LMCFL method to learn coupled face representation at the image pixel level by jointly utilizing the discriminative information of face images in each modality and the correlation information of face images from different modalities. Thus, LMCFL can maximize the margin between positive face pairs and negative face pairs in each modality, and maximize the correlation of face images from different modalities, where discriminative face features can be automatically learned in a discriminative and data-driven way. Our LMCFL is validated on two different cross-modal face recognition applications, and the experimental results demonstrate the effectiveness of our proposed approach.
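The abstract describes two joint objectives: a large-margin term that separates positive from negative face pairs within each modality, and a correlation term that aligns paired images across modalities. The sketch below illustrates that kind of combined objective on toy data; it is a minimal illustration under stated assumptions, not the paper's actual formulation, and every name in it (`W1`, `W2`, `margin`, `lam`, the loss function) is hypothetical.

```python
import numpy as np

# Illustrative sketch only: a within-modality large-margin term plus a
# cross-modal alignment term, in the spirit of the abstract's description.
# All variable names and the loss form are assumptions for illustration.

rng = np.random.default_rng(0)

# Toy data: n paired face images from two modalities, flattened to d pixels.
n, d, k = 8, 32, 4
X1 = rng.standard_normal((n, d))   # modality 1 (e.g. photo)
X2 = rng.standard_normal((n, d))   # modality 2 (e.g. sketch)
labels = np.arange(n) % 2          # toy identity labels

W1 = rng.standard_normal((d, k))   # coupled projections to be learned
W2 = rng.standard_normal((d, k))

def lmcfl_style_loss(W1, W2, X1, X2, labels, margin=1.0, lam=0.5):
    """Hinge-style margin loss in each modality minus a cross-modal
    alignment (un-normalized correlation) term for paired projections."""
    Z1, Z2 = X1 @ W1, X2 @ W2
    hinge = 0.0
    for Z in (Z1, Z2):
        for i in range(n):
            for j in range(i + 1, n):
                dist = np.sum((Z[i] - Z[j]) ** 2)
                if labels[i] == labels[j]:
                    hinge += dist                      # positive pair: pull together
                else:
                    hinge += max(0.0, margin - dist)   # negative pair: push past margin
    align = np.sum(Z1 * Z2)  # paired images should project to similar codes
    return hinge - lam * align

print(lmcfl_style_loss(W1, W2, X1, X2, labels))
```

In a real method the projections would be optimized (e.g. by gradient descent) to minimize such a loss, which is what makes the learned features data-driven rather than hand-crafted.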