{"title":"结合基于视图的姿态归一化和特征变换的交叉姿态人脸识别","authors":"Hua Gao, H. K. Ekenel, R. Stiefelhagen","doi":"10.1109/ICB.2015.7139114","DOIUrl":null,"url":null,"abstract":"Automatic face recognition across large pose changes is still a challenging problem. Previous solutions apply a transform in image space or feature space for normalizing the pose mismatch. For feature transform, the feature vector extracted on a probe facial image is transferred to match the gallery condition with regression models. Usually, the regression models are learned from paired gallery-probe conditions, in which pose angles are known or accurately estimated. The solution based on image transform is able to handle continuous pose changes, yet the approach suffers from warping artifacts due to misalignment and self-occlusion. In this work, we propose a novel approach, which combines the advantage of both methods. The algorithm is able to handle continuous pose mismatch in gallery and probe set, mitigating the impact of inaccurate pose estimation in feature-transform-based method. We evaluate the proposed algorithm on the FERET face database, where the pose angles are roughly annotated. Experimental results show that our proposed method is superior to solely image/feature transform methods, especially when the pose angle difference is large.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Combining view-based pose normalization and feature transform for cross-pose face recognition\",\"authors\":\"Hua Gao, H. K. Ekenel, R. Stiefelhagen\",\"doi\":\"10.1109/ICB.2015.7139114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automatic face recognition across large pose changes is still a challenging problem. Previous solutions apply a transform in image space or feature space for normalizing the pose mismatch. For feature transform, the feature vector extracted on a probe facial image is transferred to match the gallery condition with regression models. Usually, the regression models are learned from paired gallery-probe conditions, in which pose angles are known or accurately estimated. The solution based on image transform is able to handle continuous pose changes, yet the approach suffers from warping artifacts due to misalignment and self-occlusion. In this work, we propose a novel approach, which combines the advantage of both methods. The algorithm is able to handle continuous pose mismatch in gallery and probe set, mitigating the impact of inaccurate pose estimation in feature-transform-based method. We evaluate the proposed algorithm on the FERET face database, where the pose angles are roughly annotated. 
Experimental results show that our proposed method is superior to solely image/feature transform methods, especially when the pose angle difference is large.\",\"PeriodicalId\":237372,\"journal\":{\"name\":\"2015 International Conference on Biometrics (ICB)\",\"volume\":\"295 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Conference on Biometrics (ICB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICB.2015.7139114\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Biometrics (ICB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICB.2015.7139114","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Combining view-based pose normalization and feature transform for cross-pose face recognition
Automatic face recognition across large pose changes remains a challenging problem. Previous solutions apply a transform in image space or in feature space to normalize the pose mismatch. In the feature-transform approach, the feature vector extracted from a probe face image is mapped to the gallery condition with regression models. These regression models are usually learned from paired gallery-probe conditions in which the pose angles are known or accurately estimated. The image-transform approach, in contrast, can handle continuous pose changes, but it suffers from warping artifacts caused by misalignment and self-occlusion. In this work, we propose a novel approach that combines the advantages of both methods. The algorithm handles continuous pose mismatch between the gallery and probe sets, mitigating the impact of inaccurate pose estimation in feature-transform-based methods. We evaluate the proposed algorithm on the FERET face database, where the pose angles are only roughly annotated. Experimental results show that the proposed method is superior to purely image- or feature-transform-based methods, especially when the pose angle difference is large.
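
The feature-transform idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the regression model below is a plain ridge regression mapping probe-pose features to the gallery (frontal) condition, and all names (X_probe, X_gallery, learn_feature_transform, match_score) are hypothetical placeholders for whatever features and matcher are actually used.

import numpy as np

def learn_feature_transform(X_probe, X_gallery, reg=1e-2):
    # Ridge regression: find W such that X_probe @ W approximates X_gallery.
    # X_probe, X_gallery: paired training features (n_pairs x feature_dim)
    # extracted from the same subjects under the probe pose and the gallery pose.
    d = X_probe.shape[1]
    A = X_probe.T @ X_probe + reg * np.eye(d)
    B = X_probe.T @ X_gallery
    return np.linalg.solve(A, B)  # d x d transform matrix

def match_score(probe_feat, gallery_feat, W):
    # Transform the probe feature to the gallery condition, then compare
    # with cosine similarity.
    p = probe_feat @ W
    denom = np.linalg.norm(p) * np.linalg.norm(gallery_feat) + 1e-12
    return float(p @ gallery_feat / denom)

# Toy usage with random data standing in for real face features.
rng = np.random.default_rng(0)
X_probe = rng.normal(size=(200, 64))
X_gallery = X_probe @ (0.1 * rng.normal(size=(64, 64))) + 0.01 * rng.normal(size=(200, 64))
W = learn_feature_transform(X_probe, X_gallery)
print(match_score(X_probe[0], X_gallery[0], W))

In this sketch a separate W would be learned for each paired gallery-probe pose condition, which is why accurate pose estimation matters for purely feature-transform-based methods and why the paper combines this step with view-based pose normalization in image space.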