{"title":"Example-based 3D face reconstruction from uncalibrated frontal and profile images","authors":"Jing Li, Shuqin Long, Dan Zeng, Qijun Zhao","doi":"10.1109/ICB.2015.7139051","DOIUrl":null,"url":null,"abstract":"Reconstructing 3D face models from multiple uncalibrated 2D face images is usually done by using a single reference 3D face model or some gender/ethnicity-specific 3D face models. However, different persons, even those of the same gender or ethnicity, usually have significantly different faces in terms of their overall appearance, which forms the base of person recognition using faces. Consequently, existing 3D reference model based methods have limited capability of reconstructing 3D face models for a large variety of persons. In this paper, we propose to explore a reservoir of diverse reference models to improve the 3D face reconstruction performance. Specifically, we convert the face reconstruction problem into a multi-label segmentation problem. Its energy function is formulated from different cues, including 1) similarity between the desired output and the initial model, 2) color consistency between different views, 3) smoothness constraint on adjacent pixels, and 4) model consistency within local neighborhood. Experimental results on challenging datasets demonstrate that the proposed algorithm is capable of recovering high quality face models in both qualitative and quantitative evaluations.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Biometrics (ICB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICB.2015.7139051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Reconstructing 3D face models from multiple uncalibrated 2D face images is usually done using a single reference 3D face model or a few gender- or ethnicity-specific 3D face models. However, different persons, even those of the same gender or ethnicity, usually have significantly different overall facial appearances, which forms the basis of face-based person recognition. Consequently, existing reference-model-based methods have limited capability to reconstruct 3D face models for a wide variety of persons. In this paper, we propose to exploit a reservoir of diverse reference models to improve 3D face reconstruction performance. Specifically, we convert the face reconstruction problem into a multi-label segmentation problem whose energy function is formulated from several cues, including 1) similarity between the desired output and the initial model, 2) color consistency between different views, 3) a smoothness constraint on adjacent pixels, and 4) model consistency within local neighborhoods. Experimental results on challenging datasets demonstrate that the proposed algorithm recovers high-quality face models in both qualitative and quantitative evaluations.
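As a reading aid, the sketch below shows the general shape of a four-cue, pairwise multi-label segmentation energy of the kind the abstract describes, assuming the first two cues act as per-pixel (unary) data terms and the last two as pairwise terms over neighboring pixels. The symbols, weights, and exact term definitions here are illustrative assumptions, not the paper's actual formulation.

```latex
% Hypothetical sketch of a four-cue multi-label energy (not the paper's exact formulation).
% l_p : reference-model label assigned to pixel p
% \mathcal{N} : set of neighboring pixel pairs
% D_{sim}, D_{col} : unary terms (similarity to initial model, cross-view color consistency)
% V_{smo}, V_{mod} : pairwise terms (smoothness, local model consistency), weighted by assumed
%                    trade-off parameters \lambda_{s}, \lambda_{m}
E(L) = \sum_{p} \Big[ D_{\mathrm{sim}}(l_p) + D_{\mathrm{col}}(l_p) \Big]
     + \lambda_{s} \sum_{(p,q)\in\mathcal{N}} V_{\mathrm{smo}}(l_p, l_q)
     + \lambda_{m} \sum_{(p,q)\in\mathcal{N}} V_{\mathrm{mod}}(l_p, l_q)
```

Energies of this form are commonly minimized with multi-label graph-cut or move-making solvers, which is one plausible way the multi-label segmentation view of the reconstruction problem could be optimized; the paper itself should be consulted for the solver it actually uses.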