X. Burgos-Artizzu, J. Fleureau, Olivier Dumas, Thierry Tapie, F. Clerc, N. Mollet
{"title":"Real-time expression-sensitive HMD face reconstruction","authors":"X. Burgos-Artizzu, J. Fleureau, Olivier Dumas, Thierry Tapie, F. Clerc, N. Mollet","doi":"10.1145/2820903.2820910","DOIUrl":null,"url":null,"abstract":"One of the main issues of current Head-Mounted Displays (HMD) is that they hide completely the wearer's face. This can be an issue in social experiences where two or more users want to share the 3D immersive experience. We propose a novel method to recover the face of the user in real-time. First, we learn the user appearance offline by building a 3D textured model of his head from a series of pictures. Then, by calibrating the camera and tracking the HMD's position in real-time we reproject the model on top of the video frames mimicking exactly the user's head pose. Finally, we remove the HMD and replace the occluded part of the face in a seamingless manner by performing image in-painting with the background. We further propose an extension to detect facial expressions on the visible part of the face and use it to change the upper face model accordingly. We show the promise of our method via some qualitative results on a variety of users.","PeriodicalId":21720,"journal":{"name":"SIGGRAPH Asia 2015 Technical Briefs","volume":"105 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2015 Technical Briefs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2820903.2820910","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 28
Abstract
One of the main issues with current Head-Mounted Displays (HMDs) is that they completely hide the wearer's face. This is a problem in social experiences where two or more users want to share a 3D immersive experience. We propose a novel method to recover the user's face in real time. First, we learn the user's appearance offline by building a 3D textured model of their head from a series of pictures. Then, by calibrating the camera and tracking the HMD's position in real time, we reproject the model onto the video frames, exactly mimicking the user's head pose. Finally, we remove the HMD and seamlessly replace the occluded part of the face by performing image in-painting with the background. We further propose an extension that detects facial expressions on the visible part of the face and uses them to update the upper-face model accordingly. We demonstrate the promise of our method through qualitative results on a variety of users.
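The reprojection step described above can be illustrated with a minimal pinhole-camera sketch: given camera intrinsics from an offline calibration and the tracked head pose, each 3D point of the textured head model is mapped to a pixel in the video frame. The function name, the intrinsics matrix `K`, and the pose `(R, t)` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 model points into the image with a pinhole camera.
    K: 3x3 intrinsics (from offline calibration); R, t: head pose
    tracked in real time. All names here are illustrative assumptions."""
    cam = points_3d @ R.T + t        # model coords -> camera coords
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide -> pixel coords

# Toy example: identity rotation, head 2 m in front of the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
pts = np.array([[0.0, 0.0, 0.0]])    # model origin
print(project_points(pts, K, R, t))  # -> [[320. 240.]] (image center)
```

In practice the projected model is rendered over the frame at these pixel locations, and the region the HMD occupied is then filled by in-painting from the surrounding background.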