Wei Yong Eng, Dongbo Min, V. Nguyen, Jiangbo Lu, M. Do
{"title":"Gaze correction for 3D tele-immersive communication system","authors":"Wei Yong Eng, Dongbo Min, V. Nguyen, Jiangbo Lu, M. Do","doi":"10.1109/IVMSPW.2013.6611942","DOIUrl":null,"url":null,"abstract":"The lack of eye contact between participants in a tele-conferencing makes nonverbal communication unnatural and ineffective. A lot of research has focused on correcting the user gaze for a natural communication. Most of prior solutions require expensive and bulky hardware, or incorporate a complicated algorithm causing inefficiency and deployment. In this paper, we propose an effective and efficient gaze correction solution for a 3D tele-conferencing system in a single color/depth camera set-up. A raw depth map is first refined using the corresponding color image. Then, both color and depth data of the participant are accurately segmented. A novel view is synthesized in the location of the display screen which coincides with the user gaze. Stereoscopic views, i.e. virtual left and right images, can also be generated for 3D immersive conferencing, and are displayed in a 3D monitor with 3D virtual background scenes. Finally, to handle large hole regions that often occur in the view synthesized with a single color camera, we propose a simple yet robust hole filling technique that works in real-time. This novel inpainting method can effectively reconstruct missing parts of the synthesized image under various challenging situations. 
Our proposed system works in real-time on a single core CPU without requiring dedicated hardware, including data acquisition, post-processing, rendering, and so on.","PeriodicalId":170714,"journal":{"name":"IVMSP 2013","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IVMSP 2013","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVMSPW.2013.6611942","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
The lack of eye contact between participants in tele-conferencing makes nonverbal communication unnatural and ineffective. Much research has focused on correcting the user's gaze to enable natural communication. Most prior solutions either require expensive and bulky hardware or rely on complicated algorithms that are inefficient and difficult to deploy. In this paper, we propose an effective and efficient gaze correction solution for a 3D tele-conferencing system using a single color/depth camera setup. A raw depth map is first refined using the corresponding color image. Then, both the color and depth data of the participant are accurately segmented. A novel view is synthesized at the location of the display screen so that it coincides with the user's gaze. Stereoscopic views, i.e., virtual left and right images, can also be generated for 3D immersive conferencing and displayed on a 3D monitor with 3D virtual background scenes. Finally, to handle the large hole regions that often occur in a view synthesized from a single color camera, we propose a simple yet robust hole-filling technique that works in real time. This novel inpainting method can effectively reconstruct missing parts of the synthesized image under various challenging conditions. Our proposed system runs in real time on a single CPU core without requiring dedicated hardware, covering data acquisition, post-processing, rendering, and related stages.
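To give a feel for the hole-filling step described above: in depth-image-based view synthesis, disocclusion holes are conventionally filled from the background side, i.e., from the neighboring pixel with the larger depth. The paper's actual algorithm is not specified here, so the following is only a minimal toy sketch of that general idea on a single scanline; the function name and data layout are illustrative, not from the paper.

```python
def fill_holes_row(colors, depths, hole=None):
    """Fill hole pixels in one scanline from the background side.

    colors: list of pixel values; `hole` marks a missing pixel
    depths: list of depth values (larger = farther); holes share the marker
    Returns the filled color row. This is a toy sketch of background-aware
    hole filling, not the paper's real-time inpainting method.
    """
    n = len(colors)
    out_c, out_d = list(colors), list(depths)
    for i in range(n):
        if out_c[i] is not hole:
            continue
        # Find the nearest valid neighbor on each side of the hole pixel.
        l = i - 1
        while l >= 0 and out_c[l] is hole:
            l -= 1
        r = i + 1
        while r < n and out_c[r] is hole:
            r += 1
        candidates = []
        if l >= 0:
            candidates.append((out_d[l], out_c[l]))
        if r < n:
            candidates.append((out_d[r], out_c[r]))
        if candidates:
            # Prefer the background neighbor (larger depth): disocclusions
            # expose background, so foreground colors should not bleed in.
            d, c = max(candidates)
            out_c[i], out_d[i] = c, d
    return out_c

# Two hole pixels between a near foreground edge and a far background pixel:
colors = [10, 20, None, None, 90]
depths = [1.0, 1.2, None, None, 5.0]
print(fill_holes_row(colors, depths))  # holes take the background value 90
```

Real systems refine this with patch-based or spatio-temporal inpainting, but the background-preference rule is the common core that keeps synthesized views free of foreground smearing.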