Title: 3D video coding using motion information and depth map
Authors: Fei Cheng, Jimin Xiao, T. Tillo
Published in: 2015 IEEE International Conference on Multimedia and Expo (ICME), June 2015
DOI: 10.1109/ICME.2015.7177431
Citations: 1
Abstract
In this paper, a motion-information-based 3D video coding method is proposed for the texture-plus-depth 3D video format. Synchronized global motion information from the camcorder is sampled to help the encoder improve its rate-distortion performance. The approach projects temporally preceding frames to the position of the current frame using the depth and motion information, and these projected frames are added to the reference buffer as virtual reference frames. Because the virtual reference frames are more similar to the current frame than the conventional reference frames, less residual information is required. Experimental results demonstrate that the proposed scheme enhances coding performance under various motion conditions, including rotational and translational motion.
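The core operation the abstract describes, projecting a previous frame to the current camera position using its depth map and the global camera motion, can be sketched as standard depth-image-based forward warping. The following is a minimal illustration, not the authors' implementation: `K` (camera intrinsics) and the relative motion `(R, t)` are assumed inputs, and disoccluded pixels are simply left as holes (zeros) rather than inpainted.

```python
import numpy as np

def project_frame(prev_frame, depth, K, R, t):
    """Forward-project a previous frame to the current camera pose.

    prev_frame : (H, W) luma plane of the previous frame
    depth      : (H, W) per-pixel depth of the previous frame
    K          : (3, 3) camera intrinsic matrix (assumed known)
    R, t       : relative camera rotation (3, 3) and translation (3,)
                 between the previous and current frame
    Returns a virtual reference frame; disoccluded pixels remain 0.
    """
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    virtual = np.zeros_like(prev_frame)

    # Homogeneous pixel grid of the previous frame, shape (3, H*W).
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])

    # Back-project to 3D using the depth map, apply the camera
    # motion, then reproject into the current image plane.
    pts3d = (K_inv @ pix) * depth.ravel()
    proj = K @ (R @ pts3d + t.reshape(3, 1))
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    # Keep only points that land in front of the camera and inside
    # the image; everything else leaves a hole in the virtual frame.
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    virtual[v[valid], u[valid]] = prev_frame[ys.ravel()[valid],
                                             xs.ravel()[valid]]
    return virtual
```

With zero motion (`R = I`, `t = 0`) the projection reproduces the previous frame exactly, while a lateral translation shifts pixels by an amount inversely proportional to their depth, which is why the warped frame tracks the current frame more closely than an unwarped reference.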