Neighboring block based disparity vector derivation for 3D-AVC
Li Zhang, Jewon Kang, Xin Zhao, Ying Chen, R. Joshi
2013 Visual Communications and Image Processing (VCIP), November 2013
DOI: 10.1109/VCIP.2013.6706401
Citations: 6
Abstract
3D-AVC, being developed by the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which adds no new macroblock-level coding tools over the multiview video coding extension of H.264/AVC (MVC). However, in the multiview-compatible configuration, i.e., when texture views are decoded without accessing depth information, the current 3D-AVC performs only marginally better than MVC+D. The gap stems from the lack of disparity vectors, which in 3D-AVC can be obtained only from the coded depth views. In this paper, a disparity vector derivation method based on the motion information of neighboring blocks is proposed and applied together with the existing coding tools in 3D-AVC. The proposed method substantially improves 3D-AVC in the multiview-compatible mode, yielding about 20% bitrate reduction for texture coding. When view synthesis prediction is enabled to further refine the disparity vectors, the proposed method performs 31% better than MVC+D and even surpasses 3D-AVC under its best-performing configuration.
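To illustrate the idea of deriving a disparity vector from neighboring blocks, the following minimal sketch scans temporal and spatial neighbors for a disparity motion vector, i.e., a motion vector that points to an inter-view reference picture. The neighbor set, checking order, data structures, and the zero-vector fallback are illustrative assumptions for this sketch, not the exact 3D-AVC design described in the paper.

```python
# Minimal sketch of neighboring-block-based disparity vector derivation.
# Neighbor set, checking order, and fallback behavior are assumptions,
# not the normative 3D-AVC process.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class MotionInfo:
    mv: Tuple[int, int]      # (x, y) motion vector in quarter-pel units
    ref_is_interview: bool   # True if the reference picture is an inter-view reference


def derive_disparity_vector(
    temporal_neighbors: List[Optional[MotionInfo]],
    spatial_neighbors: List[Optional[MotionInfo]],
) -> Tuple[int, int]:
    """Return a disparity vector for the current macroblock.

    Scan temporal neighbors first, then spatial neighbors (order chosen
    here for illustration), and reuse the first disparity motion vector
    found. Fall back to a zero disparity vector when no neighbor points
    to an inter-view reference.
    """
    for neighbor in temporal_neighbors + spatial_neighbors:
        if neighbor is not None and neighbor.ref_is_interview:
            # For rectified multiview content the vertical disparity is
            # typically negligible, so keep only the horizontal component.
            return (neighbor.mv[0], 0)
    return (0, 0)  # no disparity information available from neighbors
```

In this sketch, the derived vector would then drive inter-view prediction tools for the texture view without touching depth data; a refinement step such as the view synthesis prediction mentioned in the abstract could further adjust it, but that step is not shown here.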