{"title":"基于立体视频的多视点视频合成迭代深度恢复","authors":"Chen-Hao Wei, Chen-Kuo Chiang, S. Lai","doi":"10.1109/APSIPA.2014.7041695","DOIUrl":null,"url":null,"abstract":"We propose a novel depth maps refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic display. In order to generate realistic contents for virtual views, high-quality depth maps are very critical to the view synthesis results. We propose an iterative depth refinement approach of a joint error detection and correction algorithm to refine the depth maps that can be estimated by an existing stereo matching method or provided by a depth capturing device. Error detection aims at two types of error: across-view color-depth-inconsistency error and local color-depth-inconsistency error. Subsequently, the detected error pixels are corrected by searching appropriate candidates under several constraints to amend the depth errors. A trilateral filter is included in the refining process that considers intensity, spatial and temporal terms into the filter weighting to enhance the consistency across frames. In the proposed view synthesis framework, it features a disparity-based view interpolation method to alleviate the translucent artifacts and a directional filter to reduce the aliasing around the object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. In addition, we also show the refined depth maps along with the proposed view synthesis framework significantly improve the novel view synthesis on several benchmark datasets.","PeriodicalId":231382,"journal":{"name":"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Iterative depth recovery for multi-view video synthesis from stereo videos\",\"authors\":\"Chen-Hao Wei, Chen-Kuo Chiang, S. Lai\",\"doi\":\"10.1109/APSIPA.2014.7041695\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a novel depth maps refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic display. In order to generate realistic contents for virtual views, high-quality depth maps are very critical to the view synthesis results. We propose an iterative depth refinement approach of a joint error detection and correction algorithm to refine the depth maps that can be estimated by an existing stereo matching method or provided by a depth capturing device. Error detection aims at two types of error: across-view color-depth-inconsistency error and local color-depth-inconsistency error. Subsequently, the detected error pixels are corrected by searching appropriate candidates under several constraints to amend the depth errors. A trilateral filter is included in the refining process that considers intensity, spatial and temporal terms into the filter weighting to enhance the consistency across frames. In the proposed view synthesis framework, it features a disparity-based view interpolation method to alleviate the translucent artifacts and a directional filter to reduce the aliasing around the object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. 
In addition, we also show the refined depth maps along with the proposed view synthesis framework significantly improve the novel view synthesis on several benchmark datasets.\",\"PeriodicalId\":231382,\"journal\":{\"name\":\"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/APSIPA.2014.7041695\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APSIPA.2014.7041695","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Iterative depth recovery for multi-view video synthesis from stereo videos
We propose a novel depth map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. To generate realistic content for virtual views, high-quality depth maps are critical to the view synthesis results. We propose an iterative depth refinement approach based on a joint error detection and correction algorithm to refine depth maps that are estimated by an existing stereo matching method or provided by a depth capturing device. Error detection targets two types of error: across-view color-depth-inconsistency error and local color-depth-inconsistency error. The detected error pixels are then corrected by searching for appropriate candidates under several constraints. A trilateral filter is included in the refinement process; it incorporates intensity, spatial, and temporal terms into the filter weighting to enhance consistency across frames. The proposed view synthesis framework features a disparity-based view interpolation method to alleviate translucent artifacts and a directional filter to reduce aliasing around object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. In addition, we show that the refined depth maps, together with the proposed view synthesis framework, significantly improve novel view synthesis on several benchmark datasets.
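
The abstract describes a trilateral filter whose weights combine intensity, spatial, and temporal terms to keep refined depth consistent across frames. The Python sketch below illustrates one plausible form of such weighting; the Gaussian kernels, parameter names (sigma_i, sigma_s, sigma_t), and window sizes are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def trilateral_depth_filter(frames, depths, t, y, x,
                            radius=3, t_radius=1,
                            sigma_i=10.0, sigma_s=3.0, sigma_t=1.0):
    """Refine the depth at pixel (y, x) of frame t by a weighted average over a
    spatio-temporal window.  Each neighbour's weight combines an intensity
    (range) term, a spatial term, and a temporal term, so smoothing respects
    image edges and stays consistent across adjacent frames.

    frames : (T, H, W) array of grayscale intensities
    depths : (T, H, W) array of depth values
    """
    T, H, W = frames.shape
    center_i = float(frames[t, y, x])

    num, den = 0.0, 0.0
    for dt in range(-t_radius, t_radius + 1):
        tt = t + dt
        if not (0 <= tt < T):
            continue
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if not (0 <= yy < H and 0 <= xx < W):
                    continue
                # Intensity term: favour neighbours with similar intensity.
                w_i = np.exp(-(float(frames[tt, yy, xx]) - center_i) ** 2
                             / (2.0 * sigma_i ** 2))
                # Spatial term: favour pixels close to (y, x).
                w_s = np.exp(-(dy ** 2 + dx ** 2) / (2.0 * sigma_s ** 2))
                # Temporal term: favour the same and adjacent frames.
                w_t = np.exp(-(dt ** 2) / (2.0 * sigma_t ** 2))
                w = w_i * w_s * w_t
                num += w * float(depths[tt, yy, xx])
                den += w
    return num / den
```

In this sketch the three exponential terms play the roles of the intensity, spatial, and temporal weights mentioned in the abstract; applying the filter over a temporal radius of one or two frames is what enforces frame-to-frame consistency of the refined depth.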