Iterative depth recovery for multi-view video synthesis from stereo videos

Chen-Hao Wei, Chen-Kuo Chiang, S. Lai
{"title":"基于立体视频的多视点视频合成迭代深度恢复","authors":"Chen-Hao Wei, Chen-Kuo Chiang, S. Lai","doi":"10.1109/APSIPA.2014.7041695","DOIUrl":null,"url":null,"abstract":"We propose a novel depth maps refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic display. In order to generate realistic contents for virtual views, high-quality depth maps are very critical to the view synthesis results. We propose an iterative depth refinement approach of a joint error detection and correction algorithm to refine the depth maps that can be estimated by an existing stereo matching method or provided by a depth capturing device. Error detection aims at two types of error: across-view color-depth-inconsistency error and local color-depth-inconsistency error. Subsequently, the detected error pixels are corrected by searching appropriate candidates under several constraints to amend the depth errors. A trilateral filter is included in the refining process that considers intensity, spatial and temporal terms into the filter weighting to enhance the consistency across frames. In the proposed view synthesis framework, it features a disparity-based view interpolation method to alleviate the translucent artifacts and a directional filter to reduce the aliasing around the object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. In addition, we also show the refined depth maps along with the proposed view synthesis framework significantly improve the novel view synthesis on several benchmark datasets.","PeriodicalId":231382,"journal":{"name":"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Iterative depth recovery for multi-view video synthesis from stereo videos\",\"authors\":\"Chen-Hao Wei, Chen-Kuo Chiang, S. Lai\",\"doi\":\"10.1109/APSIPA.2014.7041695\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a novel depth maps refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic display. In order to generate realistic contents for virtual views, high-quality depth maps are very critical to the view synthesis results. We propose an iterative depth refinement approach of a joint error detection and correction algorithm to refine the depth maps that can be estimated by an existing stereo matching method or provided by a depth capturing device. Error detection aims at two types of error: across-view color-depth-inconsistency error and local color-depth-inconsistency error. Subsequently, the detected error pixels are corrected by searching appropriate candidates under several constraints to amend the depth errors. A trilateral filter is included in the refining process that considers intensity, spatial and temporal terms into the filter weighting to enhance the consistency across frames. In the proposed view synthesis framework, it features a disparity-based view interpolation method to alleviate the translucent artifacts and a directional filter to reduce the aliasing around the object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. 
In addition, we also show the refined depth maps along with the proposed view synthesis framework significantly improve the novel view synthesis on several benchmark datasets.\",\"PeriodicalId\":231382,\"journal\":{\"name\":\"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/APSIPA.2014.7041695\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APSIPA.2014.7041695","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

We propose a novel depth-map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. High-quality depth maps are critical for producing realistic content in the synthesized virtual views. We propose an iterative depth refinement approach, built on a joint error detection and correction algorithm, to refine depth maps that are either estimated by an existing stereo matching method or provided by a depth-capturing device. Error detection targets two types of error: cross-view color-depth-inconsistency errors and local color-depth-inconsistency errors. The detected error pixels are then corrected by searching for appropriate candidates under several constraints. A trilateral filter is included in the refinement process; it incorporates intensity, spatial, and temporal terms into the filter weighting to enhance consistency across frames. The proposed view synthesis framework features a disparity-based view interpolation method to alleviate translucent artifacts and a directional filter to reduce aliasing around object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps, and that the refined depth maps, combined with the proposed view synthesis framework, significantly improve novel view synthesis on several benchmark datasets.
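The abstract describes the trilateral filter only at a high level: the filter weighting combines an intensity (color similarity) term, a spatial term, and a temporal term to smooth depth while keeping it consistent across frames. The sketch below is a minimal illustration of one plausible weighting of this kind, assuming Gaussian kernels for all three terms; the parameter names (sigma_s, sigma_r, sigma_t), the window sizes, and the function itself are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trilateral_depth_filter(depth, color, t, radius=3, temporal_radius=1,
                            sigma_s=3.0, sigma_r=10.0, sigma_t=1.0):
    """Refine the depth map of frame t with a trilateral weighting.

    depth: float array of shape (T, H, W), depth/disparity per frame.
    color: array of shape (T, H, W, 3), the guiding color frames.
    The three Gaussian terms (spatial, intensity, temporal) and the sigma
    values are illustrative choices, not parameters reported in the paper.
    """
    T, H, W = depth.shape
    out = np.zeros((H, W), dtype=np.float64)
    ref_color = color[t].astype(np.float64)

    for y in range(H):
        for x in range(W):
            num, den = 0.0, 0.0
            for dt in range(-temporal_radius, temporal_radius + 1):
                tq = t + dt
                if tq < 0 or tq >= T:
                    continue
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yq, xq = y + dy, x + dx
                        if yq < 0 or yq >= H or xq < 0 or xq >= W:
                            continue
                        # Spatial term: down-weight pixels far from (x, y).
                        w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
                        # Intensity term: down-weight color-dissimilar pixels.
                        diff = ref_color[y, x] - color[tq, yq, xq]
                        w_r = np.exp(-np.dot(diff, diff) / (2.0 * sigma_r ** 2))
                        # Temporal term: down-weight frames far from frame t.
                        w_t = np.exp(-(dt * dt) / (2.0 * sigma_t ** 2))
                        w = w_s * w_r * w_t
                        num += w * depth[tq, yq, xq]
                        den += w
            out[y, x] = num / den if den > 0 else depth[t, y, x]
    return out
```

A joint weighting of this form tends to preserve depth discontinuities that coincide with color edges while suppressing frame-to-frame flicker. In the paper this filtering is only one stage of an iterative pipeline that also performs cross-view and local error detection and candidate-based correction, which are not sketched here.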