High-Quality Depth Recovery via Interactive Multi-view Stereo

Weifeng Chen, Guofeng Zhang, Xiaojun Xiang, Jiaya Jia, H. Bao
{"title":"High-Quality Depth Recovery via Interactive Multi-view Stereo","authors":"Weifeng Chen, Guofeng Zhang, Xiaojun Xiang, Jiaya Jia, H. Bao","doi":"10.1109/3DV.2014.55","DOIUrl":null,"url":null,"abstract":"Although multi-view stereo has been extensively studied during the past decades, automatically computing high-quality dense depth information from captured images/videos is still quite difficult. Many factors, such as serious occlusion, large texture less regions and strong reflection, easily cause erroneous depth recovery. In this paper, we present a novel semi-automatic multi-view stereo system, which can quickly create and repair depth from a monocular sequence taken by a freely moving camera. One of our main contributions is that we propose a novel multi-view stereo model incorporating prior constraints indicated by user interaction, which makes it possible to even handle Non-Lambertian surface that surely violates the photo-consistency constraint. Users only need to provide a coarse segmentation and a few user interactions, our system can automatically correct depth and refine boundary. With other priors and occlusion handling, the erroneous depth can be effectively corrected even for very challenging examples that are difficult for state-of-the-art methods.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 2nd International Conference on 3D Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DV.2014.55","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Although multi-view stereo has been extensively studied over the past decades, automatically computing high-quality dense depth information from captured images/videos is still quite difficult. Many factors, such as serious occlusion, large textureless regions, and strong reflections, easily cause erroneous depth recovery. In this paper, we present a novel semi-automatic multi-view stereo system, which can quickly create and repair depth from a monocular sequence taken by a freely moving camera. One of our main contributions is a novel multi-view stereo model that incorporates prior constraints indicated by user interaction, which makes it possible to handle even non-Lambertian surfaces that clearly violate the photo-consistency constraint. Users only need to provide a coarse segmentation and a few interactions; our system can then automatically correct depth and refine boundaries. Together with other priors and occlusion handling, erroneous depth can be effectively corrected even for very challenging examples that are difficult for state-of-the-art methods.
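To make the idea of combining photo-consistency with user-indicated priors concrete, the following is a minimal sketch, not the authors' exact formulation: a per-pixel cost that sums an NCC-based data term with an optional penalty pulling the depth hypothesis toward a user-indicated depth in marked regions (e.g. a reflective surface where photo-consistency is unreliable). Function names such as `photo_consistency_cost` and `pixel_cost`, and the quadratic form of the prior, are illustrative assumptions.

```python
import numpy as np

def photo_consistency_cost(ref_patch, src_patch):
    """Negative NCC between a reference patch and the patch it maps to in a
    source view under a depth hypothesis; low cost means the hypothesis
    explains both views well."""
    r = ref_patch - ref_patch.mean()
    s = src_patch - src_patch.mean()
    denom = np.linalg.norm(r) * np.linalg.norm(s) + 1e-8
    ncc = float(np.dot(r.ravel(), s.ravel()) / denom)
    return 1.0 - ncc  # in [0, 2]; 0 when the patches match perfectly

def pixel_cost(photo_cost, depth_hyp, user_depth=None, lam=1.0):
    """Combine the photo-consistency data term with an optional user prior.

    Where the user has marked a region and indicated an approximate depth,
    the prior term (weight `lam`, assumed here) penalizes deviation from it
    and can dominate an unreliable photo term, e.g. on non-Lambertian
    surfaces."""
    cost = photo_cost
    if user_depth is not None:
        cost += lam * (depth_hyp - user_depth) ** 2
    return cost

# Example: a reflective pixel where the photo term is misleading.
rng = np.random.default_rng(0)
ref, src = rng.random((7, 7)), rng.random((7, 7))
c = pixel_cost(photo_consistency_cost(ref, src), depth_hyp=2.1, user_depth=2.0)
print(f"combined cost: {c:.3f}")
```

In a full system such a cost would be evaluated over discrete depth labels and regularized with a smoothness term before optimization; the sketch only shows how a user-provided constraint can be folded into the per-pixel data cost.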