Monocular, Boundary-Preserving Joint Recovery of Scene Flow and Depth

Frontiers in ICT (Q1, Computer Science) | Pub Date: 2016-09-30 | DOI: 10.3389/fict.2016.00021
Y. Mathlouthi, A. Mitiche, Ismail Ben Ayed
{"title":"Monocular, Boundary-Preserving Joint Recovery of Scene Flow and Depth","authors":"Y. Mathlouthi, A. Mitiche, Ismail Ben Ayed","doi":"10.3389/fict.2016.00021","DOIUrl":null,"url":null,"abstract":"Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others required, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the image sequence spatiotemporal variations, and L2 regularization terms for smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: It was able to recover good depth and motion, except at their boundaries because L2 regularization is blind to discontinuities which it smooths indiscriminately. The method we study in this paper generalizes to L1 regularization the formulation of Mitiche et al. (2015) so that it computes boundary preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are computed from the recorded image sequence also by a variational method which uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are put in evidence in experimentation with real and synthetic images which shows the results of L1 versus L2 regularization of depth and motion, as well as the results using L1 rather than L2 regularization of image derivatives.","PeriodicalId":37157,"journal":{"name":"Frontiers in ICT","volume":"68 1","pages":"21"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in ICT","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fict.2016.00021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
引用次数: 0

Abstract

Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as other methods require, was investigated in Mitiche et al. (2015) using an integral functional with a term enforcing conformity of scene flow and depth to the spatiotemporal variations of the image sequence, and L2 regularization terms for smooth depth and scene flow fields. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method, except that the unknowns were depth and scene flow rather than optical flow. Several examples demonstrated the basic effectiveness of the method: it recovered accurate depth and motion everywhere except at their boundaries, because L2 regularization is blind to discontinuities and smooths them indiscriminately. The method studied in this paper generalizes the formulation of Mitiche et al. (2015) to L1 regularization, so that it computes boundary-preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are also computed from the recorded image sequence by a variational method that uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are demonstrated in experiments with real and synthetic images that compare L1 and L2 regularization of depth and motion, as well as L1 and L2 regularization of the image derivatives.
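The functional itself is not reproduced in the abstract. As an illustrative sketch only, assuming the standard perspective-projection relation between a point's 3D velocity (U, V, W), its depth Z, and its optical flow, the brightness-constancy data term is linear in the unknowns, and the L1-generalized objective described above could take a form along these lines (f, lambda, and mu are an assumed focal length and regularization weights, not notation taken from the paper):

% Illustrative sketch only; the symbols and weighting are assumptions, not the paper's notation.
% I_x, I_y, I_t: spatial and temporal image derivatives; Omega: image domain.
\min_{U,V,W,Z}\;
\int_{\Omega} \Big( I_x\,(fU - xW) + I_y\,(fV - yW) + I_t\,Z \Big)^{2}\, dx\, dy
\;+\; \lambda \int_{\Omega} \big( \lVert \nabla U \rVert + \lVert \nabla V \rVert + \lVert \nabla W \rVert \big)\, dx\, dy
\;+\; \mu \int_{\Omega} \lVert \nabla Z \rVert\, dx\, dy

Replacing the L1 (total-variation-like) regularizers with squared gradient norms would recover an L2 formulation in the spirit of Mitiche et al. (2015); the L1 terms penalize large gradients only linearly, which is why depth and motion discontinuities survive the smoothing.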