Multi-view stereo with recurrent neural networks for spatio-temporal consistent depth maps

Hosung Son, Suk-ju Kang
{"title":"Multi-view stereo with recurrent neural networks for spatio-temporal consistent depth maps","authors":"Hosung Son, Suk-ju Kang","doi":"10.1109/ICEIC57457.2023.10049937","DOIUrl":null,"url":null,"abstract":"Depth estimation methods based on deep learning have been studied to improve depth estimation accuracy. However, obtaining inter-frame consistency in depth maps in video depth estimation remains a challenge. Therefore, we proposed an application methodology for spatio-temporal consistency enhancement in video depth estimation based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In other words, the convolutional long-short term memory (ConvLSTM) module was added to the decoder of depth estimation network to enable the use of the information from the previous frames. Additionally, the one-stage learning process was implemented to ensure ease of training. In conclusion, we experimentally show that the proposed method can achieve not only improved accuracy also consistency between depth map frames.","PeriodicalId":373752,"journal":{"name":"2023 International Conference on Electronics, Information, and Communication (ICEIC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Electronics, Information, and Communication (ICEIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEIC57457.2023.10049937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Depth estimation methods based on deep learning have been studied to improve depth estimation accuracy. However, obtaining inter-frame consistency in depth maps for video depth estimation remains a challenge. Therefore, we propose a methodology for enhancing spatio-temporal consistency in video depth estimation based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Specifically, a convolutional long short-term memory (ConvLSTM) module is added to the decoder of the depth estimation network so that information from previous frames can be used. Additionally, a one-stage learning process is implemented to keep training simple. We experimentally show that the proposed method achieves not only improved accuracy but also improved consistency between depth map frames.
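The abstract only names the architectural change; the sketch below is a minimal, non-authoritative illustration of what a ConvLSTM cell inserted into a decoder stage looks like, assuming a standard ConvLSTM formulation. It is not the authors' implementation, and the class name, channel counts, and shapes are all illustrative assumptions.

```python
# Minimal sketch, not the authors' code: a ConvLSTM cell of the kind the
# abstract describes adding to a depth-decoder stage. All names, channel
# counts, and shapes below are illustrative assumptions.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: LSTM gates computed with 2D convolutions,
    so the hidden/cell states keep the spatial layout of the feature map."""

    def __init__(self, in_ch: int, hid_ch: int, kernel_size: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution emits all four gate pre-activations (i, f, o, g).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def init_state(self, batch, height, width, device="cpu"):
        z = torch.zeros(batch, self.hid_ch, height, width, device=device)
        return (z, z.clone())  # (hidden state h, cell state c)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)  # cell state blends past and new content
        h = o * torch.tanh(c)          # hidden state doubles as the cell output
        return h, (h, c)

# Toy usage: run the cell over a 5-frame clip of decoder feature maps, so
# each frame's features see the state accumulated from earlier frames.
cell = ConvLSTMCell(in_ch=64, hid_ch=64)
state = cell.init_state(batch=1, height=32, width=32)
for t in range(5):
    feat = torch.randn(1, 64, 32, 32)   # stand-in per-frame decoder features
    out, state = cell(feat, state)      # state carries temporal context
print(out.shape)                        # torch.Size([1, 64, 32, 32])
```

Because the recurrent state is updated frame by frame, the decoder output for each frame depends on its predecessors, which is how this kind of module promotes temporal consistency between consecutive depth maps.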