Self-Aligned Video Deraining with Transmission-Depth Consistency

Wending Yan, R. Tan, Wenhan Yang, Dengxin Dai
DOI: 10.1109/CVPR46437.2021.01179
Published in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2021
Citations: 19

Abstract

In this paper, we address the problem of rain streak and rain accumulation removal in video by developing a self-alignment network with transmission-depth consistency. Existing video-based deraining methods focus only on rain streak removal and commonly use optical flow to align the rain video frames. However, besides rain streaks, rain accumulation can considerably degrade visibility, and optical flow estimation in a rain video is still error-prone, which in turn degrades deraining performance. Our method employs deformable convolution layers in the encoder to achieve feature-level frame alignment, and hence avoids using optical flow. For rain streaks, our method predicts the current frame from its adjacent frames, so that rain streaks that appear randomly in the temporal domain can be removed. For rain accumulation, our method employs a transmission-depth consistency loss to resolve the ambiguity between depth and water-droplet density. Our network estimates depth from consecutive rain-accumulation-removal outputs and calculates the transmission map using a commonly used physics model. To ensure photometric-temporal and depth-temporal consistency, our method estimates the camera poses so that it can warp one frame to its adjacent frames. Experimental results show that our method is effective in removing both rain streaks and rain accumulation, outperforming state-of-the-art methods both quantitatively and qualitatively.
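The "commonly used physics model" linking transmission and depth is presumably the standard scattering relation t(x) = exp(-β·d(x)), where β is a scattering coefficient that grows with water-droplet density. A minimal sketch of how a transmission map follows from estimated depth, and of an L1-style transmission-depth consistency term (function names and the fixed β are illustrative assumptions, not taken from the paper):

```python
import math

def transmission_from_depth(depth, beta=1.0):
    """Standard scattering model: t(x) = exp(-beta * d(x)).

    `depth` is a flat list of per-pixel depth values; `beta` is an
    assumed scattering coefficient (the paper may estimate it jointly).
    """
    return [math.exp(-beta * d) for d in depth]

def transmission_depth_consistency(t_pred, depth, beta=1.0):
    """Mean L1 gap between a predicted transmission map and the one
    implied by the estimated depth -- a sketch of a consistency loss,
    not the authors' exact formulation.
    """
    t_from_d = transmission_from_depth(depth, beta)
    return sum(abs(a - b) for a, b in zip(t_pred, t_from_d)) / len(t_pred)
```

Because the same transmission map can be explained by either larger depth or denser droplets, penalizing disagreement between the two estimates is one way to resolve that ambiguity.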