Unrolling Rain-guided Detail Recovery Network for Single Image Deraining

Virtual Reality Intelligent Hardware (Q1, Computer Science), Volume 5, Issue 1, Pages 11-23 · Pub Date: 2023-02-01 · DOI: 10.1016/j.vrih.2022.06.002
Kailong Lin, Shaowei Zhang, Yu Luo, Jie Ling
{"title":"推出用于单图像降阶的雨水引导细节恢复网络","authors":"Kailong Lin,&nbsp;Shaowei Zhang,&nbsp;Yu Luo,&nbsp;Jie Ling","doi":"10.1016/j.vrih.2022.06.002","DOIUrl":null,"url":null,"abstract":"<div><p>Owing to the rapid development of deep networks, single image deraining tasks have achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause a loss of details during deraining, resulting in visual artifacts. To resolve the detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, first, a context aggregation attention network is introduced to effectively extract rain streaks, and then, a rain attention map is generated as an indicator to guide the detail-recovery process. For a detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach can achieve a competitive performance in comparison with other state-of-the-art methods.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 1","pages":"Pages 11-23"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Unrolling Rain-guided Detail Recovery Network for Single Image Deraining\",\"authors\":\"Kailong Lin,&nbsp;Shaowei Zhang,&nbsp;Yu Luo,&nbsp;Jie Ling\",\"doi\":\"10.1016/j.vrih.2022.06.002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Owing to the rapid development of deep networks, single image deraining tasks have achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause a loss of details during deraining, resulting in visual artifacts. To resolve the detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, first, a context aggregation attention network is introduced to effectively extract rain streaks, and then, a rain attention map is generated as an indicator to guide the detail-recovery process. For a detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. 
Experiments on several well-known benchmark datasets show that the proposed approach can achieve a competitive performance in comparison with other state-of-the-art methods.</p></div>\",\"PeriodicalId\":33538,\"journal\":{\"name\":\"Virtual Reality Intelligent Hardware\",\"volume\":\"5 1\",\"pages\":\"Pages 11-23\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Virtual Reality Intelligent Hardware\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S209657962200047X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Virtual Reality Intelligent Hardware","FirstCategoryId":"1093","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S209657962200047X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Cited by: 1

Abstract

Owing to the rapid development of deep networks, single image deraining tasks have achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause a loss of details during deraining, resulting in visual artifacts. To resolve the detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, first, a context aggregation attention network is introduced to effectively extract rain streaks, and then, a rain attention map is generated as an indicator to guide the detail-recovery process. For a detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach can achieve a competitive performance in comparison with other state-of-the-art methods.
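
The abstract outlines a two-stage unrolling: a rain-extraction network produces a rain attention map, which then guides a simple encoder-decoder that recovers lost details. The layer-level design is not given in the abstract, so the following PyTorch sketch is illustrative only: the module names, the 32-channel widths, the dilated convolutions standing in for the context aggregation attention network, and the additive observation model (rainy = background + rain) are all assumptions rather than the authors' published architecture.

```python
# Minimal sketch of the two-subproblem unrolling described in the abstract.
# All names, widths, and the additive rain model are assumptions for illustration.
import torch
import torch.nn as nn


class RainExtractor(nn.Module):
    """Stand-in for the context aggregation attention network: predicts a
    rain-streak layer and a one-channel rain attention map."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            # Dilated convolutions approximate context aggregation here.
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.rain_head = nn.Conv2d(ch, 3, 3, padding=1)  # rain-streak layer R
        self.attn_head = nn.Conv2d(ch, 1, 3, padding=1)  # rain attention map M

    def forward(self, x):
        feats = self.body(x)
        rain = self.rain_head(feats)
        # Sigmoid keeps the map in [0, 1]; high values mark rain-corrupted,
        # hence most degraded, background regions.
        attn = torch.sigmoid(self.attn_head(feats))
        return rain, attn


class DetailRecovery(nn.Module):
    """Simple encoder-decoder that restores details where the rain
    attention map indicates heavy degradation."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(4, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, coarse, attn):
        # The attention map is concatenated as a fourth channel so the
        # network knows where detail recovery is most needed.
        return coarse + self.dec(self.enc(torch.cat([coarse, attn], dim=1)))


class URDRNSketch(nn.Module):
    """Unrolled pipeline: rain extraction, then attention-guided recovery."""
    def __init__(self):
        super().__init__()
        self.extract = RainExtractor()
        self.recover = DetailRecovery()

    def forward(self, rainy):
        rain, attn = self.extract(rainy)
        coarse = rainy - rain  # assumed additive model: rainy = background + rain
        return self.recover(coarse, attn)


if __name__ == "__main__":
    model = URDRNSketch()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Under this decomposition the two subproblems stay separable: the extractor could be supervised on synthetic rain layers, while the recovery branch only sees the coarse derained image plus the attention map, mirroring the abstract's claim that a simple encoder-decoder suffices once the guidance is available.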

Source journal: Virtual Reality Intelligent Hardware (Computer Science: Computer Graphics and Computer-Aided Design)
CiteScore: 6.40
Self-citation rate: 0.00%
Articles per year: 35
Review time: 12 weeks