Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers

Zixuan Fu, Lanqing Guo, Chong Wang, Yufei Wang, Zhihao Li, Bihan Wen
{"title":"Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers","authors":"Zixuan Fu, Lanqing Guo, Chong Wang, Yufei Wang, Zhihao Li, Bihan Wen","doi":"arxiv-2409.11256","DOIUrl":null,"url":null,"abstract":"Recent advancements in deep learning have shown impressive results in image\nand video denoising, leveraging extensive pairs of noisy and noise-free data\nfor supervision. However, the challenge of acquiring paired videos for dynamic\nscenes hampers the practical deployment of deep video denoising techniques. In\ncontrast, this obstacle is less pronounced in image denoising, where paired\ndata is more readily available. Thus, a well-trained image denoiser could serve\nas a reliable spatial prior for video denoising. In this paper, we propose a\nnovel unsupervised video denoising framework, named ``Temporal As a Plugin''\n(TAP), which integrates tunable temporal modules into a pre-trained image\ndenoiser. By incorporating temporal modules, our method can harness temporal\ninformation across noisy frames, complementing its power of spatial denoising.\nFurthermore, we introduce a progressive fine-tuning strategy that refines each\ntemporal module using the generated pseudo clean video frames, progressively\nenhancing the network's denoising performance. Compared to other unsupervised\nvideo denoising methods, our framework demonstrates superior performance on\nboth sRGB and raw video denoising datasets.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent advancements in deep learning have shown impressive results in image and video denoising, leveraging extensive pairs of noisy and noise-free data for supervision. However, the challenge of acquiring paired videos for dynamic scenes hampers the practical deployment of deep video denoising techniques. In contrast, this obstacle is less pronounced in image denoising, where paired data is more readily available. Thus, a well-trained image denoiser could serve as a reliable spatial prior for video denoising. In this paper, we propose a novel unsupervised video denoising framework, named "Temporal As a Plugin" (TAP), which integrates tunable temporal modules into a pre-trained image denoiser. By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its power of spatial denoising. Furthermore, we introduce a progressive fine-tuning strategy that refines each temporal module using the generated pseudo clean video frames, progressively enhancing the network's denoising performance. Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
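The abstract describes the core idea at a high level: a frozen, pre-trained image denoiser supplies the spatial prior, while small tunable temporal modules are added and fine-tuned on pseudo-clean frames. The snippet below is a minimal PyTorch sketch of that idea under several assumptions: the `ToyImageDenoiser`, `TemporalModule`, and `PluggedVideoDenoiser` names, the single post-hoc Conv3d fusion block, and the L1 loss against the frozen denoiser's own per-frame output as a crude pseudo-clean target are all illustrative choices, not the paper's actual architecture or training procedure.

```python
# Minimal sketch of "temporal as a plugin": a frozen image denoiser as spatial
# prior, plus one tunable temporal fusion module trained on pseudo-clean targets.
# All module designs and names here are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


class ToyImageDenoiser(nn.Module):
    """Stand-in for a pre-trained image denoiser (the frozen spatial prior)."""

    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # per-frame denoised estimate


class TemporalModule(nn.Module):
    """Hypothetical tunable plugin: fuses per-frame estimates across time."""

    def __init__(self, num_frames: int = 3, channels: int = 3):
        super().__init__()
        # Temporal convolution over the frame axis; the paper may instead use
        # alignment- or attention-based fusion inside the denoiser itself.
        self.fuse = nn.Conv3d(channels, channels,
                              kernel_size=(num_frames, 3, 3), padding=(0, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) -> (B, C, H, W) estimate for the center frame
        return self.fuse(x).squeeze(2)


class PluggedVideoDenoiser(nn.Module):
    """Frozen spatial prior with a tunable temporal plugin appended after it."""

    def __init__(self, image_denoiser: nn.Module, num_frames: int = 3):
        super().__init__()
        self.image_denoiser = image_denoiser
        for p in self.image_denoiser.parameters():
            p.requires_grad = False            # keep the spatial prior frozen
        self.temporal = TemporalModule(num_frames)  # only this part is tuned

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) noisy neighbors centered on the target frame
        b, t, c, h, w = frames.shape
        per_frame = self.image_denoiser(frames.reshape(b * t, c, h, w))
        per_frame = per_frame.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
        return self.temporal(per_frame)


if __name__ == "__main__":
    model = PluggedVideoDenoiser(ToyImageDenoiser())
    opt = torch.optim.Adam(model.temporal.parameters(), lr=1e-4)
    noisy = torch.rand(1, 3, 3, 64, 64)             # toy (B, T, C, H, W) clip
    with torch.no_grad():                           # pseudo-clean label: the frozen
        target = model.image_denoiser(noisy[:, 1])  # image denoiser's own estimate
    loss = nn.functional.l1_loss(model(noisy), target)
    loss.backward()
    opt.step()
    print(loss.item())
```

In the paper's progressive strategy, the pseudo-clean targets would be regenerated from the improved video denoiser and each temporal module refined again in turn; the sketch above shows only a single refinement step with a fixed target.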