Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement

Computer Graphics Forum · Published 2024-10-24 · DOI: 10.1111/cgf.15210
Impact Factor: 2.7 · JCR Q2 (Computer Science, Software Engineering) · CAS Tier 4 (Computer Science)
Xianjun Han, Taoli Bao, Hongyu Yang
{"title":"Exploring Fast and Flexible Zero-Shot Low-Light Image/Video Enhancement","authors":"Xianjun Han,&nbsp;Taoli Bao,&nbsp;Hongyu Yang","doi":"10.1111/cgf.15210","DOIUrl":null,"url":null,"abstract":"<p>Low-light image/video enhancement is a challenging task when images or video are captured under harsh lighting conditions. Existing methods mostly formulate this task as an image-to-image conversion task via supervised or unsupervised learning. However, such conversion methods require an extremely large amount of data for training, whether paired or unpaired. In addition, these methods are restricted to specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast and flexible, zero-shot, low-light image or video enhancement framework. Without relying on prior training or relationships among neighboring frames, we are committed to estimating the illumination of the input image/frame by a well-designed network. The proposed zero-shot, low-light image/video enhancement architecture includes illumination estimation and residual correction modules. The network architecture is very concise and does not require any paired or unpaired data during training, which allows low-light enhancement to be performed with several simple iterations. Despite its simplicity, we show that the method is fast and generalizes well to diverse lighting conditions. Many experiments on various images and videos qualitatively and quantitatively demonstrate the advantages of our method over state-of-the-art methods.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Graphics Forum","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cgf.15210","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Low-light image/video enhancement is challenging when images or video are captured under poor lighting conditions. Existing methods mostly formulate the task as image-to-image translation learned via supervised or unsupervised training. However, such translation methods require extremely large amounts of training data, whether paired or unpaired. In addition, they are tied to their specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast, and flexible zero-shot framework for low-light image and video enhancement. Without relying on prior training or on relationships among neighboring frames, we estimate the illumination of the input image/frame with a well-designed network. The proposed zero-shot architecture consists of illumination estimation and residual correction modules. The network is very compact and requires no paired or unpaired training data, so low-light enhancement can be performed in a few simple iterations. Despite its simplicity, the method is fast and generalizes well to diverse lighting conditions. Extensive experiments on various images and videos demonstrate, qualitatively and quantitatively, the advantages of our method over state-of-the-art methods.
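The abstract gives only the high-level design: a compact network estimates the illumination of a single input and is refined over a few iterations, with no training data of any kind. As a rough, hypothetical sketch of how such a zero-shot, per-image scheme can be set up, the Python snippet below optimizes a tiny illumination network on one image using non-reference losses; the layer sizes, the Retinex-style division, and the exposure/smoothness losses are illustrative assumptions rather than the authors' implementation, and the paper's residual correction module is omitted.

```python
# Hypothetical sketch of per-image, zero-shot illumination estimation.
# All names and loss terms are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class IllumNet(nn.Module):
    """Tiny CNN that predicts a per-pixel illumination map A for one image."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),  # keep A in (0, 1)
        )

    def forward(self, x):
        return self.body(x)

def enhance(img, iters=100, lr=1e-3, target_exposure=0.6, eps=1e-4):
    """Optimize the network on a single input image (B, 3, H, W) in [0, 1]."""
    net = IllumNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        A = net(img)
        out = (img / (A + eps)).clamp(0, 1)  # Retinex-style: reflectance = I / A
        # Non-reference losses: push mean brightness toward a target exposure
        # and keep the illumination map spatially smooth.
        loss_exp = (out.mean() - target_exposure).abs()
        loss_tv = (A[..., :, 1:] - A[..., :, :-1]).abs().mean() + \
                  (A[..., 1:, :] - A[..., :-1, :]).abs().mean()
        loss = loss_exp + 0.1 * loss_tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (img / (net(img) + eps)).clamp(0, 1)
```

Because the network is optimized on the input itself, no dataset is ever touched; the trade-off is that every image or frame pays a small optimization cost at inference time, which is why keeping the network and the number of iterations small matters.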

Source journal: Computer Graphics Forum (Engineering & Technology; Computer Science: Software Engineering)
CiteScore: 5.80 · Self-citation rate: 12.00% · Articles per year: 175 · Review time: 3-6 weeks

About the journal: Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.