LoRaDIP: Low-Rank Adaptation With Deep Image Prior for Generative Low-Light Image Enhancement

Zunjin Zhao;Daming Shi
DOI: 10.1109/TAI.2024.3499950
Journal: IEEE Transactions on Artificial Intelligence, vol. 6, no. 4, pp. 909–920
Publication date: 2024-11-15
URL: https://ieeexplore.ieee.org/document/10754638/

Abstract

This article presents LoRaDIP, a novel low-light image enhancement (LLIE) model based on deep image priors (DIPs). While DIP-based enhancement models are known for their zero-shot learning, their high computational cost remains a challenge. To address this issue, our proposed LoRaDIP introduces a low-rank adaptation technique, significantly reducing computational expense without compromising performance. The contributions of this work are threefold. First, we eliminate the need to estimate initial illumination and reflectance, opting instead to directly estimate the illumination map from the observed image in a generative fashion. The illumination is parameterized by a DIP network. Second, considering the overparameterization of DIP networks, we introduce a low-rank adaptation technique to decrease the number of trainable parameters, thereby reducing computational demands. Third, unlike existing DIP-based models that rely on a preset fixed number of iterations to halt the optimization of the Retinex decomposition, we propose an automatic stopping criterion based on stable rank, preventing unnecessary iterations. LoRaDIP not only inherits the advantage of requiring only a single input image but also exhibits reduced computational cost while matching or even surpassing the performance of state-of-the-art models.
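The abstract's first contribution rests on the Retinex view of a low-light image: the observation is the product of an illumination map and a reflectance image, so once the illumination is estimated (by the DIP network in the paper), the enhanced image is recovered by dividing it out. A minimal sketch of that final step, with illustrative array values rather than the paper's data:

```python
import numpy as np

# Retinex decomposition: observed image I = L * R, where L is the
# illumination map and R the reflectance (the well-lit content).
# Given an estimated L, enhancement divides it out per pixel.
def enhance(img, illum, eps=1e-4):
    """Recover the reflectance R = I / L, guarding against division
    by near-zero illumination and clipping to the valid [0, 1] range."""
    return np.clip(img / np.maximum(illum, eps), 0.0, 1.0)

# Toy example: a uniformly dark observation under 20% illumination
# brightens to 50% intensity once the illumination is divided out.
dark = np.full((2, 2), 0.1)
light_map = np.full((2, 2), 0.2)
print(enhance(dark, light_map))  # every pixel -> 0.5
```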
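The second contribution, low-rank adaptation, can be illustrated independently of the paper's architecture. The idea is to freeze a full weight matrix and train only a pair of small low-rank factors whose product perturbs it; the dimensions and initialization below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4  # illustrative sizes, not the paper's

# Frozen base weight (in LoRaDIP this role is played by a DIP layer).
W = rng.standard_normal((d_out, d_in))

# Low-rank adapters: only A and B would be trained.
A = np.zeros((d_out, rank))                    # zero init, so A @ B starts at 0
B = rng.standard_normal((rank, d_in)) * 0.01

def adapted_forward(x):
    """y = (W + A @ B) x, computed without ever updating W."""
    return W @ x + A @ (B @ x)

# Trainable-parameter count drops from d_out*d_in to rank*(d_out+d_in).
ratio = (A.size + B.size) / W.size
print(ratio)  # 0.125: an 8x reduction at rank 4
```

Because one factor starts at zero, the adapted layer initially reproduces the frozen network exactly, and optimization only moves it within a low-dimensional subspace.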
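The third contribution uses the stable rank of a weight matrix, sr(W) = ||W||_F^2 / ||W||_2^2, a smooth surrogate for rank that is bounded above by the true rank. The paper's exact criterion is not spelled out in the abstract; the plateau-detection rule below is a hypothetical sketch of how such a criterion could be wired up:

```python
import numpy as np

def stable_rank(W):
    """sr(W) = ||W||_F^2 / ||W||_2^2 = (sum of squared singular
    values) / (largest squared singular value); always <= rank(W)."""
    s = np.linalg.svd(W, compute_uv=False)
    return float(np.sum(s ** 2) / s[0] ** 2)

print(stable_rank(np.eye(8)))  # 8.0: all singular values equal

u = np.arange(1.0, 9.0)
print(stable_rank(np.outer(u, u)))  # 1.0: a rank-1 matrix

def should_stop(history, window=5, tol=1e-3):
    """Hypothetical early-stopping rule: halt optimization once the
    monitored stable rank has plateaued over the last `window` steps."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol
```

Tracking stable rank rather than a fixed iteration budget lets the optimization of each image run exactly as long as the network is still meaningfully changing.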