Aesthetics-Guided Low-Light Enhancement

Dong Liang;Yuanhang Gao;Ling Li;Zhengyan Xu;Sheng-Jun Huang;Songcan Chen
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 7, pp. 5866-5883.
Published: 2025-03-26. DOI: 10.1109/TPAMI.2025.3554639.
URL: https://ieeexplore.ieee.org/document/10938859/

Abstract

Evaluating the performance of low-light image enhancement (LLE) is highly subjective, making the integration of human preferences into LLE a necessity. Existing methods fail to consider this, instead training LLE models with a series of heuristic criteria of uncertain validity. In this paper, we propose a new paradigm, aesthetics-guided low-light image enhancement (ALL-E), which introduces aesthetic preferences to LLE and drives training in a reinforcement learning framework with an aesthetic reward. Each pixel, functioning as an agent, refines itself through recursive actions. We further present ALL-E+, an extended version of ALL-E that casts enhancement as a two-stage, aesthetics-guided process of enhancement followed by denoising. ALL-E+ performs low-light enhancement and denoising compensation sequentially in a unified framework, yielding significant improvements in both subjective visual experience and objective evaluation. Extensive experiments show that integrating aesthetic preferences further improves the visual experience of enhanced images. Our results on various benchmarks also demonstrate the superiority of our method over state-of-the-art methods.
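The core idea of per-pixel agents taking recursive actions under a reward can be sketched in a few lines. The following is a minimal illustration, not the paper's method: it replaces the learned aesthetic reward with a toy exposure-closeness reward, and the learned RL policy with a per-pixel greedy choice over a small set of quadratic curve-adjustment actions. All function names and parameters (`exposure_reward`, `enhance`, `alphas`, `target`) are hypothetical.

```python
import numpy as np

def exposure_reward(x, target=0.6):
    # Toy stand-in for an aesthetic reward: closeness to a target exposure.
    return -np.abs(x - target)

def enhance(img, n_steps=8, alphas=(-0.2, 0.0, 0.2), target=0.6):
    """Per-pixel greedy refinement: at each step, every pixel picks the
    curve parameter (action) that maximizes the toy reward for its own
    next value. Illustrates recursive pixel-wise actions only; the
    paper instead learns a policy with an aesthetic reward."""
    x = np.clip(img.astype(np.float64), 0.0, 1.0)
    for _ in range(n_steps):
        # Candidate next states under each action: x + a * x * (1 - x),
        # a quadratic curve that brightens (a > 0) or darkens (a < 0).
        cands = np.stack([x + a * x * (1.0 - x) for a in alphas])
        rewards = exposure_reward(cands, target)
        best = rewards.argmax(axis=0)  # per-pixel action choice
        x = np.take_along_axis(cands, best[None], axis=0)[0]
        x = np.clip(x, 0.0, 1.0)
    return x
```

Because each pixel selects its own action at every step, dark pixels are brightened while already-bright pixels are pulled back, mirroring the spatially adaptive behavior the abstract describes.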