LLD-GAN: An end-to-end network for low-light image demosaicking

Li Wang, Cong Shi, Shrinivas Pundlik, Xu Yang, Liyuan Liu, Gang Luo

Displays, vol. 85, Article 102856. Published 2024-10-18. DOI: 10.1016/j.displa.2024.102856
https://www.sciencedirect.com/science/article/pii/S0141938224002208
Citations: 0
Abstract
Demosaicking of low- and ultra-low-light images has wide applications in consumer electronics, security, and industrial machine vision. Denoising is a key challenge in the demosaicking process. This study introduces a comprehensive end-to-end low-light demosaicking framework called LLD-GAN (Low Light Demosaicking Generative Adversarial Network), which greatly reduces computational complexity. Our architecture employs a Wasserstein GAN framework enhanced by a gradient penalty mechanism. We redesigned the generator, based on the UNet++ network, as well as its corresponding discriminator, which makes model learning more efficient. In addition, we propose a new loss metric grounded in the principles of perceptual loss to obtain images with better visual quality. Our ablation experiments showed that both the Wasserstein GAN with gradient penalty and the perceptual loss function are beneficial. For RGB images, we tested the proposed model under a wide range of low light levels, from 1/30 to 1/150 of the normal light level, on 16-bit images with added noise. For actual low-light raw sensor images, the model was evaluated under three distinct lighting conditions: 1/100, 1/250, and 1/300 of normal exposure. The qualitative and quantitative comparison against advanced techniques demonstrates the validity and superiority of LLD-GAN as a unified denoising-demosaicking tool.
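The abstract does not spell out either ingredient, but both the WGAN gradient penalty and the perceptual loss it mentions have well-known generic formulations. The PyTorch sketch below illustrates those standard forms only; it is not the authors' implementation, and the discriminator `D`, the penalty weight `lambda_gp = 10`, the VGG-16 feature cut-off, and the L1 feature distance are all assumptions made for illustration.

```python
# Generic WGAN-GP and perceptual-loss sketches (PyTorch). These follow the
# standard formulations, not the specific LLD-GAN code, which is not given here.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights  # assumes torchvision >= 0.13


def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """Standard WGAN-GP term: penalize ||grad_x D(x_hat)||_2 deviating from 1,
    where x_hat is a random interpolation between real and generated images."""
    batch = real.size(0)
    # One interpolation coefficient per sample, broadcast over (C, H, W).
    eps = torch.rand(batch, 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(
        outputs=d_hat,
        inputs=x_hat,
        grad_outputs=torch.ones_like(d_hat),
        create_graph=True,
        retain_graph=True,
    )[0]
    grads = grads.reshape(batch, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()


class PerceptualLoss(nn.Module):
    """A common perceptual loss: distance between frozen VGG-16 feature maps of
    the output and target images (the layer cut-off and L1 metric are choices,
    not taken from the paper). Inputs are assumed 3-channel, ImageNet-normalized."""

    def __init__(self, cutoff=16):
        super().__init__()
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:cutoff]
        for p in features.parameters():
            p.requires_grad_(False)
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, output, target):
        return self.criterion(self.features(output), self.features(target))
```

In such a setup, the discriminator loss would be `D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)`, and the generator loss would add the perceptual term to its adversarial term; how the two are weighted is a further design choice that the paper itself would specify.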
About the journal
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also be featured occasionally.