{"title":"PSC 扩散:用于弱光图像增强的基于斑块的简化条件扩散模型","authors":"Fei Wan, Bingxin Xu, Weiguo Pan, Hongzhe Liu","doi":"10.1007/s00530-024-01391-z","DOIUrl":null,"url":null,"abstract":"<p>Low-light image enhancement is pivotal for augmenting the utility and recognition of visuals captured under inadequate lighting conditions. Previous methods based on Generative Adversarial Networks (GAN) are affected by mode collapse and lack attention to the inherent characteristics of low-light images. This paper propose the Patch-based Simplified Conditional Diffusion Model (PSC Diffusion) for low-light image enhancement due to the outstanding performance of diffusion models in image generation. Specifically, recognizing the potential issue of gradient vanishing in extremely low-light images due to smaller pixel values, we design a simplified U-Net architecture with SimpleGate and Parameter-free attention (SimPF) block to predict noise. This architecture utilizes parameter-free attention mechanism and fewer convolutional layers to reduce multiplication operations across feature maps, resulting in a 12–51% reduction in parameters compared to U-Nets used in several prominent diffusion models, which also accelerates the sampling speed. In addition, preserving intricate details in images during the diffusion process is achieved through employing a patch-based diffusion strategy, integrated with global structure-aware regularization, which effectively enhances the overall quality of the enhanced images. Experiments show that the method proposed in this paper achieves richer image details and better perceptual quality, while the sampling speed is over 35% faster than similar diffusion model-based methods.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PSC diffusion: patch-based simplified conditional diffusion model for low-light image enhancement\",\"authors\":\"Fei Wan, Bingxin Xu, Weiguo Pan, Hongzhe Liu\",\"doi\":\"10.1007/s00530-024-01391-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Low-light image enhancement is pivotal for augmenting the utility and recognition of visuals captured under inadequate lighting conditions. Previous methods based on Generative Adversarial Networks (GAN) are affected by mode collapse and lack attention to the inherent characteristics of low-light images. This paper propose the Patch-based Simplified Conditional Diffusion Model (PSC Diffusion) for low-light image enhancement due to the outstanding performance of diffusion models in image generation. Specifically, recognizing the potential issue of gradient vanishing in extremely low-light images due to smaller pixel values, we design a simplified U-Net architecture with SimpleGate and Parameter-free attention (SimPF) block to predict noise. This architecture utilizes parameter-free attention mechanism and fewer convolutional layers to reduce multiplication operations across feature maps, resulting in a 12–51% reduction in parameters compared to U-Nets used in several prominent diffusion models, which also accelerates the sampling speed. 
In addition, preserving intricate details in images during the diffusion process is achieved through employing a patch-based diffusion strategy, integrated with global structure-aware regularization, which effectively enhances the overall quality of the enhanced images. Experiments show that the method proposed in this paper achieves richer image details and better perceptual quality, while the sampling speed is over 35% faster than similar diffusion model-based methods.</p>\",\"PeriodicalId\":3,\"journal\":{\"name\":\"ACS Applied Electronic Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Electronic Materials\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00530-024-01391-z\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00530-024-01391-z","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Low-light image enhancement is pivotal for improving the utility and recognizability of visuals captured under inadequate lighting conditions. Previous methods based on Generative Adversarial Networks (GANs) suffer from mode collapse and pay little attention to the inherent characteristics of low-light images. Motivated by the outstanding performance of diffusion models in image generation, this paper proposes the Patch-based Simplified Conditional Diffusion Model (PSC Diffusion) for low-light image enhancement. Specifically, recognizing the potential issue of gradient vanishing in extremely low-light images due to their smaller pixel values, we design a simplified U-Net architecture with a SimpleGate and Parameter-free attention (SimPF) block to predict noise. This architecture uses a parameter-free attention mechanism and fewer convolutional layers to reduce multiplication operations across feature maps, resulting in 12–51% fewer parameters than the U-Nets used in several prominent diffusion models and accelerating sampling. In addition, a patch-based diffusion strategy, integrated with global structure-aware regularization, preserves intricate image details during the diffusion process and effectively improves the overall quality of the enhanced images. Experiments show that the proposed method achieves richer image details and better perceptual quality, while sampling is over 35% faster than comparable diffusion-model-based methods.
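The abstract does not spell out the internal layout of the SimPF block, but its two named ingredients are well known: SimpleGate (a channel-split elementwise gate, as used in NAFNet) and a parameter-free attention mechanism such as SimAM. The minimal PyTorch sketch below shows one way such a block could be assembled; the class and argument names (`SimPFBlock`, `channels`, `e_lambda`) and the exact ordering of operations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimpleGate(nn.Module):
    """NAFNet-style gate: split channels into two halves and multiply them elementwise."""
    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return x1 * x2


class SimAMAttention(nn.Module):
    """SimAM-style parameter-free attention: reweight activations by an energy-based saliency score."""
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        _, _, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation of each activation
        v = d.sum(dim=(2, 3), keepdim=True) / n              # per-channel variance estimate
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5          # inverse neuron energy
        return x * torch.sigmoid(e_inv)                      # no learnable parameters involved


class SimPFBlock(nn.Module):
    """Hypothetical SimPF block: conv -> SimpleGate -> parameter-free attention -> conv, with a residual."""
    def __init__(self, channels):
        super().__init__()
        self.conv_in = nn.Conv2d(channels, 2 * channels, kernel_size=3, padding=1)
        self.gate = SimpleGate()        # halves the channel count back to `channels`
        self.attn = SimAMAttention()    # adds zero parameters
        self.conv_out = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = self.conv_in(x)
        y = self.gate(y)
        y = self.attn(y)
        return x + self.conv_out(y)


if __name__ == "__main__":
    block = SimPFBlock(channels=64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Because the gate and the attention contribute no learnable weights, a block of this kind relies almost entirely on its convolutions for parameters, which is consistent with the abstract's claim of a 12–51% parameter reduction relative to the U-Nets used in several prominent diffusion models.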