Purpose: Low-light endoscopic images often lack contrast and clarity, obscuring anatomical details and reducing diagnostic accuracy. This study develops a method to enhance image brightness and visibility, enabling clearer visualization of critical structures to support precise medical diagnoses and improve patient outcomes.
Methods: To specifically address nonuniform illumination, we propose BrightVAE, a model that uses a dual-receptive-field architecture to decouple global brightness correction from local texture preservation. Integrated attention-based modules (Attencoder and Attenquant) explicitly target and amplify underexposed regions while preventing over-saturation, thereby recovering visually assessable detail in shadowed areas. The model was trained and tested on a public endoscopic dataset, and its performance was evaluated against competing enhancement techniques using standard image-quality metrics.
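The core idea of decoupling a large-receptive-field brightness correction from small-receptive-field detail preservation can be illustrated with a minimal, non-learned sketch; the function names, the box-blur illumination estimate, and the gamma correction below are illustrative assumptions, not the BrightVAE architecture itself:

```python
import numpy as np

def box_blur(img, k):
    """Naive box filter: a stand-in for a large receptive field that
    captures coarse illumination rather than fine texture."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def enhance_low_light(img, gamma=0.45, blur_size=5):
    """Toy decoupled enhancement on a grayscale image in [0, 1]:
    brighten the global illumination estimate (gamma < 1 lifts dark
    regions most), then add back the untouched high-frequency residual
    so local texture is preserved rather than amplified into noise."""
    illum = box_blur(img, blur_size)              # global branch
    brightened = np.clip(illum, 1e-3, 1.0) ** gamma
    detail = img - illum                          # local branch (residual)
    return np.clip(brightened + detail, 0.0, 1.0)
```

Because gamma correction steepens the curve near zero, underexposed regions receive the largest lift while bright regions stay near their original values, mimicking the over-saturation guard described above in a crude, hand-crafted way.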
Results: The model outperformed competing methods, improving PSNR by 3.252 dB, improving SSIM by 0.045, and reducing LPIPS by 0.014 relative to the best prior model, achieving a PSNR of 30.576 dB, an SSIM of 0.879, and an LPIPS of 0.133, indicating superior visibility of shadowed regions.
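For reference, the PSNR figures above are on the standard decibel scale, computed from the mean squared error between the enhanced image and its ground-truth reference; a minimal sketch (assuming intensities normalized to [0, 1]):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the enhanced
    image is closer to the reference."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, hence 10*log10(1/0.01):
ref = np.full((4, 4), 0.5)
test = np.full((4, 4), 0.4)
print(psnr(ref, test))  # → 20.0 dB (approximately)
```

On this logarithmic scale, a gain of roughly 3 dB corresponds to halving the mean squared error, which puts the reported 3.252 dB improvement in perspective.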
Conclusion: This approach advances endoscopic imaging by delivering sharper, reliable images, enhancing diagnostic precision in clinical practice. Improved visualization supports better detection of abnormalities, potentially leading to more effective treatment decisions and enhanced patient care.
