Background and purpose: Bitewing radiographs have high diagnostic value in detecting interproximal caries and assessing alveolar bone levels. However, image compression and sensor limitations may compromise image quality and diagnostic value. This study investigates whether two advanced deep-learning methods, a Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) and a Swin Transformer-based image restoration network (SwinIR), can effectively reconstruct clinically useful detail from degraded bitewing images.
Materials and methods: A curated dataset of 4,004 high-quality bitewing radiographs was downsampled to produce paired low- and high-resolution images. Both deep-learning methods were fine-tuned using identical training protocols. Image quality was evaluated using peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Additionally, a blinded panel of twelve dentists assessed anatomic clarity, diagnostic accuracy, and perceptual realism using a descriptive rating scale.
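The two quantitative metrics named above can be made concrete with a short sketch. The snippet below is illustrative only and is not the study's evaluation code: it implements PSNR directly from its definition and a simplified global SSIM (computed over the whole image rather than the local sliding windows used by the standard formulation); the function names and the 8x8 test arrays are the author's own for illustration.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=255.0):
    """Simplified SSIM computed globally (no local windows, no Gaussian weighting)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

In practice, library implementations such as `skimage.metrics.peak_signal_noise_ratio` and `skimage.metrics.structural_similarity` would typically be used; the sketch only shows what the reported numbers measure.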
Results: SwinIR achieved superior quantitative results, with a PSNR of 35.56 dB and an SSIM of 0.9287, outperforming Real-ESRGAN, which recorded 31.93 dB and 0.8227, respectively. Clinicians also favored SwinIR for its preservation of diagnostic structures such as tooth margins and trabecular patterns, whereas Real-ESRGAN was favored for its perceptual realism and smoother texture rendering.
Conclusion: For bitewing radiographs, the Swin Transformer-based super-resolution model demonstrated more faithful preservation of clinically important structures than Real-ESRGAN, making it the preferred choice when diagnostic precision is prioritized.