A Hybrid Approach for retinal image super-resolution
Alnur Alimanov, Md Baharul Islam, Nirase Fathima Abubacker
Biomedical Engineering Advances, published 2023-06-25. DOI: 10.1016/j.bea.2023.100099
Citations: 1
Abstract
Experts require large, high-resolution retinal images to detect tiny abnormalities such as microaneurysms or vascular branch defects. However, these images often suffer from low quality (e.g., low resolution) due to poor imaging-device configuration or operator error. Many prior works applied Convolutional Neural Network (CNN) based methods to image super-resolution, typically pursuing accuracy by adding layers and blocks to make the models more complex. This increases computational cost and hinders real-world deployment. This paper therefore proposes a novel, lightweight, deep-learning super-resolution method for retinal images, comprising a Vision Transformer (ViT) encoder and a convolutional neural network decoder. To the best of our knowledge, this is the first attempt to use a transformer-based network for accurate retinal image super-resolution. A progressively growing super-resolution training technique increases image resolution by factors of 2, 4, and 8. The overall architecture remains unchanged across these stages thanks to an adaptive patch embedding layer, so larger up-scaling factors incur no additional computational cost. This patch embedding layer uses a 2-dimensional convolution whose kernel size and stride depend on the input shape, removing the need to append extra super-resolution blocks to the model. The proposed method is evaluated with quantitative and qualitative measures; the qualitative analysis also includes vessel segmentation of super-resolved and ground-truth images. Experimental results indicate that the proposed method outperforms current state-of-the-art methods.
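The adaptive patch embedding idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name, grid size, and random projection below are illustrative assumptions. It only shows the core mechanism the abstract describes, i.e. a patch size (equivalently, a conv kernel/stride) that scales with the input resolution so the transformer's token count stays fixed across the 2x, 4x, and 8x stages.

```python
import numpy as np

def adaptive_patch_embed(image, grid=16, dim=64, seed=0):
    """Split a square image into a fixed grid x grid of patches and
    linearly project each flattened patch to `dim` channels.

    The patch size p = H // grid adapts to the input resolution, which
    is equivalent to a 2-D convolution with kernel_size = stride = p.
    (Illustrative sketch; in a real model the projection weights would
    be learned, one set per up-scaling stage.)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    p = h // grid  # patch size grows with the input; token count does not
    patches = image[:grid * p, :grid * p].reshape(grid, p, grid, p, -1)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(grid * grid, -1)
    weight = rng.standard_normal((patches.shape[1], dim)) * 0.02
    return patches @ weight  # (grid*grid, dim) token matrix

# The same number of tokens is produced regardless of input scale:
low = adaptive_patch_embed(np.zeros((128, 128, 3)))   # -> shape (256, 64)
high = adaptive_patch_embed(np.zeros((512, 512, 3)))  # -> shape (256, 64)
```

Because the token grid is constant, the ViT encoder behind this layer never changes shape as the training resolution grows, which is consistent with the abstract's claim that larger up-scaling factors add no architectural overhead.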