An advanced deep neural network for fundus image analysis and enhancing diabetic retinopathy detection
F M Javed Mehedi Shamrat, Rashiduzzaman Shakil, Sharmin, Nazmul Hoque Ovy, Bonna Akter, Md Zunayed Ahmed, Kawsar Ahmed, Francis M. Bui, Mohammad Ali Moni
Healthcare Analytics (New York, N.Y.), Volume 5, Article 100303. Published 20 January 2024. DOI: 10.1016/j.health.2024.100303
Available at: https://www.sciencedirect.com/science/article/pii/S2772442524000054
Citations: 0
Abstract
Diabetic retinopathy (DR) involves retinal damage due to diabetes and often leads to blindness. It is diagnosed from color fundus images, but manual analysis is cumbersome and error-prone. While computer vision techniques can predict DR stages, they are computationally intensive and struggle with complex data extraction. In this research, our primary objective was to automate the classification of DR into its various stages using convolutional neural network (CNN) models. We compared the performance of fifteen pre-trained models with our proposed diabetic retinopathy network (DRNet13) model, aiming to identify the most efficient model for accurate DR staging based on fundus images from five DR classes. We preprocessed the images using a median filter for noise reduction and gamma correction for image enhancement, and we expanded the dataset from 3662 to 7500 images through various augmentation techniques to train a more generalized model. We evaluated multiple metrics, including accuracy, precision, F1-score, sensitivity, specificity, area under the curve (AUC), mean squared error (MSE), false positive rate (FPR), and false negative rate (FNR), in addition to confusion matrices, for an in-depth comparison of the models' performance. Feature maps were used to illuminate the decision-making regions of the DRNet13 model, which achieved a 97% accuracy rate for DR detection, surpassing the other CNN architectures in speed and efficiency. Despite a few misclassifications, the model's ability to identify critical features demonstrates its potential as an impactful diagnostic tool for timely and accurate identification of diabetic retinopathy.
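The preprocessing pipeline described above (median filtering for noise reduction followed by gamma correction for enhancement) can be sketched as below. This is a minimal illustrative example assuming OpenCV and NumPy; the kernel size, gamma value, and file names are hypothetical choices for demonstration, not parameters taken from the paper.

```python
# Minimal sketch of median-filter denoising + gamma correction for a color
# fundus image, assuming OpenCV/NumPy. Parameter values are illustrative only.
import cv2
import numpy as np

def preprocess_fundus_image(path: str, kernel_size: int = 5, gamma: float = 1.5) -> np.ndarray:
    """Load a color fundus image, denoise it, and apply gamma correction."""
    image = cv2.imread(path, cv2.IMREAD_COLOR)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {path}")

    # Median filter: replaces each pixel with the median of its neighborhood,
    # suppressing salt-and-pepper noise while preserving vessel edges.
    denoised = cv2.medianBlur(image, kernel_size)

    # Gamma correction via a lookup table:
    # out = 255 * (in / 255) ** (1 / gamma), which brightens dark retinal regions.
    table = np.array(
        [((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
        dtype=np.uint8,
    )
    enhanced = cv2.LUT(denoised, table)
    return enhanced

# Example usage (hypothetical file name):
# processed = preprocess_fundus_image("fundus_sample.png")
# cv2.imwrite("fundus_sample_preprocessed.png", processed)
```

In this sketch, gamma correction is applied after denoising so that the intensity remapping does not amplify noise; the actual ordering, filter window, and gamma value used by the authors are not specified in the abstract.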