{"title":"Optical coherence tomography image recognition of diabetic retinopathy based on deep transfer learning","authors":"","doi":"10.1016/j.jrras.2024.101026","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><p>Diabetic retinopathy (DR) poses a significant challenge as a leading cause of vision impairment among diabetic individuals. Previous endeavors in optical coherence tomography (OCT) image segmentation using conventional deep learning methodologies have exhibited limitations in achieving robust generalization. Our study endeavors to explore the application of deep transfer learning models on OCT images for DR identification, juxtaposing their performance against conventional deep learning approaches.</p></div><div><h3>Methods</h3><p>Our investigation involved a cohort of 103 DR patients admitted to the ophthalmology department of our institution spanning from January 2023 to January 2024. Through a randomized allocation process, these patients were partitioned into distinct training and validation sets at a ratio of 7:3. Two convolution models, VGG19 and DenseNet, were constructed and transfer learning was carried out. The recognition effect of the traditional model and transfer model is compared and verified.</p></div><div><h3>Results</h3><p>Our findings demonstrate that both the VGG19 and DenseNet prediction models exhibit notable segmentation performance following transfer learning compared to their non-transfer learning counterparts. Post-transfer learning, the VGG model achieved accuracy, precision, recall, and F1-score values of 0.890, 0.924, 0.950, and 0.867, respectively, while the DenseNet model achieved corresponding values of 0.897, 0.900, 0.931, and 0.859. 
Furthermore, in the test set, the area under the curve (AUC) improved significantly for both models post-transfer learning, with the VGG model showcasing an AUC of 0.9118 and the DenseNet model exhibiting an AUC of 0.951.</p></div><div><h3>Conclusion</h3><p>The neural network model leveraging deep transfer learning demonstrates a notable enhancement in the recognition capability of DR based on OCT images. Furthermore, it effectively streamlines the workflow of ophthalmologists, thus warranting further promotion and adoption in clinical practice.</p></div>","PeriodicalId":16920,"journal":{"name":"Journal of Radiation Research and Applied Sciences","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1687850724002103/pdfft?md5=edb4c3a42fb7e18ca07a95fb5f2e073f&pid=1-s2.0-S1687850724002103-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Radiation Research and Applied Sciences","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1687850724002103","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Objective
Diabetic retinopathy (DR) is a leading cause of vision impairment among diabetic individuals. Previous work on optical coherence tomography (OCT) image analysis with conventional deep learning methods has shown limited generalization. This study explores the application of deep transfer learning models to OCT images for DR identification and compares their performance with conventional deep learning approaches.
Methods
Our investigation involved a cohort of 103 DR patients admitted to the ophthalmology department of our institution between January 2023 and January 2024. Through a randomized allocation process, these patients were partitioned into training and validation sets at a ratio of 7:3. Two convolutional models, VGG19 and DenseNet, were constructed, and transfer learning was applied to each. The recognition performance of the conventional models and the transfer-learning models was then compared and verified.
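The randomized 7:3 patient allocation described above can be sketched as follows. This is a minimal stdlib illustration, not the authors' code; the patient-ID format and random seed are illustrative assumptions.

```python
import random

def split_patients(patient_ids, train_ratio=0.7, seed=42):
    """Randomly partition patient IDs into training and validation sets.

    A minimal sketch of the paper's randomized 7:3 allocation; the seed
    and ID format are assumptions for illustration only.
    """
    ids = list(patient_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)                     # randomized allocation
    cut = round(len(ids) * train_ratio)  # 7:3 boundary
    return ids[:cut], ids[cut:]

# 103 DR patients, as in the study cohort
patients = [f"P{i:03d}" for i in range(1, 104)]
train, val = split_patients(patients)
print(len(train), len(val))  # 72 training, 31 validation
```

Splitting at the patient level (rather than per OCT image) keeps all images from one patient in a single set, which avoids leakage between training and validation.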
Results
Our findings demonstrate that both the VGG19 and DenseNet models exhibit notably better recognition performance after transfer learning than their non-transfer-learning counterparts. After transfer learning, the VGG model achieved accuracy, precision, recall, and F1-score values of 0.890, 0.924, 0.950, and 0.867, respectively, while the DenseNet model achieved corresponding values of 0.897, 0.900, 0.931, and 0.859. Furthermore, on the test set, the area under the curve (AUC) improved significantly for both models after transfer learning, reaching 0.9118 for the VGG model and 0.951 for the DenseNet model.
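For reference, the four reported metrics follow directly from binary confusion-matrix counts, with F1 being the harmonic mean of precision and recall. The sketch below uses hypothetical counts for illustration, not the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute standard binary-classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of predicted-DR cases, fraction correct
    recall = tp / (tp + fn)     # of true DR cases, fraction detected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts, for illustration only
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=8, tn=92)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

Because F1 is the harmonic mean, it always lies between precision and recall, which is a quick sanity check when reading reported metric tables.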
Conclusion
Neural network models trained with deep transfer learning show markedly improved recognition of DR in OCT images. They can also streamline the ophthalmologist's workflow, warranting further promotion and adoption in clinical practice.
Journal introduction
Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear techniques, radiation, and isotopes in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, and environmental and applied sciences.