{"title":"Pixel-wise classification of the whole retinal vasculature into arteries and veins using supervised learning","authors":"Monika Mokan , Goldie Gabrani , Devanjali Relan","doi":"10.1016/j.bspc.2025.107691","DOIUrl":null,"url":null,"abstract":"<div><h3>Background and Objective:</h3><div>The artery/vein classification in retinal images is the starting step towards assessing retinal features to determine the vessel abnormalities for systemic diseases. Deep learning-based automatic strategies for segmenting and classifying retinal vascular images have been proposed recently. The resultant performance of these strategies is restricted by the absence of large amount of labeled data and severe data imbalances. Less than fifty fundus photos may be found in the majority of the currently accessible publicly available fundus image collections, such as LES, HRF, DRIVE, and others. Recent artery/vein classification research has devalued the significance of pixel-wise classification. In this work, we have devised a pixel-wise classification method that will separate the whole vasculature of the retina into veins and arteries using supervised machine learning algorithm.</div></div><div><h3>Material and Methods:</h3><div>Initially, we pre-processed the retinal images using three different techniques dehazing, median filtering and multiscale self-quotient. Next, intensity-based features are obtained for the pixels in the vessels of the retinal images that have been pre-processed. Three supervised machine learning classifiers k-nearest neighbors, decision trees and random forests have been used to test our classification technique. Among all the mentioned pre-processing techniques and classifiers, we achieved the highest classification accuracy with dehazing technique using decision tree classifier. A decision tree classifier’s input is selected based on the features that have the greatest impact on classification accuracy. 
We evaluated our approaches on four publicly available retinal datasets LES-AV, HRF, RITE, and Dual Modal 2019 datasets.</div></div><div><h3>Results:</h3><div>We got classification accuracy of 95.60%, 89.15%, 88.66% and 84.07% for the LES-AV, HRF, RITE, and Dual Modal 2019 datasets, respectively.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107691"},"PeriodicalIF":4.9000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425002022","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0
Abstract
Background and Objective:
Artery/vein classification in retinal images is the first step toward assessing retinal features and detecting vessel abnormalities associated with systemic diseases. Deep learning-based automatic strategies for segmenting and classifying retinal vasculature have been proposed recently, but their performance is limited by the scarcity of labeled data and severe class imbalance. Most of the currently accessible public fundus image collections, such as LES, HRF, and DRIVE, contain fewer than fifty fundus photographs. Recent artery/vein classification research has also undervalued pixel-wise classification. In this work, we devise a pixel-wise classification method that separates the whole retinal vasculature into arteries and veins using supervised machine learning algorithms.
Material and Methods:
Initially, we pre-processed the retinal images using three different techniques: dehazing, median filtering, and multiscale self-quotient. Next, intensity-based features were extracted for the vessel pixels of the pre-processed retinal images. Three supervised machine learning classifiers, k-nearest neighbors, decision trees, and random forests, were used to test our classification technique. Among all the mentioned pre-processing techniques and classifiers, the highest classification accuracy was achieved with the dehazing technique combined with the decision tree classifier. The decision tree classifier's input features were selected according to their impact on classification accuracy. We evaluated our approach on four publicly available retinal datasets: LES-AV, HRF, RITE, and Dual Modal 2019.
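The classification step described above can be sketched in a few lines with scikit-learn. This is an illustrative sketch only, not the authors' pipeline: the synthetic per-pixel intensity features, the class encoding (0 = artery, 1 = vein), and the channel means are all assumptions standing in for the paper's real dehazed-image features.

```python
# Sketch of pixel-wise artery/vein classification with a decision tree.
# Synthetic data only: real features would come from dehazed fundus images.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-pixel intensity features (e.g. R, G, B channel values);
# arteries are typically brighter than veins in the red channel.
n = 1000
arteries = rng.normal(loc=[0.8, 0.5, 0.3], scale=0.05, size=(n, 3))
veins = rng.normal(loc=[0.6, 0.4, 0.3], scale=0.05, size=(n, 3))
X = np.vstack([arteries, veins])
y = np.array([0] * n + [1] * n)  # 0 = artery, 1 = vein (assumed encoding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Decision tree on intensity features; max_depth is an arbitrary choice here.
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.3f}")
```

On well-separated synthetic intensities like these, the tree splits almost entirely on the red channel, which mirrors why intensity-based features are informative for artery/vein discrimination.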
Results:
We obtained classification accuracies of 95.60%, 89.15%, 88.66%, and 84.07% on the LES-AV, HRF, RITE, and Dual Modal 2019 datasets, respectively.
About the journal:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.