Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity

Behrouz Ebrahimi, David Le, Mansour Abtahi, Albert K Dadzie, Alfa Rossi, Mojtaba Rahimi, Taeyoon Son, Susan Ostmo, J Peter Campbell, R V Paul Chan, Xincheng Yao

Journal of Biomedical Optics, 29(7), 076001 (2024). DOI: 10.1117/1.JBO.29.7.076001
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11188587/pdf/
Citations: 0
Abstract
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capabilities.
Aim: This study aims to assess the spectral effectiveness in color fundus photography for the deep learning classification of ROP.
Approach: An end-to-end convolutional neural network classifier was used for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance with individual-color-channel inputs (red, green, and blue) and with multi-color-channel fusion architectures (early fusion, intermediate fusion, and late fusion) was quantitatively compared.
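The three fusion options named in the Approach can be illustrated with a minimal PyTorch sketch, shown below. The small_cnn backbone, layer widths, 224x224 input size, and logit averaging in the late-fusion head are illustrative assumptions for clarity only and do not reflect the authors' published network.

```python
# Minimal sketch of early, intermediate, and late fusion of R/G/B channels.
# Backbone and hyperparameters are hypothetical, not the authors' configuration.
import torch
import torch.nn as nn


def small_cnn(in_channels: int) -> nn.Sequential:
    """Tiny convolutional feature extractor producing a 32-d feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (N, 32)
    )


class EarlyFusion(nn.Module):
    """Stack R, G, B into one 3-channel input and run a single CNN."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.backbone = small_cnn(in_channels=3)
        self.head = nn.Linear(32, n_classes)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(rgb))


class IntermediateFusion(nn.Module):
    """One CNN branch per channel; concatenate features before the classifier."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.branches = nn.ModuleList([small_cnn(1) for _ in range(3)])
        self.head = nn.Linear(32 * 3, n_classes)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        feats = [branch(rgb[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))


class LateFusion(nn.Module):
    """One full classifier per channel; average the per-channel class logits."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.classifiers = nn.ModuleList(
            [nn.Sequential(small_cnn(1), nn.Linear(32, n_classes)) for _ in range(3)]
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        logits = [clf(rgb[:, i:i + 1]) for i, clf in enumerate(self.classifiers)]
        return torch.stack(logits, dim=0).mean(dim=0)


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)  # two dummy RGB fundus images
    for model in (EarlyFusion(), IntermediateFusion(), LateFusion()):
        print(type(model).__name__, model(x).shape)  # -> torch.Size([2, 4])
```

A single-channel experiment (green or red only) corresponds to feeding one slice of the image, e.g. rgb[:, 1:2], into a one-channel version of the same classifier.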
Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both of which substantially outperformed the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). Among the multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures performed nearly the same as the green or red single-channel inputs, and both outperformed the late-fusion architecture.
Conclusions: This study reveals that ROP stages can be effectively classified using either the green or the red channel image alone. This finding supports excluding the blue channel, which is associated with an increased risk of light toxicity.
About the Journal:
The Journal of Biomedical Optics publishes peer-reviewed papers on the use of modern optical technology for improved health care and biomedical research.