Ophthalmology science: latest publications

OCT Angiography Analysis of Retinal and Choroidal Flow after Proton Beam Therapy for Choroidal Melanoma
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-12-12 DOI: 10.1016/j.xops.2024.100674
Su-Kyung Jung MD , Edward H. Lee MD , Kavita K. Mishra MD , Inder K. Daftari PhD , Susanna S. Park MD, PhD

Purpose

To evaluate the macular and peripapillary retinal and choroidal flow changes in eyes with choroidal melanoma (CM) treated with proton beam radiation therapy (PBRT) using OCT angiography (OCTA).

Design

A prospective, cross-sectional, single-center study.

Participants

All patients seen at the study center between 2019 and 2024 who received PBRT for CM in 1 eye ≥1 year before enrollment, had best-corrected visual acuity (BCVA) ≥20/200 and an unremarkable contralateral eye, and agreed to participate.

Methods

After a comprehensive eye examination, including BCVA, the Optovue AngioVue system was used to obtain 4.5-mm optic disc and 6.0-mm macular OCT/OCTA images of both eyes. All vascular density (VD) measurements were obtained automatically using the OCTA software, except choriocapillaris VD, which was quantified using ImageJ. The Wilcoxon signed-rank test was used to analyze differences in OCT/OCTA parameters between the treated and contralateral eyes. Spearman's ρ was used to identify OCTA parameters associated with BCVA or radiation dose. A P value of <0.05 was considered statistically significant.
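
As a rough illustration of this statistical plan, the snippet below runs a paired Wilcoxon signed-rank test and a Spearman correlation with SciPy; the vessel-density and BCVA values are synthetic placeholders, not study data.

```python
# Sketch of the paired analysis described above: a Wilcoxon signed-rank test
# comparing treated vs. contralateral eyes, and a Spearman correlation of an
# OCTA parameter with BCVA. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)
n = 24  # participants, one treated and one contralateral eye each

# Hypothetical vessel-density measurements (%) per eye
vd_treated = rng.normal(45, 5, n)
vd_contralateral = vd_treated + rng.normal(3, 2, n)  # fellow eyes denser on average

stat, p = wilcoxon(vd_treated, vd_contralateral)  # paired, nonparametric
print(f"Wilcoxon signed-rank: W={stat:.1f}, P={p:.4f}")

# Spearman rho between an OCTA parameter and BCVA (logMAR) in treated eyes
bcva_logmar = rng.normal(0.3, 0.2, n)
rho, p_rho = spearmanr(vd_treated, bcva_logmar)
print(f"Spearman rho={rho:.2f}, P={p_rho:.4f} (significant if P < 0.05)")
```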

Main Outcome Measures

Foveal avascular zone (FAZ) area and perimeter, choriocapillaris and retinal (superficial and deep) capillary VD in the macula, and radial peripapillary capillary (RPC) VD on OCTA; macular and retinal nerve fiber layer thickness on OCT; tumor location, laterality, and size at baseline; BCVA of both eyes; PBRT dose; and duration of follow-up at enrollment.

Results

Among 24 participants, OCT/OCTA parameters were significantly different in the treated eyes compared with the contralateral eyes, including increased FAZ area and perimeter, decreased peripapillary retinal nerve fiber layer thickness and RPC VD, and decreased macular choriocapillaris VD and parafoveal and perifoveal superficial retinal plexus VD (P < 0.05). Best-corrected visual acuity in the treated eyes correlated significantly with FAZ area and perimeter, parafoveal and perifoveal deep retinal plexus VD, and radiation dose to the fovea, but not with radiation dose to the optic disc.

Conclusions

Although PBRT can affect both retinal and choroidal vascular flow in the macular and peripapillary regions in eyes with CM, BCVA after PBRT seems to correlate best with the macular retinal vascular flow changes on OCTA and with radiation dose to the fovea.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"OCT Angiography Analysis of Retinal and Choroidal Flow after Proton Beam Therapy for Choroidal Melanoma","authors":"Su-Kyung Jung MD ,&nbsp;Edward H. Lee MD ,&nbsp;Kavita K. Mishra MD ,&nbsp;Inder K. Daftari PhD ,&nbsp;Susanna S. Park MD, PhD","doi":"10.1016/j.xops.2024.100674","DOIUrl":"10.1016/j.xops.2024.100674","url":null,"abstract":"<div><h3>Purpose</h3><div>To evaluate the macular and peripapillary retinal and choroidal flow changes in eyes with choroidal melanoma (CM) treated with proton beam radiation therapy (PBRT) using OCT angiography (OCTA).</div></div><div><h3>Design</h3><div>A prospective, cross-sectional, single-center study.</div></div><div><h3>Participants</h3><div>All patients seen at the study center between 2019 and 2024 who received PBRT for CM in 1 eye ≥1 year before enrollment with best-corrected visual acuity (BCVA) <u>&gt;</u>20/200, unremarkable contralateral eye, and agreed to participate.</div></div><div><h3>Methods</h3><div>After a comprehensive eye examination, including BCVA, Optovue AngioVue was used to obtain the 4.5-mm optic disc and 6.0-mm macular OCT/OCT angiography (OCTA) images of both eyes. All vascular density (VD) measurements were obtained automatically using the OCTA software, except choriocapillaris VD, which was quantitated using ImageJ. The Wilcoxon signed-rank test was used to analyze differences in OCT/OCTA parameters between the treated and the contralateral eyes. Spearman’s ρ was used to identify OCTA parameters associated with BCVA or radiation dose. A <em>P</em> value of &lt;0.05 was considered statistically significant.</div></div><div><h3>Main Outcome Measures</h3><div>Foveal avascular zone (FAZ) area and perimeter, choriocapillaris and retinal (superficial and deep) capillary VD in the macula and radial peripapillary capillary (RPC) VD on OCTA; macular and retinal nerve fiber layer thickness on OCT, tumor location, laterality and size at baseline, BCVA of both eyes, PBRT dose, and duration of follow-up at enrollment.</div></div><div><h3>Results</h3><div>Among 24 participants, OCT/OCTA parameters were significantly different in the treated eyes when compared with the contralateral eyes, including increased FAZ area and perimeter, decreased peripapillary retinal nerve fiber layer thickness and RPC VD, and decreased macular choriocapillaris VD and parafoveal and perifoveal superficial retinal plexus VD (<em>P</em> &lt; 0.05). 
Best-corrected visual acuity in the treated eyes correlated significantly with FAZ area and perimeter, parafoveal and perifoveal deep retinal plexus VD, and radiation dose to fovea but not radiation dose to the optic disc.</div></div><div><h3>Conclusions</h3><div>Although PBRT can affect both retinal and choroidal vascular flow in the macular and peripapillary region in eyes with CM, BCVA after PBRT seems to correlate best with the retinal vascular flow changes in the macula on OCTA and radiation dose to the fovea.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100674"},"PeriodicalIF":3.2,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AlphaMissense Predictions and ClinVar Annotations: A Deep Learning Approach to Uveal Melanoma
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-12-06 DOI: 10.1016/j.xops.2024.100673
David J. Taylor Gonzalez MD , Mak B. Djulbegovic MD, MSc , Meghan Sharma MD, MPH , Michael Antonietti BS , Colin K. Kim BS , Vladimir N. Uversky PhD, DSc , Carol L. Karp MD , Carol L. Shields MD , Matthew W. Wilson MD

Objective

Uveal melanoma (UM) poses significant diagnostic and prognostic challenges due to its variable genetic landscape. We explore the use of a novel deep learning tool to assess the functional impact of genetic mutations in UM.

Design

A cross-sectional bioinformatics exploratory data analysis of genetic mutations from UM cases.

Subjects

Genetic data from patients diagnosed with UM were analyzed, specifically focusing on missense mutations sourced from the Catalogue of Somatic Mutations in Cancer (COSMIC) database.

Methods

We identified missense mutations frequently observed in UM using the COSMIC database, assessed their potential pathogenicity using AlphaMissense, and visualized mutations using AlphaFold. Clinical significance was cross-validated with entries in the ClinVar database.
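
A minimal sketch of the cross-validation step, assuming the AlphaMissense calls and ClinVar annotations have already been tabulated per variant; the records and column names below are illustrative, not the study's actual data.

```python
# Sketch of cross-referencing AlphaMissense calls against ClinVar annotations.
# The records are toy examples; real inputs would come from the COSMIC-derived
# mutation list, AlphaMissense scores, and ClinVar exports.
import pandas as pd

alphamissense = pd.DataFrame({
    "variant": ["GNAQ:p.Q209P", "GNA11:p.Q209L", "BAP1:p.S63C"],
    "am_class": ["pathogenic", "pathogenic", "benign"],
})
clinvar = pd.DataFrame({
    "variant": ["GNAQ:p.Q209P", "GNA11:p.Q209L", "BAP1:p.S63C"],
    "clinvar_class": ["pathogenic", "uncertain significance", "benign"],
})

merged = alphamissense.merge(clinvar, on="variant", how="left")

# Exclude ClinVar variants of uncertain significance, then compute agreement
definitive = merged[merged["clinvar_class"].isin(["pathogenic", "benign"])]
agreement = (definitive["am_class"] == definitive["clinvar_class"]).mean()
print(f"Agreement on definitive ClinVar calls: {agreement:.0%}")
```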

Main Outcome Measures

The primary outcomes measured were the agreement rates between AlphaMissense predictions and ClinVar annotations regarding the pathogenicity of mutations in critical genes associated with UM, such as GNAQ, GNA11, SF3B1, EIF1AX, and BAP1.

Results

Missense substitutions comprised 91.35% (n = 1310) of mutations in UM found on COSMIC. Of the 151 unique missense mutations analyzed in the most frequently mutated genes, only 40.4% (n = 61) had corresponding data in ClinVar. Notably, AlphaMissense provided definitive classifications for 27.2% (n = 41) of the mutations, which were labeled as “unknown significance” in ClinVar, underscoring its potential to offer more clarity in ambiguous cases. When excluding these mutations of uncertain significance, AlphaMissense showed perfect agreement (100%) with ClinVar across all analyzed genes, demonstrating no discrepancies where a mutation predicted as “pathogenic” was classified as “benign” or vice versa.

Conclusions

Integrating deep learning through AlphaMissense offers a promising approach to understanding the mutational landscape of UM. Our methodology holds the potential to improve genomic diagnostics and inform the development of personalized treatment strategies for UM.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"AlphaMissense Predictions and ClinVar Annotations: A Deep Learning Approach to Uveal Melanoma","authors":"David J. Taylor Gonzalez MD ,&nbsp;Mak B. Djulbegovic MD, MSc ,&nbsp;Meghan Sharma MD, MPH ,&nbsp;Michael Antonietti BS ,&nbsp;Colin K. Kim BS ,&nbsp;Vladimir N. Uversky PhD, DSc ,&nbsp;Carol L. Karp MD ,&nbsp;Carol L. Shields MD ,&nbsp;Matthew W. Wilson MD","doi":"10.1016/j.xops.2024.100673","DOIUrl":"10.1016/j.xops.2024.100673","url":null,"abstract":"<div><h3>Objective</h3><div>Uveal melanoma (UM) poses significant diagnostic and prognostic challenges due to its variable genetic landscape. We explore the use of a novel deep learning tool to assess the functional impact of genetic mutations in UM.</div></div><div><h3>Design</h3><div>A cross-sectional bioinformatics exploratory data analysis of genetic mutations from UM cases.</div></div><div><h3>Subjects</h3><div>Genetic data from patients diagnosed with UM were analyzed, explicitly focusing on missense mutations sourced from the Catalogue of Somatic Mutations in Cancer (COSMIC) database.</div></div><div><h3>Methods</h3><div>We identified missense mutations frequently observed in UM using the COSMIC database, assessed their potential pathogenicity using AlphaMissense, and visualized mutations using AlphaFold. Clinical significance was cross-validated with entries in the ClinVar database.</div></div><div><h3>Main Outcome Measures</h3><div>The primary outcomes measured were the agreement rates between AlphaMissense predictions and ClinVar annotations regarding the pathogenicity of mutations in critical genes associated with UM, such as <em>GNAQ, GNA11, SF3B1, EIF1AX</em>, and <em>BAP1</em>.</div></div><div><h3>Results</h3><div>Missense substitutions comprised 91.35% (n = 1310) of mutations in UM found on COSMIC. Of the 151 unique missense mutations analyzed in the most frequently mutated genes, only 40.4% (n = 61) had corresponding data in ClinVar. Notably, AlphaMissense provided definitive classifications for 27.2% (n = 41) of the mutations, which were labeled as “unknown significance” in ClinVar, underscoring its potential to offer more clarity in ambiguous cases. When excluding these mutations of uncertain significance, AlphaMissense showed perfect agreement (100%) with ClinVar across all analyzed genes, demonstrating no discrepancies where a mutation predicted as “pathogenic” was classified as “benign” or vice versa.</div></div><div><h3>Conclusions</h3><div>Integrating deep learning through AlphaMissense offers a promising approach to understanding the mutational landscape of UM. Our methodology holds the potential to improve genomic diagnostics and inform the development of personalized treatment strategies for UM.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100673"},"PeriodicalIF":3.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Intelligence Models to Identify Patients with High Probability of Glaucoma Using Electronic Health Records
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-12-06 DOI: 10.1016/j.xops.2024.100671
Rohith Ravindranath MS, Sophia Y. Wang MD, MS

Purpose

Early detection of glaucoma allows for timely treatment to prevent severe vision loss, but screening requires resource-intensive examinations and imaging, which are challenging for large-scale implementation and evaluation. The purpose of this study was to develop artificial intelligence models that can utilize the wealth of data stored in electronic health records (EHRs) to identify patients who have a high probability of developing glaucoma, without the use of any dedicated ophthalmic imaging or ophthalmic clinical data.

Design

Cohort study.

Participants

A total of 64 735 participants who were ≥18 years of age, had ≥2 separate encounters with eye-related diagnoses recorded in their EHRs in the All of Us Research Program, a national multicenter cohort of patients contributing EHR and survey data, and were enrolled from May 1, 2018, to July 1, 2022.

Methods

We developed models to predict which patients had a diagnosis of glaucoma, using the following machine learning approaches: (1) penalized logistic regression, (2) XGBoost, and (3) a deep learning architecture that included a 1-dimensional convolutional neural network (1D-CNN) and stacked autoencoders. Model input features included demographics and only the nonophthalmic lab results, measurements, medications, and diagnoses available from structured EHR data.
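
Of the three approaches, the penalized logistic regression baseline is the simplest to sketch; the snippet below trains one on synthetic stand-ins for the structured EHR features and reports AUROC. The feature matrix and labels are placeholders, not All of Us data.

```python
# Sketch of the penalized logistic regression baseline on structured,
# nonophthalmic EHR features. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))  # demographics, labs, measurements, meds, diagnoses
y = (X[:, 0] + rng.normal(size=n) > 1.5).astype(int)  # toy glaucoma label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # penalized LR
model.fit(X_tr, y_tr)

auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUROC: {auroc:.3f}")
```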

Main Outcome Measures

Evaluation metrics included area under the receiver operating characteristic curve (AUROC).

Results

Of 64 735 patients, 7268 (11.22%) had a glaucoma diagnosis. Overall, AUROC ranged from 0.796 to 0.863. The 1D-CNN model achieved the highest performance with an AUROC score of 0.863 (95% confidence interval [CI], 0.862–0.864). Investigation of 1D-CNN model performance stratified by race/ethnicity showed that AUROC ranged from 0.825 to 0.869 by subpopulation, with the highest performance of 0.869 (95% CI, 0.868–0.870) among the non-Hispanic White subpopulation.

Conclusions

Machine and deep learning models were able to use the extensive structured data within EHRs to identify individuals with glaucoma, without the need for ophthalmic imaging or ophthalmic clinical data. These models could potentially automate the identification of high-risk glaucoma patients in EHRs, aiding targeted screening referrals. Additional research is needed to investigate the impact of protected class characteristics such as race/ethnicity on model performance and fairness.

Financial Disclosure(s)

The author(s) have no proprietary or commercial interest in any materials discussed in this article.
{"title":"Artificial Intelligence Models to Identify Patients with High Probability of Glaucoma Using Electronic Health Records","authors":"Rohith Ravindranath MS,&nbsp;Sophia Y. Wang MD, MS","doi":"10.1016/j.xops.2024.100671","DOIUrl":"10.1016/j.xops.2024.100671","url":null,"abstract":"<div><h3>Purpose</h3><div>Early detection of glaucoma allows for timely treatment to prevent severe vision loss, but screening requires resource-intensive examinations and imaging, which are challenging for large-scale implementation and evaluation. The purpose of this study was to develop artificial intelligence models that can utilize the wealth of data stored in electronic health records (EHRs) to identify patients who have high probability of developing glaucoma, without the use of any dedicated ophthalmic imaging or clinical data.</div></div><div><h3>Design</h3><div>Cohort study.</div></div><div><h3>Participants</h3><div>A total of 64 735 participants who were ≥18 years of age and had ≥2 separate encounters with eye-related diagnoses recorded in their EHR records in the All of Us Research Program, a national multicenter cohort of patients contributing EHR and survey data, and who were enrolled from May 1, 2018, to July 1, 2022.</div></div><div><h3>Methods</h3><div>We developed models to predict which patients had a diagnosis of glaucoma, using the following machine learning approaches: (1) penalized logistic regression, (2) XGBoost, and (3) a deep learning architecture that included a 1-dimensional convolutional neural network (1D-CNN) and stacked autoencoders. Model input features included demographics and only the nonophthalmic lab results, measurements, medications, and diagnoses available from structured EHR data.</div></div><div><h3>Main Outcome Measures</h3><div>Evaluation metrics included area under the receiver operating characteristic curve (AUROC).</div></div><div><h3>Results</h3><div>Of 64 735 patients, 7268 (11.22%) had a glaucoma diagnosis. Overall, AUROC ranged from 0.796 to 0.863. The 1D-CNN model achieved the highest performance with an AUROC score of 0.863 (95% confidence interval [CI], 0.862–0.864). Investigation of 1D-CNN model performance stratified by race/ethnicity showed that AUROC ranged from 0.825 to 0.869 by subpopulation, with the highest performance of 0.869 (95% CI, 0.868–0.870) among the non-Hispanic White subpopulation.</div></div><div><h3>Conclusions</h3><div>Machine and deep learning models were able to use the extensive systematic data within EHR to identify individuals with glaucoma, without the need for ophthalmic imaging or clinical data. These models could potentially automate identifying high-risk glaucoma patients in EHRs, aiding targeted screening referrals. 
Additional research is needed to investigate the impact of protected class characteristics such as race/ethnicity on model performance and fairness.</div></div><div><h3>Financial Disclosure(s)</h3><div>The author(s) have no proprietary or commercial interest in any materials discussed in this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100671"},"PeriodicalIF":3.2,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Geometrical Features of Subbasal Corneal Whorl-like Nerve Patterns in Dry Eye Disease
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-12-05 DOI: 10.1016/j.xops.2024.100669
Ziqing Feng MD, Kang Yu MD, Yupei Chen MS, Gengyuan Wang MS, Yuqing Deng MD, Wei Wang MD, Ruiwen Xu MD, Yimin Zhang MD, Peng Xiao PhD, Jin Yuan MD, PhD

Purpose

To investigate the geometrical features of the whorl-like corneal nerve in dry eye disease (DED) across different severity levels and subtypes, and to preliminarily explore their diagnostic ability.

Design

Cross-sectional study.

Participants

The study included 29 healthy subjects (51 eyes) and 62 DED patients (95 eyes).

Methods

All subjects underwent comprehensive ophthalmic examinations, dry eye tests, and in vivo confocal microscopy to visualize the whorl-like corneal nerve at the inferior whorl (IW) region and the straight nerve at the central cornea. The structure of the corneal nerve was extracted and characterized using the fractal dimension (CNDf), multifractal dimension (CND0), tortuosity (CNTor), fiber length (CNFL), and number of branching points.
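
The abstract does not state how the fractal dimension is computed; box counting is the common choice, so the sketch below estimates it from a binarized nerve-skeleton mask under that assumption. It is an illustration, not the authors' implementation.

```python
# Box-counting estimate of the fractal dimension of a binarized corneal
# nerve skeleton (illustrative; not the authors' exact implementation).
import numpy as np

def box_count_dimension(mask: np.ndarray) -> float:
    """mask: 2D boolean array, True on nerve-fiber pixels."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count boxes containing fiber
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    # Slope of log(count) vs. log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

demo = np.zeros((256, 256), dtype=bool)
demo[128, :] = True  # a straight line should give a dimension near 1.0
print(f"Estimated fractal dimension: {box_count_dimension(demo):.2f}")
```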

Main Outcome Measures

Quantified whorl-like corneal nerve metrics across groups of DED severity and subtype, as defined by the symptoms and signs of DED.

Results

Compared with the healthy controls, the CNDf, CND0, and CNFL of the IW decreased significantly as early as grade 1 DED (P < 0.05), whereas CNTor increased (P < 0.05). These parameters did not change significantly in the straight nerve. As DED severity increased, CNDf and CNFL in the whorl-like nerve decreased further in grade 3 DED compared with grade 1. Significant nerve fiber loss was observed in aqueous-deficient DED compared with evaporative DED (P < 0.05). Whorl-like nerve metrics correlated with ocular discomfort, tear film break-up time, tear secretion, and corneal fluorescein staining (P < 0.05). Furthermore, combining parameters of the whorl-like and straight nerves yielded an area under the curve of 0.910 for diagnosing DED.

Conclusions

Geometrical parameters of the IW could potentially allow optimization of DED staging. Reliable and objective measurement of the whorl-like corneal nerve might facilitate patient stratification and diagnosis of DED.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"Geometrical Features of Subbasal Corneal Whorl-like Nerve Patterns in Dry Eye Disease","authors":"Ziqing Feng MD,&nbsp;Kang Yu MD,&nbsp;Yupei Chen MS,&nbsp;Gengyuan Wang MS,&nbsp;Yuqing Deng MD,&nbsp;Wei Wang MD,&nbsp;Ruiwen Xu MD,&nbsp;Yimin Zhang MD,&nbsp;Peng Xiao PhD,&nbsp;Jin Yuan MD, PhD","doi":"10.1016/j.xops.2024.100669","DOIUrl":"10.1016/j.xops.2024.100669","url":null,"abstract":"<div><h3>Purpose</h3><div>To investigate the geometrical feature of the whorl-like corneal nerve in dry eye disease (DED) across different severity levels and subtypes and preliminarily explore its diagnostic ability.</div></div><div><h3>Design</h3><div>Cross-sectional study.</div></div><div><h3>Participants</h3><div>The study included 29 healthy subjects (51 eyes) and 62 DED patients (95 eyes).</div></div><div><h3>Methods</h3><div>All subjects underwent comprehensive ophthalmic examinations, dry eye tests, and in vivo confocal microscopy to visualize the whorl-like corneal nerve at the inferior whorl (IW) region and the straight nerve at the central cornea. The structure of the corneal nerve was extracted and characterized using the fractal dimension (CND<sub>f</sub>), multifractal dimension (CND<sub>0</sub>), tortuosity (CNTor), fiber length (CNFL), and numbers of branching points.</div></div><div><h3>Main Outcome Measures</h3><div>The characteristics of quantified whorl-like corneal nerve metrics in different groups of severity and subtype defined by symptoms and signs of DED.</div></div><div><h3>Results</h3><div>Compared with the healthy controls, the CND<sub>f</sub>, CND<sub>0</sub>, and CNFL of the IW decreased significantly as early as grade 1 DED (<em>P</em> &lt; 0.05), whereas CNTor increased (<em>P</em> &lt; 0.05). These parameters did not change significantly in the straight nerve. As the DED severity increased, CND<sub>f</sub> and CNFL in the whorl-like nerve further decreased in grade 3 DED compared with grade 1. Significant nerve fiber loss was observed in aqueous-deficient DED compared with evaporative DED (<em>P</em> &lt; 0.05). Whorl-like nerve metrics correlated with ocular discomfort, tear film break-up time, tear secretion, and corneal fluorescein staining, respectively (<em>P</em> &lt; 0.05). Furthermore, merging parameters of whorl-like and linear nerve showed an area under the curve value of 0.910 in diagnosing DED.</div></div><div><h3>Conclusions</h3><div>Geometrical parameters of IW could potentially allow optimization of the staging of DED. Reliable and objective measurements for the whorl-like cornea nerve might facilitate patient stratification and diagnosis of DED.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100669"},"PeriodicalIF":3.2,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11787521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143082487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-12-04 DOI: 10.1016/j.xops.2024.100670
Souvick Mukherjee PhD , Tharindu De Silva PhD , Cameron Duic BS , Gopal Jayakar BS , Tiarnan D.L. Keenan BM BCh, PhD , Alisa T. Thavikulwat MD , Emily Chew MD , Catherine Cukras MD, PhD
<div><h3>Purpose</h3><div>Segmentations of retinal layers in spectral-domain OCT (SD-OCT) images serve as a crucial tool for identifying and analyzing the progression of various retinal diseases, encompassing a broad spectrum of abnormalities associated with age-related macular degeneration (AMD). The training of deep learning algorithms necessitates well-defined ground truth labels, validated by experts, to delineate boundaries accurately. However, this resource-intensive process has constrained the widespread application of such algorithms across diverse OCT devices. This work validates deep learning image segmentation models across multiple OCT devices by testing robustness in generating clinically relevant metrics.</div></div><div><h3>Design</h3><div>Prospective comparative study.</div></div><div><h3>Participants</h3><div>Adults >50 years of age with no AMD to advanced AMD, as defined in the Age-Related Eye Disease Study, in ≥1 eye, were enrolled. Four hundred two SD-OCT scans were used in this study.</div></div><div><h3>Methods</h3><div>We evaluate 2 separate state-of-the-art segmentation algorithms through a training process using images obtained from 1 OCT device (Heidelberg-Spectralis) and subsequent testing using images acquired from 2 OCT devices (Heidelberg-Spectralis and Zeiss-Cirrus). This assessment is performed on a dataset that encompasses a range of retinal pathologies, spanning from disease-free conditions to severe forms of AMD, with a focus on evaluating the device independence of the algorithms.</div></div><div><h3>Main Outcome Measures</h3><div>Performance metrics (including mean squared error, mean absolute error [MAE], and Dice coefficients) for the segmentations of the internal limiting membrane (ILM), retinal pigment epithelium (RPE), and RPE to Bruch’s membrane region, along with en face thickness maps, volumetric estimations (in mm<sup>3</sup>). Violin plots and Bland–Altman plots comparing predictions against ground truth are also presented.</div></div><div><h3>Results</h3><div>The UNet and DeepLabv3, trained on Spectralis B-scans, demonstrate clinically useful outcomes when applied to Cirrus test B-scans. Review of the Cirrus test data by 2 independent annotators revealed that the aggregated MAE in pixels for ILM was 1.82 ± 0.24 (equivalent to 7.0 ± 0.9 μm) and for RPE was 2.46 ± 0.66 (9.5 ± 2.6 μm). Additionally, the Dice similarity coefficient for the RPE drusen complex region, comparing predictions to ground truth, reached 0.87 ± 0.01.</div></div><div><h3>Conclusions</h3><div>In the pursuit of task-specific goals such as retinal layer segmentation, a segmentation network has the capacity to acquire domain-independent features from a large training dataset. This enables the utilization of the network to execute tasks in domains where ground truth is hard to generate.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end
{"title":"Validation of Deep Learning–Based Automatic Retinal Layer Segmentation Algorithms for Age-Related Macular Degeneration with 2 Spectral-Domain OCT Devices","authors":"Souvick Mukherjee PhD ,&nbsp;Tharindu De Silva PhD ,&nbsp;Cameron Duic BS ,&nbsp;Gopal Jayakar BS ,&nbsp;Tiarnan D.L. Keenan BM BCh, PhD ,&nbsp;Alisa T. Thavikulwat MD ,&nbsp;Emily Chew MD ,&nbsp;Catherine Cukras MD, PhD","doi":"10.1016/j.xops.2024.100670","DOIUrl":"10.1016/j.xops.2024.100670","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Purpose&lt;/h3&gt;&lt;div&gt;Segmentations of retinal layers in spectral-domain OCT (SD-OCT) images serve as a crucial tool for identifying and analyzing the progression of various retinal diseases, encompassing a broad spectrum of abnormalities associated with age-related macular degeneration (AMD). The training of deep learning algorithms necessitates well-defined ground truth labels, validated by experts, to delineate boundaries accurately. However, this resource-intensive process has constrained the widespread application of such algorithms across diverse OCT devices. This work validates deep learning image segmentation models across multiple OCT devices by testing robustness in generating clinically relevant metrics.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Design&lt;/h3&gt;&lt;div&gt;Prospective comparative study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Participants&lt;/h3&gt;&lt;div&gt;Adults &gt;50 years of age with no AMD to advanced AMD, as defined in the Age-Related Eye Disease Study, in ≥1 eye, were enrolled. Four hundred two SD-OCT scans were used in this study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;We evaluate 2 separate state-of-the-art segmentation algorithms through a training process using images obtained from 1 OCT device (Heidelberg-Spectralis) and subsequent testing using images acquired from 2 OCT devices (Heidelberg-Spectralis and Zeiss-Cirrus). This assessment is performed on a dataset that encompasses a range of retinal pathologies, spanning from disease-free conditions to severe forms of AMD, with a focus on evaluating the device independence of the algorithms.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Main Outcome Measures&lt;/h3&gt;&lt;div&gt;Performance metrics (including mean squared error, mean absolute error [MAE], and Dice coefficients) for the segmentations of the internal limiting membrane (ILM), retinal pigment epithelium (RPE), and RPE to Bruch’s membrane region, along with en face thickness maps, volumetric estimations (in mm&lt;sup&gt;3&lt;/sup&gt;). Violin plots and Bland–Altman plots comparing predictions against ground truth are also presented.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;The UNet and DeepLabv3, trained on Spectralis B-scans, demonstrate clinically useful outcomes when applied to Cirrus test B-scans. Review of the Cirrus test data by 2 independent annotators revealed that the aggregated MAE in pixels for ILM was 1.82 ± 0.24 (equivalent to 7.0 ± 0.9 μm) and for RPE was 2.46 ± 0.66 (9.5 ± 2.6 μm). Additionally, the Dice similarity coefficient for the RPE drusen complex region, comparing predictions to ground truth, reached 0.87 ± 0.01.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusions&lt;/h3&gt;&lt;div&gt;In the pursuit of task-specific goals such as retinal layer segmentation, a segmentation network has the capacity to acquire domain-independent features from a large training dataset. 
This enables the utilization of the network to execute tasks in domains where ground truth is hard to generate.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Financial Disclosure(s)&lt;/h3&gt;&lt;div&gt;Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end ","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 3","pages":"Article 100670"},"PeriodicalIF":3.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-11-30 DOI: 10.1016/j.xops.2024.100668
Petra P. Larsen MD, PhD , Virginie Dinet PhD , Cécile Delcourt PhD , Catherine Helmer MD, PhD , Morgane Linard MD, PhD

Topic

This scoping review aims to summarize the current state of knowledge on the potential involvement of infections in age-related macular degeneration (AMD).

Clinical relevance

Age-related macular degeneration is a multifactorial disease and the leading cause of vision loss among older adults in developed countries. Clarifying whether certain infections participate in its onset or progression seems essential, given the potential implications for treatment and prevention.

Methods

Using the PubMed database, we searched for articles in English, published until June 1, 2023, whose title and/or abstract contained terms related to AMD and infections. All types of study design, infectious agents, AMD diagnostic methods, and AMD stages were considered. Articles dealing with the oral and gut microbiota were not included, but we provide a brief summary of high-quality literature reviews recently published on the subject.

Results

Two investigators independently screened the 868 articles retrieved by our search algorithm and the reference lists of selected studies. In total, 40 articles were included: 30 reporting human data, 9 animal studies, 6 in vitro experiments, and 1 hypothesis paper (sometimes with several data types in the same article). Of these, 27 studies were published after 2010, highlighting growing interest in recent years. A wide range of infectious agents has been investigated, including various microbiota (nasal, pharyngeal), 8 bacteria, 6 viral species, and 1 yeast. Most have been investigated only anecdotally; Chlamydia pneumoniae, Cytomegalovirus, and hepatitis B virus received more attention, with 17, 6, and 4 studies, respectively. Numerous potential pathophysiological mechanisms have been discussed, including (1) an indirect role of infectious agents (i.e., a role of infections located distant from the eye, mainly through their interactions with the immune system) and (2) a direct role of some infectious agents, implying potential infection of various cell types within AMD-related tissues.

Conclusions

Overall, this review highlights the diversity of possible interactions between infectious agents and AMD and suggests avenues of research to enrich the data currently available, which provide an insufficient level of evidence to conclude whether or not infectious agents are involved in this pathology.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"Could Infectious Agents Play a Role in the Onset of Age-related Macular Degeneration? A Scoping Review","authors":"Petra P. Larsen MD, PhD ,&nbsp;Virginie Dinet PhD ,&nbsp;Cécile Delcourt PhD ,&nbsp;Catherine Helmer MD, PhD ,&nbsp;Morgane Linard MD, PhD","doi":"10.1016/j.xops.2024.100668","DOIUrl":"10.1016/j.xops.2024.100668","url":null,"abstract":"<div><h3>Topic</h3><div>This scoping review aims to summarize the current state of knowledge on the potential involvement of infections in age-related macular degeneration (AMD).</div></div><div><h3>Clinical relevance</h3><div>Age-related macular degeneration is a multifactorial disease and the leading cause of vision loss among older adults in developed countries. Clarifying whether certain infections participate in its onset or progression seems essential, given the potential implications for treatment and prevention.</div></div><div><h3>Methods</h3><div>Using the PubMed database, we searched for articles in English, published until June 1, 2023, whose title and/or abstract contained terms related to AMD and infections. All types of study design, infectious agents, AMD diagnostic methods, and AMD stages were considered. Articles dealing with the oral and gut microbiota were not included but we provide a brief summary of high-quality literature reviews recently published on the subject.</div></div><div><h3>Results</h3><div>Two investigators independently screened the 868 articles obtained by our algorithm and the reference lists of selected studies. In total, 40 articles were included, among which 30 on human data, 9 animal studies, 6 in vitro experiments, and 1 hypothesis paper (sometimes with several data types in the same article). Of these, 27 studies were published after 2010, highlighting a growing interest in recent years. A wide range of infectious agents has been investigated, including various microbiota (nasal, pharyngeal), 8 bacteria, 6 viral species, and 1 yeast. Among them, most have been investigated anecdotally. Only <em>Chlamydia pneumoniae</em>, <em>Cytomegalovirus</em>, and hepatitis B virus received more attention with 17, 6, and 4 studies, respectively. Numerous potential pathophysiological mechanisms have been discussed, including (1) an indirect role of infectious agents (i.e. 
a role of infections located distant from the eye, mainly through their interactions with the immune system) and (2) a direct role of some infectious agents implying potential infection of various cells types within AMD-related tissues.</div></div><div><h3>Conclusions</h3><div>Overall, this review highlights the diversity of possible interactions between infectious agents and AMD and suggests avenues of research to enrich the data currently available, which provide an insufficient level of evidence to conclude whether or not infectious agents are involved in this pathology.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100668"},"PeriodicalIF":3.2,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143169440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-11-29 DOI: 10.1016/j.xops.2024.100667
Jalil Jalili PhD , Anuwat Jiravarnsirikul MD , Christopher Bowd PhD , Benton Chuter MD , Akram Belghith PhD , Michael H. Goldbaum MD , Sally L. Baxter MD , Robert N. Weinreb MD , Linda M. Zangwill PhD , Mark Christopher PhD

Purpose

To assess the diagnostic accuracy of GPT-4V (OpenAI) and its capability to identify glaucoma-related features compared with expert evaluations.

Design

Evaluation of multimodal large language models for reviewing fundus images in glaucoma.

Subjects

A total of 300 fundus images from 3 public datasets (ACRIMA, ORIGA, and RIM-ONE v3), including 139 glaucomatous and 161 nonglaucomatous cases, were analyzed.

Methods

Preprocessing ensured each image was centered on the optic disc. GPT-4's vision-preview model (GPT-4V) assessed each image for various glaucoma-related criteria: image quality, image gradability, cup-to-disc ratio, peripapillary atrophy, disc hemorrhages, rim thinning (by quadrant and clock hour), glaucoma status, and estimated probability of glaucoma. Each image was analyzed twice by GPT-4V to evaluate consistency in its predictions. Two expert graders independently evaluated the same images using identical criteria. Comparisons between GPT-4V's assessments, expert evaluations, and dataset labels were made to determine accuracy, sensitivity, specificity, and Cohen kappa.
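
The comparison statistics named above are standard; a minimal sketch follows, assuming binary glaucoma calls for GPT-4V and the reference labels. The label vectors are toy stand-ins.

```python
# Sketch of the agreement metrics used above: accuracy, sensitivity,
# specificity, and Cohen kappa between GPT-4V calls and reference labels.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

truth = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # 1 = glaucoma per dataset label
gpt4v = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]   # 1 = glaucoma per GPT-4V

tn, fp, fn, tp = confusion_matrix(truth, gpt4v).ravel()
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.2f}")
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"Cohen kappa = {cohen_kappa_score(truth, gpt4v):.2f}")
```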

Main Outcome Measures

The main parameters measured were the accuracy, sensitivity, specificity, and Cohen kappa of GPT-4V in detecting glaucoma compared with expert evaluations.

Results

GPT-4V successfully provided glaucoma assessments for all 300 fundus images across the datasets, although approximately 35% required multiple prompt submissions. GPT-4V's overall accuracy in glaucoma detection across the ACRIMA, ORIGA, and RIM-ONE datasets (0.68, 0.70, and 0.81, respectively) was slightly lower than that of expert grader 1 (0.78, 0.80, and 0.88) and expert grader 2 (0.72, 0.78, and 0.87). In glaucoma detection, GPT-4V showed variable agreement by dataset and expert grader, with Cohen kappa values ranging from 0.08 to 0.72. In terms of feature detection, GPT-4V demonstrated high consistency (repeatability) in image gradability, with an agreement accuracy of ≥89%, and substantial agreement in rim thinning and cup-to-disc ratio assessments, although kappas were generally lower than expert-to-expert agreement.

Conclusions

GPT-4V shows promise as a tool in glaucoma screening and detection through fundus image analysis, demonstrating generally high agreement with expert evaluations of key diagnostic features, although agreement did vary substantially across datasets.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"Glaucoma Detection and Feature Identification via GPT-4V Fundus Image Analysis","authors":"Jalil Jalili PhD ,&nbsp;Anuwat Jiravarnsirikul MD ,&nbsp;Christopher Bowd PhD ,&nbsp;Benton Chuter MD ,&nbsp;Akram Belghith PhD ,&nbsp;Michael H. Goldbaum MD ,&nbsp;Sally L. Baxter MD ,&nbsp;Robert N. Weinreb MD ,&nbsp;Linda M. Zangwill PhD ,&nbsp;Mark Christopher PhD","doi":"10.1016/j.xops.2024.100667","DOIUrl":"10.1016/j.xops.2024.100667","url":null,"abstract":"<div><h3>Purpose</h3><div>The aim is to assess GPT-4V's (OpenAI) diagnostic accuracy and its capability to identify glaucoma-related features compared to expert evaluations.</div></div><div><h3>Design</h3><div>Evaluation of multimodal large language models for reviewing fundus images in glaucoma.</div></div><div><h3>Subjects</h3><div>A total of 300 fundus images from 3 public datasets (ACRIMA, ORIGA, and RIM-One v3) that included 139 glaucomatous and 161 nonglaucomatous cases were analyzed.</div></div><div><h3>Methods</h3><div>Preprocessing ensured each image was centered on the optic disc. GPT-4's vision-preview model (GPT-4V) assessed each image for various glaucoma-related criteria: image quality, image gradability, cup-to-disc ratio, peripapillary atrophy, disc hemorrhages, rim thinning (by quadrant and clock hour), glaucoma status, and estimated probability of glaucoma. Each image was analyzed twice by GPT-4V to evaluate consistency in its predictions. Two expert graders independently evaluated the same images using identical criteria. Comparisons between GPT-4V's assessments, expert evaluations, and dataset labels were made to determine accuracy, sensitivity, specificity, and Cohen kappa.</div></div><div><h3>Main Outcome Measures</h3><div>The main parameters measured were the accuracy, sensitivity, specificity, and Cohen kappa of GPT-4V in detecting glaucoma compared with expert evaluations.</div></div><div><h3>Results</h3><div>GPT-4V successfully provided glaucoma assessments for all 300 fundus images across the datasets, although approximately 35% required multiple prompt submissions. GPT-4V's overall accuracy in glaucoma detection was slightly lower (0.68, 0.70, and 0.81, respectively) than that of expert graders (0.78, 0.80, and 0.88, for expert grader 1 and 0.72, 0.78, and 0.87, for expert grader 2, respectively), across the ACRIMA, ORIGA, and RIM-ONE datasets. In Glaucoma detection, GPT-4V showed variable agreement by dataset and expert graders, with Cohen kappa values ranging from 0.08 to 0.72. 
In terms of feature detection, GPT-4V demonstrated high consistency (repeatability) in image gradability, with an agreement accuracy of ≥89% and substantial agreement in rim thinning and cup-to-disc ratio assessments, although kappas were generally lower than expert-to-expert agreement.</div></div><div><h3>Conclusions</h3><div>GPT-4V shows promise as a tool in glaucoma screening and detection through fundus image analysis, demonstrating generally high agreement with expert evaluations of key diagnostic features, although agreement did vary substantially across datasets.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100667"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773068/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-11-29 DOI: 10.1016/j.xops.2024.100665
N.V. Prajna MD , Jad Assaf MD , Nisha R. Acharya MD, MS , Jennifer Rose-Nussbaumer MD , Thomas M. Lietman MD , J. Peter Campbell MD, MPH , Jeremy D. Keenan MD, MPH , Xubo Song PhD , Travis K. Redd MD, MPH

Objective

This study develops and evaluates multimodal machine learning models for differentiating bacterial and fungal keratitis using a prospective representative dataset from South India.

Design

Machine learning classifier training and validation study.

Participants

Five hundred ninety-nine subjects diagnosed with acute infectious keratitis at Aravind Eye Hospital in Madurai, India.

Methods

We developed and compared 3 prediction models to distinguish bacterial and fungal keratitis using a prospective, consecutively collected, representative dataset gathered over a full calendar year (the MADURAI dataset). These models included a clinical data model, a computer vision model using the EfficientNet architecture, and a multimodal model combining both imaging and clinical data. We partitioned the MADURAI dataset into 70% train/validation and 30% test sets. Model training was performed with fivefold cross-validation. We also compared the performance of the MADURAI-trained computer vision model against a model with identical architecture but trained on a preexisting dataset collated from multiple prior bacterial and fungal keratitis randomized clinical trials (RCTs) (the RCT-trained computer vision model).
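
The abstract names the EfficientNet architecture but not the variant or training details, so the PyTorch sketch below assumes EfficientNet-B0 with a single-logit binary head and shows one illustrative optimization step on dummy data; none of the hyperparameters are the study's.

```python
# Minimal sketch of an EfficientNet binary classifier for bacterial-vs-fungal
# keratitis photographs. The B0 variant and all hyperparameters are
# assumptions; the study's exact configuration is not specified here.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone; swap the 1000-class head for a single logit
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step on a dummy batch of 224x224 RGB corneal photographs
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fungal, 0 = bacterial

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```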

Main Outcome Measures

The primary evaluation metric was the area under the precision-recall curve (AUPRC). Secondary metrics included area under the receiver operating characteristic curve (AUROC), accuracy, and F1 score.

Results

The MADURAI-trained computer vision model outperformed the clinical data model and the RCT-trained computer vision model on the hold-out test set, with an AUPRC of 0.94 (95% confidence interval: 0.92–0.96), an AUROC of 0.81 (0.76–0.85), an accuracy of 77%, and an F1 score of 0.85. The multimodal model did not substantially improve performance compared with the computer vision model.

Conclusions

The best-performing machine learning classifier for infectious keratitis was a computer vision model trained using the MADURAI dataset. These findings suggest that image-based deep learning could significantly enhance diagnostic capabilities for infectious keratitis and emphasize the importance of using prospective, consecutively collected, representative data for machine learning model training and evaluation.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"Multimodal Deep Learning for Differentiating Bacterial and Fungal Keratitis Using Prospective Representative Data","authors":"N.V. Prajna MD ,&nbsp;Jad Assaf MD ,&nbsp;Nisha R. Acharya MD, MS ,&nbsp;Jennifer Rose-Nussbaumer MD ,&nbsp;Thomas M. Lietman MD ,&nbsp;J. Peter Campbell MD, MPH ,&nbsp;Jeremy D. Keenan MD, MPH ,&nbsp;Xubo Song PhD ,&nbsp;Travis K. Redd MD, MPH","doi":"10.1016/j.xops.2024.100665","DOIUrl":"10.1016/j.xops.2024.100665","url":null,"abstract":"<div><h3>Objective</h3><div>This study develops and evaluates multimodal machine learning models for differentiating bacterial and fungal keratitis using a prospective representative dataset from South India.</div></div><div><h3>Design</h3><div>Machine learning classifier training and validation study.</div></div><div><h3>Participants</h3><div>Five hundred ninety-nine subjects diagnosed with acute infectious keratitis at Aravind Eye Hospital in Madurai, India.</div></div><div><h3>Methods</h3><div>We developed and compared 3 prediction models to distinguish bacterial and fungal keratitis using a prospective, consecutively-collected, representative dataset gathered over a full calendar year (the MADURAI dataset). These models included a clinical data model, a computer vision model using the EfficientNet architecture, and a multimodal model combining both imaging and clinical data. We partitioned the MADURAI dataset into 70% train/validation and 30% test sets. Model training was performed with fivefold cross-validation. We also compared the performance of the MADURAI-trained computer vision model against a model with identical architecture but trained on a preexisting dataset collated from multiple prior bacterial and fungal keratitis randomized clinical trials (RCTs) (the RCT-trained computer vision model).</div></div><div><h3>Main Outcome Measures</h3><div>The primary evaluation metric was the area under the precision-recall curve (AUPRC). Secondary metrics included area under the receiver operating characteristic curve (AUROC), accuracy, and F1 score.</div></div><div><h3>Results</h3><div>The MADURAI-trained computer vision model outperformed the clinical data model and the RCT-trained computer vision model on the hold-out test set, with an AUPRC 0.94 (95% confidence interval: 0.92–0.96), AUROC 0.81 (0.76–0.85), accuracy 77%, and F1 score 0.85. The multimodal model did not substantially improve performance compared with the computer vision model.</div></div><div><h3>Conclusions</h3><div>The best-performing machine learning classifier for infectious keratitis was a computer vision model trained using the MADURAI dataset. 
These findings suggest that image-based deep learning could significantly enhance diagnostic capabilities for infectious keratitis and emphasize the importance of using prospective, consecutively-collected, representative data for machine learning model training and evaluation.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100665"},"PeriodicalIF":3.2,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758206/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank
IF 3.2 Q1 OPHTHALMOLOGY Pub Date: 2024-11-28 DOI: 10.1016/j.xops.2024.100661
Qi Chen MD , Suyu Miao MD , Yuzhe Jiang MD , Danli Shi MD, PhD , Weiyun You MD , Lin Liu MD, PhD , Mayinuer Yusufu MTI , Yufan Chen MD , Ruobing Wang MD, PhD

Objective

To explore the association between retinal microvascular parameters and glaucoma.

Design

Prospective study.

Subjects

UK Biobank subjects with fundus images and without a history of glaucoma.

Methods

We employed the Retina-based Microvascular Health Assessment System, which takes advantage of the noninvasive nature of fundus photography, to quantify retinal microvascular parameters including retinal vascular skeleton density (VSD) and fractal dimension (FD). We used propensity score matching (PSM) to pair individuals with glaucoma and healthy controls; PSM was implemented via a logistic regression model with a caliper of 0.1 and a 1:4 matching ratio without replacement. We conducted univariable Cox regression analyses to study the association between retinal microvascular parameters and incident glaucoma, in both continuous and quartile forms.
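
A compact sketch of this matching-then-modeling workflow: a logistic-regression propensity score, greedy 1:4 caliper matching without replacement, and a univariable Cox fit via the lifelines package. All column names and data are hypothetical stand-ins for the UK Biobank variables.

```python
# Sketch of the PSM + Cox workflow: logistic-regression propensity scores,
# greedy 1:4 nearest-neighbor matching within a 0.1 caliper (no replacement),
# then a univariable Cox fit. Columns and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(57, 8, n),
    "sbp": rng.normal(135, 15, n),        # systolic blood pressure
    "vsd": rng.normal(0.09, 0.01, n),     # arteriolar VSD (toy values)
    "time": rng.exponential(10, n),       # follow-up, years
    "event": rng.binomial(1, 0.1, n),     # incident glaucoma
})

# Propensity of being a case, modeled from baseline covariates
X = df[["age", "sbp"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["event"]).predict_proba(X)[:, 1]

cases = df[df["event"] == 1]
controls = df[df["event"] == 0].copy()
matched_rows = [cases]
for _, case in cases.iterrows():
    dist = (controls["pscore"] - case["pscore"]).abs()
    picks = dist[dist < 0.1].nsmallest(4)      # caliper 0.1, up to 4 controls
    matched_rows.append(controls.loc[picks.index])
    controls = controls.drop(picks.index)      # no replacement

matched = pd.concat(matched_rows)

# Univariable Cox model: VSD vs. incident glaucoma in the matched sample
cph = CoxPHFitter()
cph.fit(matched[["vsd", "time", "event"]], duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])         # hazard ratio and P value
```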

Main Outcome Measure

Vascular skeleton density, FD, and glaucoma.

Results

In a study of 41 632 participants without prior glaucoma, 482 cases of glaucoma were recorded during a median follow-up of 11.0 years. In the Cox proportional hazards regression model post-PSM, we found that incident glaucoma had significant negative associations with arteriolar VSD (hazard ratio [HR] = 0.24, 95% confidence interval [CI] 0.11–0.52, P < 0.001), venular VSD (HR = 0.34, 95% CI 0.15–0.74, P = 0.007), arteriolar FD (HR = 0.24, 95% CI 0.10–0.60, P = 0.002), and venular FD (HR = 0.31, 95% CI 0.12–0.85, P = 0.022). Subgroup analyses revealed that individuals aged ≥60 years, nonsmokers, moderate alcohol consumers, and those with hypertension and myopia consistently exhibited P values <0.05 both before and after matching, in contrast to the other subgroups of these covariates.

Conclusions

Our study found that reduced retinal VSD and lower FD are linked to elevated glaucoma risk.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
{"title":"Associations of Retinal Microvascular Density and Fractal Dimension with Glaucoma: A Prospective Study from UK Biobank","authors":"Qi Chen MD ,&nbsp;Suyu Miao MD ,&nbsp;Yuzhe Jiang MD ,&nbsp;Danli Shi MD, PhD ,&nbsp;Weiyun You MD ,&nbsp;Lin Liu MD, PhD ,&nbsp;Mayinuer Yusufu MTI ,&nbsp;Yufan Chen MD ,&nbsp;Ruobing Wang MD, PhD","doi":"10.1016/j.xops.2024.100661","DOIUrl":"10.1016/j.xops.2024.100661","url":null,"abstract":"<div><h3>Objective</h3><div>To explore the association between retinal microvascular parameters and glaucoma.</div></div><div><h3>Design</h3><div>Prospective study.</div></div><div><h3>Subjects</h3><div>The UK Biobank subjects with fundus images and without a history of glaucoma.</div></div><div><h3>Methods</h3><div>We employed the Retina-based Microvascular Health Assessment System to utilize the noninvasive nature of fundus photography and quantify retinal microvascular parameters including retinal vascular skeleton density (VSD) and fractal dimension (FD). We also utilized propensity score matching (PSM) to pair individuals with glaucoma and healthy controls. Propensity score matching was implemented via a logistic regression model with a caliper of 0.1 and a matching ratio of 1:4 no replacements. We conducted univariable Cox regression analyses to study the association between retinal microvascular parameters and incident glaucoma, in both continuous and quartile forms.</div></div><div><h3>Main Outcome Measure</h3><div>Vascular skeleton density, FD, and glaucoma.</div></div><div><h3>Results</h3><div>In a study of 41 632 participants without prior glaucoma, 482 cases of glaucoma were recorded during a median follow-up of 11.0 years. In the Cox proportional hazards regression model post-PSM, we found that incident glaucoma has significant negative associations with arteriolar VSD (hazard ratio [HR] = 0.24, 95% confidence interval [CI] 0.11–0.52, <em>P</em> &lt; 0.001), venular VSD (HR = 0.34, 95% CI 0.15–0.74, <em>P</em> = 0.007), arteriolar FD (HR = 0.24, 95% CI 0.10–0.60, <em>P</em> = 0.002), and venular FD (HR = 0.31, 95% CI 0.12–0.85, <em>P</em> = 0.022). Subgroup analysis using covariates revealed that individuals aged ≥60 years, nonsmokers, moderate alcohol consumers, and those with hypertension and myopia exhibited <em>P</em> values &lt;0.05 consistently prematching and postmatching, differing from other subgroups within this covariate.</div></div><div><h3>Conclusions</h3><div>Our study found that reduced retinal VSD and lower FD are linked to elevated glaucoma risk.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.</div></div>","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100661"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
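To ground the statistics described in the abstract above, here is a minimal sketch of the 1:4, caliper-0.1 propensity score matching followed by a univariable Cox model. It is an illustration only, not the study's code: the DataFrame layout, the column names (`incident_glaucoma`, `follow_up_years`, the covariate list), and the choice of scikit-learn plus lifelines are all assumptions.

```python
# Minimal sketch, assuming a pandas DataFrame `df` with one row per subject.
# Column names (`incident_glaucoma`, `follow_up_years`, covariates) are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

def match_cases_1_to_4(df, covariates, caliper=0.1):
    """Greedy 1:4 propensity score matching without replacement, caliper 0.1."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["incident_glaucoma"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    cases = df[df["incident_glaucoma"] == 1]
    pool = df[df["incident_glaucoma"] == 0]
    kept = [cases]
    for _, case in cases.iterrows():
        gap = (pool["pscore"] - case["pscore"]).abs()
        controls = gap[gap <= caliper].nsmallest(4).index  # up to 4 controls within caliper
        kept.append(pool.loc[controls])
        pool = pool.drop(controls)  # no replacement
    return pd.concat(kept)

def univariable_hr(matched, parameter):
    """Hazard ratio of incident glaucoma for one retinal microvascular parameter."""
    cph = CoxPHFitter()
    cph.fit(matched[[parameter, "follow_up_years", "incident_glaucoma"]],
            duration_col="follow_up_years", event_col="incident_glaucoma")
    return cph.hazard_ratios_[parameter]  # cph.summary also carries the 95% CI and P value

# Hypothetical usage:
# matched = match_cases_1_to_4(df, ["age", "sex", "hypertension", "myopia"])
# hr = univariable_hr(matched, "arteriolar_vsd")
```

The quartile-form analysis mentioned in the abstract would reuse `univariable_hr` on a categorical quartile column rather than the continuous value.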
EyeLiner
IF 3.2 Q1 OPHTHALMOLOGY Pub Date : 2024-11-28 DOI: 10.1016/j.xops.2024.100664
Yoga Advaith Veturi MSc, Steve McNamara OD, Scott Kinder MS, Christopher William Clark MS, Upasana Thakuria MS, Benjamin Bearce MS, Niranjan Manoharan MD, Naresh Mandava MD, Malik Y. Kahook MD, Praveer Singh PhD, Jayashree Kalpathy-Cramer PhD
<div><h3>Objective</h3><div>Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.</div></div><div><h3>Design</h3><div>EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.</div></div><div><h3>Participants</h3><div>We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).</div></div><div><h3>Methods</h3><div>Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.</div></div><div><h3>Main Outcome Measures</h3><div>We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.</div></div><div><h3>Results</h3><div>EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC<sub>FIRE</sub> = 0.76, AUC<sub>CORIS</sub> = 0.83, AUC<sub>SIGF</sub> = 0.74).</div></div><div><h3>Conclusions</h3><div>Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.</div></div><div><h3>Financial Disclosure(s)</h3><div>Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at th
{"title":"EyeLiner","authors":"Yoga Advaith Veturi MSc ,&nbsp;Steve McNamara OD ,&nbsp;Scott Kinder MS,&nbsp;Christopher William Clark MS,&nbsp;Upasana Thakuria MS,&nbsp;Benjamin Bearce MS,&nbsp;Niranjan Manoharan MD,&nbsp;Naresh Mandava MD,&nbsp;Malik Y. Kahook MD,&nbsp;Praveer Singh PhD,&nbsp;Jayashree Kalpathy-Cramer PhD","doi":"10.1016/j.xops.2024.100664","DOIUrl":"10.1016/j.xops.2024.100664","url":null,"abstract":"&lt;div&gt;&lt;h3&gt;Objective&lt;/h3&gt;&lt;div&gt;Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, “EyeLiner,” for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences that are due to camera orientation while preserving pathological changes.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Design&lt;/h3&gt;&lt;div&gt;EyeLiner registers a “moving” image to a “fixed” image using a DL-based keypoint matching algorithm.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Participants&lt;/h3&gt;&lt;div&gt;We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Methods&lt;/h3&gt;&lt;div&gt;Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Main Outcome Measures&lt;/h3&gt;&lt;div&gt;We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Results&lt;/h3&gt;&lt;div&gt;EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitative results show that the MD decreased for this model after alignment from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained an AUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, beating the current state-of-the-art SuperRetina (AUC&lt;sub&gt;FIRE&lt;/sub&gt; = 0.76, AUC&lt;sub&gt;CORIS&lt;/sub&gt; = 0.83, AUC&lt;sub&gt;SIGF&lt;/sub&gt; = 0.74).&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Conclusions&lt;/h3&gt;&lt;div&gt;Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. 
We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;h3&gt;Financial Disclosure(s)&lt;/h3&gt;&lt;div&gt;Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at th","PeriodicalId":74363,"journal":{"name":"Ophthalmology science","volume":"5 2","pages":"Article 100664"},"PeriodicalIF":3.2,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
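As a sketch of the closing stage such a registration pipeline describes (fitting transformation parameters from matched keypoints, then scoring alignment with the mean distance metric), consider the following. The CNN detector and transformer matcher are out of scope here, and the OpenCV RANSAC homography fit, together with every name below, is an illustrative assumption rather than EyeLiner's actual implementation.

```python
# Minimal sketch: fit a transform from matched keypoints, then compute the
# mean distance (MD) between fixed annotations and registered moving annotations.
# `moving_pts` / `fixed_pts` stand in for the detector+matcher output (Nx2 arrays).
import cv2
import numpy as np

def fit_transform(moving_pts: np.ndarray, fixed_pts: np.ndarray) -> np.ndarray:
    """Estimate a 3x3 homography from matched keypoints via RANSAC."""
    H, _ = cv2.findHomography(moving_pts, fixed_pts, cv2.RANSAC, 5.0)
    return H

def mean_distance(H: np.ndarray, anno_moving: np.ndarray, anno_fixed: np.ndarray) -> float:
    """MD in pixels between fixed keypoints and warped moving keypoints."""
    pts = anno_moving.reshape(-1, 1, 2).astype(np.float32)
    warped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return float(np.mean(np.linalg.norm(warped - anno_fixed, axis=1)))

# Synthetic check: keypoints related by a pure (5, -3) pixel shift.
rng = np.random.default_rng(0)
moving = rng.uniform(0, 512, size=(40, 2)).astype(np.float32)
fixed = moving + np.array([5.0, -3.0], dtype=np.float32)
H = fit_transform(moving, fixed)
print(mean_distance(H, moving, fixed))  # ~0 once the shift is recovered
```

The aligned image for a checkerboard or flicker overlay, as mentioned in the Results, would then come from `cv2.warpPerspective(moving_image, H, (width, height))` against the fixed image.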