Highly Accurate and Precise Automated Cup-to-Disc Ratio Quantification for Glaucoma Screening

Ophthalmology Science (IF 3.2, Q1 Ophthalmology) · Pub Date: 2024-04-27 · DOI: 10.1016/j.xops.2024.100540
Abadh K. Chaurasia MOptom; Connor J. Greatbatch MBBS; Xikun Han PhD; Puya Gharahkhani PhD; David A. Mackey MD, FRANZCO; Stuart MacGregor PhD; Jamie E. Craig MBBS, PhD; Alex W. Hewitt MBBS, FRANZCO, PhD

Abstract

Objective

An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated methods. Here, we sought to develop and validate a deep learning–based algorithm to automatically determine the CDR from fundus images.

Design

Algorithm development for estimating CDR using fundus data from a population-based observational study.

Participants

A total of 181 768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS.

Methods

FastAI and PyTorch libraries were used to train a convolutional neural network–based model on fundus images from the UKBB. Models were constructed to determine image gradability (classification analysis) as well as to estimate CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening using a multiethnic dataset from EyePACS and Drishti_GS.

Main Outcome Measures

The area under the receiver operating characteristic curve and coefficient of determination.
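The two outcome measures can be illustrated in pure Python: the coefficient of determination (R²) for the CDR regression task, and the area under the ROC curve (AUC, here computed via the Mann-Whitney interpretation) for the gradability classifier. The data below are made-up toy values, not the study's results.

```python
# Toy illustration of the study's two outcome measures.

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def auc(labels, scores):
    """AUC as the probability that a random positive is scored
    above a random negative (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy CDR regression example.
true_cdr = [0.30, 0.45, 0.55, 0.70, 0.80]
pred_cdr = [0.32, 0.44, 0.58, 0.66, 0.81]
print(r_squared(true_cdr, pred_cdr))

# Toy gradability example (1 = gradable).
labels = [0, 0, 1, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.9]
print(auc(labels, scores))
```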

Results

Our gradability model, vgg19_bn (VGG19 with batch normalization), achieved an accuracy of 97.13% on a validation set of 16 045 images, with 99.26% precision and an area under the receiver operating characteristic curve of 96.56%. Using regression analysis, our best-performing model (trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459–0.8568), while the mean squared error was 0.0050 (95% CI: 0.0048–0.0051) and the mean absolute error was 0.0551 (95% CI: 0.0543–0.0559) on a validation set of 12 183 images for determining CDR. The regression output was converted into classification metrics using a tolerance of 0.2 across 20 classes; these classification metrics achieved an accuracy of 99.20%. The EyePACS dataset (98 172 healthy, 3270 glaucoma) was then used to externally validate the model for glaucoma classification, with an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively.
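The abstract does not spell out how the continuous regression output was turned into 20 classes with a 0.2 tolerance, so the sketch below is one plausible reading, not the authors' exact scheme: CDR values in [0, 1] are binned into 20 equal-width classes (width 0.05), and a prediction counts as correct when its class lies within 0.2 CDR units (4 bins) of the true class.

```python
# Hypothetical reconstruction of the regression-to-classification step.

def cdr_to_class(cdr, n_classes=20):
    """Map a CDR in [0, 1] to one of n_classes equal-width bins."""
    return min(int(cdr * n_classes), n_classes - 1)

def tolerant_accuracy(y_true, y_pred, tolerance=0.2, n_classes=20):
    """Fraction of predictions whose class lies within `tolerance`
    (expressed in CDR units) of the true class."""
    max_off = int(tolerance * n_classes)  # 0.2 * 20 = 4 bins
    hits = sum(
        abs(cdr_to_class(t, n_classes) - cdr_to_class(p, n_classes)) <= max_off
        for t, p in zip(y_true, y_pred)
    )
    return hits / len(y_true)

# Toy values: four predictions land within 4 bins of the truth,
# one (0.99 vs 0.70) falls outside the tolerance.
true_cdr = [0.30, 0.45, 0.55, 0.70, 0.80]
pred_cdr = [0.32, 0.44, 0.58, 0.99, 0.81]
print(tolerant_accuracy(true_cdr, pred_cdr))  # 0.8
```

Under this reading, the reported 99.20% classification accuracy is a tolerance-relaxed view of the same regression errors summarized by the MSE and MAE above.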

Conclusions

Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence–derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
