Highly Accurate and Precise Automated Cup-to-Disc Ratio Quantification for Glaucoma Screening

Abadh K. Chaurasia MOptom, Connor J. Greatbatch MBBS, Xikun Han PhD, Puya Gharahkhani PhD, David A. Mackey MD, FRANZCO, Stuart MacGregor PhD, Jamie E. Craig MBBS, PhD, Alex W. Hewitt MBBS, FRANZCO, PhD

Ophthalmology Science. Published April 27, 2024. doi:10.1016/j.xops.2024.100540
Objective
An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated assessment. Here, we sought to develop and validate a deep learning–based algorithm to automatically determine the CDR from fundus images.
Design
Algorithm development for estimating CDR using fundus data from a population-based observational study.
Participants
A total of 181 768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS.
Methods
The FastAI and PyTorch libraries were used to train convolutional neural network–based models on fundus images from the UKBB. Separate models were constructed to determine image gradability (classification analysis) and to estimate the CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening on multiethnic datasets from EyePACS and Drishti_GS.
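As a concrete illustration, the sketch below sets up the two tasks in fastai: a gradability classifier and a CDR regressor, both on the vgg19_bn backbone named in the Results. The file layout, CSV column names, image size, and training schedule are assumptions for illustration, not details reported in the paper.

```python
# A minimal sketch of the two-stage pipeline described above, using fastai on
# top of PyTorch. The vgg19_bn backbone matches the architecture named in the
# Results; paths, CSV column names, image size, and epoch count are
# illustrative assumptions.
from fastai.vision.all import *  # re-exports torchvision's vgg19_bn

# Stage 1 -- gradability: binary classification (gradable vs. ungradable).
grade_dls = ImageDataLoaders.from_csv(
    path='data', csv_fname='gradability.csv',
    fn_col='image', label_col='gradable',
    item_tfms=Resize(224), valid_pct=0.2, seed=42)
grade_learn = vision_learner(grade_dls, vgg19_bn, metrics=accuracy)
grade_learn.fine_tune(5)

# Stage 2 -- CDR estimation: regression with a single continuous target,
# constrained to [0, 1] via y_range (a scaled sigmoid on the head output).
cdr_dls = ImageDataLoaders.from_csv(
    path='data', csv_fname='cdr.csv',
    fn_col='image', label_col='cdr', y_block=RegressionBlock(),
    item_tfms=Resize(224), valid_pct=0.2, seed=42)
cdr_learn = vision_learner(cdr_dls, vgg19_bn,
                           metrics=[mse, mae], y_range=(0, 1))
cdr_learn.fine_tune(5)
```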
Main Outcome Measures
The area under the receiver operating characteristic curve and coefficient of determination.
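For readers unfamiliar with these measures, the snippet below shows how each would typically be computed with scikit-learn; the arrays are made-up placeholders, not study data.

```python
# Placeholder illustration of the two outcome measures with scikit-learn.
import numpy as np
from sklearn.metrics import r2_score, roc_auc_score

# Gradability (classification): area under the ROC curve, from true labels
# and predicted probabilities of the positive class.
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.92, 0.81, 0.33, 0.74])
print(f"AUROC: {roc_auc_score(y_true, y_score):.4f}")

# CDR (regression): coefficient of determination (R^2) between manually
# graded and model-predicted cup-to-disc ratios.
cdr_true = np.array([0.35, 0.50, 0.62, 0.41, 0.70])
cdr_pred = np.array([0.33, 0.52, 0.60, 0.45, 0.68])
print(f"R^2: {r2_score(cdr_true, cdr_pred):.4f}")
```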
Results
Our gradability model, based on the VGG19 architecture with batch normalization (vgg19_bn), achieved an accuracy of 97.13% on a validation set of 16 045 images, with a precision of 99.26% and an area under the receiver operating characteristic curve of 96.56%. Using regression analysis, our best-performing model (also trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459–0.8568), a mean squared error of 0.0050 (95% CI: 0.0048–0.0051), and a mean absolute error of 0.0551 (95% CI: 0.0543–0.0559) on a validation set of 12 183 images for determining CDR. Regression outputs were converted into classification metrics using a tolerance of 0.2 across 20 classes, yielding a classification accuracy of 99.20%. The EyePACS dataset (98 172 healthy, 3270 glaucoma) was then used to externally validate the model for glaucoma classification, with an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively.
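How a continuous CDR estimate is scored against discrete labels is worth spelling out. The sketch below assumes the stated tolerance of 0.2 across 20 classes means the [0, 1] CDR range is binned into 20 steps of 0.05 and a prediction counts as correct when it falls within 0.2 of the reference value; the screening cutoff of 0.7 is likewise illustrative, since, as the Conclusions note, the threshold depends on other clinical parameters.

```python
# Sketch of converting continuous CDR predictions into classification
# metrics. The tolerance interpretation and the 0.7 screening threshold are
# assumptions for illustration, not the paper's exact mapping.
import numpy as np

def tolerance_accuracy(cdr_true, cdr_pred, tol=0.2):
    """Fraction of predictions within +/- tol of the reference CDR."""
    return np.mean(np.abs(cdr_pred - cdr_true) <= tol)

def screen_glaucoma(cdr_pred, threshold=0.7):
    """Flag eyes whose predicted CDR meets an illustrative screening cutoff."""
    return cdr_pred >= threshold

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cdr_true = rng.uniform(0.2, 0.9, size=1000)
    # Noise with sd 0.07 gives MAE of roughly 0.055, near the reported error.
    cdr_pred = np.clip(cdr_true + rng.normal(0, 0.07, size=1000), 0, 1)
    print(f"Tolerance accuracy: {tolerance_accuracy(cdr_true, cdr_pred):.4f}")
    sens, spec = sensitivity_specificity(
        (cdr_true >= 0.7).astype(int), screen_glaucoma(cdr_pred).astype(int))
    print(f"Sensitivity: {sens:.4f}, Specificity: {spec:.4f}")
```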
Conclusions
Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence–derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.
Financial Disclosure(s)
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.