OCT-based Visual Field Estimation Using Segmentation-free 3D CNN Shows Lower Variability than Subjective Standard Automated Perimetry

Makoto Koyama, Satoru Inoda, Yuta Ueno, Yoshikazu Ito, Tetsuro Oshika, Masaki Tanito
{"title":"OCT-based Visual Field Estimation Using Segmentation-free 3D CNN Shows Lower Variability than Subjective Standard Automated Perimetry","authors":"Makoto Koyama, Satoru Inoda, Yuta Ueno, Yoshikazu Ito, Tetsuro Oshika, Masaki Tanito","doi":"10.1101/2024.08.17.24312150","DOIUrl":null,"url":null,"abstract":"Purpose: To train and evaluate a segmentation-free 3D convolutional neural network (3DCNN) model for estimating visual field (VF) from optical coherence tomography (OCT) images and to compare the residual variability of OCT-based estimated VF (OCT-VF) with that of Humphrey Field Analyzer (HFA) measurements in a diverse clinical population.\nDesign: Retrospective cross-sectional study.\nParticipants: 5,351 patients (9,564 eyes) who underwent macular OCT imaging and Humphrey Field Analyzer (HFA) tests (24-2 or 10-2 test patterns) at a university hospital from 2006 to 2023. The dataset included 47,653 paired OCT-VF data points, including various ocular conditions.\nMethods: We trained a segmentation-free 3DCNN model based on the EfficientNet3D-b0 architecture on a comprehensive OCT dataset to estimate VF. We evaluated the model's performance using Pearson's correlation coefficient and Bland‒Altman analysis. We assessed residual variability using a jackknife resampling approach and compared OCT-VF and HFA datasets using generalized estimating equations (GEE), adjusting the number of VF tests, follow-up duration, age, and clustering by eye and patient.\nMain Outcome Measures: Correlations between estimated and measured VF thresholds and mean deviations (MDs), and residual variability of OCT-VF and HFA.\nResults: We observed strong correlations between the estimated and measured VF parameters (Pearson's r: 24-2 thresholds 0.893, MD 0.932; 10-2 thresholds 0.902, MD 0.945; all p < 0.001). Bland‒Altman analysis showed good agreement between the estimated and measured MD, with a slight proportional bias. 
GEE analysis demonstrated significantly lower residual variability for OCT-VF than for HFA (24-2 thresholds: 1.10 vs. 2.48 dB; 10-2 thresholds: 1.20 vs. 2.48 dB; all p < 0.001, Bonferroni-corrected), with lower variability across all test points, severities, and ages, thus highlighting the robustness of the segmentation-free 3DCNN approach in a heterogeneous clinical sample.\nConclusions: A segmentation-free 3DCNN model objectively estimated VF from OCT images with high accuracy and significantly lower residual variability than subjective HFA measurements in a heterogeneous clinical sample, including patients with glaucoma and individuals with other ocular diseases. The improved reliability, lower variability, and objective nature of OCT-VF highlight its value for enhancing VF assessment and monitoring of various ocular conditions, potentially facilitating earlier detection of progression and more efficient disease management.","PeriodicalId":501390,"journal":{"name":"medRxiv - Ophthalmology","volume":"22 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Ophthalmology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.17.24312150","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Purpose: To train and evaluate a segmentation-free 3D convolutional neural network (3DCNN) model for estimating the visual field (VF) from optical coherence tomography (OCT) images, and to compare the residual variability of the OCT-based estimated VF (OCT-VF) with that of Humphrey Field Analyzer (HFA) measurements in a diverse clinical population.

Design: Retrospective cross-sectional study.

Participants: 5,351 patients (9,564 eyes) who underwent macular OCT imaging and HFA tests (24-2 or 10-2 test patterns) at a university hospital from 2006 to 2023. The dataset comprised 47,653 paired OCT-VF data points covering a variety of ocular conditions.

Methods: We trained a segmentation-free 3DCNN model based on the EfficientNet3D-b0 architecture on a comprehensive OCT dataset to estimate the VF. We evaluated the model's performance using Pearson's correlation coefficient and Bland-Altman analysis. We assessed residual variability using a jackknife resampling approach and compared the OCT-VF and HFA datasets using generalized estimating equations (GEE), adjusting for the number of VF tests, follow-up duration, and age, and accounting for clustering by eye and patient.

Main Outcome Measures: Correlations between estimated and measured VF thresholds and mean deviations (MDs), and the residual variability of OCT-VF and HFA.

Results: We observed strong correlations between the estimated and measured VF parameters (Pearson's r: 24-2 thresholds 0.893, MD 0.932; 10-2 thresholds 0.902, MD 0.945; all p < 0.001). Bland-Altman analysis showed good agreement between the estimated and measured MD, with a slight proportional bias. GEE analysis demonstrated significantly lower residual variability for OCT-VF than for HFA (24-2 thresholds: 1.10 vs. 2.48 dB; 10-2 thresholds: 1.20 vs. 2.48 dB; all p < 0.001, Bonferroni-corrected), with lower variability across all test points, disease severities, and ages, highlighting the robustness of the segmentation-free 3DCNN approach in a heterogeneous clinical sample.
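The agreement analysis described in the Methods (Pearson's correlation plus Bland-Altman limits of agreement, including the check for a proportional bias) can be sketched as follows. The data here are synthetic placeholders, not study data, and the study's exact implementation may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired mean-deviation (MD) values in dB; illustrative only.
measured_md = rng.normal(-5.0, 6.0, size=200)            # HFA-measured MD
estimated_md = measured_md + rng.normal(0.0, 2.0, 200)   # OCT-based estimate with noise

# Pearson correlation between estimated and measured MD.
r, p = stats.pearsonr(estimated_md, measured_md)

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
diff = estimated_md - measured_md
mean_pair = (estimated_md + measured_md) / 2.0
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

# A proportional bias appears as a nonzero slope when regressing the
# difference on the pairwise mean.
fit = stats.linregress(mean_pair, diff)
print(f"r={r:.3f}, bias={bias:.2f} dB, LoA=+/-{loa:.2f} dB, slope={fit.slope:.3f}")
```

With these synthetic parameters the correlation is strong and the bias is near zero; the study's reported slight proportional bias would show up as a nonzero slope in the final regression.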
Conclusions: A segmentation-free 3DCNN model objectively estimated VF from OCT images with high accuracy and significantly lower residual variability than subjective HFA measurements in a heterogeneous clinical sample, including patients with glaucoma and individuals with other ocular diseases. The improved reliability, lower variability, and objective nature of OCT-VF highlight its value for enhancing VF assessment and monitoring of various ocular conditions, potentially facilitating earlier detection of progression and more efficient disease management.
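The residual-variability comparison at the heart of the study can be illustrated with a simple leave-one-out (jackknife) sketch: for a series of repeated threshold values at one test point, each visit is held out in turn, a linear trend is fitted to the remaining visits, and the prediction error at the held-out visit is recorded. The series below are hypothetical, and the paper's exact jackknife and GEE procedures may differ:

```python
import numpy as np

def jackknife_residual_sd(series):
    """Leave-one-out (jackknife) estimate of residual variability (dB)
    around a linear trend fitted to repeated threshold values.
    Simplified sketch; the study's exact procedure may differ."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    t = np.arange(n, dtype=float)
    resids = []
    for i in range(n):
        mask = np.arange(n) != i
        # Fit a linear trend to all visits except visit i ...
        slope, intercept = np.polyfit(t[mask], series[mask], 1)
        # ... and record the prediction error at the held-out visit.
        resids.append(series[i] - (slope * t[i] + intercept))
    return float(np.std(resids, ddof=1))

# Hypothetical threshold series (dB) at one test point across visits.
hfa_series = [28.0, 24.5, 29.0, 25.5, 27.0, 23.0]   # noisier subjective test
oct_series = [27.0, 26.5, 27.5, 26.0, 27.0, 26.5]   # steadier objective estimate

print(jackknife_residual_sd(hfa_series), jackknife_residual_sd(oct_series))
```

The noisier series yields the larger jackknife residual SD, mirroring the study's finding that subjective HFA thresholds vary more between visits than the objective OCT-based estimates. In the paper itself, the per-point variabilities were then compared across datasets with GEE models that adjust for covariates and within-eye/within-patient clustering.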