Self-supervised contrastive learning improves machine learning discrimination of full thickness macular holes from epiretinal membranes in retinal OCT scans

Tim Wheeler, Kaitlyn Hunter, Patricia Anne Garcia, Henry Li, Andrew Thomson, Allan Hunter, Courosh Mehanian
{"title":"Self-supervised contrastive learning improves machine learning discrimination of full thickness macular holes from epiretinal membranes in retinal OCT scans","authors":"Tim Wheeler, Kaitlyn Hunter, Patricia Anne Garcia, Henry Li, Andrew Thomson, Allan Hunter, Courosh Mehanian","doi":"10.1101/2023.11.14.23298513","DOIUrl":null,"url":null,"abstract":"There is a growing interest in using computer-assisted models for the detection of macular conditions using optical coherence tomography (OCT) data. As the quantity of clinical scan data of specific conditions is limited, these models are typically developed by fine-tuning a generalized network to classify specific macular conditions of interest. Full thickness macular holes (FTMH) present a condition requiring timely surgical intervention to prevent permanent vision loss. Other works on automated FTMH classification have tended to use supervised ImageNet pre-trained networks with good results but leave room for improvement. In this paper, we develop a model for FTMH classification using OCT slices around the central foveal region to pre-train a naïve network using contrastive self-supervised learning. We found that self-supervised pre-trained networks outperform ImageNet pre-trained networks despite a small training set size (284 eyes total, 51 FTMH+ eyes, 3 slices from each eye). 3D spatial contrast pre-training yields a model with an F1- score of 1.0 on holdout data (50 eyes total, 10 FTMH+), compared ImageNet pre-trained models, respectively. 
These results demonstrate that even limited data may be applied toward self-supervised pre- training to substantially improve performance for FTMH classification, indicating applicability toward other OCT-based problems.","PeriodicalId":501390,"journal":{"name":"medRxiv - Ophthalmology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Ophthalmology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2023.11.14.23298513","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

There is a growing interest in using computer-assisted models for the detection of macular conditions using optical coherence tomography (OCT) data. As the quantity of clinical scan data for specific conditions is limited, these models are typically developed by fine-tuning a generalized network to classify specific macular conditions of interest. Full thickness macular holes (FTMH) present a condition requiring timely surgical intervention to prevent permanent vision loss. Other works on automated FTMH classification have tended to use supervised ImageNet pre-trained networks with good results but leave room for improvement. In this paper, we develop a model for FTMH classification using OCT slices around the central foveal region to pre-train a naïve network using contrastive self-supervised learning. We found that self-supervised pre-trained networks outperform ImageNet pre-trained networks despite a small training set size (284 eyes total, 51 FTMH+ eyes, 3 slices from each eye). 3D spatial contrast pre-training yields a model with an F1-score of 1.0 on holdout data (50 eyes total, 10 FTMH+), outperforming ImageNet pre-trained models. These results demonstrate that even limited data may be applied toward self-supervised pre-training to substantially improve performance for FTMH classification, indicating applicability toward other OCT-based problems.
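The abstract describes contrastive self-supervised pre-training in which spatially adjacent OCT slices serve as related views. The paper's exact loss and positive-pair construction are not given here, so the following is only a minimal SimCLR-style NT-Xent sketch in NumPy, assuming each row of `z_a` and the matching row of `z_b` embed neighboring slices from the same eye (a hypothetical pairing for illustration):

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z_a, z_b: (N, D) embeddings of two views; row i of z_a and row i
    of z_b are treated as a positive pair (e.g. neighboring OCT slices
    from the same eye), with all other rows acting as negatives.
    """
    z = np.concatenate([z_a, z_b], axis=0)            # (2N, D) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z_a.shape[0]
    # each row's positive partner sits n rows away: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Nearly identical views should incur a lower loss than unrelated ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_close = nt_xent_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(z, rng.normal(size=(8, 16)))
```

Minimizing this loss pulls embeddings of neighboring slices together while pushing apart slices from other eyes, which is the general mechanism the abstract credits for outperforming ImageNet initialization.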