The quest for early detection of retinal disease: 3D CycleGAN-based translation of optical coherence tomography into confocal microscopy.

Biological Imaging · Pub Date: 2024-12-16 · eCollection Date: 2024-01-01 · DOI: 10.1017/S2633903X24000163
Xin Tian, Nantheera Anantrasirichai, Lindsay Nicholson, Alin Achim
Biological Imaging, vol. 4, e15. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11704141/pdf/

Abstract

Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, each offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy provides high-resolution, cellularly detailed color images but is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired in vivo OCT to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, which facilitates development and establishes a benchmark for cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving a Fréchet Inception Distance (FID) score of 0.766 and a Kernel Inception Distance (KID) score as low as 0.153, along with leading subjective mean opinion scores (MOS). Despite limited data, our model demonstrated superior image fidelity and quality over existing methods. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
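The KID metric reported above is, in essence, an unbiased estimate of the squared maximum mean discrepancy (MMD) between Inception features of real and generated images, conventionally computed with a cubic polynomial kernel. The sketch below illustrates that computation in pure Python on toy feature vectors standing in for Inception embeddings; the function names are illustrative and not taken from the paper's code.

```python
def poly_kernel(x, y):
    """Conventional KID kernel: (x·y / d + 1)^3, with d the feature dimension."""
    d = len(x)
    return (sum(a * b for a, b in zip(x, y)) / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased squared-MMD estimate between two sets of feature vectors.

    real_feats, fake_feats: lists of equal-length feature vectors
    (in practice, Inception embeddings of real and generated images).
    """
    m, n = len(real_feats), len(fake_feats)
    # Within-set terms exclude the diagonal (unbiased estimator).
    k_xx = sum(poly_kernel(x, y) for i, x in enumerate(real_feats)
               for j, y in enumerate(real_feats) if i != j) / (m * (m - 1))
    k_yy = sum(poly_kernel(x, y) for i, x in enumerate(fake_feats)
               for j, y in enumerate(fake_feats) if i != j) / (n * (n - 1))
    # Cross-set term uses all pairs.
    k_xy = sum(poly_kernel(x, y) for x in real_feats for y in fake_feats) / (m * n)
    return k_xx + k_yy - 2.0 * k_xy

same = [[0.1, 0.2], [0.1, 0.2], [0.1, 0.2]]
print(abs(kid(same, same)) < 1e-12)  # True: identical feature sets give KID near 0
```

Lower KID means the generated-image feature distribution better matches the real one, which is why a score of 0.153 indicates a close match; real evaluations compute the features with a pretrained Inception network and average the estimate over random subsets.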
