Constructing Synthetic Chorio-Retinal Patches using Generative Adversarial Networks

J. Kugelman, D. Alonso-Caneiro, Scott A. Read, Stephen J. Vincent, F. Chen, M. Collins
{"title":"Constructing Synthetic Chorio-Retinal Patches using Generative Adversarial Networks","authors":"J. Kugelman, D. Alonso-Caneiro, Scott A. Read, Stephen J. Vincent, F. Chen, M. Collins","doi":"10.1109/DICTA47822.2019.8946089","DOIUrl":null,"url":null,"abstract":"The segmentation of tissue layers in optical coherence tomography (OCT) images of the internal lining of the eye (the retina and choroid) is commonly performed for clinical and research purposes. However, manual segmentation of the numerous scans is time consuming, tedious and error-prone. Fortunately, machine learning-based automated approaches for image segmentation tasks are becoming more common. However, poor performance of these methods can result from a lack of quantity or diversity in the data used to train the models. Recently, generative adversarial networks (GANs) have demonstrated the ability to generate synthetic images, which may be useful for data augmentation purposes. Here, we propose the application of GANs to construct chorio-retinal patches from OCT images which may be used to augment data for a patch-based approach to boundary segmentation. Given the complexity of GAN training, a range of experiments are performed to optimize performance. We show that it is feasible to generate 32×32 versions of such patches that are visually indistinguishable from their real variants. In the best case, the segmentation performance utilizing solely synthetic data to train the model is nearly comparable to real data on all three layer boundaries of interest. The difference in mean absolute error for the inner boundary of the inner limiting membrane (ILM) [0.50 vs. 0.48 pixels], outer boundary of the retinal pigment epithelium (RPE) [0.48 vs. 0.44 pixels] and choroid-scleral interface (CSI) [4.42 vs. 4.00 pixels] shows the performance using synthetic data to be only marginally inferior. These findings highlight the potential use of GANs for data augmentation in future work with chorio-retinal OCT images.","PeriodicalId":6696,"journal":{"name":"2019 Digital Image Computing: Techniques and Applications (DICTA)","volume":"153 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA47822.2019.8946089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

The segmentation of tissue layers in optical coherence tomography (OCT) images of the internal lining of the eye (the retina and choroid) is commonly performed for clinical and research purposes. However, manual segmentation of the numerous scans is time-consuming, tedious, and error-prone. Fortunately, machine learning-based automated approaches to image segmentation are becoming more common, although their performance can suffer when the data used to train the models lacks quantity or diversity. Recently, generative adversarial networks (GANs) have demonstrated the ability to generate synthetic images, which may be useful for data augmentation. Here, we propose the application of GANs to construct chorio-retinal patches from OCT images, which may be used to augment data for a patch-based approach to boundary segmentation. Given the complexity of GAN training, a range of experiments is performed to optimize performance. We show that it is feasible to generate 32×32 versions of such patches that are visually indistinguishable from their real counterparts. In the best case, the segmentation performance when training the model solely on synthetic data is nearly comparable to that obtained with real data on all three layer boundaries of interest. The difference in mean absolute error for the inner boundary of the inner limiting membrane (ILM) [0.50 vs. 0.48 pixels], the outer boundary of the retinal pigment epithelium (RPE) [0.48 vs. 0.44 pixels], and the choroid-scleral interface (CSI) [4.42 vs. 4.00 pixels] shows the performance using synthetic data to be only marginally inferior. These findings highlight the potential use of GANs for data augmentation in future work with chorio-retinal OCT images.
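
Illustrative example

The paper does not reproduce its network architecture here, but the abstract describes generating 32×32 single-channel OCT patches with a GAN. The following is a minimal, hypothetical sketch (assuming a DCGAN-style generator in PyTorch; the class name PatchGenerator, latent size, and layer widths are illustrative assumptions, not the authors' reported design) of how a latent vector can be upsampled to a 32×32 grayscale patch of the kind used for patch-based boundary segmentation.

```python
# Minimal sketch only: a DCGAN-style generator producing 32x32 single-channel
# patches. This is NOT the architecture from the paper; all sizes are assumptions.
import torch
import torch.nn as nn


class PatchGenerator(nn.Module):
    """Hypothetical generator: latent vector z -> 32x32 grayscale patch in [-1, 1]."""

    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # (N, latent_dim, 1, 1) -> (N, 256, 4, 4)
            nn.ConvTranspose2d(latent_dim, 256, kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # (N, 256, 4, 4) -> (N, 128, 8, 8)
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # (N, 128, 8, 8) -> (N, 64, 16, 16)
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # (N, 64, 16, 16) -> (N, 1, 32, 32), squashed to [-1, 1]
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


if __name__ == "__main__":
    generator = PatchGenerator()
    z = torch.randn(16, 100, 1, 1)   # batch of 16 latent vectors sampled from N(0, I)
    patches = generator(z)           # -> torch.Size([16, 1, 32, 32])
    print(patches.shape)
```

In an adversarial training setup such a generator would be paired with a discriminator and trained on real chorio-retinal patches; the resulting synthetic patches could then augment (or, as evaluated in the paper, replace) the real patches used to train the boundary segmentation network.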