Retinal OCT Denoising with Pseudo-Multimodal Fusion Network.

Dewei Hu, Joseph D Malone, Yigit Atay, Yuankai K Tao, Ipek Oguz
{"title":"伪多模态融合网络视网膜OCT去噪。","authors":"Dewei Hu,&nbsp;Joseph D Malone,&nbsp;Yigit Atay,&nbsp;Yuankai K Tao,&nbsp;Ipek Oguz","doi":"10.1007/978-3-030-63419-3_13","DOIUrl":null,"url":null,"abstract":"<p><p>Optical coherence tomography (OCT) is a prevalent imaging technique for retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise-ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. Evaluated by intensity-based and structural metrics, the result shows that our method can effectively suppress the speckle noise and enhance the contrast between retina layers while the overall structure and small blood vessels are preserved. 
Compared to the single modality network, our method improves the structural similarity with low noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.</p>","PeriodicalId":93803,"journal":{"name":"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings","volume":"12069 ","pages":"125-135"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9241435/pdf/nihms-1752651.pdf","citationCount":"6","resultStr":"{\"title\":\"Retinal OCT Denoising with Pseudo-Multimodal Fusion Network.\",\"authors\":\"Dewei Hu,&nbsp;Joseph D Malone,&nbsp;Yigit Atay,&nbsp;Yuankai K Tao,&nbsp;Ipek Oguz\",\"doi\":\"10.1007/978-3-030-63419-3_13\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Optical coherence tomography (OCT) is a prevalent imaging technique for retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise-ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. 
Evaluated by intensity-based and structural metrics, the result shows that our method can effectively suppress the speckle noise and enhance the contrast between retina layers while the overall structure and small blood vessels are preserved. Compared to the single modality network, our method improves the structural similarity with low noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.</p>\",\"PeriodicalId\":93803,\"journal\":{\"name\":\"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings\",\"volume\":\"12069 \",\"pages\":\"125-135\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9241435/pdf/nihms-1752651.pdf\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-63419-3_13\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2020/11/20 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ophthalmic medical image analysis : 7th International Workshop, OMIA 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-63419-3_13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/11/20 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract


Optical coherence tomography (OCT) is a prevalent imaging technique for the retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise ratio (SNR), this requires longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan, but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. Evaluated by intensity-based and structural metrics, the results show that our method can effectively suppress speckle noise and enhance the contrast between retinal layers while the overall structure and small blood vessels are preserved. Compared to the single-modality network, our method improves the structural similarity with the low-noise B-scan from 0.559 ± 0.033 to 0.576 ± 0.031.
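The frame-averaging trade-off motivating the paper can be illustrated with a small numpy sketch on synthetic data (this is not the authors' pipeline; the image, noise model, and frame count are illustrative assumptions): averaging K independent multiplicative-speckle realizations reduces the residual noise level by roughly a factor of sqrt(K), at the cost of acquiring K frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" B-scan: a smooth horizontal intensity gradient
# standing in for retinal layers (illustrative only).
clean = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))

def speckled(img, sigma=0.3):
    """Simulate multiplicative speckle: I = img * n, with n ~ N(1, sigma)."""
    return img * rng.normal(1.0, sigma, img.shape)

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Single noisy frame vs. the average of 8 repeated acquisitions.
single = speckled(clean)
averaged = np.mean([speckled(clean) for _ in range(8)], axis=0)

print(f"single-frame RMSE:  {rmse(single, clean):.3f}")
print(f"8-frame-avg RMSE:   {rmse(averaged, clean):.3f}")  # ~1/sqrt(8) of the above
```

The denoising problem the paper addresses is precisely to approach the averaged image's quality from the single frame alone, avoiding the longer acquisition.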
