Rethinking Data Augmentation for Single-Source Domain Generalization in OCT Image Segmentation.

IF 6.8 · JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · CAS Region 2 (Medicine) · IEEE Journal of Biomedical and Health Informatics · Pub Date: 2025-08-01 · Pages: 5642-5655 · DOI: 10.1109/JBHI.2025.3543630
Jiayi Lu, Shaodong Ma, Yonghuai Liu, Yuhui Ma, Lei Mou, Yang Jiang, Yitian Zhao
Citations: 0

Abstract

Domain shifts between samples acquired with different instruments are one of the major challenges in accurate segmentation of Optical Coherence Tomography (OCT) images. Given that OCT images may be acquired with different devices in different clinical centers, this study presents a style and structure data augmentation (SSDA) method to improve the adaptability of segmentation models. Inspired by our initial analysis of OCT domain differences, we propose an innovative hypothesis that domain shifts are primarily due to differences in image style and anatomical structure, which further guides the design of our method. By designing a modality-specific NURBS curve for style enhancement and implementing global and local elastic deformation fields, SSDA addresses both stylistic and structural variations in OCT data. Global deformations simulate changes in retinal curvature, while local deformations model layer-specific changes observed in OCT images. We validate our hypothesis through a comprehensive evaluation conducted on five OCT data domains, each differing in device type and imaging conditions. We train models on each of these domains for single-domain generalisation experiments and evaluate performance on the remaining unseen domains. The results show that SSDA outperforms existing methods when segmenting OCT images from different sources with different requirements for retinal layer segmentation. Specifically, across five different source-domain generalisation experiments, SSDA achieves approximately 1.6% higher Dice and 2.6% higher mIoU, underscoring its superior segmentation accuracy and robust generalisation across all evaluated unseen domains. The source code is available at https://github.com/iMED-Lab/SSDA-OCTSeg.
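The three augmentation ideas in the abstract can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration under stated assumptions, not the authors' implementation: a monotone piecewise-linear intensity curve stands in for the paper's modality-specific NURBS curve, and a parabolic column shift and a Gaussian-smoothed random displacement field are simplified stand-ins for the paper's global and local elastic deformation fields.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_style_curve(img, n_ctrl=6, rng=None):
    """Style augmentation: remap intensities through a random monotone curve.

    Stand-in for the paper's NURBS-based curve: sorted random control
    points define a monotonically increasing piecewise-linear mapping,
    so image content is preserved while contrast/style changes.
    """
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 1.0, n_ctrl)          # input intensity knots
    y = np.sort(rng.uniform(0.0, 1.0, n_ctrl)) # monotone output values
    y[0], y[-1] = 0.0, 1.0                     # pin the endpoints
    return np.interp(np.clip(img, 0.0, 1.0), x, y)

def random_curvature(img, max_shift=8.0, rng=None):
    """Global deformation: bend the B-scan with a parabolic per-column
    vertical shift, mimicking a change in retinal curvature."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    a = rng.uniform(-max_shift, max_shift)
    shift = a * np.linspace(-1.0, 1.0, w) ** 2     # vertical shift per column
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(img, [ys + shift[None, :], xs],
                           order=1, mode="reflect")

def random_elastic(img, alpha=15.0, sigma=4.0, rng=None):
    """Local deformation: smooth random displacement field (the classic
    elastic-deformation recipe), modelling layer-specific distortions."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(img, [ys + dy, xs + dx], order=1, mode="reflect")
```

For a segmentation task, the same deformation fields would also be applied to the label map (with nearest-neighbour interpolation), while the style curve is applied to the image only.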

Source journal
IEEE Journal of Biomedical and Health Informatics (COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS)
CiteScore: 13.60
Self-citation rate: 6.50%
Articles per year: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.