TIG-UDA: Generative unsupervised domain adaptation with transformer-embedded invariance for cross-modality medical image segmentation

IF 4.9 | CAS Q2 (Medicine) | JCR Q1 (Engineering, Biomedical) | Biomedical Signal Processing and Control | Pub Date: 2025-03-05 | DOI: 10.1016/j.bspc.2025.107722
Jiapeng Li, Yijia Chen, Shijie Li, Lisheng Xu, Wei Qian, Shuai Tian, Lin Qi
{"title":"TIG-UDA: Generative unsupervised domain adaptation with transformer-embedded invariance for cross-modality medical image segmentation","authors":"Jiapeng Li ,&nbsp;Yijia Chen ,&nbsp;Shijie Li ,&nbsp;Lisheng Xu ,&nbsp;Wei Qian ,&nbsp;Shuai Tian ,&nbsp;Lin Qi","doi":"10.1016/j.bspc.2025.107722","DOIUrl":null,"url":null,"abstract":"<div><div>Unsupervised domain adaptation (UDA) in medical image segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain, especially when there are significant differences in data distribution across multi-modal medical images. Traditional UDA methods typically involve image translation and segmentation modules. However, during image translation, the anatomical structure of the generated images may vary, resulting in a mismatch of source domain labels and impacting subsequent segmentation. In addition, during image segmentation, although the Transformer architecture is used in UDA tasks due to its superior global context capture ability, it may not effectively facilitate knowledge transfer in UDA tasks due to lacking the adaptability of the self-attention mechanism in Transformers. To address these issues, we propose a generative UDA network with invariance mining, named TIG-UDA, for cross-modality multi-organ medical image segmentation, which includes an image style translation network (ISTN) and an invariance adaptation segmentation network (IASN). In ISTN, we not only introduce a structure preservation mechanism to guide image generation to achieve anatomical structure consistency, but also align the latent semantic features of source and target domain images to enhance the quality of the generated images. In IASN, we propose an invariance adaptation module that can extract the invariability weights of learned features in the attention mechanism of Transformer to compensate for the differences between source and target domains. Experimental results on two public cross-modality datasets (MS-CMR dataset and Abdomen dataset) show the promising segmentation performance of TIG-UDA compared with other state-of-the-art UDA methods.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107722"},"PeriodicalIF":4.9000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425002332","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Unsupervised domain adaptation (UDA) in medical image segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain, especially when data distributions differ significantly across multi-modal medical images. Traditional UDA methods typically combine image translation and segmentation modules. However, during image translation the anatomical structure of the generated images may change, producing a mismatch with the source-domain labels and degrading subsequent segmentation. In addition, although the Transformer architecture is adopted in UDA segmentation for its superior ability to capture global context, it may not effectively facilitate knowledge transfer because its self-attention mechanism lacks adaptability across domains. To address these issues, we propose a generative UDA network with invariance mining, named TIG-UDA, for cross-modality multi-organ medical image segmentation; it consists of an image style translation network (ISTN) and an invariance adaptation segmentation network (IASN). In ISTN, we introduce a structure preservation mechanism that guides image generation toward anatomical consistency, and we align the latent semantic features of source- and target-domain images to improve the quality of the generated images. In IASN, we propose an invariance adaptation module that extracts the invariance weights of learned features from the Transformer attention mechanism to compensate for the differences between source and target domains. Experimental results on two public cross-modality datasets (the MS-CMR and Abdomen datasets) show the promising segmentation performance of TIG-UDA compared with other state-of-the-art UDA methods.
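The abstract describes the two networks only at a high level. As a purely illustrative sketch (not the authors' released code), the following PyTorch snippet shows one way a Transformer self-attention block could be re-weighted by per-channel domain-invariance scores, in the spirit of the invariance adaptation module in IASN. The class name InvarianceAdaptedAttention, the helper invariance_weights, and the statistic-matching weighting rule are all assumptions introduced here for illustration.

```python
# Hypothetical sketch (not the authors' implementation): a self-attention block
# whose outputs are re-weighted by "invariance" scores estimated from how well
# source- and target-domain feature statistics agree per channel.
import torch
import torch.nn as nn


class InvarianceAdaptedAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    @staticmethod
    def invariance_weights(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # Channel-wise statistics over (batch, tokens); channels whose mean and
        # std agree across domains are treated as more domain-invariant.
        src_stats = torch.stack([src.mean(dim=(0, 1)), src.std(dim=(0, 1))])
        tgt_stats = torch.stack([tgt.mean(dim=(0, 1)), tgt.std(dim=(0, 1))])
        gap = (src_stats - tgt_stats).abs().mean(dim=0)        # shape: (dim,)
        return torch.softmax(-gap, dim=0) * gap.numel()        # mean weight ~ 1

    def forward(self, src_tokens: torch.Tensor, tgt_tokens: torch.Tensor):
        # src_tokens, tgt_tokens: (batch, tokens, dim) sequences produced by a
        # shared Transformer encoder for source- and target-domain images.
        src_out, _ = self.attn(src_tokens, src_tokens, src_tokens)
        tgt_out, _ = self.attn(tgt_tokens, tgt_tokens, tgt_tokens)
        w = self.invariance_weights(src_out, tgt_out)          # (dim,)
        src_out = self.norm(src_tokens + w * src_out)          # residual, reweighted
        tgt_out = self.norm(tgt_tokens + w * tgt_out)
        return src_out, tgt_out


if __name__ == "__main__":
    block = InvarianceAdaptedAttention(dim=64)
    src = torch.randn(2, 196, 64)   # e.g. 14x14 patch tokens from labeled MR images
    tgt = torch.randn(2, 196, 64)   # unlabeled target-modality patch tokens
    s, t = block(src, tgt)
    print(s.shape, t.shape)         # torch.Size([2, 196, 64]) twice
```

In the paper itself, the invariance weights are extracted from the learned attention features; the simple mean/standard-deviation matching above is only a stand-in to make the weighting idea concrete.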
Source journal: Biomedical Signal Processing and Control (Engineering & Technology – Biomedical Engineering)
CiteScore: 9.80
Self-citation rate: 13.70%
Annual publication volume: 822
Review time: 4 months
Journal description: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management. Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.
Latest articles in this journal:
Attention-enhanced U-Net based network for cancerous tissue segmentation
Gaussian regressed generative adversarial network based hermitian extreme gradient boosting for plant leaf disease detection
Computer-aided diagnosis of spinal deformities based on keypoints detection in human back depth images
Advancing cardiovascular risk prediction: A fusion of SVM models with fuzzy logic and the Sugeno integral
Altered visual network modularity and communication in ADHD subtypes: Classification via source-localized EEG modules