Pub Date: 2025-10-14 | DOI: 10.1109/tmi.2025.3620406
MoE-Morph: Lightweight Pyramid Model with Heterogeneous Mixture of Experts for Deformable Medical Image Registration
Hao Lin, Yonghong Song, You Su, Yunfei Ma
Deformable image registration aims to achieve nonlinear alignment of image spaces by estimating dense displacement fields. It is widely used in clinical tasks such as surgical planning, assisted diagnosis, and surgical navigation. Although efficient, deep learning registration methods often struggle with large, complex displacements. Pyramid-based approaches address this with a coarse-to-fine strategy, but their single-path feature processing can lead to error accumulation. In this paper, we introduce a dense Mixture-of-Experts (MoE) pyramid registration model that uses routing schemes and multiple heterogeneous experts to increase the width and flexibility of feature processing within a single layer. The collaboration among heterogeneous experts enables the model to retain more precise details and maintain greater feature freedom when dealing with complex displacements. We use deformation fields as the sole medium of information transmission between pyramid levels; these inter-level deformation-field interactions encourage the model to focus on matching feature locations and to register in the correct direction. We do not employ complex mechanisms such as attention or Vision Transformers (ViTs), keeping the model in its simplest form. This strong deformation capability allows the model to register volumes directly and accurately without a preliminary affine registration step. Experimental results show that the model achieves outstanding performance across four public datasets covering brain, lung, and abdominal multi-modal registration. The code will be published at https://github.com/Darlinglinlinlin/MOE_Morph.
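The core mechanism the abstract describes, a dense router that softly combines several structurally different experts within one pyramid level, can be illustrated with a short sketch. The PyTorch block below is a minimal, hypothetical rendering under assumed design choices (per-voxel softmax routing, convolutional experts with different receptive fields); it is not the authors' implementation, and the class name HeterogeneousMoE3D is invented for illustration.

```python
# Minimal sketch of a dense heterogeneous Mixture-of-Experts block for 3D
# feature maps. All structural details are illustrative assumptions, not
# the paper's code: a 1x1x1 conv router produces per-voxel softmax weights
# over experts that share input/output shape but differ in structure.
import torch
import torch.nn as nn

class HeterogeneousMoE3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Heterogeneous experts: same shape in/out, different receptive fields.
        self.experts = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),  # local detail
            nn.Conv3d(channels, channels, kernel_size=5, padding=2),  # wider context
            nn.Sequential(                                            # bottleneck expert
                nn.Conv3d(channels, channels // 2, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels // 2, channels, kernel_size=3, padding=1),
            ),
        ])
        # Dense router: per-voxel mixing weights over the experts.
        self.router = nn.Conv3d(channels, len(self.experts), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.router(x), dim=1)           # (B, E, D, H, W)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, C, D, H, W)
        return (weights.unsqueeze(2) * outputs).sum(dim=1)       # (B, C, D, H, W)

feat = torch.randn(1, 16, 24, 24, 24)
print(HeterogeneousMoE3D(16)(feat).shape)  # torch.Size([1, 16, 24, 24, 24])
```

Stacking such blocks at each pyramid level, with only the predicted deformation field passed between levels, would mirror the deformation-field-only information-transmission paradigm the abstract describes.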
{"title":"MoE-Morph: Lightweight Pyramid Model with Heterogeneous Mixture of Experts for Deformable Medical Image Registration.","authors":"Hao Lin,Yonghong Song,You Su,Yunfei Ma","doi":"10.1109/tmi.2025.3620406","DOIUrl":"https://doi.org/10.1109/tmi.2025.3620406","url":null,"abstract":"Deformable image registration aims to achieve nonlinear alignment of image spaces by estimating dense displacement fields. It is widely used in clinical tasks such as surgical planning, assisted diagnosis, and surgical navigation. While efficient, deep learning registration methods often struggle with large, complex displacements. Pyramid-based approaches address this with a coarse-to-fine strategy, but their single-feature processing can lead to error accumulation. In this paper, we introduce a dense Mixture of Experts (MoE) pyramid registration model, using routing schemes and multiple heterogeneous experts to increase the width and flexibility of feature processing within a single layer. The collaboration among heterogeneous experts enables the model to retain more precise details and maintain greater feature freedom when dealing with complex displacements. We use only deformation fields as the information transmission paradigm between different levels, with deformation field interactions between layers, which encourages the model to focus on the feature location matching process and perform registration in the correct direction. We do not utilize any complex mechanisms such as attention or ViT, keeping the model at its simplest form. The powerful deformable capability allows the model to perform volume registration directly and accurately without the need for affine registration. Experimental results show that the model achieves outstanding performance across four public datasets, including brain registration, lung registration, and abdominal multi-modal registration. The code will be published at https://github.com/Darlinglinlinlin/MOE_Morph.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"37 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145288378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-14 | DOI: 10.1109/tmi.2025.3621093
Cell Instance Segmentation: The Devil Is in the Boundaries
Peixian Liang, Yifan Ding, Yizhe Zhang, Jianxu Chen, Hao Zheng, Hongxiao Wang, Yejia Zhang, Guangyu Meng, Tim Weninger, Michael Niemier, X. Sharon Hu, Danny Z Chen
{"title":"Cell Instance Segmentation: The Devil Is in the Boundaries","authors":"Peixian Liang, Yifan Ding, Yizhe Zhang, Jianxu Chen, Hao Zheng, Hongxiao Wang, Yejia Zhang, Guangyu Meng, Tim Weninger, Michael Niemier, X. Sharon Hu, Danny Z Chen","doi":"10.1109/tmi.2025.3621093","DOIUrl":"https://doi.org/10.1109/tmi.2025.3621093","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"71 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-14 | DOI: 10.1109/tmi.2025.3621452
Uncertainty-guided Prototype Reliability Enhancement Network for Few-Shot Medical Image Segmentation
Junfei Hu, Tao Zhou, Kaiwen Huang, Yi Zhou, Haofeng Zhang, Boqiang Fan, Huazhu Fu
Few-Shot Learning (FSL) has garnered increasing attention in data-scarce scenarios, particularly in medical segmentation tasks where only a few labeled samples are available. Existing few-shot segmentation methods typically learn prototypes from support images and employ nearest-neighbor search to segment query images. Despite notable progress, learning effective prototypes for each class remains challenging. In this paper, we propose an Uncertainty-guided Prototype Reliability Enhancement Network (UPRE-Net) for few-shot medical image segmentation. Specifically, we present a dual-support branch that maximizes the information extracted from support images through augmentation techniques. To enhance the reliability of prototypes, we propose an Uncertainty-guided Prototype Generation (UPG) module, which first extracts both global and local prototypes for each class and then applies uncertainty measures to select the most informative ones. Additionally, to effectively combine the predictions from the dual-support branch, we present a Reliable Dynamic Fusion (RDF) module that dynamically integrates the two predictions into a more reliable output. Furthermore, we present an Uncertainty-induced Weighted Loss (UWL) that directs the model's attention to regions with high uncertainty. Experiments on four benchmark medical image datasets demonstrate that our proposed model significantly outperforms state-of-the-art methods. The code will be released at https://github.com/taozh2017/UPRENet.
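For context, the prototype-and-matching pipeline this abstract builds on (masked average pooling over support features, nearest-prototype matching on the query, and an uncertainty-weighted loss) can be sketched as below. This is a generic illustration under assumed shapes and a simple entropy-based uncertainty weight, not the UPRE-Net code; all function names are hypothetical.

```python
# Minimal sketch of prototype-based few-shot segmentation with an
# entropy-style uncertainty weight on the loss. Hypothetical illustration
# of the general mechanism, not the paper's implementation.
import torch
import torch.nn.functional as F

def class_prototype(support_feat, support_mask):
    """Masked average pooling: (B, C, H, W) features, (B, 1, H, W) binary mask."""
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    return (support_feat * mask).sum(dim=(0, 2, 3)) / mask.sum().clamp(min=1e-6)  # (C,)

def prototype_logits(query_feat, prototypes, tau=20.0):
    """Cosine similarity between query features and each class prototype."""
    q = F.normalize(query_feat, dim=1)                    # (B, C, H, W)
    p = F.normalize(torch.stack(prototypes), dim=1)       # (K, C)
    return tau * torch.einsum("bchw,kc->bkhw", q, p)      # (B, K, H, W)

def uncertainty_weighted_loss(logits, target):
    """Cross-entropy re-weighted by per-pixel predictive entropy (uncertainty)."""
    prob = logits.softmax(dim=1)
    entropy = -(prob * prob.clamp(min=1e-8).log()).sum(dim=1)  # (B, H, W)
    ce = F.cross_entropy(logits, target, reduction="none")     # (B, H, W)
    return ((1.0 + entropy) * ce).mean()                       # up-weight uncertain pixels

# Toy usage: one background and one foreground prototype.
sf = torch.randn(2, 32, 16, 16); sm = torch.randint(0, 2, (2, 1, 16, 16)).float()
protos = [class_prototype(sf, 1 - sm), class_prototype(sf, sm)]
logits = prototype_logits(torch.randn(1, 32, 16, 16), protos)  # (1, 2, 16, 16)
print(uncertainty_weighted_loss(logits, torch.randint(0, 2, (1, 16, 16))))
```

The UPG, RDF, and dual-support components described in the abstract would sit on top of this basic matching step, refining which prototypes are kept and how the two branch predictions are fused.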
{"title":"Uncertainty-guided Prototype Reliability Enhancement Network for Few-Shot Medical Image Segmentation.","authors":"Junfei Hu,Tao Zhou,Kaiwen Huang,Yi Zhou,Haofeng Zhang,Boqiang Fan,Huazhu Fu","doi":"10.1109/tmi.2025.3621452","DOIUrl":"https://doi.org/10.1109/tmi.2025.3621452","url":null,"abstract":"Few-Shot Learning (FSL) has garnered increasing attention for data-scarce scenarios, particularly in medical segmentation tasks where only a few labeled data points are available. Existing few-shot segmentation methods typically learn prototypes from support images and employ nearest-neighbor searching to segment query images. Despite notable progress, effectively learning prototypes for each class remains a challenging task to achieve promising results. In this paper, we propose an Uncertainty-guided Prototype Reliability Enhancement Network (UPRE-Net) for few-shot medical image segmentation. Specifically, we present a dual-support branch to maximize the extraction of information from support images through augmentation techniques. To enhance the reliability of prototypes, we propose an Uncertainty-guided Prototype Generation (UPG) module. Within the UPG module, we first extract both global and local prototypes for each class and then apply uncertainty measures to select the most informative prototypes. Additionally, to effectively combine the prediction results from the dual-support branch, we present a Reliable Dynamic Fusion (RDF) module. This module dynamically integrates the two prediction results to generate a more reliable output. Furthermore, we present an Uncertainty-induced Weighted Loss (UWL) to ensure that the model pays more attention to these regions with high uncertainty. Experiments on four benchmark medical image datasets demonstrate that our proposed model significantly outperforms state-of-the-art methods. The code will be released at https://github.com/taozh2017/UPRENet.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"1 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145288544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-13 | DOI: 10.1109/tmi.2025.3620714
PET Head Motion Estimation Using Supervised Deep Learning with Attention
Zhuotong Cai, Tianyi Zeng, Jiazhen Zhang, Eléonore V. Lieffrig, Kathryn Fontaine, Chenyu You, Enette Mae Revilla, James S. Duncan, Jingmin Xin, Yihuan Lu, John A. Onofrey
{"title":"PET Head Motion Estimation Using Supervised Deep Learning with Attention","authors":"Zhuotong Cai, Tianyi Zeng, Jiazhen Zhang, Eléonore V. Lieffrig, Kathryn Fontaine, Chenyu You, Enette Mae Revilla, James S. Duncan, Jingmin Xin, Yihuan Lu, John A. Onofrey","doi":"10.1109/tmi.2025.3620714","DOIUrl":"https://doi.org/10.1109/tmi.2025.3620714","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"117 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145282738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-13 | DOI: 10.1109/tmi.2025.3620585
CiSeg: Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation via Causal Intervention
Peiqing Lv, Yaonan Wang, Min Liu, Zhe Zhang, Yunfeng Ma, Licheng Liu, Erik Meijering
{"title":"CiSeg: Unsupervised Cross-Modality Adaptation for 3D Medical Image Segmentation via Causal Intervention","authors":"Peiqing Lv, Yaonan Wang, Min Liu, Zhe Zhang, Yunfeng Ma, Licheng Liu, Erik Meijering","doi":"10.1109/tmi.2025.3620585","DOIUrl":"https://doi.org/10.1109/tmi.2025.3620585","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"76 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145282735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised High-Order Implicit Neural Representation with Line Attention for Metal Artifact Reduction","authors":"Hongyu Chen, Shaoguang Huang, Wei He, Guangyi Yang, Hongyan Zhang","doi":"10.1109/tmi.2025.3620222","DOIUrl":"https://doi.org/10.1109/tmi.2025.3620222","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"37 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145260753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-10 | DOI: 10.1109/tmi.2025.3618754
Deep Few-view High-resolution Photon-counting CT at Halved Dose for Extremity Imaging
Mengzhou Li, Chuang Niu, Ge Wang, Maya R Amma, Krishna M Chapagain, Stefan Gabrielson, Andrew Li, Kevin Jonker, Niels de Ruiter, Jennifer A Clark, Phil Butler, Anthony Butler, Hengyong Yu
{"title":"Deep Few-view High-resolution Photon-counting CT at Halved Dose for Extremity Imaging","authors":"Mengzhou Li, Chuang Niu, Ge Wang, Maya R Amma, Krishna M Chapagain, Stefan Gabrielson, Andrew Li, Kevin Jonker, Niels de Ruiter, Jennifer A Clark, Phil Butler, Anthony Butler, Hengyong Yu","doi":"10.1109/tmi.2025.3618754","DOIUrl":"https://doi.org/10.1109/tmi.2025.3618754","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"19 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145260747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-09 | DOI: 10.1109/tmi.2025.3619837
Source-Free Active Domain Adaptation via Influential-Points-Guided Progressive Teacher for Medical Image Segmentation
Yong Chen, Xiangde Luo, Renyi Chen, Yiyue Li, Han Zhang, He Lyu, Huan Song, Kang Li
Domain adaptation in medical image segmentation enables pre-trained models to generalize to new target domains. Given limited annotated data and privacy constraints, Source-Free Active Domain Adaptation (SFADA) methods offer promising solutions by selecting a few target samples for labeling without accessing source samples. However, in a fully source-free setting, existing works have not fully explored how to select these target samples in a class-balanced manner or how to conduct robust model adaptation using both labeled and unlabeled samples. In this study, we find that boundary samples with source-like semantics but sharply discrepant predictions are beneficial for SFADA. We define these samples as the most influential points and propose a slice-wise framework that uses influential-points learning to discover them. Specifically, we detect source-like samples to retain source-specific knowledge. For each target sample, an adaptive K-nearest-neighbor algorithm based on local density constructs a neighborhood of source-like samples for knowledge transfer. We then compute a class-balanced Kullback-Leibler divergence over these neighborhoods to obtain an influential-score ranking, and a diverse subset of the highest-ranked target samples (the influential points) is manually annotated. Furthermore, we design a progressive teacher model to facilitate SFADA for medical image segmentation. Guided by the influential points, this model independently generates and utilizes pseudo-labels to mitigate error accumulation. To further suppress noise, curriculum learning is incorporated so that the model progressively leverages reliable supervision signals from the pseudo-labels. Experiments on multiple benchmarks demonstrate that our method outperforms state-of-the-art methods with only 2.5% of the labeling budget.
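The influential-score ranking at the heart of this selection step can be sketched as follows. This is a deliberately simplified illustration: it uses a fixed K rather than the paper's density-adaptive K, approximates the class-balanced KL term with inverse-frequency class weights, and all names are hypothetical assumptions rather than the authors' code.

```python
# Minimal sketch of influential-score ranking: for each target sample,
# compare its prediction with the consensus of its nearest source-like
# neighbors via a class-weighted KL divergence, then rank descending.
import numpy as np

def influential_scores(target_probs, target_feats, source_feats, source_probs, k=5):
    """Return target indices ranked by influence (most influential first).

    target_probs: (N, K) softmax predictions; target_feats: (N, D) features
    source_feats: (M, D) features of detected source-like samples
    source_probs: (M, K) predictions of those source-like samples
    """
    # Inverse-frequency class weights stand in for the class-balanced KL term.
    freq = target_probs.mean(axis=0)
    w = 1.0 / (freq + 1e-8)
    w /= w.sum()
    scores = np.zeros(len(target_probs))
    for i, (p, f) in enumerate(zip(target_probs, target_feats)):
        # Neighborhood of source-like samples in feature space (fixed K here).
        nearest = np.argsort(np.linalg.norm(source_feats - f, axis=1))[:k]
        q = source_probs[nearest].mean(axis=0)  # neighborhood consensus
        # Weighted KL(p || q): large when a sample contradicts its neighbors.
        scores[i] = np.sum(w * p * np.log((p + 1e-8) / (q + 1e-8)))
    return np.argsort(-scores)

rng = np.random.default_rng(0)
tp, tf = rng.dirichlet(np.ones(4), 100), rng.normal(size=(100, 8))
sp, sf = rng.dirichlet(np.ones(4), 200), rng.normal(size=(200, 8))
print(influential_scores(tp, tf, sf, sp)[:10])  # candidates for manual annotation
```

The top-ranked samples, those that look source-like in feature space yet disagree sharply with their neighbors' predictions, are the ones the framework would route to manual annotation.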
{"title":"Source-Free Active Domain Adaptation via Influential-Points-Guided Progressive Teacher for Medical Image Segmentation.","authors":"Yong Chen,Xiangde Luo,Renyi Chen,Yiyue Li,Han Zhang,He Lyu,Huan Song,Kang Li","doi":"10.1109/tmi.2025.3619837","DOIUrl":"https://doi.org/10.1109/tmi.2025.3619837","url":null,"abstract":"Domain adaptation in medical image segmentation enables pre-trained models to generalize to new target domains. Given limited annotated data and privacy constraints, Source-Free Active Domain Adaptation (SFADA) methods provide promising solutions by selecting a few target samples for labeling without accessing source samples. However, in a fully source-free setting, existing works have not fully explored how to select these target samples in a class-balanced manner and how to conduct robust model adaptation using both labeled and unlabeled samples. In this study, we discover that boundary samples with source-like semantics but sharp predictive discrepancies are beneficial for SFADA. We define these samples as the most influential points and propose a slice-wise framework using influential points learning to explore them. Specifically, we detect source-like samples to retain source-specific knowledge. For each target sample, an adaptive K-nearest neighbor algorithm based on local density is introduced to construct neighborhoods of source-like samples for knowledge transfer. We then propose a class-balanced Kullback-Leibler divergence for these neighborhoods, calculating it to obtain an influential score ranking. A diverse subset of the highest-ranked target samples (considered influential points) is manually annotated. Furthermore, we design a progressive teacher model to facilitate SFADA for medical image segmentation. Guided by influential points, this model independently generates and utilizes pseudo-labels to mitigate error accumulation. To further suppress noise, curriculum learning is incorporated into the model to progressively leverage reliable supervision signals from pseudo-labels. Experiments on multiple benchmarks demonstrate that our method outperforms state-of-the-art methods even with only 2.5% of the labeling budget.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"126 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-09 | DOI: 10.1109/tmi.2025.3619809
Detailed delineation of the fetal brain in diffusion MRI via multi-task learning
Davood Karimi, Camilo Calixto, Haykel Snoussi, Bo Li, Maria Camila Cortes-Albornoz, Clemente Velasco-Annis, Caitlin Rollins, Lana Pierotich, Camilo Jaimes, Ali Gholipour, Simon K. Warfield
{"title":"Detailed delineation of the fetal brain in diffusion MRI via multi-task learning","authors":"Davood Karimi, Camilo Calixto, Haykel Snoussi, Bo Li, Maria Camila Cortes-Albornoz, Clemente Velasco-Annis, Caitlin Rollins, Lana Pierotich, Camilo Jaimes, Ali Gholipour, Simon K. Warfield","doi":"10.1109/tmi.2025.3619809","DOIUrl":"https://doi.org/10.1109/tmi.2025.3619809","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"57 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}