Pub Date: 2025-11-10  DOI: 10.1109/tmi.2025.3630705
Derivation and validation of compartment models: implications for dynamic imaging
Jérôme Kowalski, Lorenzo Sala, Dirk Drasdo, Irene Vignon-Clementel
Pub Date: 2025-11-10  DOI: 10.1109/tmi.2025.3631049
FunOTTA: On-the-Fly Adaptation on Cross-Domain Fundus Image via Stable Test-time Training
Qian Zeng, Le Zhang, Yipeng Liu, Ce Zhu, Fan Zhang
Pub Date: 2025-11-07  DOI: 10.1109/tmi.2025.3630584
Efficient Large-Deformation Medical Image Registration via Recurrent Dynamic Correlation
Tianran Li, Marius Staring, Yuchuan Qiao
Pub Date: 2025-11-04  DOI: 10.1109/tmi.2025.3628764
Structure-Preserving Two-Stage Diffusion Model for CBCT Metal Artifact Reduction
Xingyue Wang, Zhentao Liu, Haoshen Wang, Minhui Tan, Zhiming Cui
Cone-beam computed tomography (CBCT) plays a crucial role in dental clinical applications, but metal implants often cause severe artifacts, challenging accurate diagnosis. Most deep learning-based methods attempt to achieve metal artifact reduction (MAR) by training neural networks on paired simulated data. However, they often struggle to preserve anatomical structures around metal implants and fail to bridge the domain gap between real-world and simulated data, leading to suboptimal performance in practice. To address these issues, we propose a two-stage diffusion framework with a strong emphasis on structure preservation and domain generalization. In Stage I, a structure-aware diffusion model is trained to extract artifact-free clean edge maps from artifact-affected CBCT images. This training is supervised by tooth contours derived from the fusion of intraoral scan (IOS) data and CBCT images to improve generalization to real-world data. In Stage II, the extracted clean edge maps serve as structural priors to guide the MAR process. Additionally, we introduce a segmentation-guided sampling (SGS) strategy in this stage to further enhance structure preservation during inference. Experiments on both simulated and real-world data demonstrate that our method achieves superior artifact reduction and better preservation of dental structures compared to competing approaches.
Pub Date: 2025-11-04  DOI: 10.1109/tmi.2025.3628662
Editorial Criteria for TMI Papers: Significance, Innovation, Evaluation, and Reproducibility
Hongming Shan, Uwe Kruger, Ge Wang
IEEE Transactions on Medical Imaging (TMI) publishes high-quality work that innovates imaging methods and advances medicine, science, and engineering. While artificial intelligence (AI) is currently prominent, the journal's scope extends well beyond AI-based imaging to encompass a full spectrum of imaging methods involving CT, MRI, PET, SPECT, ultrasound, optical, and hybrid systems; image reconstruction and processing (ranging from analytical and iterative algorithms to emerging deep imaging approaches); quantitative imaging and analysis (radiomics, biomarkers, and health analytics); image-guided interventions and therapy; and multimodal, multi-scale imaging with integration of imaging and non-imaging data.
Pub Date: 2025-11-04  DOI: 10.1109/tmi.2025.3628678
Neuron Counting for Macaque Mesoscopic Brain Connectivity Research
Zhenwei Dong, Xinyi Liu, Weiyang Shi, Yuheng Lu, Yanyan Liu, Xiaoxiao Hou, Hongji Sun, Ming Song, Zhengyi Yang, Tianzi Jiang
Precise quantification and localization of tracer-labeled neurons are essential for unraveling brain connectivity patterns and constructing a mesoscopic brain connectome atlas in macaques. However, methodological challenges and limitations in dataset development have impeded this scientific progress. This work introduces the Macaque Fluorescently Labeled Neurons (MFN) dataset, derived from retrograde tracing on three rhesus macaques. The dataset, meticulously annotated by six specialists, includes 1,600 images and 33,411 high-quality neuron annotations. Leveraging this dataset, we developed a Dense Convolutional Attention U-Net (DAUNet) cell counting model. By integrating dense convolutional blocks and a multi-scale attention module, the model exhibits robust feature extraction and representation capabilities while maintaining low complexity. On the MFN dataset, DAUNet achieved a mean absolute error of 0.97 for cell counting and an F1-score of 96.29% for cell localization, outperforming several benchmark models. Extensive validation across four additional public datasets demonstrated the robust generalization ability of the model. Furthermore, the trained model was applied to quantify labeled neurons in a macaque brain, mapping the input connectivity patterns of two adjacent subregions in the lateral prefrontal cortex. This work provides a training dataset and algorithmic resource that advances mesoscopic brain connectivity research in macaques. The MFN dataset and source code are available at https://github.com/Gendwar/DAUnet.
Pub Date: 2025-11-03  DOI: 10.1109/tmi.2025.3628113
Multi-Contrast MRI Super-Resolution in Brain Tumors: Arbitrary-Scale Implicit Sampling and Unsupervised Fine-Tuning
Wenxuan Chen, Yulin Wang, Zhongsen Li, Shuai Wang, Sirui Wu, Chuyu Liu, Yonghong Fan, Benqi Zhao, Zhuozhao Zheng, Dinggang Shen, Xiaolei Song
Multi-contrast magnetic resonance imaging (MRI) has important value in clinical applications because it can reflect comprehensive tissue characterization, from anatomy and function to metabolism. Previous studies utilize the abundant details in high-resolution (HR) reference (Ref) images to guide the super-resolution (SR) of low-resolution (LR) images, termed multi-contrast MRI SR. Yet their clinical application is hindered by: (1) discrepancies in MRI equipment and acquisition protocols across hospitals, which lead to gaps in data distribution; and (2) the lack of paired LR and HR images in certain modalities for supervised training. Herein, we rethink multi-contrast MRI from a clinical perspective and propose an implicit sampling and generation (ISG) network plus an unsupervised fine-tuning (FT) framework. Briefly, the ISG network possesses a powerful representation capability, enabling arbitrary-scale LR inputs and SR outputs. The fine-tuning framework, a test-time training technique, allows models to adapt to the test data. Experiments are conducted on two clinical datasets containing, respectively, amide proton transfer weighted (APTw) images from tumor patients and fluid-attenuated inversion recovery (FLAIR) images from a 5T scanner. For tumor patients, ISG+FT demonstrates 4× SR capability on APTw metabolic images, receiving good recognition from radiologists. In both quantitative and qualitative evaluations, ISG+FT outperforms state-of-the-art baselines. Ablation and robustness studies further support the design of ISG+FT. Overall, the proposed method shows considerable promise in clinical scenarios.
Pub Date: 2025-11-03  DOI: 10.1109/tmi.2025.3628252
Online Teaching: Distilling Decomposed Multimodal Knowledge for Breast Cancer Biomarker Prediction
Qibin Zhang, Xinyu Hao, Tong Wang, Yanmei Zhu, Yaqi Du, Peng Gao, Fengyu Cong, Cheng Lu, Hongming Xu
Immunohistochemical (IHC) biomarker prediction greatly benefits from multimodal data fusion. However, the simultaneous acquisition of genomic and pathological data is often constrained by cost or technical limitations. To address this, we propose a novel Genomics-guided Multimodal Knowledge Decomposition Network (GMKDN), a framework that effectively integrates genomics and pathology data during training while dynamically adapting to available data during inference. GMKDN introduces two key innovations: (1) the Batch-Sample Multimodal Knowledge Decomposition (BMKD) module, which decomposes input features into pathology-specific, modality-general, and genomics-specific components to reduce redundancy and enhance knowledge transferability, and (2) the Online Similarity-Preserving Knowledge Distillation (OSKD) module, which optimizes activation similarity matrices to facilitate robust knowledge transfer between teacher and student models. The BMKD module improves generalization across modalities, while the OSKD module enhances model robustness, particularly when certain modalities are unavailable during inference. Extensive evaluations conducted on the TCGA-BRCA dataset and an external test cohort (QHSU) demonstrate that GMKDN consistently outperforms state-of-the-art (SOTA) slide-based multiple instance learning (MIL) approaches as well as existing multimodal learning models, establishing a new benchmark for breast cancer biomarker prediction. Our code is available at https://github.com/qiyuanzz/GMKDN.