End-to-end Spatiotemporal Analysis of Color Doppler Echocardiograms: Application for Rheumatic Heart Disease Detection
Pooneh Roshanitabrizi, Vishwesh Nath, Kelsey Brown, Taylor Gloria Broudy, Zhifan Jiang, Abhijeet Parida, Joselyn Rwebembera, Emmy Okello, Andrea Beaton, Holger R. Roth, Craig A. Sable, Marius George Linguraru
Pub Date: 2025-09-29 | DOI: 10.1109/tmi.2025.3615574
Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation
Xiaoyu Liu, Linhao Qu, Ziyue Xie, Yonghong Shi, Zhijian Song
Pub Date: 2025-09-26 | DOI: 10.1109/tmi.2025.3614853
Labeling multiple organs for segmentation is a complex and time-consuming process, which has left comprehensively labeled multi-organ datasets scarce while partially labeled datasets have proliferated. Current methods face three critical limitations: incomplete exploitation of the available supervision, complex inference, and insufficient validation of generalization capabilities. This paper proposes a new mutual-learning framework that improves multi-organ segmentation by sharing complementary information among partially labeled datasets. Specifically, the method consists of three key components: (1) training partial-organ segmentation models with Difference Mutual Learning, (2) pseudo-label generation and filtering, and (3) training full-organ segmentation models enhanced by Similarity Mutual Learning. Difference Mutual Learning enables each partial-organ segmentation model to utilize labels and features from the other datasets as complementary signals, improving cross-dataset organ detection and yielding better pseudo labels. Similarity Mutual Learning augments the training of each full-organ segmentation model with two additional supervision sources, inter-dataset ground truths and dynamic reliable transferred features, significantly boosting segmentation accuracy. The resulting model achieves both high accuracy and efficient inference for multi-organ segmentation. Extensive experiments on nine datasets spanning the head-neck, chest, abdomen, and pelvis demonstrate that the proposed method achieves state-of-the-art performance.
{"title":"Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation.","authors":"Xiaoyu Liu,Linhao Qu,Ziyue Xie,Yonghong Shi,Zhijian Song","doi":"10.1109/tmi.2025.3614853","DOIUrl":"https://doi.org/10.1109/tmi.2025.3614853","url":null,"abstract":"Labeling multiple organs for segmentation is a complex and time-consuming process, resulting in a scarcity of comprehensively labeled multi-organ datasets while the emergence of numerous partially labeled datasets. Current methods face three critical limitations: incomplete exploitation of available supervision; complex inference, and insufficient validation of generalization capabilities. This paper proposes a new framework based on mutual learning, aiming to improve multi-organ segmentation performance by complementing information among partially labeled datasets. Specifically, this method consists of three key components: (1) partial-organ segmentation models training with Difference Mutual Learning, (2) pseudo-label generation and filtering, and (3) full-organ segmentation models training enhanced by Similarity Mutual Learning. Difference Mutual Learning enables each partial-organ segmentation model to utilize labels and features from other datasets as complementary signals, improving cross-dataset organ detection for better pseudo labels. Similarity Mutual Learning augments each full-organ segmentation model training with two additional supervision sources: inter-dataset ground truths and dynamic reliable transferred features, significantly boosting segmentation accuracy. The model obtained by this method achieves both high accuracy and efficient inference for multi-organ segmentation. Extensive experiments conducted on nine datasets spanning the head-neck, chest, abdomen, and pelvis demonstrate that the proposed method achieves SOTA performance.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"77 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145153397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion simulation of radio-labeled cells in whole-body positron emission tomography
Nils Marquardt, Tobias Hengsbach, Marco Mauritz, Benedikt Wirth, Klaus Schäfers
Pub Date: 2025-09-26 | DOI: 10.1109/tmi.2025.3614767
Conditional Virtual Imaging for Few-Shot Vascular Image Segmentation
Yanglong He, Rongjun Ge, Hui Tang, Yuxin Liu, Mengqing Su, Jean-Louis Coatrieux, Huazhong Shu, Yang Chen, Yuting He
Pub Date: 2025-09-25 | DOI: 10.1109/tmi.2025.3608467
In medical image processing, vascular image segmentation plays a crucial role in clinical diagnosis, treatment planning, prognosis, and medical decision-making. Accurate, automated segmentation of vascular images can help clinicians understand the vascular network structure, leading to more informed medical decisions. However, manual annotation of vascular images is time-consuming and challenging due to fine, low-contrast vascular branches, and in the medical imaging domain annotation additionally requires specialized knowledge and clinical expertise. Data-driven deep learning models struggle to perform well when only a small number of annotated vascular images are available. To address this issue, this paper proposes a novel Conditional Virtual Imaging (CVI) framework for few-shot vascular image segmentation. The framework combines limited annotated data with extensive unlabeled data to generate high-quality images, effectively improving the accuracy and robustness of segmentation learning. Our approach comprises two main innovations: first, aligned image-mask pair generation, which leverages the image generation capabilities of large pre-trained models to produce high-quality vascular images with complex structures from only a few training images; second, a Dual-Consistency Learning (DCL) strategy, which trains the generator and segmentation model simultaneously so that they learn from each other and make maximal use of the limited data. Experimental results demonstrate that the CVI framework generates high-quality medical images and effectively enhances the performance of segmentation models in few-shot scenarios. Our code will be made publicly available online.
{"title":"Conditional Virtual Imaging for Few-Shot Vascular Image Segmentation.","authors":"Yanglong He,Rongjun Ge,Hui Tang,Yuxin Liu,Mengqing Su,Jean-Louis Coatrieux,Huazhong Shu,Yang Chen,Yuting He","doi":"10.1109/tmi.2025.3608467","DOIUrl":"https://doi.org/10.1109/tmi.2025.3608467","url":null,"abstract":"In the field of medical image processing, vascular image segmentation plays a crucial role in clinical diagnosis, treatment planning, prognosis, and medical decision-making. Accurate and automated segmentation of vascular images can assist clinicians in understanding the vascular network structure, leading to more informed medical decisions. However, manual annotation of vascular images is time-consuming and challenging due to the fine and low-contrast vascular branches, especially in the medical imaging domain where annotation requires specialized knowledge and clinical expertise. Data-driven deep learning models struggle to achieve good performance when only a small number of annotated vascular images are available. To address this issue, this paper proposes a novel Conditional Virtual Imaging (CVI) framework for few-shot vascular image segmentation learning. The framework combines limited annotated data with extensive unlabeled data to generate high-quality images, effectively improving the accuracy and robustness of segmentation learning. Our approach primarily includes two innovations: First, aligned image-mask pair generation, which leverages the powerful image generation capabilities of large pre-trained models to produce high-quality vascular images with complex structures using only a few training images; Second, the Dual-Consistency Learning (DCL) strategy, which simultaneously trains the generator and segmentation model, allowing them to learn from each other and maximize the utilization of limited data. Experimental results demonstrate that our CVI framework can generate high-quality medical images and effectively enhance the performance of segmentation models in few-shot scenarios. Our code will be made publicly available online.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"61 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145140266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven System Matrix Manipulation Enabling Fast Functional Imaging in Tomography
Peng Hu, Xin Tong, Li Lin, Lihong V. Wang
Pub Date: 2025-09-22 | DOI: 10.1109/tmi.2025.3612437