
Latest Publications in IEEE Transactions on Medical Imaging

CT Diagnostic Mode-Oriented and Cross Difficulty-Aware Network for Pulmonary Embolism Segmentation
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-10 | DOI: 10.1109/tmi.2025.3631047
Ruolin Xiao, Xinming Li, Congyue Guo, Shiteng Suo, Kaiyi Zheng, Jianhua Ma, Qianjin Feng, Xianyue Quan, Wei Yang, Liming Zhong
Citations: 0
Derivation and validation of compartment models: implications for dynamic imaging
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-10 | DOI: 10.1109/tmi.2025.3630705
Jérôme Kowalski, Lorenzo Sala, Dirk Drasdo, Irene Vignon-Clementel
Citations: 0
FunOTTA: On-the-Fly Adaptation on Cross-Domain Fundus Image via Stable Test-time Training
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-10 | DOI: 10.1109/tmi.2025.3631049
Qian Zeng, Le Zhang, Yipeng Liu, Ce Zhu, Fan Zhang
Citations: 0
Explainable Normative Modeling for Brain Disorder Identification in Resting-State fMRI
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-10 | DOI: 10.1109/tmi.2025.3631105
Yeajin Shon, Eunsong Kang, Da-Woon Heo, Heung-Il Suk
Citations: 0
Efficient Large-Deformation Medical Image Registration via Recurrent Dynamic Correlation
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-07 | DOI: 10.1109/tmi.2025.3630584
Tianran Li, Marius Staring, Yuchuan Qiao
Citations: 0
Structure-Preserving Two-Stage Diffusion Model for CBCT Metal Artifact Reduction
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-04 | DOI: 10.1109/tmi.2025.3628764
Xingyue Wang, Zhentao Liu, Haoshen Wang, Minhui Tan, Zhiming Cui
Cone-beam computed tomography (CBCT) plays a crucial role in dental clinical applications, but metal implants often cause severe artifacts, challenging accurate diagnosis. Most deep learning-based methods attempt to achieve metal artifact reduction (MAR) by training neural networks on paired simulated data. However, they often struggle to preserve anatomical structures around metal implants, and fail to bridge the domain gap between real-world and simulated data, leading to suboptimal performance in practice. To address these issues, we propose a two-stage diffusion framework with a strong emphasis on structure preservation and domain generalization. In Stage I, a structure-aware diffusion model is trained to extract artifact-free clean edge maps from artifact-affected CBCT images. This training is supervised by the tooth contours derived from the fusion of intraoral scan (IOS) data and CBCT images to improve generalization to real-world data. In Stage II, these extracted clean edge maps serve as structural priors to guide the MAR process. Additionally, we introduce a segmentation-guided sampling (SGS) strategy in this stage to further enhance structure preservation during inference. Experiments on both simulated and real-world data demonstrate that our method achieves superior artifact reduction and better preservation of dental structures compared to competing approaches.
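The abstract's Stage I learns artifact-free edge maps with a structure-aware diffusion model; that model is not reproducible from the abstract alone. As a rough illustrative stand-in for what an "edge map used as a structural prior" looks like (not the authors' method), a gradient-magnitude edge map of a 2D slice can be computed as:

```python
import numpy as np

def edge_map(slice_2d: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary edge map from normalised gradient magnitude — an illustrative
    stand-in for the learned clean edge maps described in the abstract."""
    gy, gx = np.gradient(slice_2d.astype(float))  # per-axis derivatives
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()                     # normalise to [0, 1]
    return (mag > threshold).astype(np.uint8)

# Toy CBCT-like slice: a bright square "tooth" on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = edge_map(img, threshold=0.5)              # fires on the square's border
```

In the paper the prior is learned (supervised by IOS-derived tooth contours) precisely because simple gradient operators respond to metal artifacts as strongly as to anatomy; this sketch only fixes the data shape of the prior.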
Citations: 0
Editorial Criteria for TMI Papers: Significance, Innovation, Evaluation, and Reproducibility
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-04 | DOI: 10.1109/tmi.2025.3628662
Hongming Shan, Uwe Kruger, Ge Wang
IEEE Transactions on Medical Imaging (TMI) publishes high-quality work that innovates imaging methods and advances medicine, science, and engineering. While artificial intelligence (AI) is currently prominent, the journal's scope extends well beyond AI-based imaging to encompass a full spectrum of imaging methods involving CT, MRI, PET, SPECT, ultrasound, optical, and hybrid systems, image reconstruction and processing (ranging from analytical and iterative algorithms to emerging deep imaging approaches), quantitative imaging and analysis (radiomics, biomarkers, and health analytics), image-guided interventions and therapy, as well as multimodal and multi-scale imaging with integration of imaging and non-imaging data.
Citations: 0
Neuron Counting for Macaque Mesoscopic Brain Connectivity Research
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-04 | DOI: 10.1109/tmi.2025.3628678
Zhenwei Dong, Xinyi Liu, Weiyang Shi, Yuheng Lu, Yanyan Liu, Xiaoxiao Hou, Hongji Sun, Ming Song, Zhengyi Yang, Tianzi Jiang
Precise quantification and localization of tracer-labeled neurons are essential for unraveling brain connectivity patterns and constructing a mesoscopic brain connectome atlas in macaques. However, methodological challenges and limitations in dataset development have impeded this scientific progress. This work introduced the Macaque Fluorescently Labeled Neurons (MFN) dataset, derived from retrograde tracing on three rhesus macaques. The dataset, meticulously annotated by six specialists, includes 1,600 images and 33,411 high-quality neuron annotations. Leveraging this dataset, we developed a Dense Convolutional Attention U-Net (DAUNet) cell counting model. By integrating Dense Convolutional blocks and a multi-scale attention module, the model exhibits robust feature extraction and representation capabilities while maintaining low complexity. On the MFN dataset, DAUNet achieved a Mean Absolute Error of 0.97 for cell counting and an F1-score of 96.29% for cell localization, outperforming several benchmark models. Extensive validation across four additional public datasets demonstrated the robust generalization ability of the model. Furthermore, the trained model was applied to quantify labeled neurons of a macaque brain, mapping the input connectivity patterns of two adjacent subregions in the lateral prefrontal cortex. This work provides a training dataset and algorithmic resource that advances mesoscopic brain connectivity research in macaques. The MFN dataset and source code are available at https://github.com/Gendwar/DAUnet.
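The abstract reports a Mean Absolute Error for counting and an F1-score for localization. A minimal sketch of how such point-based metrics are typically computed (the paper's exact matching protocol and distance tolerance are not given, so the greedy match and `radius` below are assumptions) is:

```python
import numpy as np

def counting_mae(pred_counts, true_counts):
    """Mean Absolute Error between predicted and annotated per-image cell counts."""
    return float(np.mean(np.abs(np.asarray(pred_counts) - np.asarray(true_counts))))

def localization_f1(pred_pts, true_pts, radius=5.0):
    """F1 for point-based localization: a prediction counts as a true positive
    if it lies within `radius` pixels of a still-unmatched annotation (greedy)."""
    pred = [np.asarray(p, float) for p in pred_pts]
    unmatched = [np.asarray(t, float) for t in true_pts]
    tp = 0
    for p in pred:
        if not unmatched:
            break
        d = [np.linalg.norm(p - t) for t in unmatched]
        i = int(np.argmin(d))
        if d[i] <= radius:
            tp += 1
            unmatched.pop(i)          # each annotation matches at most once
    fp = len(pred) - tp
    fn = len(true_pts) - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

mae = counting_mae([10, 12], [11, 12])                          # 0.5
f1 = localization_f1([(3, 3), (20, 20)], [(4, 4), (21, 19)])    # both matched -> 1.0
```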
Citations: 0
Multi-Contrast MRI Super-Resolution in Brain Tumors: Arbitrary-Scale Implicit Sampling and Unsupervised Fine-Tuning
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-03 | DOI: 10.1109/tmi.2025.3628113
Wenxuan Chen, Yulin Wang, Zhongsen Li, Shuai Wang, Sirui Wu, Chuyu Liu, Yonghong Fan, Benqi Zhao, Zhuozhao Zheng, Dinggang Shen, Xiaolei Song
Multi-contrast magnetic resonance imaging (MRI) has important value in clinical applications because it can reflect comprehensive tissue characterization from anatomy and function to metabolism. Previous studies utilize abundant details in high-resolution (HR) reference (Ref) images to guide the super-resolution (SR) of low-resolution (LR) images, termed multi-contrast MRI SR. Yet, their clinical applications are hindered by: (1) discrepancies in MRI equipment and acquisition protocols across hospitals (which lead to gaps in data distribution), and (2) lack of paired LR and HR images in certain modalities for supervised training. Herein, we rethink multi-contrast MRI from a clinical perspective, and propose an implicit sampling and generation (ISG) network plus an unsupervised fine-tuning (FT) framework. Briefly, the ISG network possesses a powerful representation capability, enabling arbitrary-scale LR inputs and SR outputs. The fine-tuning framework, as a test-time training technique, allows models to be adapted to testing data. Experiments are conducted on two clinical datasets containing amide proton transfer weighted (APTw) images from tumor patients and fluid-attenuated inversion recovery (FLAIR) images from a 5T scanner, respectively. For tumor patients, our ISG+FT proves 4× SR capacity in APTw metabolic images, receiving good recognition from radiologists. In both quantitative and qualitative evaluations, ISG+FT outperforms state-of-the-art baselines. The ablation and robustness study further demonstrate the rationality of ISG+FT. Overall, our proposed method shows considerable promise in clinical scenarios.
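The key property claimed for the ISG network is arbitrary-scale input and output: any continuous coordinate of the HR grid can be queried. The paper does this with a learned implicit representation; the coordinate-querying idea itself can be sketched with plain bilinear sampling (the interpolator and `upscale` helper below are illustrative stand-ins, not the ISG network):

```python
import numpy as np

def sample_at(img, ys, xs):
    """Bilinearly sample a 2D image at continuous coordinates (ys, xs)."""
    h, w = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy, dx = ys - y0, xs - x0
    return (img[y0, x0] * (1 - dy) * (1 - dx)
            + img[y0 + 1, x0] * dy * (1 - dx)
            + img[y0, x0 + 1] * (1 - dy) * dx
            + img[y0 + 1, x0 + 1] * dy * dx)

def upscale(img, scale):
    """Resample to an arbitrary (even non-integer) scale factor by querying
    a dense coordinate grid — the same access pattern an implicit model uses."""
    h, w = img.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    gy, gx = np.meshgrid(np.linspace(0, h - 1, H),
                         np.linspace(0, w - 1, W), indexing="ij")
    return sample_at(img, gy, gx)

lr = np.arange(16, dtype=float).reshape(4, 4)
hr = upscale(lr, 2.5)    # 4x4 -> 10x10: a non-integer scale factor
```

An implicit SR network replaces the hand-coded bilinear kernel with a learned function of (coordinate, local LR features), which is what lets it synthesize detail rather than merely interpolate.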
Citations: 0
Online Teaching: Distilling Decomposed Multimodal Knowledge for Breast Cancer Biomarker Prediction
IF 10.6 | CAS Tier 1 (Medicine) | Q1 Computer Science, Interdisciplinary Applications | Pub Date: 2025-11-03 | DOI: 10.1109/tmi.2025.3628252
Qibin Zhang, Xinyu Hao, Tong Wang, Yanmei Zhu, Yaqi Du, Peng Gao, Fengyu Cong, Cheng Lu, Hongming Xu
Immunohistochemical (IHC) biomarker prediction greatly benefits from multimodal data fusion. However, the simultaneous acquisition of genomic and pathological data is often constrained by cost or technical limitations. To address this, we propose a novel Genomics-guided Multimodal Knowledge Decomposition Network (GMKDN), a framework that effectively integrates genomics and pathology data during training while dynamically adapting to available data during inference. GMKDN introduces two key innovations: (1) the Batch-Sample Multimodal Knowledge Decomposition (BMKD) module, which decomposes input features into pathology-specific, modality-general, and genomics-specific components to reduce redundancy and enhance knowledge transferability, and (2) the Online Similarity-Preserving Knowledge Distillation (OSKD) module, which optimizes activation similarity matrices to facilitate robust knowledge transfer between teacher and student models. The BMKD module improves generalization across modalities, while the OSKD module enhances model robustness, particularly when certain modalities are unavailable during inference. Extensive evaluations conducted on the TCGA-BRCA dataset and an external test cohort (QHSU) demonstrate that GMKDN consistently outperforms state-of-the-art (SOTA) slide-based multiple instance learning (MIL) approaches as well as existing multimodal learning models, establishing a new benchmark for breast cancer biomarker prediction. Our code is available at https://github.com/qiyuanzz/GMKDN.
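The OSKD module "optimizes activation similarity matrices" for teacher-to-student transfer. One standard formulation of that idea (similarity-preserving distillation; the paper's exact loss may differ) row-normalizes each model's batch Gram matrix and penalizes the Frobenius distance between them:

```python
import numpy as np

def similarity_matrix(feats):
    """Row-normalised batch similarity (Gram) matrix of activations A: A A^T."""
    g = feats @ feats.T                               # (B, B) pairwise similarities
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return g / np.maximum(norms, 1e-12)

def sp_distill_loss(teacher_feats, student_feats):
    """Similarity-preserving distillation loss: squared Frobenius distance
    between normalised similarity matrices, scaled by 1/B^2. Works even when
    teacher and student feature dimensions differ, since both matrices are BxB."""
    gt = similarity_matrix(teacher_feats)
    gs = similarity_matrix(student_feats)
    b = teacher_feats.shape[0]
    return float(np.sum((gt - gs) ** 2) / (b * b))

rng = np.random.default_rng(0)
t = rng.normal(size=(8, 32))                   # teacher activations (B=8, D=32)
loss_same = sp_distill_loss(t, t)              # identical features -> zero loss
loss_diff = sp_distill_loss(t, rng.normal(size=(8, 16)))   # mismatched widths OK
```

Because the loss compares B×B similarity structure rather than raw features, the student is free to use a different feature dimension from the teacher — useful when some modalities (here, genomics) are absent at inference time.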
Citations: 0