
Medical image analysis — Latest articles

Hippocampal surface morphological variation-based genome-wide association analysis network for biomarker detection of Alzheimer’s disease
IF 11.8 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-05-01 Epub Date: 2026-01-18 DOI: 10.1016/j.media.2026.103952
Xiumei Chen, Xinyue Zhang, Wei Xiong, Tao Wang, Aiwei Jia, Qianjin Feng, Meiyan Huang
Performing genome-wide association analysis (GWAS) between hippocampal and whole-genome data can facilitate the detection of disease-related biomarkers of Alzheimer’s disease (AD). However, most existing studies have prioritized hippocampal volume changes and ignored the morphological variations and subfield differences of the hippocampus in AD progression. This disregard restricts a comprehensive understanding of the associations between hippocampal and whole-genome data and may cause some potentially specific biomarkers of AD to be missed. Moreover, representing the complex associations between ultra-high-dimensional imaging and whole-genome data remains an unresolved problem in GWAS. To address these issues, we propose an end-to-end hippocampal surface morphological variation-based genome-wide association analysis network (HSM-GWAS) to explore the nonlinear associations between hippocampal surface morphological variations and whole-genome data for AD-related biomarker detection. First, a multi-modality feature extraction module that includes a graph convolution network and an improved diet network is presented to extract imaging and genetic features from non-Euclidean hippocampal surface and whole-genome data, respectively. Second, a dual contrastive learning-based association analysis module is introduced to map and align genetic features to imaging features, thus narrowing the gap between these features and helping explore the complex associations between hippocampal and whole-genome data. Last, a dual cross-attention fusion module is applied to combine imaging and genetic features for disease diagnosis and biomarker detection of AD. Extensive experiments on the real Alzheimer’s Disease Neuroimaging Initiative dataset and on simulated data demonstrate that HSM-GWAS considerably improves biomarker detection and disease diagnosis. These findings highlight the ability of HSM-GWAS to discover disease-related biomarkers, suggesting its potential to provide new insights into pathological mechanisms and to aid AD diagnosis. The code will be made publicly available at https://github.com/Meiyan88/HSM-GWAS.
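The contrastive alignment step described above, pulling a subject's genetic embedding toward its own imaging embedding while pushing it away from other subjects', is commonly realized with a symmetric InfoNCE-style objective. The following NumPy sketch is illustrative only, not the authors' implementation; the function name `info_nce`, the temperature value, and the embedding shapes are assumptions.

```python
import numpy as np

def info_nce(img_emb, gen_emb, temperature=0.1):
    """Symmetric InfoNCE loss over paired imaging/genetic embeddings.

    img_emb, gen_emb: (N, D) arrays; row i of each belongs to the same
    subject. Matched pairs are pulled together in cosine-similarity
    space, mismatched pairs are pushed apart.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    gen = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    logits = img @ gen.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(logits))         # diagonal entries are positives

    def xent(lg):
        # row-wise softmax cross-entropy against the diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->gene and gene->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned embeddings the diagonal dominates and the loss approaches zero; with unrelated embeddings it stays near log N.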
Citations: 0
Neural Implicit Heart Coordinates: 3D cardiac shape reconstruction from sparse segmentations
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-24 DOI: 10.1016/j.media.2026.104052
Marica Muffoletto, Uxio Hermida, Charlène Mauger, Avan Suinesiaputra, Yiyang Xu, Richard Burns, Lisa Pankewitz, Andrew D. Mcculloch, Steffen E. Petersen, Daniel Rueckert, Alistair A. Young
Citations: 0
Clinical priors-inspired Privileged Knowledge Distillation for Reliable Pancreatic Lesion Classification
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-24 DOI: 10.1016/j.media.2026.104041
Qiaoyu Han, Gang Yang, Lei Zhang, Xiangpeng Hu, Huizhong Gan, Xun Chen, Aiping Liu, Yue Yu
Citations: 0
GCN Combined with Snake Convolution for Enhanced Topological Perception in Thrombotic Hepatic Portal Vein Segmentation
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-21 DOI: 10.1016/j.media.2026.104050
Lijuan Ma, Weiguang Wang, Xingshun Qi, Yi Jing, Wei Cai, Xia Zhang
Citations: 0
Corrigendum to "DSFNet: Dual-source and spatiotemporal-feature fusion network for bedside diagnosis of lung injuries with electrical impedance tomography" [Medical Image Analysis 110C (2026) 104003].
IF 11.8 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-21 DOI: 10.1016/j.media.2026.104042
Zhiwei Li, Yang Wu, Kai Liu, Yingqi Zhang, Bai Chen, Hao Wang, Jiafeng Yao
Citations: 0
ViFIT-assisted Histopathology: From H&E Style Standardization to Virtual Fiber Image Transformation
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-21 DOI: 10.1016/j.media.2026.104051
Shu Wang, Xiao Zhang, Xingfu Wang, Chenyong Lv, Xiahui Han, Xiong Lin, Deyong Kang, Ruolan Lin, Liwen Hu, Haohua Tu, Feng Huang, Wenxi Liu, Jianxin Chen
Citations: 0
SEQUAL: Self-refining and Effective QUerying Active Learning with Pseudo Label Divergence Score for Carotid Intima-media Segmentation in Ultrasound
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-18 DOI: 10.1016/j.media.2026.104048
Yucheng Tang, Yipeng Hu, Jing Li, Hu Lin, Chao Rong, Ciyuan Feng, Xiuzhen Yang, Xiang Xu, Ke Huang, Hongxiang Lin
Citations: 0
MOTDNet: Multi Organ Task Decoupling Network for Cell Segmentation
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-18 DOI: 10.1016/j.media.2026.104045
Jinlin Yang, Xintao Pang, Chuan Lin, Tao Tan
Citations: 0
Dose-aware Diffusion Model for 3D PET Image Denoising: Multi-institutional Validation with Reader Study and Real Low-dose Data
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1016/j.media.2026.104039
Huidong Xie, Weijie Gan, Reimund Bayerlein, Bo Zhou, Ming-Kai Chen, Michal Kulon, Annemarie Boustani, Kuan-Yin Ko, Der-Shiun Wang, Benjamin A. Spencer, Wei Ji, Xiongchao Chen, Qiong Liu, Xueqi Guo, Menghua Xia, Yinchi Zhou, Hui Liu, Liang Guo, Hongyu An, Ulugbek S. Kamilov, Hanzhong Wang, Biao Li, Axel Rominger, Kuangyu Shi, Ge Wang, Ramsey D. Badawi, Chi Liu
Reducing scan time and radiation dose while enhancing image quality, especially for lower-performance scanners, is critical in low-count/low-dose PET imaging. Deep learning (DL) techniques have been investigated for PET image denoising. However, existing models have often compromised image quality when achieving low-count/low-dose PET and have limited generalizability across image noise levels, acquisition protocols, and patient populations. Recently, diffusion models have emerged as state-of-the-art generative models that produce high-quality samples and have demonstrated strong potential for medical imaging tasks. However, for low-dose PET imaging, existing diffusion models fail to generate consistent 3D reconstructions (i.e., adjacent slices exhibit noticeable discontinuities or "flickering" along the z-axis), struggle to generalize across varying noise levels, and often produce visually appealing but distorted details and biased tracer uptake. Here, we develop DDPET-3D, a dose-aware diffusion model for 3D low-dose PET imaging that addresses these challenges. In this work, "3D" denotes 3D-consistent reconstruction achieved via a 2.5D conditioning backbone rather than a fully 3D diffusion network. Using data collected from 4 medical centers worldwide with different scanners and clinical protocols, we extensively evaluated the proposed model on a total of 9,783 18F-FDG studies (1,596 patients) with low-dose/low-count levels ranging from 1% to 50%. Through cross-center, cross-scanner validation, DDPET-3D demonstrated its potential to generalize to different low-dose levels, scanners, and clinical protocols. In reader studies conducted by board-certified nuclear medicine physicians, the readers rated the denoised images as comparable to, or better than, the full-dose images and prior DL baselines based on qualitative visual assessment. We also evaluated lesion-level quantitative accuracy using a Monte Carlo simulation study and a lesion segmentation network. The presented results show the potential to achieve low-dose PET while maintaining image quality. Lastly, a group of real low-dose scans was also included in the evaluation to demonstrate the clinical potential of DDPET-3D. Code and trained models are publicly available at https://github.com/HuidongXie/DDPET-3D
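The 2.5D conditioning idea mentioned in the abstract, giving a 2D denoiser cross-slice context by feeding each axial slice together with its z-neighbors so adjacent outputs stay consistent, can be sketched in NumPy. This is a hypothetical illustration, not the DDPET-3D code; the function name `make_25d_inputs` and the replicate-padding choice are assumptions.

```python
import numpy as np

def make_25d_inputs(volume, n_neighbors=2):
    """Build 2.5D conditioning inputs from a 3D volume of shape (Z, H, W).

    Each axial slice becomes a (2*n_neighbors+1, H, W) channel stack of
    itself plus its z-neighbors; edge slices are replicate-padded so
    every slice has a full neighborhood. Feeding such stacks to a 2D
    network is one way to reduce z-axis "flickering" between slices.
    """
    z = volume.shape[0]
    # replicate-pad along z so the first/last slices get full stacks
    padded = np.pad(volume, ((n_neighbors, n_neighbors), (0, 0), (0, 0)),
                    mode="edge")
    stacks = np.stack(
        [padded[i:i + 2 * n_neighbors + 1] for i in range(z)], axis=0)
    return stacks  # shape (Z, 2*n_neighbors+1, H, W)
```

For a volume of shape (Z, H, W) this yields Z training samples whose center channel is the target slice and whose outer channels carry the neighboring anatomy.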
Citations: 0
Translating MRI to PET through Conditional Diffusion Models with Enhanced Pathology Awareness
IF 10.9 Medicine (CAS Tier 1) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-17 DOI: 10.1016/j.media.2026.104035
Yitong Li, Igor Yakushev, Dennis M. Hedderich, Christian Wachinger
Positron emission tomography (PET) is a widely recognized technique for diagnosing neurodegenerative diseases, offering critical functional insights. However, its high costs and radiation exposure hinder its widespread use. In contrast, magnetic resonance imaging (MRI) does not involve such limitations. While MRI also detects neurodegenerative changes, it is less sensitive for diagnosis compared to PET. To overcome such limitations, one approach is to generate synthetic PET from MRI. Recent advances in generative models have paved the way for cross-modality medical image translation; however, existing methods largely emphasize structural preservation while neglecting the critical need for pathology awareness. To address this gap, we propose PASTA, a novel image translation framework built on conditional diffusion models with enhanced pathology awareness. PASTA surpasses state-of-the-art methods by preserving both structural and pathological details through its highly interactive dual-arm architecture and multi-modal condition integration. Additionally, we introduce a novel cycle exchange consistency and volumetric generation strategy that significantly enhances PASTA’s ability to produce high-quality 3D PET images. Our qualitative and quantitative results demonstrate the high quality and pathology awareness of the synthesized PET scans. For Alzheimer’s diagnosis, the performance of these synthesized scans improves over MRI by 4%, almost reaching the performance of actual PET. Our code is available at https://github.com/ai-med/PASTA.
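Conditional diffusion models of the kind described are trained by corrupting the target PET with a known noise schedule and learning to invert that corruption given the MRI condition. The closed-form forward (noising) step can be sketched as follows; this is a generic DDPM illustration, not the PASTA implementation, and the name `forward_diffuse` and the linear beta schedule are assumptions.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (standard DDPM forward).

    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s).
    In MRI-conditioned PET synthesis, a denoiser would be trained to
    predict eps from (x_t, MRI, t); only the corruption is shown here.
    """
    alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal fraction
    eps = rng.normal(size=x0.shape)           # Gaussian corruption noise
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

At small t the sample stays close to the clean target; at the final step it is almost pure noise, which is the starting point for conditional sampling at inference time.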
Citations: 0