Pub Date: 2026-05-01 | Epub Date: 2026-01-18 | DOI: 10.1016/j.media.2026.103952
Xiumei Chen, Xinyue Zhang, Wei Xiong, Tao Wang, Aiwei Jia, Qianjin Feng, Meiyan Huang
Performing genome-wide association analysis (GWAS) between hippocampal and whole-genome data can facilitate the detection of disease-related biomarkers of Alzheimer’s disease (AD). However, most existing studies have prioritized hippocampal volume changes while ignoring the morphological variations and subfield differences of the hippocampus during AD progression. This omission limits a comprehensive understanding of the associations between hippocampal and whole-genome data and may cause some potentially AD-specific biomarkers to be missed. Moreover, representing the complex associations between ultra-high-dimensional imaging and whole-genome data remains an unresolved problem in GWAS. To address these issues, we propose an end-to-end hippocampal surface morphological variation-based genome-wide association analysis network (HSM-GWAS) to explore the nonlinear associations between hippocampal surface morphological variations and whole-genome data for AD-related biomarker detection. First, a multi-modality feature extraction module comprising a graph convolution network and an improved Diet Network extracts imaging and genetic features from non-Euclidean hippocampal surface and whole-genome data, respectively. Second, a dual contrastive learning-based association analysis module maps and aligns genetic features to imaging features, narrowing the gap between these features and helping explore the complex associations between hippocampal and whole-genome data. Last, a dual cross-attention fusion module combines imaging and genetic features for disease diagnosis and biomarker detection of AD. Extensive experiments on the real Alzheimer’s Disease Neuroimaging Initiative dataset and on simulated data demonstrate that HSM-GWAS considerably improves biomarker detection and disease diagnosis.
These findings highlight the ability of HSM-GWAS to discover disease-related biomarkers, suggesting its potential to provide new insights into pathological mechanisms and to aid AD diagnosis. The code will be made publicly available at https://github.com/Meiyan88/HSM-GWAS.
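The abstract does not specify the contrastive objective used to align genetic and imaging features; a common choice for pulling matched cross-modal embedding pairs together while pushing mismatched pairs apart is a symmetric InfoNCE loss. The NumPy sketch below illustrates that mechanism only; the function name, temperature value, and embedding shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce_loss(img_emb, gen_emb, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, gen_emb: (batch, dim) arrays; row i of each holds one
    subject's imaging and genetic embedding. Matched pairs (diagonal of
    the similarity matrix) are treated as positives, all other pairs as
    negatives, which is the usual way of narrowing the gap between two
    modalities' feature spaces.
    """
    # L2-normalise so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    gen = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    logits = img @ gen.T / temperature          # (batch, batch) similarities
    idx = np.arange(len(logits))                # diagonal = matched pairs

    def xent(lg):
        # cross-entropy of the matched pair against all pairs in the row
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->gene and gene->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In practice the loss is low when each subject's genetic embedding is closest to that same subject's imaging embedding, and high when the pairing is scrambled.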
Title: Hippocampal surface morphological variation-based genome-wide association analysis network for biomarker detection of Alzheimer’s disease. Medical Image Analysis, vol. 110, Article 103952.
Pub Date: 2026-03-24 | DOI: 10.1016/j.media.2026.104052
Marica Muffoletto, Uxio Hermida, Charlène Mauger, Avan Suinesiaputra, Yiyang Xu, Richard Burns, Lisa Pankewitz, Andrew D. Mcculloch, Steffen E. Petersen, Daniel Rueckert, Alistair A. Young
Title: Neural Implicit Heart Coordinates: 3D cardiac shape reconstruction from sparse segmentations. Medical Image Analysis.
Pub Date: 2026-03-18 | DOI: 10.1016/j.media.2026.104045
Jinlin Yang, Xintao Pang, Chuan Lin, Tao Tan
Title: MOTDNet: Multi Organ Task Decoupling Network for Cell Segmentation. Medical Image Analysis.
Pub Date: 2026-03-17 | DOI: 10.1016/j.media.2026.104039
Huidong Xie, Weijie Gan, Reimund Bayerlein, Bo Zhou, Ming-Kai Chen, Michal Kulon, Annemarie Boustani, Kuan-Yin Ko, Der-Shiun Wang, Benjamin A. Spencer, Wei Ji, Xiongchao Chen, Qiong Liu, Xueqi Guo, Menghua Xia, Yinchi Zhou, Hui Liu, Liang Guo, Hongyu An, Ulugbek S. Kamilov, Hanzhong Wang, Biao Li, Axel Rominger, Kuangyu Shi, Ge Wang, Ramsey D. Badawi, Chi Liu
Reducing scan time and radiation dose while enhancing image quality, especially for lower-performance scanners, is critical in low-count/low-dose PET imaging. Deep learning (DL) techniques have been investigated for PET image denoising. However, existing models often compromise image quality at low-count/low-dose levels and generalize poorly to different image noise levels, acquisition protocols, and patient populations. Recently, diffusion models have emerged as state-of-the-art generative models that produce high-quality samples and have shown strong potential for medical imaging tasks. However, for low-dose PET imaging, existing diffusion models fail to generate consistent 3D reconstructions (i.e., adjacent slices exhibit noticeable discontinuities or "flickering" along the z-axis), struggle to generalize across varying noise levels, and often produce visually appealing but distorted details and biased tracer uptake. Here, we develop DDPET-3D, a dose-aware diffusion model for 3D low-dose PET imaging that addresses these challenges. In this work, "3D" denotes 3D-consistent reconstruction achieved via a 2.5D conditioning backbone, rather than a fully 3D diffusion network. We extensively evaluated the proposed model on a total of 9,783 18F-FDG studies (1,596 patients) collected from 4 medical centers worldwide with different scanners and clinical protocols, covering low-dose/low-count levels ranging from 1% to 50%. In cross-center, cross-scanner validation, DDPET-3D demonstrated its potential to generalize to different low-dose levels, scanners, and clinical protocols. In reader studies conducted by board-certified nuclear medicine physicians, the readers rated the denoised images as comparable to, or better than, the full-dose images and prior DL baselines based on qualitative visual assessment.
We also evaluated lesion-level quantitative accuracy using a Monte Carlo simulation study and a lesion segmentation network. The results show the potential to achieve low-dose PET while maintaining image quality. Lastly, a group of real low-dose scans was included for evaluation to demonstrate the clinical potential of DDPET-3D. Code and trained models are publicly available at https://github.com/HuidongXie/DDPET-3D
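Two ingredients of the abstract can be made concrete with a small sketch: a 2.5D conditioning backbone (a 2D denoiser sees the target slice together with its axial neighbours, which discourages z-axis flickering without a fully 3D network) and dose awareness (the dose level is embedded much like a diffusion timestep so one model covers many noise levels). The helper names, neighbour count, and sinusoidal embedding form below are generic assumptions, not the released DDPET-3D code.

```python
import numpy as np

def make_25d_condition(volume, index, n_neighbors=2):
    """Build a 2.5D conditioning stack for one axial slice.

    volume: (depth, H, W) low-dose PET volume.
    Returns (2*n_neighbors+1, H, W): the target slice plus its axial
    neighbours as input channels, with out-of-range indices clamped to
    the volume edge (edge slices are replicated).
    """
    depth = volume.shape[0]
    idx = np.clip(np.arange(index - n_neighbors, index + n_neighbors + 1),
                  0, depth - 1)
    return volume[idx]

def dose_embedding(dose_fraction, dim=8):
    """Sinusoidal embedding of the dose level (e.g. 0.01 for a 1% scan),
    analogous to a diffusion timestep embedding, so a single network can
    be conditioned on many count/noise levels."""
    freqs = np.exp(np.linspace(0.0, np.log(1000.0), dim // 2))
    angles = dose_fraction * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

A denoiser would then take the stacked slices as image channels and the dose embedding as an auxiliary conditioning vector alongside the diffusion timestep.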
Title: Dose-aware Diffusion Model for 3D PET Image Denoising: Multi-institutional Validation with Reader Study and Real Low-dose Data. Medical Image Analysis.
Pub Date: 2026-03-17 | DOI: 10.1016/j.media.2026.104035
Yitong Li, Igor Yakushev, Dennis M. Hedderich, Christian Wachinger
Positron emission tomography (PET) is a widely recognized technique for diagnosing neurodegenerative diseases, offering critical functional insights. However, its high cost and radiation exposure hinder widespread use. In contrast, magnetic resonance imaging (MRI) has no such limitations. While MRI also detects neurodegenerative changes, it is less sensitive for diagnosis than PET. One approach to overcoming these limitations is to generate synthetic PET from MRI. Recent advances in generative models have paved the way for cross-modality medical image translation; however, existing methods largely emphasize structural preservation while neglecting the critical need for pathology awareness. To address this gap, we propose PASTA, a novel image translation framework built on conditional diffusion models with enhanced pathology awareness. PASTA surpasses state-of-the-art methods by preserving both structural and pathological details through its highly interactive dual-arm architecture and multi-modal condition integration. Additionally, we introduce a novel cycle exchange consistency and volumetric generation strategy that significantly enhances PASTA’s ability to produce high-quality 3D PET images. Our qualitative and quantitative results demonstrate the high quality and pathology awareness of the synthesized PET scans. For Alzheimer’s diagnosis, these synthesized scans improve diagnostic performance over MRI by 4%, almost reaching the performance of actual PET. Our code is available at https://github.com/ai-med/PASTA.
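A conditional diffusion translator of this kind typically noises only the target modality (PET) during the forward process and feeds the MRI unchanged as conditioning, which is what lets structural and pathological cues survive into the generated scan. The minimal DDPM-style forward step below illustrates that split; the linear schedule values and function names are generic textbook choices, not PASTA's actual implementation.

```python
import numpy as np

def ddpm_alphas_cumprod(timesteps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative signal-retention schedule of a standard linear DDPM."""
    betas = np.linspace(beta_start, beta_end, timesteps)
    return np.cumprod(1.0 - betas)

def q_sample(pet, t, alphas_cumprod, rng):
    """Forward diffusion: corrupt the target PET image to step t.

    Returns (noisy_pet, noise). A conditional denoiser would receive
    (noisy_pet, mri, t) and be trained to predict `noise`; the MRI
    conditioning image is never noised, so it stays a clean anatomical
    reference at every diffusion step.
    """
    noise = rng.normal(size=pet.shape)
    signal = np.sqrt(alphas_cumprod[t])         # how much PET survives
    sigma = np.sqrt(1.0 - alphas_cumprod[t])    # how much noise is mixed in
    return signal * pet + sigma * noise, noise
```

At small t the sample is nearly the clean PET; near the final step it is nearly pure Gaussian noise, and generation runs this process in reverse under MRI conditioning.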
Title: Translating MRI to PET through Conditional Diffusion Models with Enhanced Pathology Awareness. Medical Image Analysis.