Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline

Minhui Yu, Yunbi Liu, Jinjian Wu, Andrea Bozoki, Shijun Qiu, Ling Yue, Mingxia Liu

Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 14394, pp. 265-275, published 2023-10-01 (Epub 2024-02-03)
DOI: 10.1007/978-3-031-47425-5_24
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10904401/pdf/
Abstract
Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine learning and deep learning approaches. However, effectively fusing MRI and PET remains challenging due to their distinct characteristics and the fact that one modality (typically PET) is often missing. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and cognitive decline progression forecasting. Our HMF consists of three modules: 1) a module to impute missing PET images, 2) a module to extract multimodality features from MRI and PET images, and 3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset (795 subjects) to independent small-scale AD-related cohorts, thereby leveraging the rich knowledge within ADNI. The proposed HMF is extensively evaluated on three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.
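Below is a minimal PyTorch sketch of the three-module design the abstract describes: impute missing PET, encode each modality, then fuse the features for forecasting. It is an illustration under stated assumptions, not the authors' implementation; the module names, layer sizes, the MRI-to-PET imputation generator, and the pretrain-then-fine-tune transfer recipe (including the `adni_pretrained.pt` checkpoint path) are all hypothetical.

```python
import torch
import torch.nn as nn


class PETImputer(nn.Module):
    """Module 1: impute a missing PET volume from the paired MRI.
    A toy convolutional generator stands in for whatever imputation
    network the paper actually uses."""
    def __init__(self):
        super().__init__()
        self.generator = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, mri):
        return self.generator(mri)


class ModalityEncoder(nn.Module):
    """Module 2: extract features from one imaging modality
    (a small 3D CNN backbone is assumed)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )

    def forward(self, x):
        return self.backbone(x)


class HMF(nn.Module):
    """Hybrid multimodality fusion: impute -> encode -> fuse -> forecast."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.imputer = PETImputer()
        self.mri_encoder = ModalityEncoder(feat_dim)
        self.pet_encoder = ModalityEncoder(feat_dim)
        # Module 3: fuse the concatenated features and predict progression.
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, mri, pet=None):
        if pet is None:  # fill in the missing modality before encoding
            pet = self.imputer(mri)
        fused = torch.cat([self.mri_encoder(mri), self.pet_encoder(pet)], dim=1)
        return self.fusion_head(fused)


# Assumed cross-domain transfer recipe: pretrain on ADNI, then fine-tune
# on a small target cohort.
model = HMF()
# model.load_state_dict(torch.load("adni_pretrained.pt"))  # hypothetical checkpoint
logits = model(torch.randn(2, 1, 32, 32, 32))  # toy 3D MRI batch, no PET given
print(logits.shape)  # torch.Size([2, 2])
```

Encoding each modality with its own branch before concatenation lets the same network run whether the PET input is real or imputed, which mirrors how the imputation module lets the framework handle subjects whose PET scan is missing.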