Predicting brain age with global-local attention network from multimodal neuroimaging data: Accuracy, generalizability, and behavioral associations

SungHwan Moon, Junhyeok Lee, Won Hee Lee

Computers in Biology and Medicine, vol. 184, Article 109411. Published 2024-11-17. DOI: 10.1016/j.compbiomed.2024.109411
Citations: 0
Abstract
Brain age, an emerging biomarker for brain diseases and aging, is typically predicted using single-modality T1-weighted structural MRI data. This study investigates the benefits of integrating structural MRI with diffusion MRI to enhance brain age prediction. We propose an attention-based deep learning model that fuses global-context information from structural MRI with local details from diffusion metrics. The model was evaluated using two large datasets: the Human Connectome Project (HCP, n = 1064, age 22–37) and the Cambridge Centre for Ageing and Neuroscience (Cam-CAN, n = 639, age 18–88). It was tested for generalizability and robustness on three independent datasets (n = 546, age 20–86), reproducibility on a test-retest dataset (n = 44, age 22–35), and longitudinal consistency (n = 129, age 46–92). We also examined the relationship between predicted brain age and behavioral measures. Results showed that the multimodal model improved prediction accuracy, achieving mean absolute errors (MAEs) of 2.44 years in the HCP dataset (sagittal plane) and 4.36 years in the Cam-CAN dataset (axial plane). The corresponding R² values were 0.258 and 0.914, respectively, reflecting the model's ability to explain variance in the predictions across both datasets. Compared to single-modality models, the multimodal approach showed better generalization, reducing MAEs by 10–76% and enhancing robustness by 22–82%. While the multimodal model exhibited superior reproducibility, the sMRI model showed slightly better longitudinal consistency. Importantly, the multimodal model revealed unique associations between predicted brain age and behavioral measures, such as walking endurance and loneliness in the HCP dataset, which were not detected with chronological age alone. In the Cam-CAN dataset, brain age and chronological age exhibited similar correlations with behavioral measures.
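The two headline metrics (MAE in years and R²) are standard regression measures; a minimal sketch of how both are computed, using made-up ages rather than any values from the study:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average |chronological age - predicted brain age| in years."""
    return np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float)))

def r2(y_true, y_pred):
    """Coefficient of determination: fraction of age variance explained by the predictions."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total variance around the mean age
    return 1.0 - ss_res / ss_tot

# Illustrative ages only, not data from the paper
ages  = np.array([25.0, 40.0, 60.0, 80.0])
preds = np.array([27.0, 38.0, 63.0, 77.0])
print(mae(ages, preds))            # 2.5
print(round(r2(ages, preds), 3))   # 0.985
```

Because R² normalizes by the variance of the true ages (`ss_tot`), a narrow-age-range cohort like HCP (22–37 years) can yield a low R² even at a low MAE, whereas the wide-range Cam-CAN cohort (18–88 years) supports a high R². This is consistent with the 0.258 vs. 0.914 values reported above.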
By integrating sMRI and dMRI through an attention-based model, our proposed approach enhances predictive accuracy and provides deeper insights into the relationship between brain aging and behavior.
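The abstract does not specify the fusion mechanism in detail; a minimal numpy sketch of one plausible reading, in which global structural-MRI tokens attend over local diffusion-metric tokens via cross-attention (all shapes, token counts, and weight initializations here are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(global_feats, local_feats, d_k=32):
    """Fuse global sMRI context with local dMRI detail via cross-attention.

    global_feats: (n_g, d) tokens from a structural-MRI encoder (hypothetical)
    local_feats:  (n_l, d) tokens from diffusion-metric maps (hypothetical)
    Queries come from the global stream and keys/values from the local stream,
    so each global token gathers the local diffusion detail most relevant to it.
    """
    d = global_feats.shape[1]
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = global_feats @ W_q
    K = local_feats @ W_k
    V = local_feats @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (n_g, n_l), rows sum to 1
    return attn @ V                                  # (n_g, d_k) fused tokens

g = rng.standard_normal((16, 64))  # e.g. 16 global tokens (assumed)
l = rng.standard_normal((49, 64))  # e.g. 49 local patch tokens (assumed)
fused = cross_attention_fuse(g, l)
print(fused.shape)  # (16, 32)
```

In a full model the fused tokens would feed a regression head that outputs a single predicted age; that head, like everything above, is an assumption for illustration.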
Journal introduction:
Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.