Predicting brain age with global-local attention network from multimodal neuroimaging data: Accuracy, generalizability, and behavioral associations

IF 7.0 | CAS Tier 2 (Medicine) | JCR Q1 (Biology) | Computers in Biology and Medicine | Pub Date: 2024-11-17 | DOI: 10.1016/j.compbiomed.2024.109411
SungHwan Moon, Junhyeok Lee, Won Hee Lee
Citations: 0

Abstract


Brain age, an emerging biomarker for brain diseases and aging, is typically predicted using single-modality T1-weighted structural MRI data. This study investigates the benefits of integrating structural MRI with diffusion MRI to enhance brain age prediction. We propose an attention-based deep learning model that fuses global-context information from structural MRI with local details from diffusion metrics. The model was evaluated using two large datasets: the Human Connectome Project (HCP, n = 1064, age 22–37) and the Cambridge Center for Aging and Neuroscience (Cam-CAN, n = 639, age 18–88). It was tested for generalizability and robustness on three independent datasets (n = 546, age 20–86), reproducibility on a test-retest dataset (n = 44, age 22–35), and longitudinal consistency (n = 129, age 46–92). We also examined the relationship between predicted brain age and behavioral measures. Results showed that the multimodal model improved prediction accuracy, achieving mean absolute errors (MAEs) of 2.44 years in the HCP dataset (sagittal plane) and 4.36 years in the Cam-CAN dataset (axial plane). The corresponding R2 values were 0.258 and 0.914, respectively, reflecting the model's ability to explain variance in the predictions across both datasets. Compared to single-modality models, the multimodal approach showed better generalization, reducing MAEs by 10–76 % and enhancing robustness by 22–82 %. While the multimodal model exhibited superior reproducibility, the sMRI model showed slightly better longitudinal consistency. Importantly, the multimodal model revealed unique associations between predicted brain age and behavioral measures, such as walking endurance and loneliness in the HCP dataset, which were not detected with chronological age alone. In the Cam-CAN dataset, brain age and chronological age exhibited similar correlations with behavioral measures. 
By integrating sMRI and dMRI through an attention-based model, our proposed approach enhances predictive accuracy and provides deeper insights into the relationship between brain aging and behavior.
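The two evaluation metrics quoted in the abstract, mean absolute error (MAE, in years) and the coefficient of determination (R2), can be illustrated with a minimal sketch. Note this is a generic definition of the metrics with made-up example numbers, not data or code from the paper:

```python
# Illustration of the brain-age evaluation metrics reported above.
# MAE measures the average absolute gap (in years) between chronological
# and predicted brain age; R^2 measures the fraction of age variance the
# predictions explain. The ages below are hypothetical examples.

def mae(y_true, y_pred):
    # Mean absolute error: average of |true - predicted|.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

chronological = [25.0, 30.0, 35.0, 60.0, 70.0]   # true ages (hypothetical)
predicted     = [27.0, 29.0, 38.0, 57.0, 72.0]   # model outputs (hypothetical)

print(f"MAE = {mae(chronological, predicted):.2f} years")  # 2.20 years
print(f"R^2 = {r2(chronological, predicted):.3f}")         # 0.983
```

A low MAE with a high R2 (as in Cam-CAN) indicates accurate predictions over a wide age range, whereas the lower R2 in HCP reflects its narrow 22-37 age span, where even small errors leave little variance to explain.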
Source journal: Computers in Biology and Medicine (Engineering & Technology — Biomedical Engineering)
CiteScore: 11.70
Self-citation rate: 10.40%
Annual article output: 1086
Review turnaround: 74 days
Aims & scope: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.