Multi-scale multimodal deep learning framework for Alzheimer's disease diagnosis

IF 7.0 | CAS Tier 2 (Medicine) | JCR Q1 (Biology) | Computers in Biology and Medicine | Pub Date: 2024-11-22 | DOI: 10.1016/j.compbiomed.2024.109438
Mohammed Abdelaziz, Tianfu Wang, Waqas Anwaar, Ahmed Elazab
{"title":"用于阿尔茨海默病诊断的多尺度多模态深度学习框架。","authors":"Mohammed Abdelaziz ,&nbsp;Tianfu Wang ,&nbsp;Waqas Anwaar ,&nbsp;Ahmed Elazab","doi":"10.1016/j.compbiomed.2024.109438","DOIUrl":null,"url":null,"abstract":"<div><div>Multimodal neuroimaging data, including magnetic resonance imaging (MRI) and positron emission tomography (PET), provides complementary information about the brain that can aid in Alzheimer's disease (AD) diagnosis. However, most existing deep learning methods still rely on patch-based extraction from neuroimaging data, which typically yields suboptimal performance due to its isolation from the subsequent network and does not effectively capture the varying scales of structural changes in the cerebrum. Moreover, these methods often simply concatenate multimodal data, ignoring the interactions between them that can highlight discriminative regions and thereby improve the diagnosis of AD. To tackle these issues, we develop a multimodal and multi-scale deep learning model that effectively leverages the interaction between the multimodal and multiscale of the neuroimaging data. First, we employ a convolutional neural network to embed each scale of the multimodal images. Second, we propose multimodal scale fusion mechanisms that utilize both multi-head self-attention and multi-head cross-attention, which capture global relations among the embedded features and weigh each modality's contribution to another, and hence enhancing feature extraction and interaction between each scale of MRI and PET images. Third, we introduce a cross-modality fusion module that includes a multi-head cross-attention to fuse MRI and PET data at different scales and promote global features from the previous attention layers. Finally, all the features from every scale are fused to discriminate between the different stages of AD. We evaluated our proposed method on the ADNI dataset, and the results show that our model achieves better performance than the state-of-the-art methods.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"184 ","pages":"Article 109438"},"PeriodicalIF":7.0000,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-scale multimodal deep learning framework for Alzheimer's disease diagnosis\",\"authors\":\"Mohammed Abdelaziz ,&nbsp;Tianfu Wang ,&nbsp;Waqas Anwaar ,&nbsp;Ahmed Elazab\",\"doi\":\"10.1016/j.compbiomed.2024.109438\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multimodal neuroimaging data, including magnetic resonance imaging (MRI) and positron emission tomography (PET), provides complementary information about the brain that can aid in Alzheimer's disease (AD) diagnosis. However, most existing deep learning methods still rely on patch-based extraction from neuroimaging data, which typically yields suboptimal performance due to its isolation from the subsequent network and does not effectively capture the varying scales of structural changes in the cerebrum. Moreover, these methods often simply concatenate multimodal data, ignoring the interactions between them that can highlight discriminative regions and thereby improve the diagnosis of AD. To tackle these issues, we develop a multimodal and multi-scale deep learning model that effectively leverages the interaction between the multimodal and multiscale of the neuroimaging data. 
First, we employ a convolutional neural network to embed each scale of the multimodal images. Second, we propose multimodal scale fusion mechanisms that utilize both multi-head self-attention and multi-head cross-attention, which capture global relations among the embedded features and weigh each modality's contribution to another, and hence enhancing feature extraction and interaction between each scale of MRI and PET images. Third, we introduce a cross-modality fusion module that includes a multi-head cross-attention to fuse MRI and PET data at different scales and promote global features from the previous attention layers. Finally, all the features from every scale are fused to discriminate between the different stages of AD. We evaluated our proposed method on the ADNI dataset, and the results show that our model achieves better performance than the state-of-the-art methods.</div></div>\",\"PeriodicalId\":10578,\"journal\":{\"name\":\"Computers in biology and medicine\",\"volume\":\"184 \",\"pages\":\"Article 109438\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2024-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in biology and medicine\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0010482524015233\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in biology and medicine","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010482524015233","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal neuroimaging data, including magnetic resonance imaging (MRI) and positron emission tomography (PET), provides complementary information about the brain that can aid in Alzheimer's disease (AD) diagnosis. However, most existing deep learning methods still rely on patch-based extraction from neuroimaging data, which typically yields suboptimal performance because the extraction step is isolated from the subsequent network and fails to capture the varying scales of structural change in the cerebrum. Moreover, these methods often simply concatenate multimodal data, ignoring the interactions between modalities that can highlight discriminative regions and thereby improve AD diagnosis. To tackle these issues, we develop a multimodal, multi-scale deep learning model that effectively exploits the interactions between the modalities and scales of the neuroimaging data.

First, we employ a convolutional neural network to embed each scale of the multimodal images. Second, we propose multimodal scale-fusion mechanisms that use both multi-head self-attention and multi-head cross-attention; these capture global relations among the embedded features and weigh each modality's contribution to the other, thereby enhancing feature extraction and the interaction between each scale of the MRI and PET images.
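The paper's implementation is not reproduced on this page, but the second step can be made concrete. Below is a minimal PyTorch sketch assuming CNN features shaped as token sequences (batch, tokens, embed_dim); the module name `ScaleFusionBlock`, the bidirectional attention layout, and the residual/LayerNorm placement are illustrative guesses, not the authors' design.

```python
# A minimal sketch (not the authors' code) of a multimodal scale-fusion block:
# multi-head self-attention captures global relations within each modality's
# embedded features, and multi-head cross-attention weighs each modality's
# contribution to the other. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class ScaleFusionBlock(nn.Module):
    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Self-attention within each modality's token sequence.
        self.self_attn_mri = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.self_attn_pet = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Cross-attention in both directions: each modality queries the other.
        self.cross_attn_mri = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.cross_attn_pet = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm_mri = nn.LayerNorm(embed_dim)
        self.norm_pet = nn.LayerNorm(embed_dim)

    def forward(self, mri: torch.Tensor, pet: torch.Tensor):
        # mri, pet: (batch, tokens, embed_dim) CNN embeddings of one scale.
        mri_sa, _ = self.self_attn_mri(mri, mri, mri)  # global relations in MRI
        pet_sa, _ = self.self_attn_pet(pet, pet, pet)  # global relations in PET
        # MRI queries PET and vice versa, so each modality's features are
        # re-weighted by what the other modality finds discriminative.
        mri_ca, _ = self.cross_attn_mri(mri_sa, pet_sa, pet_sa)
        pet_ca, _ = self.cross_attn_pet(pet_sa, mri_sa, mri_sa)
        return self.norm_mri(mri + mri_ca), self.norm_pet(pet + pet_ca)
```

Running attention in both directions is one plausible reading of "weigh each modality's contribution to another": each stream is re-weighted by the other before the two are fused downstream.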
Third, we introduce a cross-modality fusion module that applies multi-head cross-attention to fuse the MRI and PET data at different scales and to promote the global features from the previous attention layers.
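Again as a hedged sketch rather than the published module: one way to realize such cross-modality fusion is a single multi-head cross-attention in which one modality queries the other, followed by a projection of the concatenated streams. The class name `CrossModalityFusion` and the choice of MRI as the query stream are assumptions.

```python
# A hypothetical cross-modality fusion module for one scale: cross-attention
# fuses the MRI and PET streams into a joint representation, building on
# ("promoting") the globally attended features from the previous block.
import torch
import torch.nn as nn


class CrossModalityFusion(nn.Module):
    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj = nn.Sequential(
            nn.LayerNorm(2 * embed_dim),
            nn.Linear(2 * embed_dim, embed_dim),
            nn.GELU(),
        )

    def forward(self, mri: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        # mri, pet: (batch, tokens, embed_dim) outputs of the scale-fusion block.
        fused, _ = self.cross_attn(mri, pet, pet)      # MRI queries PET
        joint = torch.cat([mri + fused, pet], dim=-1)  # residual + concat
        return self.proj(joint)                        # (batch, tokens, embed_dim)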
Finally, the features from every scale are fused to discriminate between the different stages of AD. We evaluated the proposed method on the ADNI dataset, and the results show that our model outperforms state-of-the-art methods.
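To show how the four steps could compose, here is a hypothetical end-to-end skeleton that reuses `ScaleFusionBlock` and `CrossModalityFusion` from the sketches above. The small 3D CNN, the three-scale input, and the three-class output (e.g. CN / MCI / AD) are assumptions; the abstract does not specify these details.

```python
# Hypothetical end-to-end skeleton, a sanity-checkable sketch rather than a
# reconstruction of the evaluated model.
import torch
import torch.nn as nn


def conv3d_encoder(embed_dim: int = 256) -> nn.Module:
    # Toy 3D CNN mapping a (B, 1, D, H, W) volume to a (B, embed_dim, 64)
    # token grid; step 1 in the abstract.
    return nn.Sequential(
        nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv3d(32, embed_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d((4, 4, 4)),  # fixed 4x4x4 token grid per scale
        nn.Flatten(start_dim=2),          # (B, embed_dim, 64)
    )


class MultiScaleMultimodalNet(nn.Module):
    def __init__(self, num_scales: int = 3, embed_dim: int = 256, num_classes: int = 3):
        super().__init__()
        self.mri_encoders = nn.ModuleList(conv3d_encoder(embed_dim) for _ in range(num_scales))
        self.pet_encoders = nn.ModuleList(conv3d_encoder(embed_dim) for _ in range(num_scales))
        self.scale_fusion = nn.ModuleList(ScaleFusionBlock(embed_dim) for _ in range(num_scales))
        self.modality_fusion = nn.ModuleList(CrossModalityFusion(embed_dim) for _ in range(num_scales))
        self.classifier = nn.Linear(num_scales * embed_dim, num_classes)

    def forward(self, mri_scales, pet_scales):
        # mri_scales, pet_scales: lists of (B, 1, D, H, W) volumes, one per scale.
        per_scale = []
        for i, (m, p) in enumerate(zip(mri_scales, pet_scales)):
            fm = self.mri_encoders[i](m).transpose(1, 2)  # (B, tokens, embed_dim)
            fp = self.pet_encoders[i](p).transpose(1, 2)
            fm, fp = self.scale_fusion[i](fm, fp)         # step 2: scale fusion
            joint = self.modality_fusion[i](fm, fp)       # step 3: cross-modality fusion
            per_scale.append(joint.mean(dim=1))           # pool tokens at this scale
        # Step 4: fuse all scales and classify the disease stage.
        return self.classifier(torch.cat(per_scale, dim=-1))


# Smoke test with three hypothetical scales of single-channel volumes.
model = MultiScaleMultimodalNet()
mri = [torch.randn(2, 1, s, s, s) for s in (96, 64, 32)]
pet = [torch.randn(2, 1, s, s, s) for s in (96, 64, 32)]
print(model(mri, pet).shape)  # torch.Size([2, 3])
```

Mean-pooling the tokens and concatenating across scales is only one plausible reading of "all the features from every scale are fused"; the paper may use a learned fusion instead.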
Source journal: Computers in Biology and Medicine (CAS category: Engineering Technology - Biomedical Engineering)
CiteScore: 11.70
Self-citation rate: 10.40%
Annual publications: 1086
Review time: 74 days
About the journal: Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.
Latest articles in this journal:
- An adaptive enhanced human memory algorithm for multi-level image segmentation for pathological lung cancer images.
- Integrating multimodal learning for improved vital health parameter estimation.
- Riemannian manifold-based geometric clustering of continuous glucose monitoring to improve personalized diabetes management.
- Transformative artificial intelligence in gastric cancer: Advancements in diagnostic techniques.
- Artificial intelligence and deep learning algorithms for epigenetic sequence analysis: A review for epigeneticists and AI experts.