Deep learning based joint fusion approach to exploit anatomical and functional brain information in autism spectrum disorders.

Brain Informatics (Q1, Computer Science) · Pub Date: 2024-01-09 · DOI: 10.1186/s40708-023-00217-4
Sara Saponaro, Francesca Lizzi, Giacomo Serra, Francesca Mainas, Piernicola Oliva, Alessia Giuliano, Sara Calderoni, Alessandra Retico
Brain Informatics, vol. 11, no. 1, p. 2 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10776521/pdf/
Citations: 0

Abstract

Background: The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether the combination of structural and functional MRI might improve the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD).

Material and methods: We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects aged between 5 and 40 years, including 680 subjects with ASD and 703 TD, from 35 different acquisition sites. We extracted morphometric and functional brain features from MRI scans with the FreeSurfer and CPAC analysis packages, respectively. Then, due to the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model, consisting of a neural network that generates a fixed-length feature representation of the data of each modality (FR-NN) and a dense neural network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple-source data integration. The main advantage of the latter is that the loss is propagated back to the FR-NN during training, thus creating informative feature representations for each data modality. Then, a C-NN, whose number of layers and neurons per layer are optimized during model training, performs the ASD-TD discrimination. The performance was evaluated by computing the area under the receiver operating characteristic curve (AUC) within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified with the SHAP explainability framework.
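The joint fusion architecture described above can be sketched as a minimal NumPy forward pass: one encoder (FR-NN) per modality producing a fixed-length representation, concatenation, and a classifier head (C-NN). The feature counts, layer widths, and the single-layer classifier here are illustrative assumptions, not the authors' actual optimized architecture, and the weights are random rather than trained; in the real model the classification loss backpropagates through the concatenation into both encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ModalityEncoder:
    """FR-NN sketch: maps one modality's features to a fixed-length representation."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)

    def __call__(self, x):
        return relu(x @ self.W + self.b)

# Hypothetical feature counts: morphometric (FreeSurfer) and functional (CPAC).
struct_enc = ModalityEncoder(220, 32)
func_enc = ModalityEncoder(5000, 32)

# C-NN reduced to a single dense sigmoid unit for brevity.
W_c = rng.normal(0.0, 0.1, (64, 1))
b_c = 0.0

def joint_fusion_forward(x_struct, x_func):
    # Concatenate the two fixed-length representations and classify.
    # In joint fusion, training would propagate the classification loss
    # back through this concatenation into both encoders.
    z = np.concatenate([struct_enc(x_struct), func_enc(x_func)])
    return sigmoid(z @ W_c + b_c)[0]  # P(ASD)

p = joint_fusion_forward(rng.normal(size=220), rng.normal(size=5000))
```

The design point of joint fusion (versus late fusion of separately trained models) is exactly the shared gradient path: each encoder learns representations that are useful in combination with the other modality, not in isolation.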

Results: AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or only functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that, in individuals with ASD, brain changes tend to occur in regions belonging to the Default Mode Network and to the Social Brain.
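The AUC figures above have a simple probabilistic reading: the probability that a randomly chosen ASD subject receives a higher classifier score than a randomly chosen TD subject. A self-contained sketch of that Mann-Whitney formulation (pure Python, illustrative data only):

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation gives AUC = 1.0; a chance-level classifier gives ~0.5.
assert roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]) == 1.0
```

Under this reading, the joint fusion result of 0.78 means that about 78% of ASD/TD pairs are ranked correctly by the fused model's score.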

Conclusions: Our results demonstrate that the multimodal joint fusion approach outperforms classification based on data from a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.

Source journal: Brain Informatics (Computer Science — Computer Science Applications)
CiteScore: 9.50 · Self-citation rate: 0.00% · Articles per year: 27 · Review time: 13 weeks
Journal overview: Brain Informatics is an international, peer-reviewed, interdisciplinary open-access journal published under the SpringerOpen brand, which provides a unique platform for researchers and practitioners to disseminate original research on computational and informatics technologies related to the brain. The journal addresses the computational, cognitive, physiological, biological, physical, ecological, and social perspectives of brain informatics. It also welcomes emerging information technologies and advanced neuroimaging technologies, such as big data analytics and interactive knowledge discovery related to various large-scale brain studies and their applications. The journal publishes high-quality original research papers, brief reports, and critical reviews in all theoretical, technological, clinical, and interdisciplinary studies that make up the field of brain informatics and its applications in brain-machine intelligence, brain-inspired intelligent systems, mental health, brain disorders, etc. The scope of papers includes the following five tracks: Track 1: Cognitive and Computational Foundations of Brain Science; Track 2: Human Information Processing Systems; Track 3: Brain Big Data Analytics, Curation and Management; Track 4: Informatics Paradigms for Brain and Mental Health Research; Track 5: Brain-Machine Intelligence and Brain-Inspired Computing.