
Latest publications — Machine learning in clinical neuroimaging : 7th international workshop, MLCN 2024, held in conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, proceedings. MLCN (Workshop) (7th : 2024 : Marrakesh, Morocco)

SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis.
Camila González, Yanis Miraoui, Yiran Fan, Ehsan Adeli, Kilian M Pohl

Deep learning can help uncover patterns in resting-state functional Magnetic Resonance Imaging (rs-fMRI) associated with psychiatric disorders and personal traits. Yet the problem of interpreting deep learning findings is rarely more evident than in fMRI analyses, as the data is sensitive to scanning effects and inherently difficult to visualize. We propose a simple approach to mitigate these challenges grounded on sparsification and self-supervision. Instead of extracting post-hoc feature attributions to uncover functional connections that are important to the target task, we identify a small subset of highly informative connections during training and occlude the rest. To this end, we jointly train a (1) sparse input mask, (2) variational autoencoder (VAE), and (3) downstream classifier in an end-to-end fashion. While we need a portion of labeled samples to train the classifier, we optimize the sparse mask and VAE with unlabeled data from additional acquisition sites, retaining only the input features that generalize well. We evaluate our method - Sparsely Reconstructed Graphs (SpaRG) - on the public ABIDE dataset for the task of sex classification, training with labeled cases from 18 sites and adapting the model to two additional out-of-distribution sites with a portion of unlabeled samples. For a relatively coarse parcellation (64 regions), SpaRG utilizes only 1% of the original connections while improving the classification accuracy across domains. Our code can be found at www.github.com/yanismiraoui/SpaRG.
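The core idea — a learnable sparse mask that occludes uninformative functional connections before they reach the VAE and classifier — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the variable names, the sigmoid mask parameterization, the L1 penalty form, and the 0.5 keep-threshold are all assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

n_regions = 64                              # coarse parcellation from the paper
n_edges = n_regions * (n_regions - 1) // 2  # upper-triangular connectivity entries

# hypothetical rs-fMRI connectivity vector for one subject
x = rng.standard_normal(n_edges)

# mask logits; in SpaRG these are trained jointly with the VAE and classifier
mask_logits = rng.standard_normal(n_edges)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mask = sigmoid(mask_logits)          # soft mask in (0, 1)
masked_x = mask * x                  # occlude the remaining connections

# an L1 penalty on the mask pushes most entries toward zero during training
sparsity_loss = np.abs(mask).sum()

# after training, keep only connections whose mask value exceeds a threshold;
# the paper reports retaining roughly 1% of edges at this parcellation
kept = mask > 0.5
```

The unlabeled out-of-distribution data would enter through the VAE reconstruction loss on `masked_x`, so the mask retains only features that reconstruct well across sites.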

DOI: 10.1007/978-3-031-78761-4_5 · Proceedings vol. 15266, pp. 46-56 · Published 2025-01-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11694515/pdf/
Citations: 0
Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning.
Yixin Wang, Wei Peng, Yu Zhang, Ehsan Adeli, Qingyu Zhao, Kilian M Pohl

Many longitudinal neuroimaging studies aim to improve the understanding of brain aging and diseases by studying the dynamic interactions between brain function and cognition. Doing so requires accurately encoding their multidimensional relationship while accounting for individual variability over time. For this purpose, we propose an unsupervised learning model, Contrastive Learning-based Graph Generalized Canonical Correlation Analysis (CoGraCa), that encodes their relationship via Graph Attention Networks and generalized Canonical Correlation Analysis. To create brain-cognition fingerprints reflecting the unique neural and cognitive phenotype of each person, the model also relies on individualized and multimodal contrastive learning. We apply CoGraCa to a longitudinal dataset of healthy individuals consisting of resting-state functional MRI and cognitive measures acquired at multiple visits for each participant. The generated fingerprints effectively capture significant individual differences and outperform current single-modal and CCA-based multimodal models in identifying sex and age. More importantly, our encoding provides interpretable interactions between those two modalities.
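The generalized CCA component at the heart of CoGraCa finds a shared representation that is maximally correlated with every modality-specific view. A minimal sketch of the classical MAXVAR formulation of GCCA is below; the paper additionally feeds graph-attention embeddings into this step and adds contrastive losses, none of which is shown here, and the dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, d2, k = 100, 20, 15, 3  # subjects, view dims, shared components

# hypothetical per-subject features: a brain-function view and a cognition view
X1 = rng.standard_normal((n, d1))
X2 = rng.standard_normal((n, d2))

def center(X):
    return X - X.mean(axis=0)

views = [center(X1), center(X2)]

# MAXVAR GCCA: the shared representation G maximizes total correlation with
# all views; it is spanned by top eigenvectors of the summed projection matrices
M = sum(X @ np.linalg.pinv(X.T @ X) @ X.T for X in views)
eigvals, eigvecs = np.linalg.eigh(M)
G = eigvecs[:, -k:]                       # top-k shared components, shape (n, k)

# per-view canonical weights mapping each view's features onto the shared space
W = [np.linalg.pinv(X) @ G for X in views]
```

In a fingerprinting setting, each subject's row of `G` (or its per-visit analogue) would serve as the shared brain-cognition embedding.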

DOI: 10.1007/978-3-031-78761-4_3 · Proceedings vol. 15266, pp. 24-34 · Published 2025-01-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11772010/pdf/
Citations: 0
ProxiMO: Proximal Multi-operator Networks for Quantitative Susceptibility Mapping.
Shmuel Orenstein, Zhenghan Fang, Hyeong-Geol Shin, Peter van Zijl, Xu Li, Jeremias Sulam

Quantitative Susceptibility Mapping (QSM) is a technique that derives tissue magnetic susceptibility distributions from phase measurements obtained through Magnetic Resonance (MR) imaging. This involves solving an ill-posed dipole inversion problem, however, so time-consuming and cumbersome data acquisition from several distinct head orientations becomes necessary to obtain an accurate solution. Most recent (supervised) deep learning methods for single-phase QSM require training data obtained via multiple orientations. In this work, we present an alternative unsupervised learning approach, named ProxiMO (Proximal Multi-Operator), that can efficiently train on single-orientation measurement data alone by combining Learned Proximal Convolutional Neural Networks (LP-CNN) with multi-operator imaging (MOI). This integration enables LP-CNN training for QSM on single-phase data without ground-truth reconstructions. We further introduce a semi-supervised variant, which boosts reconstruction performance compared to traditional supervised training. Extensive experiments on multicenter datasets illustrate the advantage of unsupervised training and the superiority of the proposed approach for QSM reconstruction. Code is available at https://github.com/shmuelor/ProxiMO.
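The dipole inversion that ProxiMO unrolls can be sketched as proximal gradient descent on the k-space dipole model. In the paper the proximal step is a learned CNN (the LP-CNN); the sketch below substitutes simple soft-thresholding so it runs standalone, and the grid size, step size, and threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (16, 16, 16)

# unit dipole kernel in k-space: D(k) = 1/3 - kz^2 / |k|^2
kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(s) for s in shape), indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                    # avoid division by zero at the DC term
D = 1.0 / 3.0 - kz**2 / k2
D[0, 0, 0] = 0.0

def forward(chi):
    """Dipole convolution: susceptibility -> tissue phase (product in k-space)."""
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

# simulated ground-truth susceptibility and the phase it would produce
chi_true = rng.standard_normal(shape)
phi = forward(chi_true)

def soft_threshold(x, lam):
    # stand-in proximal operator; ProxiMO learns this step with a CNN
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# proximal gradient iterations on 0.5 * ||forward(chi) - phi||^2
chi = np.zeros(shape)
eta, lam = 1.0, 1e-3
for _ in range(50):
    grad = forward(forward(chi) - phi)   # D is real, so the operator is self-adjoint
    chi = soft_threshold(chi - eta * grad, lam)
```

The zeros of `D` along the magic-angle cone are exactly what makes single-orientation inversion ill-posed; multi-operator imaging exploits several such operators so their null spaces do not coincide.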

DOI: 10.1007/978-3-031-78761-4_2 · Proceedings vol. 15266, pp. 13-23 · Published 2025-01-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11705005/pdf/
Citations: 0