Fengling Hu, Alfredo Lucas, Andrew A. Chen, Kyle Coleman, Hannah Horng, Raymond W. S. Ng, Nicholas J. Tustison, Kathryn A. Davis, Haochang Shou, Mingyao Li, Russell T. Shinohara, The Alzheimer's Disease Neuroimaging Initiative
Neuroimaging data acquired using multiple scanners or protocols are increasingly available. However, such data exhibit technical artifacts across batches, which introduce confounding and decrease reproducibility. This is especially true when multi-batch data are analyzed using complex downstream models, which are more likely to pick up on and implicitly incorporate batch-related information. Previously proposed image harmonization methods have sought to remove these batch effects; however, batch effects remain detectable in the data after these methods are applied. We present DeepComBat, a deep learning harmonization method based on a conditional variational autoencoder and the ComBat method. DeepComBat combines the strengths of statistical and deep learning methods to account for multivariate relationships between features while relaxing the strong assumptions made by previous deep learning harmonization methods. As a result, DeepComBat can perform multivariate harmonization while preserving data structure and avoiding the introduction of synthetic artifacts. We apply this method to cortical thickness measurements from a cognitive-aging cohort and show that DeepComBat qualitatively and quantitatively outperforms existing methods in removing batch effects while preserving biological heterogeneity. Additionally, DeepComBat provides a new perspective for statistically motivated deep learning harmonization methods.
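To make the batch-effect idea concrete, the following is a minimal sketch of location-scale harmonization in the spirit of ComBat (the statistical component DeepComBat builds on). This is an illustrative simplification, not the authors' method: real ComBat additionally preserves biological covariates and pools batch estimates across features with empirical Bayes, and DeepComBat applies the correction in the latent space of a conditional variational autoencoder. The function name `simple_combat` is hypothetical.

```python
import numpy as np

def simple_combat(X, batch):
    """Simplified location-scale batch correction in the spirit of ComBat.

    X: (n_subjects, n_features) feature matrix (e.g., cortical thickness).
    batch: (n_subjects,) integer batch labels.

    For each batch, remove that batch's per-feature mean and standard
    deviation, then restore the pooled (across-batch) mean and scale.
    """
    grand_mean = X.mean(axis=0)
    pooled_sd = X.std(axis=0, ddof=1)
    X_harm = np.empty_like(X, dtype=float)
    for b in np.unique(batch):
        idx = batch == b
        mu_b = X[idx].mean(axis=0)          # batch location effect
        sd_b = X[idx].std(axis=0, ddof=1)   # batch scale effect
        X_harm[idx] = (X[idx] - mu_b) / sd_b * pooled_sd + grand_mean
    return X_harm
```

After this adjustment, every batch shares the same per-feature mean and scale, which is exactly the univariate guarantee DeepComBat extends to multivariate structure.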
DeepComBat: A statistically motivated, hyperparameter-robust, deep learning approach to harmonization of neuroimaging data. Human Brain Mapping. Published 2024-07-26. DOI: 10.1002/hbm.26708. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11273293/pdf/
Krishna Pusuluri, Zening Fu, Robyn Miller, Godfrey Pearlson, Peter Kochunov, Theo G. M. Van Erp, Armin Iraji, Vince D. Calhoun
Despite increasing interest in the dynamics of functional brain networks, most studies focus on the changing relationships over time between spatially static networks or regions. Here we propose an approach to study dynamic spatial brain networks in human resting-state functional magnetic resonance imaging (rsfMRI) data and evaluate the temporal changes in the volumes of these 4D networks. Our results show significant volumetric coupling (i.e., synchronized shrinkage and growth) between networks during the scan, which we refer to as dynamic spatial network connectivity (dSNC). We find that several features of these dynamic spatial brain networks are associated with cognition: higher dynamic variability within networks and higher volumetric coupling between network pairs are positively associated with cognitive performance. We show that these networks are modulated differently in individuals with schizophrenia versus typical controls, resulting in network growth or shrinkage as well as an altered focus of activity within a network. Schizophrenia also shows lower spatial dynamical variability in several networks and lower volumetric coupling between pairs of networks, upholding the role of dynamic spatial brain networks in the cognitive impairment seen in schizophrenia. Our data show evidence for the importance of studying the typically overlooked voxel-wise changes within and between brain networks.
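The core quantity here, volumetric coupling, can be sketched as the correlation between network volume time series. The sketch below assumes networks are given as time-varying spatial weight maps and uses a simple fixed threshold to define volume; the actual paper derives networks from rsfMRI decomposition, and the function names and threshold are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def network_volumes(maps, threshold=0.5):
    """Volume time series for each dynamic spatial network.

    maps: (n_networks, n_timepoints, n_voxels) spatial network weights.
    Returns (n_networks, n_timepoints): voxel count above threshold
    at each time point, i.e., each network's volume over the scan.
    """
    return (maps > threshold).sum(axis=2)

def volumetric_coupling(maps, threshold=0.5):
    """dSNC-style coupling: pairwise correlation of volume time series.

    Positive entries indicate networks that grow and shrink together;
    negative entries indicate anti-coupled volume changes.
    """
    vols = network_volumes(maps, threshold).astype(float)
    return np.corrcoef(vols)
```

Group differences such as those reported for schizophrenia would then be tested on these coupling matrices (and on the variability of each volume time series) across subjects.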
4D dynamic spatial brain networks at rest linked to cognition show atypical variability and coupling in schizophrenia. Human Brain Mapping. Published 2024-07-24. DOI: 10.1002/hbm.26773. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267451/pdf/
James K. Ruffle, Henry Watkins, Robert J. Gray, Harpreet Hyare, Michel Thiebaut de Schotten, Parashkev Nachev
The architecture of the brain is too complex to be intuitively surveyable without the use of compressed representations that project its variation into a compact, navigable space. The task is especially challenging with high-dimensional data, such as gene expression, where the joint complexity of anatomical and transcriptional patterns demands maximum compression. The established practice is to use standard principal component analysis (PCA), whose computational felicity is offset by limited expressivity, especially at great compression ratios. Employing whole-brain, voxel-wise Allen Brain Atlas transcription data, here we systematically compare compressed representations based on the most widely supported linear and non-linear methods—PCA, kernel PCA, non-negative matrix factorisation (NMF), t-distributed stochastic neighbour embedding (t-SNE), uniform manifold approximation and projection (UMAP), and deep auto-encoding—quantifying reconstruction fidelity, anatomical coherence, and predictive utility across signalling, microstructural, and metabolic targets, drawn from large-scale open-source MRI and PET data. We show that deep auto-encoders yield superior representations across all metrics of performance and target domains, supporting their use as the reference standard for representing transcription patterns in the human brain.
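One of the metrics compared above, reconstruction fidelity at a given compression ratio, can be sketched for the linear baseline as rank-k PCA reconstruction error. This is an illustrative sketch, not the paper's evaluation code; a deep auto-encoder benchmark would replace the linear projection with learned non-linear encode/decode networks sharing the same bottleneck width k.

```python
import numpy as np

def pca_reconstruction_error(X, k):
    """Mean squared error of reconstructing X from its rank-k PCA.

    X: (n_samples, n_features) data matrix (e.g., voxel-wise
    transcription profiles). Computes PCA via SVD of the centred
    data, keeps the top k components, and measures how much of the
    original signal the compressed representation recovers.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k reconstruction
    return np.mean((Xc - X_hat) ** 2)
```

Plotting this error against k (and against the matching auto-encoder error) exposes the expressivity gap at high compression ratios that motivates the paper's comparison.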
Compressed representation of brain genetic transcription. Human Brain Mapping. Published 2024-07-24. DOI: 10.1002/hbm.26795. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267301/pdf/