With a growing amount of observational data on the dynamics of many complex systems, revealing the mechanisms that underlie these dynamics has become fundamentally important in fields ranging from climate and finance to ecology and neuroscience. The underlying mechanisms are commonly encoded in a network structure, e.g., capturing how constituents interact with each other to produce emergent behavior. Here, we ask whether a good network reconstruction implies a good dynamics prediction. The answer depends strongly on the nature of the supplied (observed) dynamics sequences measured on the complex system. When the dynamics are not chaotic, network reconstruction implies dynamics prediction. In contrast, even if the network can be accurately reconstructed from a chaotic time series (chaos meaning that many unstable dynamical states coexist), predicting the future dynamics can become impossible, because at some point the prediction error is amplified. We explain this using dynamical mean-field theory on a toy model of random recurrent neural networks.
{"title":"Network reconstruction may not mean dynamics prediction","authors":"Zhendong Yu, Haiping Huang","doi":"arxiv-2409.04240","DOIUrl":"https://doi.org/arxiv-2409.04240","url":null,"abstract":"With an increasing amount of observations on the dynamics of many complex\u0000systems, it is required to reveal the underlying mechanisms behind these\u0000complex dynamics, which is fundamentally important in many scientific fields\u0000such as climate, financial, ecological, and neural systems. The underlying\u0000mechanisms are commonly encoded into network structures, e.g., capturing how\u0000constituents interact with each other to produce emergent behavior. Here, we\u0000address whether a good network reconstruction suggests a good dynamics\u0000prediction. The answer is quite dependent on the nature of the supplied\u0000(observed) dynamics sequences measured on the complex system. When the dynamics\u0000are not chaotic, network reconstruction implies dynamics prediction. In\u0000contrast, even if a network can be well reconstructed from the chaotic time\u0000series (chaos means that many unstable dynamics states coexist), the prediction\u0000of the future dynamics can become impossible as at some future point the\u0000prediction error will be amplified. This is explained by using dynamical\u0000mean-field theory on a toy model of random recurrent neural networks.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer's disease (AD) is a neurodegenerative disorder marked by memory loss and cognitive decline, making early detection vital for timely intervention. However, early diagnosis is challenging due to the heterogeneous presentation of symptoms. Resting-state fMRI (rs-fMRI) captures spontaneous brain activity and functional connectivity, which are known to be disrupted in AD and mild cognitive impairment (MCI). Traditional methods, such as Pearson's correlation, have been used to calculate association matrices, but these approaches often overlook the dynamic and non-stationary nature of brain activity. In this study, we introduce a novel method that integrates discrete wavelet transform (DWT) and graph theory to model the dynamic behavior of brain networks. By decomposing rs-fMRI signals using DWT, our approach captures the time-frequency representation of brain activity, allowing for a more nuanced analysis of the underlying network dynamics. Graph theory provides a robust mathematical framework to analyze these complex networks, while machine learning is employed to automate the discrimination of different stages of AD based on learned patterns from different frequency bands. We applied our method to a dataset of rs-fMRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, demonstrating its potential as an early diagnostic tool for AD and for monitoring disease progression. Our statistical analysis identifies specific brain regions and connections that are affected in AD and MCI, at different frequency bands, offering deeper insights into the disease's impact on brain function.
{"title":"Study of Brain Network in Alzheimers Disease Using Wavelet-Based Graph Theory Method","authors":"Ali Khazaee, Abdolreza Mohammadi, Ruairi Oreally","doi":"arxiv-2409.04072","DOIUrl":"https://doi.org/arxiv-2409.04072","url":null,"abstract":"Alzheimer's disease (AD) is a neurodegenerative disorder marked by memory\u0000loss and cognitive decline, making early detection vital for timely\u0000intervention. However, early diagnosis is challenging due to the heterogeneous\u0000presentation of symptoms. Resting-state fMRI (rs-fMRI) captures spontaneous\u0000brain activity and functional connectivity, which are known to be disrupted in\u0000AD and mild cognitive impairment (MCI). Traditional methods, such as Pearson's\u0000correlation, have been used to calculate association matrices, but these\u0000approaches often overlook the dynamic and non-stationary nature of brain\u0000activity. In this study, we introduce a novel method that integrates discrete\u0000wavelet transform (DWT) and graph theory to model the dynamic behavior of brain\u0000networks. By decomposing rs-fMRI signals using DWT, our approach captures the\u0000time-frequency representation of brain activity, allowing for a more nuanced\u0000analysis of the underlying network dynamics. Graph theory provides a robust\u0000mathematical framework to analyze these complex networks, while machine\u0000learning is employed to automate the discrimination of different stages of AD\u0000based on learned patterns from different frequency bands. We applied our method\u0000to a dataset of rs-fMRI images from the Alzheimer's Disease Neuroimaging\u0000Initiative (ADNI) database, demonstrating its potential as an early diagnostic\u0000tool for AD and for monitoring disease progression. Our statistical analysis\u0000identifies specific brain regions and connections that are affected in AD and\u0000MCI, at different frequency bands, offering deeper insights into the disease's\u0000impact on brain function.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"7 Suppl 8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tommaso Gili, Bryant Avila, Luca Pasquini, Andrei Holodny, David Phillips, Paolo Boldi, Andrea Gabrielli, Guido Caldarelli, Manuel Zimmer, Hernán A. Makse
In his book 'A Beautiful Question', physicist Frank Wilczek argues that symmetry is 'nature's deep design', governing the behavior of the universe, from the smallest particles to the largest structures. While symmetry is a cornerstone of physics, it has not yet found widespread applicability in describing biological systems, particularly the human brain. In this context, we study the human brain network engaged in language and explore the relationship between the structural connectivity (connectome or structural network) and the emergent synchronization of the mesoscopic regions of interest (functional network). We explain this relationship through a different kind of symmetry than physical symmetry, one derived from the categorical notion of Grothendieck fibrations. This introduces a new understanding of the human brain by proposing a local symmetry theory of the connectome, which accounts for how the structure of the brain's network determines its coherent activity. Among the allowed patterns of structural connectivity, synchronization elicits different symmetry subsets according to the functional engagement of the brain. We show that the resting state is a particular realization of the cerebral synchronization pattern, characterized by a fibration symmetry that is broken in the transition from rest to language. Our findings suggest that the brain's network symmetry at the local level determines its coherent function, and that this relationship can be understood from theoretical principles.
{"title":"Fibration symmetry-breaking supports functional transitions in a brain network engaged in language","authors":"Tommaso Gili, Bryant Avila, Luca Pasquini, Andrei Holodny, David Phillips, Paolo Boldi, Andrea Gabrielli, Guido Caldarelli, Manuel Zimmer, Hernán A. Makse","doi":"arxiv-2409.02674","DOIUrl":"https://doi.org/arxiv-2409.02674","url":null,"abstract":"In his book 'A Beautiful Question', physicist Frank Wilczek argues that\u0000symmetry is 'nature's deep design,' governing the behavior of the universe,\u0000from the smallest particles to the largest structures. While symmetry is a\u0000cornerstone of physics, it has not yet been found widespread applicability to\u0000describe biological systems, particularly the human brain. In this context, we\u0000study the human brain network engaged in language and explore the relationship\u0000between the structural connectivity (connectome or structural network) and the\u0000emergent synchronization of the mesoscopic regions of interest (functional\u0000network). We explain this relationship through a different kind of symmetry\u0000than physical symmetry, derived from the categorical notion of Grothendieck\u0000fibrations. This introduces a new understanding of the human brain by proposing\u0000a local symmetry theory of the connectome, which accounts for how the structure\u0000of the brain's network determines its coherent activity. Among the allowed\u0000patterns of structural connectivity, synchronization elicits different symmetry\u0000subsets according to the functional engagement of the brain. We show that the\u0000resting state is a particular realization of the cerebral synchronization\u0000pattern characterized by a fibration symmetry that is broken in the transition\u0000from rest to language. Our findings suggest that the brain's network symmetry\u0000at the local level determines its coherent function, and we can understand this\u0000relationship from theoretical principles.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"89 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bryant Avila, Pedro Augusto, David Phillips, Tommaso Gili, Manuel Zimmer, Hernán A. Makse
Understanding the dynamical behavior of complex systems from their underlying network architectures is a long-standing question in complexity theory. Accordingly, many metrics have been devised to extract network features such as motifs, centrality, and modularity measures. It has previously been proposed that network symmetries are of particular importance, since they are expected to underlie the synchronization of a system's units, which is ubiquitously observed in nervous system activity patterns. However, perfectly symmetrical structures are difficult to assess in noisy measurements of biological systems, like neuronal connectomes. Here, we devise a principled method to infer network symmetries from combined connectome and neuronal activity data. Using nervous system-wide population activity recordings of the C. elegans backward locomotor system, we infer structures in the connectome called fibration symmetries, which can explain which groups of neurons synchronize their activity. Our analysis suggests functional building blocks in the animal's motor periphery, providing new testable hypotheses on how descending interneuron circuits communicate with the motor periphery to control behavior. Our approach opens a new door to exploring structure-function relations in other complex systems, like the nervous systems of larger animals.
{"title":"Symmetries and synchronization from whole-neural activity in {it C. elegans} connectome: Integration of functional and structural networks","authors":"Bryant Avila, Pedro Augusto, David Phillips, Tommaso Gili, Manuel Zimmer, Hernán A. Makse","doi":"arxiv-2409.02682","DOIUrl":"https://doi.org/arxiv-2409.02682","url":null,"abstract":"Understanding the dynamical behavior of complex systems from their underlying\u0000network architectures is a long-standing question in complexity theory.\u0000Therefore, many metrics have been devised to extract network features like\u0000motifs, centrality, and modularity measures. It has previously been proposed\u0000that network symmetries are of particular importance since they are expected to\u0000underly the synchronization of a system's units, which is ubiquitously observed\u0000in nervous system activity patterns. However, perfectly symmetrical structures\u0000are difficult to assess in noisy measurements of biological systems, like\u0000neuronal connectomes. Here, we devise a principled method to infer network\u0000symmetries from combined connectome and neuronal activity data. Using nervous\u0000system-wide population activity recordings of the textit{C.elegans} backward\u0000locomotor system, we infer structures in the connectome called fibration\u0000symmetries, which can explain which group of neurons synchronize their\u0000activity. Our analysis suggests functional building blocks in the animal's\u0000motor periphery, providing new testable hypotheses on how descending\u0000interneuron circuits communicate with the motor periphery to control behavior.\u0000Our approach opens a new door to exploring the structure-function relations in\u0000other complex systems, like the nervous systems of larger animals.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jie Su, Fang Cai, Shu-Kuo Zhao, Xin-Yi Wang, Tian-Yi Qian, Da-Hui Wang, Bo Hong
Uncovering the fundamental neural correlates of biological intelligence, developing mathematical models, and conducting computational simulations are critical for advancing new paradigms in artificial intelligence (AI). In this study, we implemented a comprehensive visual decision-making model that spans from visual input to behavioral output, using a neural dynamics modeling approach. Drawing inspiration from the key components of the dorsal visual pathway in primates, our model not only aligns closely with human behavior but also reflects neural activity in primates, achieving accuracy comparable to convolutional neural networks (CNNs). Moreover, magnetic resonance imaging (MRI) identified key neuroimaging features, such as structural connections and functional connectivity, that are associated with performance in perceptual decision-making tasks. A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements that paralleled the behavioral variations observed among subjects. Compared to classical deep learning models, our model more accurately replicates the behavioral performance of biological intelligence, relying on the structural characteristics of biological neural networks rather than extensive training data, and demonstrates enhanced resilience to perturbation.
{"title":"Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts","authors":"Jie Su, Fang Cai, Shu-Kuo Zhao, Xin-Yi Wang, Tian-Yi Qian, Da-Hui Wang, Bo Hong","doi":"arxiv-2409.02390","DOIUrl":"https://doi.org/arxiv-2409.02390","url":null,"abstract":"Uncovering the fundamental neural correlates of biological intelligence,\u0000developing mathematical models, and conducting computational simulations are\u0000critical for advancing new paradigms in artificial intelligence (AI). In this\u0000study, we implemented a comprehensive visual decision-making model that spans\u0000from visual input to behavioral output, using a neural dynamics modeling\u0000approach. Drawing inspiration from the key components of the dorsal visual\u0000pathway in primates, our model not only aligns closely with human behavior but\u0000also reflects neural activities in primates, and achieving accuracy comparable\u0000to convolutional neural networks (CNNs). Moreover, magnetic resonance imaging\u0000(MRI) identified key neuroimaging features such as structural connections and\u0000functional connectivity that are associated with performance in perceptual\u0000decision-making tasks. A neuroimaging-informed fine-tuning approach was\u0000introduced and applied to the model, leading to performance improvements that\u0000paralleled the behavioral variations observed among subjects. Compared to\u0000classical deep learning models, our model more accurately replicates the\u0000behavioral performance of biological intelligence, relying on the structural\u0000characteristics of biological neural networks rather than extensive training\u0000data, and demonstrating enhanced resilience to perturbation.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maytus Piriyajitakonkij, Sirawaj Itthipuripat, Ian Ballard, Ioannis Pappas
In visual decision making, high-level features, such as object categories, have a strong influence on choice. However, the impact of low-level features on behavior is less understood, partly because of the high correlation between high- and low-level features in the stimuli presented (e.g., objects of the same category are more likely to share low-level features). To disentangle these effects, we propose a method that de-correlates low- and high-level visual properties in a novel set of stimuli. Our method uses two Convolutional Neural Networks (CNNs) as candidate models of the ventral visual stream: CORnet-S, which has high neural predictivity for high-level, IT-like responses, and VGG-16, which has high neural predictivity for low-level responses. Triplets (root, image1, image2) of stimuli are parametrized by the level of low- and high-level similarity of images extracted from the different layers. These stimuli are then used in a decision-making task where participants are asked to choose the image most similar to the root. We found that different networks show differing abilities to predict the effects of low- versus high-level similarity: while CORnet-S outperforms VGG-16 in explaining human choices based on high-level similarity, VGG-16 outperforms CORnet-S in explaining human choices based on low-level similarity. Using Brain-Score, we observed that the behavioral prediction abilities of different layers of these networks qualitatively corresponded to their ability to explain neural activity at different levels of the visual hierarchy. In summary, our algorithm for stimulus set generation enables the study of how different representations in the visual stream affect high-level cognitive behaviors.
{"title":"What makes a face looks like a hat: Decoupling low-level and high-level Visual Properties with Image Triplets","authors":"Maytus Piriyajitakonkij, Sirawaj Itthipuripat, Ian Ballard, Ioannis Pappas","doi":"arxiv-2409.02241","DOIUrl":"https://doi.org/arxiv-2409.02241","url":null,"abstract":"In visual decision making, high-level features, such as object categories,\u0000have a strong influence on choice. However, the impact of low-level features on\u0000behavior is less understood partly due to the high correlation between high-\u0000and low-level features in the stimuli presented (e.g., objects of the same\u0000category are more likely to share low-level features). To disentangle these\u0000effects, we propose a method that de-correlates low- and high-level visual\u0000properties in a novel set of stimuli. Our method uses two Convolutional Neural\u0000Networks (CNNs) as candidate models of the ventral visual stream: the CORnet-S\u0000that has high neural predictivity in high-level, IT-like responses and the\u0000VGG-16 that has high neural predictivity in low-level responses. Triplets\u0000(root, image1, image2) of stimuli are parametrized by the level of low- and\u0000high-level similarity of images extracted from the different layers. These\u0000stimuli are then used in a decision-making task where participants are tasked\u0000to choose the most similar-to-the-root image. We found that different networks\u0000show differing abilities to predict the effects of low-versus-high-level\u0000similarity: while CORnet-S outperforms VGG-16 in explaining human choices based\u0000on high-level similarity, VGG-16 outperforms CORnet-S in explaining human\u0000choices based on low-level similarity. Using Brain-Score, we observed that the\u0000behavioral prediction abilities of different layers of these networks\u0000qualitatively corresponded to their ability to explain neural activity at\u0000different levels of the visual hierarchy. In summary, our algorithm for\u0000stimulus set generation enables the study of how different representations in\u0000the visual stream affect high-level cognitive behaviors.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
David G. Clark, Owen Marschall, Alexander van Meegen, Ashok Litwin-Kumar
We develop a theory to analyze how structure in connectivity shapes the high-dimensional, internally generated activity of nonlinear recurrent neural networks. Using two complementary methods, a path-integral calculation of fluctuations around the saddle point and a recently introduced two-site cavity approach, we derive analytic expressions that characterize important features of collective activity, including its dimensionality and temporal correlations. To model structure in the coupling matrices of real neural circuits, such as synaptic connectomes obtained through electron microscopy, we introduce the random-mode model, which parameterizes a coupling matrix using random input and output modes and a specified spectrum. This model enables systematic study of the effects of low-dimensional structure in connectivity on neural activity. These effects manifest in features of collective activity that we calculate, and they can be undetectable when analyzing only single-neuron activities. We derive a relation between the effective rank of the coupling matrix and the dimension of activity. By extending the random-mode model, we compare the effects of single-neuron heterogeneity and low-dimensional connectivity. We also investigate the impact of structured overlaps between input and output modes, a feature of biological coupling matrices. Our theory provides tools to relate neural-network architecture and collective dynamics in artificial and biological systems.
{"title":"Connectivity structure and dynamics of nonlinear recurrent neural networks","authors":"David G. Clark, Owen Marschall, Alexander van Meegen, Ashok Litwin-Kumar","doi":"arxiv-2409.01969","DOIUrl":"https://doi.org/arxiv-2409.01969","url":null,"abstract":"We develop a theory to analyze how structure in connectivity shapes the\u0000high-dimensional, internally generated activity of nonlinear recurrent neural\u0000networks. Using two complementary methods -- a path-integral calculation of\u0000fluctuations around the saddle point, and a recently introduced two-site cavity\u0000approach -- we derive analytic expressions that characterize important features\u0000of collective activity, including its dimensionality and temporal correlations.\u0000To model structure in the coupling matrices of real neural circuits, such as\u0000synaptic connectomes obtained through electron microscopy, we introduce the\u0000random-mode model, which parameterizes a coupling matrix using random input and\u0000output modes and a specified spectrum. This model enables systematic study of\u0000the effects of low-dimensional structure in connectivity on neural activity.\u0000These effects manifest in features of collective activity, that we calculate,\u0000and can be undetectable when analyzing only single-neuron activities. We derive\u0000a relation between the effective rank of the coupling matrix and the dimension\u0000of activity. By extending the random-mode model, we compare the effects of\u0000single-neuron heterogeneity and low-dimensional connectivity. We also\u0000investigate the impact of structured overlaps between input and output modes, a\u0000feature of biological coupling matrices. Our theory provides tools to relate\u0000neural-network architecture and collective dynamics in artificial and\u0000biological systems.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diffusion MRI is a powerful tool that serves as a bridge between brain microstructure and cognition. Recent advancements in cognitive neuroscience have highlighted the persistent challenge of understanding how individual differences in brain structure influence behavior, especially in healthy people. While traditional linear models like Canonical Correlation Analysis (CCA) and Partial Least Squares (PLS) have been fundamental in this analysis, they face limitations, particularly with high-dimensional data analysis outside the training sample. To address these issues, we introduce a novel approach using deep learning (a multivariate autoencoder model) to explore the complex non-linear relationships between brain microstructure and cognitive functions. The model's architecture involves separate encoder modules for brain structure and cognitive data, with a shared decoder, facilitating the analysis of multivariate patterns across these domains. Both encoders were trained simultaneously, before the decoder, to ensure a good latent representation that captures the phenomenon. Using data from the Human Connectome Project, our study centres on the insula's role in cognitive processes. Through rigorous validation, including five sample analyses for out-of-sample testing, our results demonstrate that the multivariate autoencoder model outperforms traditional methods in capturing and generalizing correlations between brain and behavior beyond the training sample. These findings underscore the potential of deep learning models to enhance our understanding of brain-behavior relationships in cognitive neuroscience, offering more accurate and comprehensive insights despite the complexities inherent in neuroimaging studies.
{"title":"Deep multivariate autoencoder for capturing complexity in Brain Structure and Behaviour Relationships","authors":"Gabriela Gómez JiménezMIND, Demian WassermannMIND","doi":"arxiv-2409.01638","DOIUrl":"https://doi.org/arxiv-2409.01638","url":null,"abstract":"<div><p>Diffusion MRI is a powerful tool that serves as a bridge between\u0000brain microstructure and cognition. Recent advancements in cognitive\u0000neuroscience have highlighted the persistent challenge of understanding how\u0000individual differences in brain structure influence behavior, especially in\u0000healthy people. While traditional linear models like Canonical Correlation\u0000Analysis (CCA) and Partial Least Squares (PLS) have been fundamental in this\u0000analysis, they face limitations, particularly with high-dimensional data\u0000analysis outside the training sample. To address these issues, we introduce a\u0000novel approach using deep learninga multivariate autoencoder model-to explore\u0000the complex non-linear relationships between brain microstructure and cognitive\u0000functions. The model's architecture involves separate encoder modules for brain\u0000structure and cognitive data, with a shared decoder, facilitating the analysis\u0000of multivariate patterns across these domains. Both encoders were trained\u0000simultaneously, before the decoder, to ensure a good latent representation that\u0000captures the phenomenon. Using data from the Human Connectome Project, our\u0000study centres on the insula's role in cognitive processes. Through rigorous\u0000validation, including 5 sample analyses for out-of-sample analysis, our results\u0000demonstrate that the multivariate autoencoder model outperforms traditional\u0000methods in capturing and generalizing correlations between brain and behavior\u0000beyond the training sample. These findings underscore the potential of deep\u0000learning models to enhance our understanding of brain-behavior relationships in\u0000cognitive neuroscience, offering more accurate and comprehensive insights\u0000despite the complexities inherent in neuroimaging studies.</p></div>","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nathan Evans, Sarah J. Gascoigne, Guillermo M. Besne, Chris Thornton, Gabrielle M. Schroeder, Fahmida A Chowdhury, Beate Diehl, John S Duncan, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, Peter N. Taylor, Yujiang Wang
Anti-seizure medications (ASMs) are the mainstay of treatment for epilepsy, yet their effect on seizure spread is not fully understood. Higher ASM doses have been associated with shorter and less severe seizures. Our objective was to test whether this effect is due to limiting seizure spread through early termination of otherwise unchanged seizures. We retrospectively examined intracranial EEG (iEEG) recordings in 15 subjects who underwent ASM tapering during pre-surgical monitoring. We estimated ASM plasma concentrations based on pharmacokinetic modelling. In each subject, we identified seizures that followed the same onset and initial spread patterns, but some seizures terminated early (truncated seizures), while other seizures continued to spread (continuing seizures). We compared ASM concentrations at the times of truncated seizures and continuing seizures. We found no substantial difference between ASM concentrations when truncated vs. continuing seizures occurred (mean difference = 4%, sd = 29%, p = 0.6). Our results indicate that ASMs did not appear to halt established seizures in this cohort. Further research is needed to understand how ASMs may modulate seizure duration and severity.
{"title":"Anti-seizure medication load is not correlated with early termination of seizure spread","authors":"Nathan Evans, Sarah J. Gascoigne, Guillermo M. Besne, Chris Thornton, Gabrielle M. Schroeder, Fahmida A Chowdhury, Beate Diehl, John S Duncan, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, Peter N. Taylor, Yujiang Wang","doi":"arxiv-2409.01767","DOIUrl":"https://doi.org/arxiv-2409.01767","url":null,"abstract":"Anti-seizure medications (ASMs) are the mainstay of treatment for epilepsy,\u0000yet their effect on seizure spread is not fully understood. Higher ASM doses\u0000have been associated with shorter and less severe seizures. Our objective was\u0000to test if this effect was due to limiting seizure spread through early\u0000termination of otherwise unchanged seizures. We retrospectively examined intracranial EEG (iEEG) recordings in 15 subjects\u0000that underwent ASM tapering during pre-surgical monitoring. We estimated ASM\u0000plasma concentrations based on pharmaco-kinetic modelling. In each subject, we\u0000identified seizures that followed the same onset and initial spread patterns,\u0000but some seizures terminated early (truncated seizures), and other seizures\u0000continued to spread (continuing seizures). We compared ASM concentrations at\u0000the times of truncated seizures and continuing seizures. We found no substantial difference between ASM concentrations when truncated\u0000vs. continuing seizures occurred (Mean difference = 4%, sd = 29%, p=0.6). Our results indicate that ASM did not appear to halt established seizures in\u0000this cohort. Further research is needed to understand how ASM may modulate\u0000seizure duration and severity.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"125 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Invasive cortical brain-machine interfaces (BMIs) can significantly improve the quality of life of motor-impaired patients. Nonetheless, externally mounted pedestals pose an infection risk, which calls for fully implanted systems. Such systems, however, must meet strict latency and energy constraints while providing reliable decoding performance. While recurrent spiking neural networks (RSNNs) are ideally suited for ultra-low-power, low-latency processing on neuromorphic hardware, it is unclear whether they meet the above requirements. To address this question, we trained RSNNs to decode finger velocity from cortical spike trains (CSTs) of two macaque monkeys. First, we found that a large RSNN model outperformed existing feedforward spiking neural networks (SNNs) and artificial neural networks (ANNs) in terms of decoding accuracy. We next developed a tiny RSNN with a smaller memory footprint, low firing rates, and sparse connectivity. Despite its reduced computational requirements, the resulting model performed substantially better than existing SNN and ANN decoders. Our results thus demonstrate that RSNNs offer competitive CST decoding performance under tight resource constraints and are promising candidates for fully implanted ultra-low-power BMIs with the potential to revolutionize patient care.
{"title":"Decoding finger velocity from cortical spike trains with recurrent spiking neural networks","authors":"Tengjun Liu, Julia Gygax, Julian Rossbroich, Yansong Chua, Shaomin Zhang, Friedemann Zenke","doi":"arxiv-2409.01762","DOIUrl":"https://doi.org/arxiv-2409.01762","url":null,"abstract":"Invasive cortical brain-machine interfaces (BMIs) can significantly improve\u0000the life quality of motor-impaired patients. Nonetheless, externally mounted\u0000pedestals pose an infection risk, which calls for fully implanted systems. Such\u0000systems, however, must meet strict latency and energy constraints while\u0000providing reliable decoding performance. While recurrent spiking neural\u0000networks (RSNNs) are ideally suited for ultra-low-power, low-latency processing\u0000on neuromorphic hardware, it is unclear whether they meet the above\u0000requirements. To address this question, we trained RSNNs to decode finger\u0000velocity from cortical spike trains (CSTs) of two macaque monkeys. First, we\u0000found that a large RSNN model outperformed existing feedforward spiking neural\u0000networks (SNNs) and artificial neural networks (ANNs) in terms of their\u0000decoding accuracy. We next developed a tiny RSNN with a smaller memory\u0000footprint, low firing rates, and sparse connectivity. Despite its reduced\u0000computational requirements, the resulting model performed substantially better\u0000than existing SNN and ANN decoders. Our results thus demonstrate that RSNNs\u0000offer competitive CST decoding performance under tight resource constraints and\u0000are promising candidates for fully implanted ultra-low-power BMIs with the\u0000potential to revolutionize patient care.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}