Extracting causal connections can advance interpretable AI and machine learning. Granger causality (GC) is a robust statistical method for estimating directed influences, i.e. directed connectivity (DC), between signals. While GC has been widely applied to analysing neuronal signals in biological neural networks and other domains, its application to complex, nonlinear, and multistable neural networks is less explored. In this study, we applied time-domain multivariate Granger causality (MVGC) to the time-series neural activity of all nodes in a trained multistable, biologically based decision neural network model with real-time decision-uncertainty monitoring. Our analysis demonstrated that challenging two-choice decisions, in which the input signals were closely matched, combined with appropriately fine-grained sliding time windows, could readily reveal the original model's DC. Furthermore, the identified DC varied depending on whether the network made correct or error decisions. Integrating the identified DC from the different decision outcomes recovered most of the original model's architecture, despite some spurious and missing connections. This approach could serve as an initial exploration to enhance the interpretability and transparency of dynamic multistable and nonlinear biological or AI systems by revealing causal connections throughout different phases of neural network dynamics and outcomes.
"Can multivariate Granger causality detect directed connectivity of a multistable and dynamic biological decision network model?" by Abdoreza Asadpour and KongFatt Wong-Lin. arXiv:2408.01528, 2 August 2024.
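The MVGC analysis described above is conditional and multivariate; as a minimal illustration of the underlying idea, the sketch below runs a pairwise Granger test over sliding windows on synthetic data with a known driver. All variable names and parameters here are our own, not the paper's.

```python
import numpy as np

def lagged(v, lag):
    # columns v[t-1], ..., v[t-lag] for t = lag .. len(v)-1
    return np.column_stack([v[lag - k:len(v) - k] for k in range(1, lag + 1)])

def gc_fstat(x, y, lag=2):
    """F-statistic for 'x Granger-causes y': does adding past x
    to an autoregressive model of y reduce the residual sum of squares?"""
    Y = y[lag:]
    Xr = np.column_stack([np.ones(len(Y)), lagged(y, lag)])   # restricted model
    Xf = np.column_stack([Xr, lagged(x, lag)])                # full model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    dof = len(Y) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

# synthetic ground truth: x drives y with a one-step delay
rng = np.random.default_rng(0)
T, win, step = 2000, 300, 100
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + 0.1 * rng.standard_normal()

starts = range(0, T - win + 1, step)
f_xy = [gc_fstat(x[s:s + win], y[s:s + win]) for s in starts]  # large in every window
f_yx = [gc_fstat(y[s:s + win], x[s:s + win]) for s in starts]  # near 1 (no causation)
```

The window-by-window F-statistics recover the planted direction x→y while staying near chance for y→x, the same logic the paper applies conditionally across all network nodes.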
Valeria Simonelli, Davide Nuzzi, Gian Luca Lancia, Giovanni Pezzulo
Foraging is a crucial activity, yet the extent to which humans employ flexible versus rigid strategies remains unclear. This study investigates how individuals adapt their foraging strategies in response to resource distribution and foraging time constraints. For this, we designed a video-game-like foraging task that requires participants to navigate a four-area environment to collect coins from treasure boxes within a limited time. This task engages multiple cognitive abilities, such as navigation, learning, and memorization of treasure box locations. Findings indicate that participants adjust their foraging strategies -- encompassing both stay-or-leave decisions, such as the number of boxes opened in initial areas, and behavioral aspects, such as the time to navigate from box to box -- depending on both resource distribution and foraging time. Additionally, they improved their performance over time through both enhanced navigation skills and adapted foraging strategies. Finally, participants' performance was initially distant from the reward-maximizing performance of optimal agents due to the learning process humans undergo; however, it approximated the optimal agent's performance towards the end of the task, without fully reaching it. These results highlight the flexibility of human foraging behavior and underscore the importance of employing optimality models and ecologically rich scenarios to study foraging.
"Human foraging strategies flexibly adapt to resource distribution and time constraints". arXiv:2408.01350, 2 August 2024.
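A classic optimality benchmark for stay-or-leave decisions of this kind is a marginal-value-style rate comparison. The toy calculation below (our own parameters, not the study's) finds the rate-maximizing number of boxes to open per area under diminishing returns and a fixed travel cost between areas:

```python
import numpy as np

# toy patch model: box i in an area yields R0 * d**i coins (diminishing returns),
# each box costs t_box seconds, moving to the next area costs t_travel seconds
R0, d, t_box, t_travel = 10.0, 0.7, 1.0, 5.0

def long_run_rate(k):
    # coins per second for a forager that always opens k boxes, then leaves
    gain = R0 * (1 - d ** k) / (1 - d)
    return gain / (k * t_box + t_travel)

ks = np.arange(1, 20)
rates = np.array([long_run_rate(k) for k in ks])
k_star = int(ks[np.argmax(rates)])
best_rate = rates.max()

# marginal-value check: at the optimum, the next box's instantaneous rate drops
# below the overall rate, while the last opened box still exceeded it
marginal = lambda k: R0 * d ** k / t_box   # rate of opening box k (0-indexed)
```

With these parameters the optimum is to open four boxes per area; opening a fifth would yield coins at a rate below the achievable long-run average, which is exactly the leave rule an optimal agent follows.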
Understanding how brain networks learn and manage multiple tasks simultaneously is of interest in both neuroscience and artificial intelligence. In this regard, a recent research thread in theoretical neuroscience has focused on how recurrent neural network models and their internal dynamics enact multi-task learning. Managing different tasks requires a mechanism to convey information about task identity or context into the model, which from a biological perspective may involve mechanisms of neuromodulation. In this study, we use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics: at the level of neuronal excitability and at the level of synaptic strength. We characterize these mechanisms in terms of their functional outcomes, focusing on their robustness to context ambiguity and, relatedly, their efficiency with respect to packing multiple tasks into finite-size networks. We also demonstrate a distinction between these mechanisms at the level of the neuronal dynamics they induce. Together, these characterizations indicate complementarity and synergy in how these mechanisms act, potentially over multiple timescales, toward enhancing the robustness of multi-task learning.
"Synergistic pathways of modulation enable robust task packing within neural dynamics" by Giacomo Vedovati and ShiNung Ching. arXiv:2408.01316, 2 August 2024.
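The two contextual-modulation routes contrasted above can be sketched in a generic leaky rate network (our own toy model, not the paper's): context enters either as an additive input current (excitability) or as a multiplicative gain on the recurrent weights (synaptic strength), and the two routes drive the network to different states.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
W = rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights
ctx = rng.standard_normal(N)                  # an arbitrary context vector

def run(x0, steps=200, bias=0.0, gain=None):
    """Leaky rate network; context enters either as an additive input
    (excitability route) or as a multiplicative weight gain (synaptic route)."""
    Weff = W if gain is None else gain * W
    x = x0.copy()
    for _ in range(steps):
        x = 0.9 * x + 0.1 * np.tanh(Weff @ x + bias)
    return x

x0 = rng.standard_normal(N)
x_plain = run(x0)                                          # no context
x_excit = run(x0, bias=0.5 * ctx)                          # excitability modulation
x_syn = run(x0, gain=1.0 + 0.5 * np.tanh(ctx)[:, None])    # synaptic-gain modulation
```

Same context vector, same initial condition, yet the two routes reshape the dynamics differently, which is the premise for asking how they differ in robustness and task-packing capacity.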
In cognitive neuroscience and brain-computer interface research, accurately predicting imagined stimuli is crucial. This study investigates the effectiveness of Domain Adaptation (DA) in enhancing imagery prediction using primarily visual data from fMRI scans of 18 subjects. Initially, we train a baseline model on visual stimuli to predict imagined stimuli, utilizing data from 14 brain regions. We then develop several models to improve imagery prediction, comparing different DA methods. Our results demonstrate that DA significantly enhances imagery prediction, especially with the Regular Transfer approach. We then conduct a DA-enhanced searchlight analysis using Regular Transfer, followed by permutation-based statistical tests to identify brain regions where imagery decoding is consistently above chance across subjects. Our DA-enhanced searchlight predicts imagery contents in a highly distributed set of brain regions, including the visual cortex and the frontoparietal cortex, thereby outperforming standard cross-domain classification methods. The complete code and data for this paper have been made openly available for the use of the scientific community.
"Domain Adaptation-Enhanced Searchlight: Enabling brain decoding from visual perception to mental imagery" by Alexander Olza, David Soto, and Roberto Santana. arXiv:2408.01163, 2 August 2024.
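Regular Transfer, in the sense commonly used for linear models, refits on target-domain data with a penalty pulling the weights toward the source-domain solution. A ridge-style sketch on synthetic data (our own stand-in for the perception-to-imagery transfer, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_src, n_tgt = 20, 1000, 30
w_src_true = rng.standard_normal(d)
w_tgt_true = w_src_true + 0.3 * rng.standard_normal(d)  # related but shifted domain

Xs = rng.standard_normal((n_src, d))                    # source: "perception" data
ys = Xs @ w_src_true + 0.1 * rng.standard_normal(n_src)
Xt = rng.standard_normal((n_tgt, d))                    # target: scarce "imagery" data
yt = Xt @ w_tgt_true + 0.1 * rng.standard_normal(n_tgt)

w_src = np.linalg.lstsq(Xs, ys, rcond=None)[0]          # source-domain fit

def regular_transfer(X, y, w0, lam):
    # argmin_w ||Xw - y||^2 + lam * ||w - w0||^2  (closed form)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y + lam * w0)

w_rt = regular_transfer(Xt, yt, w_src, lam=10.0)        # adapted model
w_scratch = np.linalg.lstsq(Xt, yt, rcond=None)[0]      # target-only baseline
```

By construction the adapted weights fit the target data at least as well as the unadapted source model while staying closer to it than a from-scratch fit, which is what makes the approach attractive when target (imagery) data are scarce.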
Vito Mengers, Nicolas Roth, Oliver Brock, Klaus Obermayer, Martin Rolfs
How we perceive objects around us depends on what we actively attend to, yet our eye movements depend on the perceived objects. Still, object segmentation and gaze behavior are typically treated as two independent processes. Drawing on an information processing pattern from robotics, we present a mechanistic model that simulates these processes for dynamic real-world scenes. Our image-computable model uses the current scene segmentation for object-based saccadic decision-making while using the foveated object to refine its scene segmentation recursively. To model this refinement, we use a Bayesian filter, which also provides an uncertainty estimate for the segmentation that we use to guide active scene exploration. We demonstrate that this model closely resembles observers' free viewing behavior, measured by scanpath statistics, including foveation duration and saccade amplitude distributions used for parameter fitting, as well as higher-level statistics not used for fitting. These include the balance of object detections, inspections, and returns, and a delay of return saccades that emerges without an explicit implementation of temporal inhibition of return. Extensive simulations and ablation studies show that uncertainty promotes balanced exploration and that semantic object cues are crucial to form the perceptual units used in object-based attention.
"A Robotics-Inspired Scanpath Model Reveals the Importance of Uncertainty and Semantic Object Cues for Gaze Guidance in Dynamic Scenes". arXiv:2408.01322, 2 August 2024.
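The core loop of the model, a Bayesian filter whose uncertainty estimate guides where to look next, can be sketched with a discrete stand-in (our own simplification: a handful of locations with noisy class observations, rather than the paper's image-computable segmentation):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, ACC = 6, 4, 0.8                 # locations, object classes, observation accuracy
true_label = rng.integers(0, K, size=M)
belief = np.full((M, K), 1.0 / K)     # uniform prior over classes at each location

like = np.full((K, K), (1 - ACC) / (K - 1))  # like[c, o] = P(observe o | class c)
np.fill_diagonal(like, ACC)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

for _ in range(40):
    loc = int(np.argmax(entropy(belief)))     # fixate the most uncertain location
    if rng.random() < ACC:                    # noisy foveal observation
        obs = int(true_label[loc])
    else:
        obs = int(rng.integers(0, K))
    belief[loc] = belief[loc] * like[:, obs]  # Bayesian update of the segmentation
    belief[loc] /= belief[loc].sum()
```

Driving fixations toward high-entropy locations spreads observations over the scene and steadily sharpens the beliefs, illustrating why uncertainty promotes balanced exploration.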
Niamh Fennelly, Alannah Neff, Renaud Lambiotte, Andrew Keane, Áine Byrne
Synaptic plasticity is a key component of neuronal dynamics, describing the process by which the connections between neurons change in response to experiences. In this study, we extend a network model of $\theta$-neuron oscillators to include a realistic form of adaptive plasticity. In place of the less tractable spike-timing-dependent plasticity, we employ recently validated phase-difference-dependent plasticity rules, which adjust coupling strengths based on the relative phases of the $\theta$-neuron oscillators. We investigate two approaches for implementing this plasticity: pairwise coupling strength updates and global coupling strength updates. We derive a mean-field approximation of the system and investigate its validity through comparison with the $\theta$-neuron simulations across various stability states. The synchrony of the system is examined using the Kuramoto order parameter.
"Mean-field approximation for networks with synchrony-driven adaptive coupling". arXiv:2407.21393, 31 July 2024.
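The synchrony measure named above, the Kuramoto order parameter $R = |\langle e^{i\theta_j}\rangle|$, is easy to compute from any population of phases. As a stand-in for the paper's $\theta$-neuron network (an assumption on our part), the sketch below measures it on a standard mean-field Kuramoto model with and without coupling:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500
omega = 0.5 * rng.standard_cauchy(N)      # Lorentzian frequencies, half-width 0.5
theta0 = rng.uniform(0, 2 * np.pi, N)

def order_parameter(theta):
    # Kuramoto order parameter: 1 = full phase synchrony, ~0 = incoherence
    return np.abs(np.exp(1j * theta).mean())

def simulate(K, theta, dt=0.05, steps=400):
    Rs = []
    for _ in range(steps):
        z = np.exp(1j * theta).mean()     # complex mean field
        theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        Rs.append(order_parameter(theta))
    return float(np.mean(Rs[-100:]))      # late-time synchrony level

R_uncoupled = simulate(0.0, theta0.copy())
R_coupled = simulate(4.0, theta0.copy())
```

Above the synchronization threshold the late-time order parameter is large, while the uncoupled population dephases to near zero; it is this scalar that the paper tracks to compare the mean-field reduction against full simulations.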
Ábel Ságodi, Guillermo Martín-Sánchez, Piotr Sokół, Il Memming Park
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on persistent manifold theory to explain the commonalities between bifurcations from, and approximations of, continuous attractors. Fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with the predicted slow manifold structures.
"Back to the Continuous Attractor". arXiv:2408.00109, 31 July 2024.
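The contrast between categorically distinct asymptotics and similar finite-time behavior shows up already in the simplest possible example (our own one-dimensional toy, not the paper's models): an exact line attractor holds its value forever, while an infinitesimally perturbed version forgets asymptotically yet is nearly indistinguishable on short timescales.

```python
def simulate(leak, T, dt=0.01, x0=1.0):
    """Euler-integrate dx/dt = leak * x.  leak = 0 is an exact line attractor
    (every x is a fixed point); a small negative leak is the structurally
    perturbed version, where memory survives only on a slow manifold."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * leak * x
    return x

exact = simulate(0.0, T=10.0)               # memory held exactly
perturbed = simulate(-0.01, T=10.0)         # finite time: nearly identical (~e^-0.1)
perturbed_long = simulate(-0.01, T=1000.0)  # asymptotically: memory gone (~e^-10)
```

On the task-relevant timescale the two systems store the variable almost equally well; only in the infinite-time limit do they separate, which is why approximate continuous attractors remain functionally useful.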
Jonathan D. McCart, Andrew R. Sedler, Christopher Versteeg, Domenick Mifsud, Mattia Rigotti-Thompson, Chethan Pandarinath
Recent advances in recording technology have allowed neuroscientists to monitor activity from thousands of neurons simultaneously. Latent variable models are increasingly valuable for distilling these recordings into compact and interpretable representations. Here we propose a new approach to neural data analysis that leverages advances in conditional generative modeling to enable the unsupervised inference of disentangled behavioral variables from recorded neural activity. Our approach builds on InfoDiffusion, which augments diffusion models with a set of latent variables that capture important factors of variation in the data. We apply our model, called Generating Neural Observations Conditioned on Codes with High Information (GNOCCHI), to time series neural data and test its application to synthetic and biological recordings of neural activity during reaching. In comparison to a VAE-based sequential autoencoder, GNOCCHI learns higher-quality latent spaces that are more clearly structured and more disentangled with respect to key behavioral variables. These properties enable accurate generation of novel samples (unseen behavioral conditions) through simple linear traversal of the latent spaces produced by GNOCCHI. Our work demonstrates the potential of unsupervised, information-based models for the discovery of interpretable latent spaces from neural data, enabling researchers to generate high-quality samples from unseen conditions.
"Diffusion-Based Generation of Neural Activity from Disentangled Latent Codes". arXiv:2407.21195, 30 July 2024.
A number of recent articles have employed the Lorentz ansatz to reduce a network of Izhikevich neurons to a tractable mean-field description. In this letter, we construct an equivalent phase model for the Izhikevich model and apply the Ott-Antonsen ansatz to derive the mean-field dynamics in terms of the Kuramoto order parameter. In addition, we show that by defining an appropriate order parameter in the voltage-firing-rate framework, the conformal mapping of Montbrió et al., which relates the two mean-field descriptions, remains valid.
"Phase transformation and synchrony for a network of coupled Izhikevich neurons" by Áine Byrne. arXiv:2407.20055, 29 July 2024.
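The conformal mapping of Montbrió et al. relating the two mean-field descriptions is usually written $\pi r + iv = (1 - \bar{Z})/(1 + \bar{Z})$, where $Z$ is the Kuramoto order parameter, $r$ the population firing rate, and $v$ the mean voltage. A quick numerical check (a sketch assuming this standard form) verifies that the map is its own inverse:

```python
import math

def conformal(Z):
    # w = pi*r + i*v = (1 - conj(Z)) / (1 + conj(Z))   (standard form, assumed here)
    return (1 - Z.conjugate()) / (1 + Z.conjugate())

Z = 0.3 + 0.4j                   # some order parameter inside the unit disk
w = conformal(Z)
r, v = w.real / math.pi, w.imag  # firing rate and mean voltage
Z_back = conformal(w)            # applying the same formula again returns Z

w_incoherent = conformal(0.0 + 0.0j)  # fully incoherent network: r = 1/pi, v = 0
```

Because the map is an involution, either mean-field description can be recovered from the other without information loss, which is what makes the correspondence between the phase and voltage-firing-rate pictures useful.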
Generative models of brain activity have been instrumental in testing hypothesized mechanisms underlying brain dynamics against experimental datasets. Beyond capturing the key mechanisms underlying spontaneous brain dynamics, these models hold an exciting potential for understanding the mechanisms underlying the dynamics evoked by targeted brain-stimulation techniques. This paper delves into this emerging application, using concepts from dynamical systems theory to argue that the stimulus-evoked dynamics in such experiments may be shaped by new types of mechanisms distinct from those that dominate spontaneous dynamics. We review and discuss: (i) the targeted experimental techniques across spatial scales that can both perturb the brain to novel states and resolve its relaxation trajectory back to spontaneous dynamics; and (ii) how we can understand these dynamics in terms of mechanisms using physiological, phenomenological, and data-driven models. A tight integration of targeted stimulation experiments with generative quantitative modeling provides an important opportunity to uncover novel mechanisms of brain dynamics that are difficult to detect in spontaneous settings.
"Analyzing the Brain's Dynamic Response to Targeted Stimulation using Generative Modeling" by Rishikesan Maran, Eli J. Müller, and Ben D. Fulcher. arXiv:2407.19737, 29 July 2024.
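The perturb-then-relax paradigm described above can be illustrated with a toy linear generative model (our own construction): pulse a stable system, then read the slow relaxation timescale off the evoked trajectory and compare it to the model's eigenvalue.

```python
import numpy as np

A = np.array([[-0.5, 0.3],
              [0.0, -2.0]])      # toy stable dynamics; eigenvalues -0.5 and -2.0
dt, steps, t_stim = 0.01, 1500, 100

x = np.zeros(2)
traj = []
for t in range(steps):
    x = x + dt * (A @ x)         # spontaneous (here noiseless) dynamics
    if t == t_stim:
        x = x + np.array([1.0, 0.0])   # brief targeted stimulation pulse
    traj.append(x.copy())
traj = np.array(traj)

# the evoked response relaxes exponentially; its log-slope recovers the
# slow eigenvalue of the generative dynamics
resp = traj[t_stim + 20: t_stim + 700, 0]
times = np.arange(len(resp)) * dt
slope = np.polyfit(times, np.log(np.abs(resp)), 1)[0]
```

The fitted decay rate matches the slow eigenvalue of $A$, a minimal version of using the relaxation trajectory back to spontaneous dynamics to constrain a generative model's mechanisms.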