We have proved in both human-based and computer-based tests that natural concepts generally `entangle' when they combine to form complex sentences, violating the rules of classical compositional semantics. In this article, we present the results of an innovative video-based cognitive test on a specific conceptual combination, which significantly violates the Clauser--Horne--Shimony--Holt version of Bell's inequalities (the `CHSH inequality'). We also show that the collected data can be faithfully modelled within a quantum-theoretic framework that we developed, and that a `strong form of entanglement' occurs between the component concepts. While the video-based test confirms previous empirical results on entanglement in human cognition, this empirical approach transcends language barriers and eliminates the need for prior knowledge, making the test universally accessible. Finally, the methodology allows one to unravel the underlying connections that drive our perception of reality. Indeed, we provide a novel explanation for the appearance of entanglement in both the physical and cognitive realms.
"Turing Video-based Cognitive Tests to Handle Entangled Concepts" — Diederik Aerts, Roberto Leporini, Sandro Sozzo. arXiv:2409.08868, arXiv - QuanBio - Neurons and Cognition, 13 September 2024.
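As a rough illustration of the bound the abstract refers to: the CHSH quantity combines four expectation values of ±1-valued outcomes, and any classical (local, compositional) model satisfies |S| ≤ 2, while entangled systems can reach 2√2. The function and sample values below are illustrative, not taken from the paper's data.

```python
import numpy as np

def chsh(e_ab, e_ab2, e_a2b, e_a2b2):
    """CHSH combination of four correlation values E in [-1, 1]."""
    return e_ab + e_ab2 + e_a2b - e_a2b2

# Any deterministic classical assignment sits on the |S| <= 2 boundary:
classical = chsh(1, 1, 1, 1)

# Correlations of a maximally entangled state at optimal measurement
# angles (each E = ±sqrt(2)/2) reach the Tsirelson bound 2*sqrt(2):
c = np.sqrt(2) / 2
quantum = chsh(c, c, c, -c)
```

A measured |S| significantly above 2, as reported in the abstract, is what rules out a classical compositional account.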
Understanding how biological neural networks process information is one of the biggest open scientific questions of our time. Advances in machine learning and artificial neural networks have enabled the modeling of neuronal behavior, but classical models often require a large number of parameters, complicating interpretability. Quantum computing offers an alternative approach through quantum machine learning, which can achieve efficient training with fewer parameters. In this work, we introduce a quantum generative model framework for generating synthetic data that captures the spatial and temporal correlations of biological neuronal activity. Our model achieves reliable outcomes with fewer trainable parameters than classical methods. These findings highlight the potential of quantum generative models to provide new tools for modeling and understanding neuronal behavior, offering a promising avenue for future research in neuroscience.
"Exploring Biological Neuronal Correlations with Quantum Generative Models" — Vinicius Hernandes, Eliska Greplova. arXiv:2409.09125, 13 September 2024.
Miguel de Llanza Varona, Christopher L. Buckley, Beren Millidge
Organisms have to keep track of the information in the environment that is relevant for adaptive behaviour. Transmitting information in an economical and efficient way is crucial for resource-limited agents living in high-dimensional environments. The efficient coding hypothesis claims that organisms seek to maximize the information about the sensory input in an efficient manner. Under Bayesian inference, this means that the role of the brain is to efficiently allocate resources in order to make predictions about the hidden states that cause sensory data. However, neither of those frameworks accounts for how that information is exploited downstream, leaving aside the action-oriented role of the perceptual system. Rate-distortion theory, which defines optimal lossy compression under constraints, has gained attention as a formal framework to explore goal-oriented efficient coding. In this work, we explore action-centric representations in the context of rate-distortion theory. We also provide a mathematical definition of abstractions and argue that, as summaries of the relevant details, they can be used to fix the content of action-centric representations. We model action-centric representations using VAEs and find that such representations (i) are efficient lossy compressions of the data; (ii) capture the task-dependent invariances necessary for successful behaviour; and (iii) are not in service of reconstructing the data. Thus, we conclude that full reconstruction of the data is rarely needed to achieve optimal behaviour, consistent with a teleological approach to perception.
"Exploring Action-Centric Representations Through the Lens of Rate-Distortion Theory". arXiv:2409.08892, 13 September 2024.
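A minimal sketch of the rate-distortion objective underlying a (β-)VAE of the kind described above, assuming the standard Gaussian-prior KL divergence as the rate term; per the abstract, the distortion would be measured on task-relevant variables rather than full reconstruction. The values and dimensions below are made up for illustration.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Rate term: KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def rd_objective(distortion, mu, logvar, beta=1.0):
    """Rate-distortion Lagrangian: distortion + beta * rate."""
    return distortion + beta * gaussian_kl(mu, logvar)

zero_rate = gaussian_kl(np.zeros(4), np.zeros(4))  # matching the prior is free
cost = rd_objective(0.3, np.ones(4), np.zeros(4), beta=2.0)
```

Raising β trades reconstruction fidelity for a cheaper (more compressed) code, which is how task-dependent invariances can be favoured over full reconstruction.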
Responses to a poll can be manipulated by means of a series of leading questions. We show that such phenomena cannot be explained using classical probability theory, whereas quantum probability theory admits the possibility of an explanation. Admissible transformation rules in quantum probability, however, impose some constraints on the modelling of cognitive behaviour, which are highlighted here. Focusing on a recent poll conducted by Ipsos on a set of questions posed by Sir Humphrey Appleby in an episode of the British political satire "Yes, Prime Minister", we show that the resulting data cannot be explained quite so simply using quantum rules, although it seems not impossible.
"Yes, Prime Minister, question order does matter -- and it's certainly not classical! But is it quantum?" — Dorje C. Brody. arXiv:2409.08930, 13 September 2024.
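A small sketch of why question order matters in quantum probability: model each yes/no question as a projector and apply the projectors in sequence (the Lüders rule). When the projectors do not commute, the two orders give different joint probabilities. The angles and belief state below are arbitrary illustrative choices, not fitted to the Ipsos poll.

```python
import numpy as np

def projector(theta):
    """Projector onto the ray at angle theta in R^2 (the 'yes' subspace)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

A = projector(0.0)        # question A
B = projector(np.pi / 4)  # question B; A and B do not commute

psi = np.array([np.cos(0.3), np.sin(0.3)])  # initial belief state (unit norm)

# Lüders rule: the probability of answering "yes" to the first question and
# then "yes" to the second is the squared norm of the projected state.
p_ab = np.linalg.norm(B @ A @ psi) ** 2  # A asked first
p_ba = np.linalg.norm(A @ B @ psi) ** 2  # B asked first
```

Classical (commuting) events would force p_ab == p_ba; the gap between the two sequence probabilities is the order effect.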
Hairong Lu, Dimitri van der Linden, Arnold B. Bakker
People often strive for deep engagement in activities, which is usually associated with feelings of flow: a state of full task absorption accompanied by a sense of control and fulfillment. The intrinsic factors driving such engagement and facilitating subjective feelings of flow remain unclear. Building on computational theories of intrinsic motivation, this study examines how learning progress predicts engagement and directs cognitive control. Results showed that task engagement, as indicated by feelings of flow and distractibility, is a function of learning progress. Electroencephalography data further revealed that learning progress is associated with enhanced proactive preparation (e.g., reduced pre-stimulus contingent negative variation and parietal alpha desynchronization) and improved feedback processing (e.g., increased P3b amplitude and parietal alpha desynchronization). The impact of learning progress on cognitive control is observed at the task-block and goal-episode levels, but not at the trial level. This suggests that learning progress shapes cognitive control over extended periods as progress accumulates. These findings highlight the critical role of learning progress in sustaining engagement and cognitive control in goal-directed behavior.
"The Neuroscientific Basis of Flow: Learning Progress Guides Task Engagement and Cognitive Control". arXiv:2409.06592, 10 September 2024.
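In computational theories of intrinsic motivation, learning progress is commonly operationalised as the recent decrease in prediction error. The definition below is an illustrative choice, not necessarily the exact measure used in the study; the error values are synthetic.

```python
import numpy as np

def learning_progress(errors, window=3):
    """Learning progress as the smoothed recent decrease in prediction error."""
    smoothed = np.convolve(errors, np.ones(window) / window, mode="valid")
    return -np.diff(smoothed)

# Errors drop steeply at first, then plateau:
errors = np.array([1.0, 0.9, 0.7, 0.5, 0.45, 0.44, 0.44])
lp = learning_progress(errors)
# lp is large early in learning and near zero once progress stalls.
```

Under this reading, engagement tracks the slope of the learning curve rather than absolute performance, which is consistent with the block-level (not trial-level) effects reported above.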
Thomas Thebaud, Anna Favaro, Casey Chen, Gabrielle Chavez, Laureano Moro-Velazquez, Ankur Butala, Najim Dehak
Motor changes are early signs of neurodegenerative diseases (NDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD), but are often difficult to detect, especially in the early stages. In this work, we examine the behavior of a wide array of explainable metrics extracted from the handwriting signals of 113 subjects performing multiple tasks on a digital tablet. The aim is to measure their effectiveness in characterizing and assessing multiple NDs, including AD and PD. To this end, task-agnostic and task-specific metrics are extracted from 14 distinct tasks. Subsequently, through statistical analysis and a series of classification experiments, we investigate which metrics provide greater discriminative power between NDs and healthy controls and among different NDs. Preliminary results indicate that all of the tasks can be effectively leveraged to distinguish between the considered set of NDs, specifically by measuring stability, writing speed, time spent not writing, and pressure variations between groups with our handcrafted explainable metrics, which show p-values lower than 0.0001 for multiple tasks. Using various classification algorithms on the computed metrics, we obtain up to 87% accuracy in discriminating AD from healthy controls (CTL), and up to 69% for PD vs. CTL.
"Explainable Metrics for the Assessment of Neurodegenerative Diseases through Handwriting Analysis". arXiv:2409.08303, 10 September 2024.
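The kinds of task-agnostic metrics described above can be sketched directly from raw tablet samples. The function below is a hypothetical illustration, not the paper's feature set: it computes mean on-paper writing speed, the fraction of time spent in the air, and pressure variability from timestamped pen samples.

```python
import numpy as np

def handwriting_metrics(t, x, y, pressure):
    """Illustrative metrics: mean on-paper speed, fraction of time in the
    air (pressure == 0), and variability of the applied pressure."""
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt
    on_paper = pressure[:-1] > 0
    return {
        "mean_speed": float(np.mean(speed[on_paper])),
        "in_air_fraction": float(np.mean(~on_paper)),
        "pressure_std": float(np.std(pressure[pressure > 0])),
    }

# Synthetic samples: a pause in the air between two strokes.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.array([0.0, 1.0, 2.0, 2.0, 3.0])
y = np.zeros(5)
pressure = np.array([1.0, 1.0, 0.0, 1.0, 1.0])
m = handwriting_metrics(t, x, y, pressure)
```

Per-subject vectors of such metrics are what the statistical tests and classifiers in the study would operate on.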
The dynamics of exchangeable or spatially-structured networks of $N$ interacting stochastic neurons can be described by deterministic population equations in the mean-field limit $N \to \infty$, when synaptic weights scale as $O(1/N)$. This asymptotic behavior has been proven in several works, but a general question has remained unanswered: does the $O(1/N)$ scaling of synaptic weights, by itself, suffice to guarantee the convergence of network dynamics to a deterministic population equation, even when networks are not assumed to be exchangeable or spatially structured? In this work, we consider networks of stochastic integrate-and-fire neurons with arbitrary synaptic weights satisfying only an $O(1/N)$ scaling condition. Borrowing results from the theory of dense graph limits (graphons), we prove that, as $N \to \infty$, and up to the extraction of a subsequence, the empirical measure of the neurons' membrane potentials converges to the solution of a spatially-extended mean-field partial differential equation (PDE). Our proof requires analytical techniques that go beyond standard propagation-of-chaos methods. In particular, we introduce a weak metric that depends on the dense graph limit kernel, and we show how the weak convergence of the initial data can be obtained by propagating the regularity of the limit kernel along the dual-backward equation associated with the spatially-extended mean-field PDE. Overall, this result invites us to re-interpret spatially-extended population equations as universal mean-field limits of networks of neurons with $O(1/N)$ synaptic weight scaling.
"Non-exchangeable networks of integrate-and-fire neurons: spatially-extended mean-field limit of the empirical measure" — Pierre-Emmanuel Jabin, Valentin Schmutz, Datong Zhou. arXiv:2409.06325, 10 September 2024.
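The $O(1/N)$ weight scaling can be made concrete with a toy simulation, which is not the paper's model: $N$ stochastic integrate-and-fire neurons with recurrent weights $J/N$, so total recurrent input stays bounded as $N$ grows. The escape-rate firing rule and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_rate(N, T=200, dt=0.5, J=5.0):
    """Mean population rate of N stochastic integrate-and-fire neurons
    whose recurrent weights scale as J/N (the O(1/N) condition)."""
    v = np.zeros(N)
    rates = []
    for _ in range(T):
        # Escape-rate noise: firing probability grows with the potential.
        p_spike = 1.0 - np.exp(-dt * np.exp(v - 1.0))
        spikes = rng.random(N) < p_spike
        rates.append(spikes.mean() / dt)
        v[spikes] = 0.0                          # reset after a spike
        v += dt * (-v) + (J / N) * spikes.sum()  # leak + recurrent drive
    return float(np.mean(rates))

# As N grows, the empirical population rate concentrates around a
# deterministic value, the intuition behind the mean-field limit.
r_small, r_large = mean_rate(50), mean_rate(5000)
```

The paper's contribution is proving that such convergence holds (along a subsequence) even for arbitrary, non-exchangeable weight matrices under the scaling condition alone.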
Antony W. N'dri, William Gebhardt, Céline Teulière, Fleur Zeldenrust, Rajesh P. N. Rao, Jochen Triesch, Alexander Ororbia
In this article, we review a class of neuro-mimetic computational models that we place under the label of spiking predictive coding. Specifically, we review the general framework of predictive processing in the context of neurons that emit discrete action potentials, i.e., spikes. We structure our survey around how prediction errors are represented, which yields an organization of historical neuromorphic generalizations centered on three broad classes of approaches: prediction errors carried by explicit groups of error neurons, prediction errors encoded in membrane potentials, and implicit prediction-error encoding. Furthermore, we examine applications of spiking predictive coding that utilize more energy-efficient, edge-computing hardware platforms. Finally, we highlight important future directions and challenges in this emerging line of inquiry in brain-inspired computing. Building on prior work in computational cognitive neuroscience, machine intelligence, and neuromorphic engineering, we hope that this review of neuromorphic formulations and implementations of predictive coding will encourage and guide future research and development in this emerging research area.
"Predictive Coding with Spiking Neural Networks: a Survey". arXiv:2409.05386, 9 September 2024.
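The first class of approaches above, explicit error neurons, can be sketched in a rate-based toy model (spiking dynamics omitted for brevity; weights, sizes, and learning rate are illustrative): an error population computes input minus prediction, and the latent state is updated by the fed-back error.

```python
import numpy as np

def pc_step(x, z, W, lr=0.1):
    """One predictive-coding inference step: explicit error units compute
    input minus prediction; the latent state follows the fed-back error."""
    eps = x - W @ z           # prediction-error population
    z = z + lr * (W.T @ eps)  # latent update driven by the error signal
    return z, eps

rng = np.random.default_rng(1)
W = 0.3 * rng.standard_normal((8, 3))  # generative (prediction) weights
z_true = np.array([1.0, -0.5, 0.2])
x = W @ z_true                         # observed input

z = np.zeros(3)
for _ in range(200):
    z, eps = pc_step(x, z, W)
# The error population quiets down as predictions approach the input.
```

The spiking formulations surveyed above differ mainly in how eps is carried: by dedicated spiking error neurons, inside membrane potentials, or implicitly.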
Sparse connectivity is a hallmark of the brain and a desired property of artificial neural networks. It promotes energy efficiency, simplifies training, and enhances the robustness of network function. Thus, a detailed understanding of how to achieve sparsity without jeopardizing network performance is beneficial for neuroscience, deep learning, and neuromorphic computing applications. We used an exactly solvable model of associative learning to evaluate the effects of various sparsity-inducing constraints on connectivity and function. We determine the optimal level of sparsity achieved by the $l_0$ norm constraint and find that nearly the same efficiency can be obtained by eliminating weak connections. We show that this method of achieving sparsity can be implemented online, making it compatible with neuroscience and machine learning applications.
"Sparse learning enabled by constraints on connectivity and function" — Mirza M. Junaid Baig, Armen Stepanyants. arXiv:2409.04946, 8 September 2024.
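The weak-connection elimination described in the abstract can be sketched as simple magnitude pruning; the threshold and matrix below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def prune_weak(W, threshold):
    """Sparsify by zeroing connections weaker than `threshold` -- the
    'eliminate weak connections' alternative to an explicit l0 constraint."""
    W = W.copy()
    W[np.abs(W) < threshold] = 0.0
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 100))
W_sparse = prune_weak(W, 0.5)
sparsity = float(np.mean(W_sparse == 0))  # fraction of pruned connections
```

Because the rule only inspects current weight magnitudes, it can be applied during training, which is the online property the abstract emphasizes.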
Alexandru Vasilache, Jann Krausse, Klaus Knobloch, Juergen Becker
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia by restoring their ability to perform daily activities. However, current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring. Wireless iBMIs offer a solution but are constrained by a limited data rate. To overcome this challenge, we investigate hybrid spiking neural networks for embedded neural decoding in wireless iBMIs. The networks consist of a temporal convolution-based compression stage followed by recurrent processing and a final interpolation back to the original sequence length. As recurrent units, we explore gated recurrent units (GRUs), leaky integrate-and-fire (LIF) neurons, and a combination of both, spiking GRUs (sGRUs), and analyze their differences in terms of accuracy, footprint, and activation sparsity. To that end, we train decoders on the "Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology" dataset and evaluate them using the NeuroBench framework, targeting both tracks of the IEEE BioCAS Grand Challenge on Neural Decoding. Our approach achieves high accuracy in predicting the velocities of primate reaching movements from multichannel primary motor cortex recordings while maintaining a low number of synaptic operations, surpassing the current baseline models in the NeuroBench framework. This work highlights the potential of hybrid neural networks to facilitate wireless iBMIs with high decoding precision and a substantial increase in the number of monitored neurons, paving the way toward more advanced neuroprosthetic technologies.
"Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces". arXiv:2409.04428, 6 September 2024.
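A minimal discrete-time LIF unit, the spiking recurrent building block the abstract compares against GRUs; the decay, threshold, and constant input drive are illustrative values chosen for this sketch.

```python
import numpy as np

def lif_step(v, x, decay=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire unit."""
    v = decay * v + x                       # leaky integration of input
    spike = (v >= threshold).astype(float)  # all-or-none (binary) output
    v = v * (1.0 - spike)                   # reset membrane after a spike
    return v, spike

v = np.zeros(4)
outputs = []
for _ in range(10):                         # constant drive of 0.3 per step
    v, s = lif_step(v, np.full(4, 0.3))
    outputs.append(s)
total_spikes = float(np.sum(outputs))       # most steps emit no spike
```

The sparse, binary activations are what keep the synaptic-operation count low in the decoders described above, at the cost of the denser state updates a GRU would provide.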