The cortex plays a crucial role in various perceptual and cognitive functions, driven by its basic unit, the canonical cortical microcircuit. Yet we still lack a framework that definitively explains the structure-function relationships of this fundamental neuroanatomical motif. To better understand how the physical substrates of cortical circuitry shape its neuronal dynamics, we employ a computational approach using recurrent neural networks and representational analyses. We examine how including or excluding biologically motivated interareal laminar connections changes the computational roles of different neuronal populations in the microcircuits of hierarchically related areas throughout learning. Our findings show that the presence of feedback connections correlates with the functional modularization of cortical populations in different layers and provides the microcircuit with a natural inductive bias to differentiate expected and unexpected inputs at initialization, which we justify mathematically. Furthermore, when training the microcircuit and its variants with a predictive-coding-inspired strategy, we find that doing so improves the encoding of noisy stimuli in cortical areas that receive feedback. Together, these results suggest that a predictive-coding mechanism serves as an intrinsic operative logic in the cortex.
{"title":"Exploring the Architectural Biases of the Cortical Microcircuit","authors":"Aishwarya Balwani;Suhee Cho;Hannah Choi","doi":"10.1162/neco.a.23","DOIUrl":"10.1162/neco.a.23","url":null,"abstract":"The cortex plays a crucial role in various perceptual and cognitive functions, driven by its basic unit, the canonical cortical microcircuit. Yet, we remain short of a framework that definitively explains the structure-function relationships of this fundamental neuroanatomical motif. To better understand how physical substrates of cortical circuitry facilitate their neuronal dynamics, we employ a computational approach using recurrent neural networks and representational analyses. We examine the differences manifested by the inclusion and exclusion of biologically motivated interareal laminar connections on the computational roles of different neuronal populations in the microcircuit of hierarchically related areas throughout learning. Our findings show that the presence of feedback connections correlates with the functional modularization of cortical populations in different layers and provides the microcircuit with a natural inductive bias to differentiate expected and unexpected inputs at initialization, which we justify mathematically. Furthermore, when testing the effects of training the microcircuit and its variants with a predictive-coding-inspired strategy, we find that doing so helps better encode noisy stimuli in areas of the cortex that receive feedback, all of which combine to suggest evidence for a predictive-coding mechanism serving as an intrinsic operative logic in the cortex.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1551-1599"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Degeneracy—the ability of different structures to perform the same function—is a fundamental feature of biological systems, contributing to their robustness and evolvability. However, the ubiquity of degeneracy in systems generated through adaptive processes complicates our understanding of the behavioral and computational strategies they employ. In this study, we investigated degeneracy in simple computational agents, known as Markov brains, trained using an artificial evolution algorithm to solve a spatial navigation task with or without associative memory. We analyzed degeneracy at three levels: behavioral, structural, and computational, with a focus on the last. Using information-theoretical concepts, Tononi et al. (1999) proposed a functional measure of degeneracy within biological networks. Here, we extended this approach to compare degeneracy across multiple networks. Using information-theoretical tools and causal analysis, we explored the computational strategies of the evolved agents and quantified their computational degeneracy. Our findings reveal a hierarchy of degenerate solutions, from varied behaviors to diverse structures and computations. Even agents with identical evolved behaviors demonstrated different underlying structures and computations. These results underscore the pervasive nature of degeneracy in neural networks, blurring the lines between the algorithmic and implementation levels in adaptive systems, and highlight the importance of advanced analytical tools to understand their complex behaviors.
{"title":"From Function to Implementation: Exploring Degeneracy in Evolved Artificial Agents","authors":"Zhimin Hu;Oğulcan Cingiler;Clifford Bohm;Larissa Albantakis","doi":"10.1162/neco.a.19","DOIUrl":"10.1162/neco.a.19","url":null,"abstract":"Degeneracy—the ability of different structures to perform the same function—is a fundamental feature of biological systems, contributing to their robustness and evolvability. However, the ubiquity of degeneracy in systems generated through adaptive processes complicates our understanding of the behavioral and computational strategies they employ. In this study, we investigated degeneracy in simple computational agents, known as Markov brains, trained using an artificial evolution algorithm to solve a spatial navigation task with or without associative memory. We analyzed degeneracy at three levels: behavioral, structural, and computational, with a focus on the last. Using information-theoretical concepts, Tononi et al. (1999) proposed a functional measure of degeneracy within biological networks. Here, we extended this approach to compare degeneracy across multiple networks. Using information-theoretical tools and causal analysis, we explored the computational strategies of the evolved agents and quantified their computational degeneracy. Our findings reveal a hierarchy of degenerate solutions, from varied behaviors to diverse structures and computations. Even agents with identical evolved behaviors demonstrated different underlying structures and computations. These results underscore the pervasive nature of degeneracy in neural networks, blurring the lines between the algorithmic and implementation levels in adaptive systems, and highlight the importance of advanced analytical tools to understand their complex behaviors.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1677-1708"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11180098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding how brain networks learn and manage multiple tasks simultaneously is of interest in both neuroscience and artificial intelligence. In this regard, a recent research thread in theoretical neuroscience has focused on how recurrent neural network models and their internal dynamics enact multitask learning. Managing different tasks requires a mechanism for conveying information about task identity or context into the model, which from a biological perspective may involve neuromodulation. In this study, we use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics, at the level of neuronal excitability and at the level of synaptic strength. We characterize these mechanisms in terms of their functional outcomes, focusing on their robustness to context ambiguity and, relatedly, their efficiency with respect to packing multiple tasks into finite-size networks. We also demonstrate the distinction between these mechanisms at the level of the neuronal dynamics they induce. Together, these characterizations indicate complementarity and synergy in how these mechanisms act, potentially over many timescales, toward enhancing the robustness of multitask learning.
{"title":"Synergistic Pathways of Modulation Enable Robust Task Packing Within Neural Dynamics","authors":"Giacomo Vedovati;ShiNung Ching","doi":"10.1162/neco.a.18","DOIUrl":"10.1162/neco.a.18","url":null,"abstract":"Understanding how brain networks learn and manage multiple tasks simultaneously is of interest in both neuroscience and artificial intelligence. In this regard, a recent research thread in theoretical neuroscience has focused on how recurrent neural network models and their internal dynamics enact multitask learning. To manage different tasks requires a mechanism to convey information about task identity or context into the model, which from a biological perspective may involve mechanisms of neuromodulation. In this study, we use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics, at the level of neuronal excitability and at the level of synaptic strength. We characterize these mechanisms in terms of their functional outcomes, focusing on their robustness to context ambiguity and, relatedly, their efficiency with respect to packing multiple tasks into finite-size networks. We also demonstrate the distinction between these mechanisms at the level of the neuronal dynamics they induce. Together, these characterizations indicate complementarity and synergy in how these mechanisms act, potentially over many timescales, toward enhancing the robustness of multitask learning.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1529-1550"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensory processing arises from the communication between neural populations across multiple brain areas. While the widespread presence of neural response variability shared throughout a neural population limits the amount of stimulus-related information those populations can accurately represent, how this variability affects the interareal communication of sensory information is unknown. We propose a mathematical framework to understand the impact of neural population response variability on sensory information transmission. We combine linear Fisher information, a metric connecting stimulus representation and variability, with the framework of communication subspaces, which suggests that functional mappings between cortical populations are low-dimensional relative to the space of population activity patterns. From this, we partition Fisher information depending on the alignment between the population covariance and the mean tuning direction projected onto the communication subspace or its orthogonal complement. We provide mathematical and numerical analyses of our proposed decomposition of Fisher information and examine theoretical scenarios that demonstrate how to leverage communication subspaces for flexible routing and gating of stimulus information. This work will provide researchers investigating interareal communication with a theoretical lens through which to understand sensory information transmission and guide experimental design.
{"title":"Measuring Stimulus Information Transfer Between Neural Populations Through the Communication Subspace","authors":"Oren Weiss;Ruben Coen-Cagli","doi":"10.1162/neco.a.17","DOIUrl":"10.1162/neco.a.17","url":null,"abstract":"Sensory processing arises from the communication between neural populations across multiple brain areas. While the widespread presence of neural response variability shared throughout a neural population limits the amount of stimulus-related information those populations can accurately represent, how this variability affects the interareal communication of sensory information is unknown. We propose a mathematical framework to understand the impact of neural population response variability on sensory information transmission. We combine linear Fisher information, a metric connecting stimulus representation and variability, with the framework of communication subspaces, which suggests that functional mappings between cortical populations are low-dimensional relative to the space of population activity patterns. From this, we partition Fisher information depending on the alignment between the population covariance and the mean tuning direction projected onto the communication subspace or its orthogonal complement. We provide mathematical and numerical analyses of our proposed decomposition of Fisher information and examine theoretical scenarios that demonstrate how to leverage communication subspaces for flexible routing and gating of stimulus information. This work will provide researchers investigating interareal communication with a theoretical lens through which to understand sensory information transmission and guide experimental design.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1600-1647"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This letter explores the capability of continuous-time recurrent neural networks to store and recall precisely timed scores of spike trains. We show (by numerical experiments) that this is indeed possible: within some range of parameters, any random score of spike trains (for all neurons in the network) can be robustly memorized and autonomously reproduced with stable accurate relative timing of all spikes, with probability close to one. We also demonstrate associative recall under noisy conditions. In these experiments, the required synaptic weights are computed offline to satisfy a template that encourages temporal stability.
{"title":"Continuous-Time Neural Networks Can Stably Memorize Random Spike Trains","authors":"Hugo Aguettaz;Hans-Andrea Loeliger","doi":"10.1162/neco_a_01768","DOIUrl":"10.1162/neco_a_01768","url":null,"abstract":"This letter explores the capability of continuous-time recurrent neural networks to store and recall precisely timed scores of spike trains. We show (by numerical experiments) that this is indeed possible: within some range of parameters, any random score of spike trains (for all neurons in the network) can be robustly memorized and autonomously reproduced with stable accurate relative timing of all spikes, with probability close to one. We also demonstrate associative recall under noisy conditions. In these experiments, the required synaptic weights are computed offline to satisfy a template that encourages temporal stability.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1439-1468"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emergent effects are crucial to understanding the properties of complex systems that do not appear in their basic units, but theories to measure and explain their mechanisms have been lacking. In this letter, we consider emergence as a kind of structural nonlinearity, discuss a framework based on homological algebra that encodes emergence as the mathematical structure of cohomologies, and then apply it to network models to develop a computational measure of emergence. This framework ties the potential for emergent effects of a system to its network topology and local structures, paving the way to predict and understand the cause of emergent effects. We show in our numerical experiment that our measure of emergence correlates with the existing information-theoretic measure of emergence.
{"title":"A Categorical Framework for Quantifying Emergent Effects in Network Topology","authors":"Johnny Jingze Li;Sebastian Pardo-Guerra;Kalyan Basu;Gabriel A. Silva","doi":"10.1162/neco_a_01766","DOIUrl":"10.1162/neco_a_01766","url":null,"abstract":"Emergent effect is crucial to understanding the properties of complex systems that do not appear in their basic units, but there has been a lack of theories to measure and understand its mechanisms. In this letter, we consider emergence as a kind of structural nonlinearity, discuss a framework based on homological algebra that encodes emergence as the mathematical structure of cohomologies, and then apply it to network models to develop a computational measure of emergence. This framework ties the potential for emergent effects of a system to its network topology and local structures, paving the way to predict and understand the cause of emergent effects. We show in our numerical experiment that our measure of emergence correlates with the existing information-theoretic measure of emergence.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1409-1438"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reservoir computing, information processing based on untrained recurrent neural networks with random connections, is expected to depend on the nonlinear properties of the neurons and the resulting oscillatory, chaotic, or fixed-point dynamics of the network. However, the degree of nonlinearity required and the range of suitable dynamical regimes for a given task remain poorly understood. To clarify these issues, we study the classification accuracy of a reservoir computer in artificial tasks of varying complexity while tuning both the neuron’s degree of nonlinearity and the reservoir’s dynamical regime. We find that even with activation functions of extremely reduced nonlinearity, weak recurrent interactions, and small input signals, the reservoir can compute useful representations. These representations, detectable only in higher-order principal components, make complex classification tasks linearly separable for the readout layer. Increasing the recurrent coupling leads to spontaneous dynamical behavior. Nevertheless, some input-related computations can “ride on top” of oscillatory or fixed-point attractors with little loss of accuracy, whereas chaotic dynamics often reduces task performance. By tuning the system through the full range of dynamical phases, we observe in several classification tasks that accuracy peaks at both the oscillatory/chaotic and chaotic/fixed-point phase boundaries, supporting the edge of chaos hypothesis. We also present a regression task with the opposite behavior. Our findings, particularly the robust weakly nonlinear operating regime, may offer new perspectives for both technical and biological neural networks with random connectivity.
{"title":"Nonlinear Neural Dynamics and Classification Accuracy in Reservoir Computing","authors":"Claus Metzner;Achim Schilling;Andreas Maier;Patrick Krauss","doi":"10.1162/neco_a_01770","DOIUrl":"10.1162/neco_a_01770","url":null,"abstract":"Reservoir computing information processing based on untrained recurrent neural networks with random connections is expected to depend on the nonlinear properties of the neurons and the resulting oscillatory, chaotic, or fixed-point dynamics of the network. However, the degree of nonlinearity required and the range of suitable dynamical regimes for a given task remain poorly understood. To clarify these issues, we study the classification accuracy of a reservoir computer in artificial tasks of varying complexity while tuning both the neuron’s degree of nonlinearity and the reservoir’s dynamical regime. We find that even with activation functions of extremely reduced nonlinearity, weak recurrent interactions, and small input signals, the reservoir can compute useful representations. These representations, detectable only in higher-order principal components, make complex classification tasks linearly separable for the readout layer. Increasing the recurrent coupling leads to spontaneous dynamical behavior. Nevertheless, some input-related computations can “ride on top” of oscillatory or fixed-point attractors with little loss of accuracy, whereas chaotic dynamics often reduces task performance. By tuning the system through the full range of dynamical phases, we observe in several classification tasks that accuracy peaks at both the oscillatory/chaotic and chaotic/fixed-point phase boundaries, supporting the edge of chaos hypothesis. We also present a regression task with the opposite behavior. Our findings, particularly the robust weakly nonlinear operating regime, may offer new perspectives for both technical and biological neural networks with random connectivity.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1469-1504"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novelty detection, also known as familiarity discrimination or recognition memory, refers to the ability to distinguish whether a stimulus has been seen before. It has been hypothesized that novelty detection can naturally arise within networks that store memory or learn efficient neural representation because these networks already store information on familiar stimuli. However, existing computational models supporting this idea have yet to reproduce the high capacity of human recognition memory, leaving the hypothesis in question. This article demonstrates that predictive coding, an established model previously shown to effectively support representation learning and memory, can also naturally discriminate novelty with high capacity. The predictive coding model includes neurons encoding prediction errors, and we show that these neurons produce higher activity for novel stimuli, so that the novelty can be decoded from their activity. Additionally, hierarchical predictive coding networks detect novelty at different levels of abstraction within the hierarchy, from low-level sensory features like arrangements of pixels to high-level semantic features like object identities. Overall, based on predictive coding, this article establishes a unified framework that brings together novelty detection, associative memory, and representation learning, demonstrating that a single model can capture these various cognitive functions.
{"title":"Predictive Coding Model Detects Novelty on Different Levels of Representation Hierarchy","authors":"T. Ed Li;Mufeng Tang;Rafal Bogacz","doi":"10.1162/neco_a_01769","DOIUrl":"10.1162/neco_a_01769","url":null,"abstract":"Novelty detection, also known as familiarity discrimination or recognition memory, refers to the ability to distinguish whether a stimulus has been seen before. It has been hypothesized that novelty detection can naturally arise within networks that store memory or learn efficient neural representation because these networks already store information on familiar stimuli. However, existing computational models supporting this idea have yet to reproduce the high capacity of human recognition memory, leaving the hypothesis in question. This article demonstrates that predictive coding, an established model previously shown to effectively support representation learning and memory, can also naturally discriminate novelty with high capacity. The predictive coding model includes neurons encoding prediction errors, and we show that these neurons produce higher activity for novel stimuli, so that the novelty can be decoded from their activity. Additionally, hierarchical predictive coding networks detect novelty at different levels of abstraction within the hierarchy, from low-level sensory features like arrangements of pixels to high-level semantic features like object identities. Overall, based on predictive coding, this article establishes a unified framework that brings together novelty detection, associative memory, and representation learning, demonstrating that a single model can capture these various cognitive functions.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1373-1408"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative modeling of crystal structures is significantly challenged by the complexity of input data, which constrains the ability of these models to explore and discover novel crystals. This complexity often confines de novo design methodologies to merely small perturbations of known crystals and hampers the effective application of advanced optimization techniques. One such optimization technique, latent space Bayesian optimization (LSBO), has demonstrated promising results in uncovering novel objects across various domains, especially when combined with variational autoencoders (VAEs). Recognizing LSBO’s potential and the critical need for innovative crystal discovery, we introduce Crystal-LSBO, a de novo design framework for crystals specifically tailored to enhance explorability within LSBO frameworks. Crystal-LSBO employs multiple VAEs, each dedicated to a distinct aspect of crystal structure—lattice, coordinates, and chemical elements—orchestrated by an integrative model that synthesizes these components into a cohesive output. This setup not only streamlines the learning process but also produces explorable latent spaces thanks to the decreased complexity of the learning task for each model, enabling LSBO approaches to operate. Our study pioneers the use of LSBO for de novo crystal design, demonstrating its efficacy through optimization tasks focused mainly on formation energy values. Our results highlight the effectiveness of our methodology, offering a new perspective for de novo crystal discovery.
{"title":"Crystal-LSBO: Automated Design of De Novo Crystals With Latent Space Bayesian Optimization","authors":"Onur Boyar;Yanheng Gu;Yuji Tanaka;Shunsuke Tonogai;Tomoya Itakura;Ichiro Takeuchi","doi":"10.1162/neco_a_01767","DOIUrl":"10.1162/neco_a_01767","url":null,"abstract":"Generative modeling of crystal structures is significantly challenged by the complexity of input data, which constrains the ability of these models to explore and discover novel crystals. This complexity often confines de novo design methodologies to merely small perturbations of known crystals and hampers the effective application of advanced optimization techniques. One such optimization technique, latent space Bayesian optimization (LSBO), has demonstrated promising results in uncovering novel objects across various domains, especially when combined with variational autoencoders (VAEs). Recognizing LSBO’s potential and the critical need for innovative crystal discovery, we introduce Crystal-LSBO, a de novo design framework for crystals specifically tailored to enhance explorability within LSBO frameworks. Crystal-LSBO employs multiple VAEs, each dedicated to a distinct aspect of crystal structure—lattice, coordinates, and chemical elements—orchestrated by an integrative model that synthesizes these components into a cohesive output. This setup not only streamlines the learning process but also produces explorable latent spaces thanks to the decreased complexity of the learning task for each model, enabling LSBO approaches to operate. Our study pioneers the use of LSBO for de novo crystal design, demonstrating its efficacy through optimization tasks focused mainly on formation energy values. Our results highlight the effectiveness of our methodology, offering a new perspective for de novo crystal discovery.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1505-1527"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11133426","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory is a complex process in the brain that involves the encoding, consolidation, and retrieval of previously experienced stimuli. The brain is capable of rapidly forming memories of sensory input. However, applying the memory system to real-world data poses challenges in practical implementation. This article demonstrates that through the integration of a sparse spike pattern encoding scheme, the population tempotron, and various spike-timing-dependent plasticity (STDP) learning rules, supported by bounded weights and biological mechanisms, it is possible to rapidly form stable neural assemblies of external sensory inputs in a spiking neural circuit model inspired by the hippocampal structure. The model employs a neural ensemble module and competitive learning strategies that mimic the pattern separation mechanism of the hippocampal dentate gyrus (DG) area to achieve nonoverlapping sparse coding. It also uses the population tempotron and NMDA (N-methyl-D-aspartate)-mediated STDP to construct associative and episodic memories, analogous to the CA3 and CA1 regions. These memories are represented by strongly connected neural assemblies formed within just a few trials. Overall, this model offers a robust computational framework to support rapid memory formation throughout the brain-wide memory process.
{"title":"Rapid Memory Encoding in a Spiking Hippocampus Circuit Model","authors":"Jiashuo Wang;Mengwen Yuan;Jiangrong Shen;Qingao Chai;Huajin Tang","doi":"10.1162/neco_a_01762","DOIUrl":"10.1162/neco_a_01762","url":null,"abstract":"Memory is a complex process in the brain that involves the encoding, consolidation, and retrieval of previously experienced stimuli. The brain is capable of rapidly forming memories of sensory input. However, applying the memory system to real-world data poses challenges in practical implementation. This article demonstrates that through the integration of sparse spike pattern encoding scheme population tempotron, and various spike-timing-dependent plasticity (STDP) learning rules, supported by bounded weights and biological mechanisms, it is possible to rapidly form stable neural assemblies of external sensory inputs in a spiking neural circuit model inspired by the hippocampal structure. The model employs neural ensemble module and competitive learning strategies that mimic the pattern separation mechanism of the hippocampal dentate gyrus (DG) area to achieve nonoverlapping sparse coding. It also uses population tempotron and NMDA-(N-methyl-D-aspartate)mediated STDP to construct associative and episodic memories, analogous to the CA3 and CA1 regions. These memories are represented by strongly connected neural assemblies formed within just a few trials. Overall, this model offers a robust computational framework to accommodate rapid memory throughout the brain-wide memory process.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 7","pages":"1320-1352"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}