
Latest publications: Frontiers in Computational Neuroscience

From generative AI to the brain: five takeaways.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-24 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1718778
Claudius Gros

The big strides seen in generative AI are based not on obscure algorithms but on clearly defined generative principles. The resulting concrete implementations have proven themselves in a large number of applications. We suggest that it is imperative to investigate thoroughly which of these generative principles may also be operative in the brain, and hence relevant for cognitive neuroscience. In addition, ML research has led to a range of interesting characterizations of neural information processing systems. We discuss five examples, namely the shortcomings of world modeling, the generation of thought processes, attention, neural scaling laws, and quantization, that illustrate how much neuroscience could potentially learn from ML research.
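The abstract names attention among the clearly defined generative principles behind modern AI. As a concrete anchor, here is a minimal NumPy sketch of scaled dot-product attention; the shapes and random inputs are purely illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Scaled dot-product attention: each token mixes information from all
# tokens, weighted by the softmax of query-key similarity.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # similarity logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # convex mixture of values

# Toy sequence: 4 tokens, 8-dimensional queries/keys/values.
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)
print(out.shape)
```

Each output row is a weighted average of the value rows, which is the sense in which attention implements content-based routing.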

Citations: 0
Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1613291
Asaki Kataoka, Yoshihiro Nagano, Masafumi Oizumi

Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the correspondence between neural representations in deep convolutional neural networks (DCNNs) and biological systems, the extent to which unsupervised or self-supervised learning can explain the human-like acquisition of categorically structured information remains less explored. In this study, we investigate the correspondence between the internal representations of DCNNs trained using a self-supervised contrastive learning algorithm and human semantics and recognition. To this end, we employ a few-shot learning evaluation procedure, which measures the ability of DCNNs to recognize novel concepts from limited exposure, to examine the inter-categorical structure of the learned representations. Two comparative approaches are used to relate the few-shot learning outcomes to human semantics and recognition, with results suggesting that the representations acquired through contrastive learning are well aligned with human cognition. These findings underscore the potential of self-supervised contrastive learning frameworks to model learning mechanisms similar to those of the human brain, particularly in scenarios where explicit supervision is unavailable, such as in human infants prior to language acquisition.
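As an illustration of the kind of few-shot evaluation described, here is a minimal prototype-based classifier over toy embedding vectors. The Gaussian clusters stand in for DCNN representations of two novel categories; this is a generic sketch, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for network embeddings: two novel classes as tight
# Gaussian clusters in an 8-dimensional representation space.
def sample_class(center, n):
    return center + 0.1 * rng.standard_normal((n, 8))

c0, c1 = np.zeros(8), np.ones(8)
support = {0: sample_class(c0, 5), 1: sample_class(c1, 5)}   # 5-shot support set
queries = np.vstack([sample_class(c0, 20), sample_class(c1, 20)])
labels = np.array([0] * 20 + [1] * 20)

# Prototype classifier: each novel class is summarized by the mean of its
# few support embeddings; queries go to the nearest prototype.
protos = np.stack([support[k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
preds = dists.argmin(axis=1)
accuracy = (preds == labels).mean()
print(accuracy)  # → 1.0 for these well-separated toy clusters
```

The logic is the key point: few-shot accuracy is high exactly when the representation places novel categories in well-separated regions, which is the inter-categorical structure the study probes.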

Citations: 0
A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1408836
Changbo Zhu, Ke Zhou, Fengzhen Tang, Yandong Tang, Xiaoli Li, Bailu Si

Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum-entropy distribution of a continuous non-negative random variable with a given mean. Its memoryless and peakless properties pose difficulties for data analysis methods. To estimate the rate parameter of a multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model based on a variant of the general hierarchical Brownian filter (GHBF). To account for the complex interactions among multivariate exponential random variables, the model estimates the second-order interactions of the rate-intensity parameter in logarithmic space. Using a variational Bayesian scheme, we introduce a family of closed-form, analytical update equations, which also constitute a complete predictive coding framework. The simulation study shows that our model can track the time-varying rate parameters and the underlying correlation structure of volatile multivariate exponentially distributed signals. The proposed hierarchical Bayesian inference model is of practical utility in analyzing high-dimensional neural activities.
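The GHBF variant itself is beyond the scope of a listing, but the underlying estimation problem can be illustrated with the simplest conjugate case: a Gamma prior over the rate of univariate, static exponential observations. This is a hypothetical toy setup, not the authors' hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate Bayesian estimation of an exponential rate: with a Gamma(a, b)
# prior on lambda, observing x_1..x_n yields the posterior
# Gamma(a + n, b + sum(x_i)), whose mean is (a + n) / (b + sum(x_i)).
true_rate = 2.0
x = rng.exponential(scale=1.0 / true_rate, size=5000)

a, b = 1.0, 1.0                       # weak Gamma prior
a_post = a + x.size
b_post = b + x.sum()
posterior_mean = a_post / b_post      # should concentrate near true_rate
print(posterior_mean)
```

The paper's contribution lies in making this kind of update hierarchical and time-varying (volatile rates, multivariate couplings in log space), which the static conjugate case above does not attempt.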

Citations: 0
Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1704350
Song Xie, Ke Zuo, Silvia De Rubeis, Giorgio Bonollo, Giorgio Colombo, Paolo Ruggerone, Paolo Carloni

Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of this disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and R87C, A455P, and Q725R WRC disease-linked variants. Here, by extending MD simulations to I664M, E665K, and D724H WRC, we suggest that all of the mutations weaken the interactions and affect intra-complex allosteric communication between the WAVE1 active C-terminal region (ACR) and the rest of the complex. This might contribute to an abnormal complex activation, a hallmark of WRC-linked ASD. In addition, all mutants but I664M destabilize the ACR V-helix and increase the participation of ACR in large-scale movements. All these features may also abnormally influence the inactive WRC toward a dysfunctional state. We hypothesize that small-molecule ligands counteracting these effects may help restore normal WRC regulation in ASD-related variants.

Citations: 0
Time delays in computational models of neuronal and synaptic dynamics.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-10 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1700144
Mojtaba Madadi Asl
Citations: 0
Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-07 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1691017
Lingli Gan, Shuqin Yuan, Min Guo, Qian Wang, Zongfang Deng, Bin Jia

The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies requires efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered and flexible multi-sensing devices, offer a promising avenue for capturing neural-related biophysical signals such as electroencephalography (EEG), electromyography (EMG), and cardiorespiratory dynamics. Their low-power and wearable characteristics make them suitable for long-term health and neurocognitive monitoring. When combined with deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs), TENG-generated signals can be efficiently decoded, enabling insights into neural states, cognitive functions, and disease progression. Furthermore, neuromorphic computing paradigms provide an energy-efficient and biologically inspired framework that naturally aligns with the event-driven characteristics of TENG outputs. This mini review highlights the convergence of TENG-based sensing, deep learning algorithms, and neuromorphic systems for neural data interpretation. We discuss recent progress, challenges, and future perspectives, with an emphasis on applications in computational neuroscience, neurorehabilitation, and elderly health care.

Citations: 0
Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-07 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1661070
Fudong Zhang, Jingjing Cui

The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest in neural heterogeneity, a comprehensive understanding of how it influences computation across different neural levels and learning methods is still lacking. In this work, we systematically examine the neural computation of spiking neural networks (SNNs) in three key sources of neural heterogeneity: external, network, and intrinsic heterogeneity. We evaluate their impact using three distinct learning methods, which can carry out tasks ranging from simple curve fitting to complex network reconstruction and real-world applications. Our results show that while different types of neural heterogeneity contribute in distinct ways, they consistently improve learning accuracy and robustness. These findings suggest that neural heterogeneity across multiple levels improves learning capacity and robustness of neural computation, and should be considered a core design principle in the optimization of SNNs.
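A minimal sketch of what intrinsic heterogeneity means at the neuron level, assuming simple leaky integrate-and-fire (LIF) dynamics with toy parameters; this is not the authors' networks or learning methods, only the basic mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

# Intrinsic heterogeneity: LIF neurons share a constant suprathreshold
# drive, but their membrane time constants are drawn from a range rather
# than being identical, so the population produces a spread of firing
# rates instead of a single rate.
n, dt, t_sim = 50, 1e-4, 0.5                 # neurons, step (s), duration (s)
tau = rng.uniform(5e-3, 30e-3, size=n)       # heterogeneous time constants (s)
v = np.zeros(n)                              # membrane potentials
v_th, drive = 1.0, 1.5                       # spike threshold, input R*I
counts = np.zeros(n, dtype=int)

for _ in range(int(t_sim / dt)):
    v += dt / tau * (drive - v)              # Euler step of dv/dt = (R*I - v)/tau
    fired = v >= v_th
    counts[fired] += 1
    v[fired] = 0.0                           # reset after spike

rates = counts / t_sim                       # firing rates in Hz
print(rates.min(), rates.max())
```

A homogeneous population (one shared `tau`) would collapse onto a single rate; the diversity of responses here is the raw material the paper argues improves learning capacity and robustness.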

Citations: 0
Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-11-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1692418
Mahbod Nouri, David Rotermund, Alberto Garcia-Ortiz, Klaus R Pawelzik

Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While non-negative matrix factorization (NMF) captures the biological constraint of positive long-range interactions, deep convolutional neural networks with NMF modules do not match the performance of conventional neural networks (CNNs) of a similar size. This work shows that introducing intermediate modules that combine the NMF's positive activities, analogous to the processing in cortical columns, leads to improved performance on benchmark data that exceeds that of vanilla deep convolutional networks. This demonstrates that including positive long-range signaling together with local interactions of both signs, in analogy to cortical hyper-columns, has the potential to enhance the performance of deep networks.
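For readers unfamiliar with NMF, the classic Lee-Seung multiplicative-update algorithm illustrates the non-negativity constraint the paper builds on; this is the generic factorization, not the authors' interleaved cortex-analog modules:

```python
import numpy as np

rng = np.random.default_rng(3)

# Plain NMF: factor a non-negative matrix V ≈ W @ H with W, H >= 0.
# Multiplicative updates preserve non-negativity, mirroring the
# positive-long-range-signal constraint discussed above.
V = rng.random((20, 30))
k, eps = 5, 1e-9                      # latent rank, numerical floor
W = rng.random((20, k))
H = rng.random((k, 30))
err_init = np.linalg.norm(V - W @ H)

for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # Lee-Seung update for H
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # Lee-Seung update for W

err = np.linalg.norm(V - W @ H)
print(err_init, err)                       # reconstruction error drops
```

Because the updates are ratios of non-negative quantities, `W` and `H` stay non-negative throughout, so every reconstructed activity is an additive (parts-based) combination.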

Citations: 0
Universal differential equations as a unifying modeling language for neuroscience.
IF 2.3 | CAS Zone 4 (Medicine) | Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1677930
Ahmed El-Gazzar, Marcel van Gerven

The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths: mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy. Each, however, has limitations when applied in isolation. Universal differential equations (UDEs) offer a unifying modeling framework that integrates these complementary approaches. By treating differential equations as parameterizable, differentiable objects that can be combined with modern deep learning techniques, UDEs enable hybrid models that balance interpretability with predictive power. We provide a systematic overview of the UDE framework, covering its mathematical foundations, training methodologies, and recent innovations. We argue that UDEs fill a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience, with the potential to advance applications in neural computation, neural control, neural decoding, and normative modeling.
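The structural idea of a UDE, a known mechanistic term plus a trainable neural term inside one vector field, can be sketched in a few lines. The network weights below are random and untrained, so this shows only the model form under toy assumptions, not a fitting procedure (which would differentiate through the solver):

```python
import numpy as np

rng = np.random.default_rng(4)

# Tiny MLP standing in for unmodeled dynamics; in a real UDE these
# weights are learned by backpropagating through the ODE solver.
W1 = 0.1 * rng.standard_normal((16, 2))
W2 = 0.1 * rng.standard_normal((2, 16))

def nn(x):
    return W2 @ np.tanh(W1 @ x)          # learned residual term

def f(x):
    return -0.5 * x + nn(x)              # mechanistic decay + neural correction

# Explicit Euler solve of the hybrid system dx/dt = f(x).
x, dt = np.array([1.0, -1.0]), 0.01
traj = [x.copy()]
for _ in range(500):
    x = x + dt * f(x)
    traj.append(x.copy())
traj = np.array(traj)
print(traj[-1])
```

The key design point is that `f` is an ordinary differentiable function, so mechanistic structure and data-driven components train under one objective.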

引用次数: 0
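The abstract's central idea, a differential equation whose right-hand side combines a known mechanistic term with a learned correction, can be illustrated with a minimal sketch. This is not code from the paper: the toy system, the `simulate` and `loss` helpers, and the finite-difference gradient descent (a stand-in for the automatic differentiation used in practice) are all illustrative assumptions.

```python
# Minimal UDE sketch (illustrative, not from the paper). The vector field
# combines a known mechanistic term, -a*u, with a learnable residual term,
# theta*u. The synthetic data are generated with a hidden residual
# coefficient of 0.4, which training should recover.

def simulate(theta, u0=1.0, a=1.2, dt=0.01, steps=200):
    """Forward-Euler integration of the hybrid vector field."""
    u, traj = u0, [u0]
    for _ in range(steps):
        u = u + dt * (-a * u + theta * u)  # mechanistic + learned parts
        traj.append(u)
    return traj

data = simulate(theta=0.4)  # synthetic "observations"

def loss(theta):
    """Mean squared error between the model trajectory and the data."""
    return sum((p - d) ** 2 for p, d in zip(simulate(theta), data)) / len(data)

# Fit theta by finite-difference gradient descent (a stand-in for autodiff).
theta, lr, eps = 0.0, 2.0, 1e-5
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(round(theta, 2))  # the learned residual coefficient, close to 0.4
```

In a real UDE the scalar `theta*u` would be a neural network, and the Euler loop would be a differentiable ODE solver, but the training structure (simulate, compare to data, backpropagate through the solver) is the same.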
Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing.
IF 2.3 4区 医学 Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2025-10-24 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1618191
Juan M Tenti, Monserrat Pallares Di Nunzio, Marisa A Bab, Osvaldo Anibal Rosso, Fernando Montani, Marcelo J F Arlego

Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal correlations (quantified via Detrended Fluctuation Analysis) during N2 and N3 sleep. In contrast, REM sleep and wakefulness show reduced temporal persistence and fewer large-scale cascades, suggesting a shift toward more fragmented and flexible dynamics. These signatures vary across cortical regions, with distinctive patterns emerging in medial temporal and frontal areas, regions implicated in memory processing. Rather than providing direct evidence of consolidation, our results point to a functional neural landscape that may favor both stabilization and reconfiguration of internal representations during sleep. Overall, our findings highlight the utility of iEEG in revealing the multiscale spatio-temporal structure of sleep-related brain dynamics, offering insights into the physiological conditions that support memory-related processing.

Citations: 0
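Detrended Fluctuation Analysis, which the abstract uses to quantify long-range temporal correlations, can be sketched compactly. This is a generic first-order DFA, not the authors' analysis pipeline: the `dfa_alpha` helper, the window scales, and the white-noise check are illustrative assumptions.

```python
import math
import random

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """First-order DFA: returns the scaling exponent alpha.
    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates
    persistent long-range temporal correlations."""
    mean = sum(x) / len(x)
    # Cumulative-sum profile of the mean-removed signal.
    profile, s = [], 0.0
    for v in x:
        s += v - mean
        profile.append(s)
    logs_n, logs_f = [], []
    for n in scales:
        msq = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # Least-squares linear detrend within the window.
            t = range(n)
            tbar = (n - 1) / 2
            ybar = sum(seg) / n
            num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, seg))
            den = sum((ti - tbar) ** 2 for ti in t)
            slope = num / den
            msq.append(sum((yi - (ybar + slope * (ti - tbar))) ** 2
                           for ti, yi in zip(t, seg)) / n)
        logs_n.append(math.log(n))
        logs_f.append(math.log(math.sqrt(sum(msq) / len(msq))))
    # The DFA exponent is the slope of log F(n) versus log n.
    nb = sum(logs_n) / len(logs_n)
    fb = sum(logs_f) / len(logs_f)
    num = sum((a - nb) * (b - fb) for a, b in zip(logs_n, logs_f))
    den = sum((a - nb) ** 2 for a in logs_n)
    return num / den

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_alpha(white)
print(round(alpha, 2))  # typically near 0.5 for white noise
```

Applied to sleep iEEG band-power time series, an exponent well above 0.5 during N2/N3 would reflect the enhanced temporal persistence the abstract reports.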