
Frontiers in Computational Neuroscience: Latest Publications

Time delays in computational models of neuronal and synaptic dynamics.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-11-10 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1700144
Mojtaba Madadi Asl
Citations: 0
Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-11-07 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1691017
Lingli Gan, Shuqin Yuan, Min Guo, Qian Wang, Zongfang Deng, Bin Jia

The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies requires efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered and flexible multi-sensing devices, offer a promising avenue for capturing neural-related biophysical signals such as electroencephalography (EEG), electromyography (EMG), and cardiorespiratory dynamics. Their low-power and wearable characteristics make them suitable for long-term health and neurocognitive monitoring. When combined with deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs), TENG-generated signals can be efficiently decoded, enabling insights into neural states, cognitive functions, and disease progression. Furthermore, neuromorphic computing paradigms provide an energy-efficient and biologically inspired framework that naturally aligns with the event-driven characteristics of TENG outputs. This mini review highlights the convergence of TENG-based sensing, deep learning algorithms, and neuromorphic systems for neural data interpretation. We discuss recent progress, challenges, and future perspectives, with an emphasis on applications in computational neuroscience, neurorehabilitation, and elderly health care.
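To make the event-driven pairing concrete, here is a minimal delta-modulation encoder that turns a continuous sensor trace into sparse up/down spike events, the kind of representation that suits spiking and neuromorphic backends. This is an illustrative sketch, not taken from the review; the `delta_encode` function, the sine stand-in for a TENG output, and the 0.1 threshold are all assumptions.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit +1/-1 events whenever the signal moves more than
    `threshold` away from the last emitted reference level."""
    events = np.zeros_like(signal, dtype=int)
    ref = signal[0]
    for i, x in enumerate(signal):
        if x - ref >= threshold:
            events[i] = 1          # upward event
            ref += threshold
        elif ref - x >= threshold:
            events[i] = -1         # downward event
            ref -= threshold
    return events

t = np.linspace(0, 1, 500)
trace = np.sin(2 * np.pi * 3 * t)       # hypothetical stand-in for a TENG output
spikes = delta_encode(trace, threshold=0.1)
# the running sum of events reconstructs the trace to within ~one threshold
recon = trace[0] + 0.1 * np.cumsum(spikes)
```

Because only threshold crossings generate events, most samples carry no spike, which is exactly the sparsity that event-driven neuromorphic hardware exploits.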

Citations: 0
Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-11-07 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1661070
Fudong Zhang, Jingjing Cui

The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest in neural heterogeneity, a comprehensive understanding of how it influences computation across different neural levels and learning methods is still lacking. In this work, we systematically examine the neural computation of spiking neural networks (SNNs) under three key sources of neural heterogeneity: external, network, and intrinsic heterogeneity. We evaluate their impact using three distinct learning methods, which carry out tasks ranging from simple curve fitting to complex network reconstruction and real-world applications. Our results show that while different types of neural heterogeneity contribute in distinct ways, they consistently improve learning accuracy and robustness. These findings suggest that neural heterogeneity across multiple levels improves learning capacity and robustness of neural computation, and should be considered a core design principle in the optimization of SNNs.
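As a minimal sketch of what intrinsic heterogeneity means in an SNN (not the paper's actual model), consider a population of leaky integrate-and-fire neurons driven by a common input but with membrane time constants drawn from a distribution; the parameter ranges below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 50, 1000, 1e-3
# intrinsic heterogeneity: each neuron gets its own membrane time constant
tau = rng.uniform(5e-3, 50e-3, n)
v = np.zeros(n)
v_thresh, i_ext = 1.0, 250.0       # shared threshold and common drive
spike_counts = np.zeros(n, dtype=int)

for _ in range(steps):
    v += dt * (-v / tau + i_ext)   # Euler step of leaky integration
    fired = v >= v_thresh
    spike_counts[fired] += 1
    v[fired] = 0.0                 # reset after a spike
```

Even with identical input, the heterogeneous time constants spread the population across a range of firing rates, giving downstream readouts a richer temporal basis than a homogeneous population would.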

Citations: 0
Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-11-05 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1692418
Mahbod Nouri, David Rotermund, Alberto Garcia-Ortiz, Klaus R Pawelzik

Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While non-negative matrix factorization (NMF) captures the biological constraint of positive long-range interactions, deep convolutional neural networks with NMF modules do not match the performance of conventional convolutional neural networks (CNNs) of a similar size. This work shows that introducing intermediate modules that combine the NMF's positive activities, analogous to the processing in cortical columns, leads to improved performance on benchmark data that exceeds that of vanilla deep convolutional networks. This demonstrates that including positive long-range signaling together with local interactions of both signs, in analogy to cortical hyper-columns, has the potential to enhance the performance of deep networks.
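The non-negativity constraint the paper builds on can be seen in plain NMF. Below is the standard Lee-Seung multiplicative-update scheme, which keeps both factors non-negative by construction; this is generic NMF for illustration, not the authors' interleaved-mixing architecture, and the matrix sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 30))            # non-negative data matrix
k = 5                               # factorization rank
W = rng.random((20, k)) + 0.1       # positive initialization
H = rng.random((k, 30)) + 0.1
eps = 1e-9                          # guards against division by zero

for _ in range(200):
    # multiplicative updates: ratios of non-negative terms, so
    # W and H can never become negative
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative form is the key design choice: because each update scales the current entry by a ratio of non-negative quantities, positivity is preserved without any explicit projection step.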

Citations: 0
Universal differential equations as a unifying modeling language for neuroscience.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-10-30 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1677930
Ahmed El-Gazzar, Marcel van Gerven

The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths: mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy. Yet each comes with limitations when applied in isolation. Universal differential equations (UDEs) offer a unifying modeling framework that integrates these complementary approaches. By treating differential equations as parameterizable, differentiable objects that can be combined with modern deep learning techniques, UDEs enable hybrid models that balance interpretability with predictive power. We provide a systematic overview of the UDE framework, covering its mathematical foundations, training methodologies, and recent innovations. We argue that UDEs fill a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience, with potential to advance applications in neural computation, neural control, neural decoding, and normative modeling.
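The UDE recipe of "mechanistic term plus learned term, fit through the solver" can be shown in a toy form. Here the mechanistic part is a known decay and the unknown drive is a single parameter `theta` learned by gradient descent on simulated trajectories; a real UDE would replace `theta` with a neural network and use automatic differentiation, whereas this dependency-free sketch uses a finite-difference gradient. All values are illustrative assumptions.

```python
import numpy as np

def simulate(theta, x0=0.0, dt=0.01, steps=200):
    """Euler-integrate the hybrid RHS: known decay -x plus learned drive theta."""
    xs = np.empty(steps)
    x = x0
    for i in range(steps):
        x += dt * (-x + theta)
        xs[i] = x
    return xs

target = simulate(0.5)              # synthetic "data" from the true drive 0.5

def loss(theta):
    return np.mean((simulate(theta) - target) ** 2)

theta, lr, h = 0.0, 0.5, 1e-5
for _ in range(200):
    # gradient through the unrolled solver, approximated by central differences
    grad = (loss(theta + h) - loss(theta - h)) / (2 * h)
    theta -= lr * grad
```

The point of the sketch is structural: the differential equation itself is the model being trained, so mechanistic knowledge (the decay term) constrains the fit while the learned component absorbs what the mechanism leaves unexplained.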

Citations: 0
Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-10-24 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1618191
Juan M Tenti, Monserrat Pallares Di Nunzio, Marisa A Bab, Osvaldo Anibal Rosso, Fernando Montani, Marcelo J F Arlego

Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal correlations, quantified via detrended fluctuation analysis, during N2 and N3 sleep. In contrast, REM sleep and wakefulness show reduced temporal persistence and fewer large-scale cascades, suggesting a shift toward more fragmented and flexible dynamics. These signatures vary across cortical regions, with distinctive patterns emerging in medial temporal and frontal areas, regions implicated in memory processing. Rather than providing direct evidence of consolidation, our results point to a functional neural landscape that may favor both stabilization and reconfiguration of internal representations during sleep. Overall, our findings highlight the utility of iEEG in revealing the multiscale spatio-temporal structure of sleep-related brain dynamics, offering insights into the physiological conditions that support memory-related processing.
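Detrended fluctuation analysis, the estimator the authors use to quantify long-range temporal correlations, has a compact generic form: integrate the signal, detrend it window by window, and read the scaling exponent off a log-log fit. The implementation below is a bare-bones illustration under those standard definitions, not the authors' exact pipeline; white noise should yield an exponent near 0.5 and its running sum near 1.5.

```python
import numpy as np

def dfa(signal, scales):
    """Return the DFA scaling exponent alpha for the given window sizes."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated signal
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)            # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # alpha is the slope of fluctuation vs. scale on log-log axes
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
alpha_white = dfa(white, scales=[16, 32, 64, 128, 256])   # ~0.5 expected
alpha_brown = dfa(np.cumsum(white), scales=[16, 32, 64, 128, 256])  # ~1.5
```

Higher alpha means fluctuations grow faster with window size, i.e. stronger temporal persistence, which is the quantity reported as enhanced during N2 and N3 sleep.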

Citations: 0
Sudden restructuring of memory representations in recurrent neural networks with repeated stimulus presentations.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-10-22 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1601641
Jonathon R Howlett

While acquisition curves in human learning, averaged at the group level, display smooth, gradual changes in performance, individual learning curves across cognitive domains reveal sudden, discontinuous jumps in performance. Similar thresholding effects are a hallmark of a range of nonlinear systems that can be explored using simple, abstract models. Here, I investigate discontinuous changes in learning performance using Amari-Hopfield networks with Hebbian learning rules, repeatedly exposing the networks to a single stimulus. Simulations reveal that the attractor basin size for a target stimulus increases in discrete jumps rather than gradually with repeated stimulus exposure. The distribution of the sizes of these positive jumps in basin size is best approximated by a lognormal distribution, suggesting that the distribution is heavy-tailed. Examination of the transition graph structure for networks before and after basin size changes reveals that newly acquired states are often organized into hierarchically branching tree structures, and that the distribution of branch sizes is best approximated by a power law distribution. The findings suggest that even simple nonlinear network models of associative learning exhibit discontinuous changes in performance with repeated learning, mirroring behavioral results observed in humans. Future work can investigate similar mechanisms in more biologically detailed network models, potentially offering insight into the network mechanisms of learning with repeated exposure or practice.
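The basin-size measurement behind these results can be sketched in a few lines: after repeated Hebbian exposures to a target pattern, estimate what fraction of randomly perturbed states settles back onto the target under asynchronous Hopfield dynamics. Network size, noise scale, and learning rate below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
target = rng.choice([-1, 1], n)
W = 0.02 * rng.standard_normal((n, n))     # small random initial coupling
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def settle(state, W, sweeps=10):
    """Run asynchronous sign-threshold updates to a fixed point."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def basin_fraction(W, flips=5, trials=100):
    """Fraction of 5-bit perturbations of the target that settle back to it."""
    hits = 0
    for _ in range(trials):
        probe = target.copy()
        idx = rng.choice(n, flips, replace=False)
        probe[idx] *= -1
        if np.array_equal(settle(probe, W), target):
            hits += 1
    return hits / trials

before = basin_fraction(W)
for _ in range(20):                         # repeated Hebbian exposures
    W += 0.05 * np.outer(target, target) / n
    np.fill_diagonal(W, 0)
after = basin_fraction(W)
```

Tracking `basin_fraction` after every single exposure, rather than only before and after, is what reveals whether the basin grows smoothly or in the discrete jumps the abstract reports.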

Citations: 0
An AI methodology to reduce training intensity, error rates, and size of neural networks.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-10-21 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1628115
Thaddeus J A Kobylarz

Massive computing systems are required to train neural networks. The prodigious amount of energy consumed makes the creation of AI applications a significant source of pollution. Despite the enormous training effort, neural network error rates limit their use in medical applications, because errors can lead to intolerable morbidity and mortality. Two factors contribute to the excessive training requirements and high error rates: an iterative reinforcement process (tuning) that does not guarantee convergence, and the deployment of neuron models capable only of realizing linearly separable switching functions. Tuning procedures require tens of thousands of training iterations. In addition, linearly separable neuron models have severely limited capability, which leads to large neural nets. For seven inputs, the ratio of total possible switching functions to linearly separable switching functions is 41 octillion. Addressed here is the creation of neuron models for the application of disease diagnosis. Algorithms are described that perform direct neuron creation, resulting in far fewer training steps than current AI systems require. The design algorithms produce neurons that do not manufacture errors (hallucinations). The algorithms utilize a template to create neuron models that are capable of performing any type of switching function, and show that a neuron model capable of performing both linearly and nonlinearly separable switching functions is vastly superior to the neuron models currently in use. Included examples illustrate use of the template for determining disease diagnoses (outputs) from symptoms (inputs). The examples show convergence with a single training iteration.
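The linear-separability bottleneck the abstract quantifies for seven inputs is already visible at the smallest scale: of the 16 Boolean functions of 2 inputs, only 14 are linearly separable (XOR and XNOR are not). The brute-force weight search below makes that count explicit; it is a generic illustration of the limitation, not the paper's algorithm.

```python
import itertools
import numpy as np

inputs = list(itertools.product([0, 1], repeat=2))

def separable(truth_table):
    """True if some threshold unit w1*x1 + w2*x2 + b > 0 realizes the table."""
    grid = np.arange(-2, 2.5, 0.5)
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                out = tuple(int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs)
                if out == truth_table:
                    return True
    return False

# all 2^(2^2) = 16 truth tables over two binary inputs
tables = list(itertools.product([0, 1], repeat=4))
n_separable = sum(separable(t) for t in tables)   # 14: XOR and XNOR fail
```

The gap widens double-exponentially with input count, since there are 2^(2^n) switching functions but only a vanishing fraction remain linearly separable, which is the motivation for neuron models that realize nonlinearly separable functions directly.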

Citations: 0
Using noise to distinguish between system and observer effects in multimodal neuroimaging.
IF 2.3 CAS Tier 4 (Medicine) Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date: 2025-10-17 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1693279
Erik D Fagerholm, Hirokazu Tanaka, Gregory Scott, Robert Leech, Federico E Turkheimer, Peter Zeidman, Karl J Friston, Milan Brázdil

Introduction: It has become increasingly common to record brain activity simultaneously at more than one spatiotemporal scale. Here, we address a central question raised by such cross-scale datasets: do they reflect the same underlying dynamics observed in different ways, or different dynamics observed in the same way? In other words, to what extent can variation between modalities be attributed to system-level versus observer-level effects? System-level effects reflect genuine differences in neural dynamics at the resolution sampled by each device. Observer-level effects, by contrast, reflect artefactual differences introduced by the nonlinear transformations each device imposes on the signal. We demonstrate that noise, when incorporated into generative models, can help disentangle these two sources of variation.

Methods: We apply this noise-based approach to simultaneously recorded high-frequency broadband signals from macroelectrodes and microwires in the human hippocampus.

Results: Most subjects show a complex mixture of system- and observer-level contributions to their time series. However, in one subject the cross-scale difference is statistically attributable to an observer-level effect; that is, it is consistent with the same dynamics at both microwire and macroelectrode scales.

Discussion: This study shows that noise can be used in empirical datasets to determine whether cross-scale variation arises from differences in neural dynamics or differences in observer functions.
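The study's actual inference fits generative models to hippocampal recordings; as a purely conceptual toy (every process, transform, and parameter below is illustrative and not from the paper), one can simulate a single latent dynamics viewed through two different nonlinear observer functions plus measurement noise. The two channels then share system-level dynamics and differ only at the observer level, which shows up as strong correlation despite the different nonlinearities:

```python
import math
import random

random.seed(0)

def ar1(n, phi=0.9, sigma=0.5):
    # Latent "system-level" dynamics: a first-order autoregressive process.
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

def observe(series, f, noise_sd=0.1):
    # "Observer-level" effect: a nonlinear transform plus measurement noise,
    # standing in for what each recording device does to the signal.
    return [f(v) + random.gauss(0.0, noise_sd) for v in series]

def corr(a, b):
    # Pearson correlation, plain stdlib.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / var

latent = ar1(2000)
macro = observe(latent, math.tanh)                    # saturating "macro" channel
micro = observe(latent, lambda v: v + 0.1 * v ** 3)   # expansive "micro" channel

# Same system seen through two observers: strongly correlated...
same_system = corr(macro, micro)
# ...whereas an independent latent process is not.
unrelated = corr(macro, observe(ar1(2000), math.tanh))
print(round(same_system, 2), round(unrelated, 2))
```

This is only the forward (simulation) direction; the paper's contribution is the inverse step of deciding, from data, whether a cross-scale difference is attributable to the observer functions or to the latent dynamics themselves.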

{"title":"Using noise to distinguish between system and observer effects in multimodal neuroimaging.","authors":"Erik D Fagerholm, Hirokazu Tanaka, Gregory Scott, Robert Leech, Federico E Turkheimer, Peter Zeidman, Karl J Friston, Milan Brázdil","doi":"10.3389/fncom.2025.1693279","DOIUrl":"10.3389/fncom.2025.1693279","url":null,"abstract":"<p><strong>Introduction: </strong>It has become increasingly common to record brain activity simultaneously at more than one spatiotemporal scale. Here, we address a central question raised by such cross-scale datasets: do they reflect the same underlying dynamics observed in different ways, or different dynamics observed in the same way? In other words, to what extent can variation between modalities be attributed to system-level versus observer-level effects? System-level effects reflect genuine differences in neural dynamics at the resolution sampled by each device. Observer-level effects, by contrast, reflect artefactual differences introduced by the nonlinear transformations each device imposes on the signal. We demonstrate that noise, when incorporated into generative models, can help disentangle these two sources of variation.</p><p><strong>Methods: </strong>We apply this noise-based approach to simultaneously recorded high-frequency broadband signals from macroelectrodes and microwires in the human hippocampus.</p><p><strong>Results: </strong>Most subjects show a complex mixture of system- and observer-level contributions to their time series. 
However, in one subject, the cross-scale difference is statistically attributable to an observer-level effect-i.e., consistent with the same dynamics at both microwire and macroelectrode scales.</p><p><strong>Discussion: </strong>This study shows that noise can be used in empirical datasets to determine whether cross-scale variation arises from differences in neural dynamics or differences in observer functions.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1693279"},"PeriodicalIF":2.3,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145430659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancing epileptic seizure recognition through bidirectional LSTM networks.
IF 2.3 CAS Q4 (Medicine) JCR Q2 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2025-10-17 eCollection Date: 2025-01-01 DOI: 10.3389/fncom.2025.1668358
Sanaa Al-Marzouki

Timely and accurate seizure detection remains a primary challenge in clinical neurology, affecting diagnostic planning and patient management. Most traditional methods rely on feature extraction and classical machine learning techniques, which are inefficient at capturing the dynamic characteristics of neural signals. This study addresses these limitations by designing a deep learning model built on bidirectional Long Short-Term Memory (BiLSTM) networks to enhance the reliability and accuracy of epileptic seizure identification. The dataset, drawn from Kaggle's Epileptic Seizure Recognition challenge, consists of 11,500 samples with 179 features per sample corresponding to different electroencephalogram (EEG) readings. Data preprocessing was used to normalize and structure the input to the deep learning model. The proposed BiLSTM model employs a sophisticated architecture that leverages temporal dependencies and bidirectional data flow, incorporating multiple dense and dropout layers alongside batch normalization so that the model learns efficiently from the EEG data. It supports end-to-end feature learning from raw EEG signals without intensive preprocessing or feature engineering. The BiLSTM model outperformed the alternatives, achieving 98.70% accuracy on the validation set and surpassing traditional techniques. The F1-score and other statistical metrics also validated the model's performance, with the confusion matrix showing high recall and precision. The results confirm that bidirectional LSTM networks identify seizures significantly better than conventional approaches. Beyond enabling reliable seizure detection, the method advances the broader field of biomedical signal processing and can be applied in real-time monitoring and intervention protocols.
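As a structural sketch of the core idea only (not the paper's actual architecture, which adds dense, dropout, and batch-normalization layers and is trained end to end), the plain-Python code below runs an untrained LSTM cell forward and backward over one EEG window and concatenates the two final hidden states, which is the representation a classification head would then consume. The window length of 178 assumes the common Kaggle layout of 178 voltage readings plus one label column, consistent with the 179 features mentioned above.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class LSTMCell:
    # One gated recurrent cell; weights are random here (untrained sketch).
    def __init__(self, n_in, n_hid):
        self.n_in, self.n_hid = n_in, n_hid
        def mat(r, c):
            return [[random.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
        # One weight matrix per gate (input, forget, output, candidate),
        # acting on the concatenated [input, previous hidden] vector.
        self.W = {g: mat(n_hid, n_in + n_hid) for g in "ifoc"}

    def step(self, x, h, c):
        z = x + h  # concatenate input with previous hidden state
        act = {g: [sum(w * v for w, v in zip(row, z)) for row in self.W[g]]
               for g in "ifoc"}
        i = [sigmoid(a) for a in act["i"]]
        f = [sigmoid(a) for a in act["f"]]
        o = [sigmoid(a) for a in act["o"]]
        g = [math.tanh(a) for a in act["c"]]
        c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
        return h, c

def run(cell, seq, reverse=False):
    # Unroll the cell over the sequence; outputs are returned in
    # original time order regardless of scan direction.
    h, c = [0.0] * cell.n_hid, [0.0] * cell.n_hid
    out = []
    for x in (reversed(seq) if reverse else seq):
        h, c = cell.step(x, h, c)
        out.append(h)
    return out[::-1] if reverse else out

def bilstm_features(seq, n_hid=8):
    # Bidirectional pass: independent forward and backward cells, with the
    # two full-context final states concatenated.
    n_in = len(seq[0])
    fwd = run(LSTMCell(n_in, n_hid), seq)
    bwd = run(LSTMCell(n_in, n_hid), seq, reverse=True)
    return fwd[-1] + bwd[0]

# One sample as a length-178 sequence of 1-dimensional voltage readings.
sample = [[random.gauss(0.0, 1.0)] for _ in range(178)]
features = bilstm_features(sample)
print(len(features))  # 16 = 8 forward + 8 backward hidden units
```

Scanning the window in both directions is what lets the classifier condition each decision on context before and after a time point, which is the property the abstract credits for the accuracy gain over unidirectional models.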

{"title":"Advancing epileptic seizure recognition through bidirectional LSTM networks.","authors":"Sanaa Al-Marzouki","doi":"10.3389/fncom.2025.1668358","DOIUrl":"10.3389/fncom.2025.1668358","url":null,"abstract":"<p><p>Seizure detection in a timely and accurate manner remains a primary challenge in clinical neurology, affecting diagnosis planning and patient management. Most of the traditional methods rely on feature extraction and traditional machine learning techniques, which are not efficient in capturing the dynamic characteristics of neural signals. It is the aim of this study to address such limitations by designing a deep learning model from bidirectional Long Short-Term Memory (BiLSTM) networks in a bid to enhance epileptic seizure identification reliability and accuracy. The dataset used, drawn from Kaggle's Epileptic Seizure Recognition challenge, consists of 11,500 samples with 179 features per sample corresponding to different electroencephalogram (EEG) readings. Data preprocessing was utilized to normalize and structure the input to the deep learning model. The proposed BiLSTM model employs sophisticated architecture to leverage temporal dependency and bidirectional data flows. It incorporates multiple dense and dropout layers alongside batch normalization to enhance the capability of the model in learning from the EEG data in an efficient manner. It supports end-to-end feature learning from the raw EEG signals without the need for intensive preprocessing and feature engineering. BiLSTM model performed better than others with 98.70% accuracy on the validation set and surpassed traditional techniques. The F1-score and other statistical metrics also validated the performance of the model as the confusion matrix achieved high values for recall and precision. The results confirm the capability of bidirectional LSTM networks to better identify seizures with significant improvements over conventional practices. 
Apart from facilitating seizure detection in a reliable fashion, the method improves the overall field of biomedical signal processing and can also be used in real-time observation and intervention protocols.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1668358"},"PeriodicalIF":2.3,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12575252/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145430647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0