From generative AI to the brain: five takeaways.
Pub Date: 2025-11-24 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1718778
Claudius Gros
The big strides seen in generative AI are based not on obscure algorithms but on clearly defined generative principles, and the resulting implementations have proven themselves in a large number of applications. We suggest that it is imperative to investigate thoroughly which of these generative principles may also be operative in the brain, and hence relevant for cognitive neuroscience. In addition, ML research has led to a range of interesting characterizations of neural information processing systems. We discuss five examples (the shortcomings of world modeling, the generation of thought processes, attention, neural scaling laws, and quantization) that illustrate how much neuroscience could potentially learn from ML research.
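Among the five examples, attention has a precise operational definition; as a reference point, here is a minimal NumPy sketch of standard scaled dot-product attention, with illustrative array sizes (this is the generic mechanism, not a model from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Illustrative sizes: 4 tokens, 8-dimensional queries/keys/values.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)         # shape (4, 8)
```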
{"title":"From generative AI to the brain: five takeaways.","authors":"Claudius Gros","doi":"10.3389/fncom.2025.1718778","DOIUrl":"10.3389/fncom.2025.1718778","url":null,"abstract":"<p><p>The big strides seen in generative AI are not based on somewhat obscure algorithms, but due to clearly defined generative principles. The resulting concrete implementations have proven themselves in large numbers of applications. We suggest that it is imperative to thoroughly investigate which of these generative principles may be operative also in the brain, and hence relevant for cognitive neuroscience. In addition, ML research led to a range of interesting characterizations of neural information processing systems. We discuss five examples, the shortcomings of world modeling, the generation of thought processes, attention, neural scaling laws, and quantization, that illustrate how much neuroscience could potentially learn from ML research.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1718778"},"PeriodicalIF":2.3,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12682776/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145713982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects.
Pub Date: 2025-11-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1613291
Asaki Kataoka, Yoshihiro Nagano, Masafumi Oizumi
Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the correspondence between neural representations in deep convolutional neural networks (DCNNs) and biological systems, the extent to which unsupervised or self-supervised learning can explain the human-like acquisition of categorically structured information remains less explored. In this study, we investigate the correspondence between the internal representations of DCNNs trained using a self-supervised contrastive learning algorithm and human semantics and recognition. To this end, we employ a few-shot learning evaluation procedure, which measures the ability of DCNNs to recognize novel concepts from limited exposure, to examine the inter-categorical structure of the learned representations. Two comparative approaches are used to relate the few-shot learning outcomes to human semantics and recognition, with results suggesting that the representations acquired through contrastive learning are well aligned with human cognition. These findings underscore the potential of self-supervised contrastive learning frameworks to model learning mechanisms similar to those of the human brain, particularly in scenarios where explicit supervision is unavailable, such as in human infants prior to language acquisition.
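The abstract does not spell out the few-shot protocol; one common variant consistent with its description is nearest-prototype classification on frozen embeddings, sketched below (function and array names are hypothetical):

```python
import numpy as np

def few_shot_accuracy(sup_x, sup_y, qry_x, qry_y):
    """Nearest-class-mean (prototype) classification on frozen embeddings."""
    classes = np.unique(sup_y)
    # One prototype per class: the mean of that class's support embeddings.
    protos = np.stack([sup_x[sup_y == c].mean(axis=0) for c in classes])
    # Assign each query to the class of its nearest prototype (Euclidean).
    d = np.linalg.norm(qry_x[:, None, :] - protos[None, :, :], axis=-1)
    return (classes[d.argmin(axis=1)] == qry_y).mean()

# Hypothetical 5-way 1-shot episode with 64-dimensional embeddings.
rng = np.random.default_rng(0)
sup_x, sup_y = rng.standard_normal((5, 64)), np.arange(5)
qry_x, qry_y = rng.standard_normal((25, 64)), np.repeat(np.arange(5), 5)
print(few_shot_accuracy(sup_x, sup_y, qry_x, qry_y))
```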
{"title":"Exploring internal representations of self-supervised networks: few-shot learning abilities and comparison with human semantics and recognition of objects.","authors":"Asaki Kataoka, Yoshihiro Nagano, Masafumi Oizumi","doi":"10.3389/fncom.2025.1613291","DOIUrl":"10.3389/fncom.2025.1613291","url":null,"abstract":"<p><p>Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the correspondence between neural representations in deep convolutional neural networks (DCNNs) and biological systems, the extent to which unsupervised or self-supervised learning can explain the human-like acquisition of categorically structured information remains less explored. In this study, we investigate the correspondence between the internal representations of DCNNs trained using a self-supervised contrastive learning algorithm and human semantics and recognition. To this end, we employ a few-shot learning evaluation procedure, which measures the ability of DCNNs to recognize novel concepts from limited exposure, to examine the inter-categorical structure of the learned representations. Two comparative approaches are used to relate the few-shot learning outcomes to human semantics and recognition, with results suggesting that the representations acquired through contrastive learning are well aligned with human cognition. These findings underscore the potential of self-supervised contrastive learning frameworks to model learning mechanisms similar to those of the human brain, particularly in scenarios where explicit supervision is unavailable, such as in human infants prior to language acquisition.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1613291"},"PeriodicalIF":2.3,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12679296/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145700063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals.
Pub Date: 2025-11-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1408836
Changbo Zhu, Ke Zhou, Fengzhen Tang, Yandong Tang, Xiaoli Li, Bailu Si
Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum-entropy distribution for a continuous random variable with a fixed mean. Its memoryless and peakless properties pose difficulties for data analysis methods. To estimate the rate parameter of a multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model based on a variant of the general hierarchical Brownian filter (GHBF). To account for the complex interactions among multivariate exponential random variables, the model estimates the second-order interactions of the rate intensity parameter in logarithmic space. Using a variational Bayesian scheme, we introduce a family of closed-form, analytical update equations, which also constitute a complete predictive coding framework. A simulation study shows that the model can track the time-varying rate parameters and the underlying correlation structure of volatile multivariate exponentially distributed signals. The proposed hierarchical Bayesian inference model is of practical utility for analyzing high-dimensional neural activities.
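For reference, the distributional facts the abstract leans on are standard and can be stated compactly (this is textbook material, not the paper's derivation):

```latex
\[
  p(x \mid \lambda) = \lambda e^{-\lambda x}, \qquad x \ge 0, \qquad
  \mathbb{E}[X] = \frac{1}{\lambda}.
\]
% Maximum entropy: among all densities on [0,\infty) with fixed mean \mu,
% the exponential with \lambda = 1/\mu maximizes
% H[p] = -\int_0^\infty p(x) \ln p(x)\, dx.
\[
  P(X > s + t \mid X > s) = P(X > t) \quad \text{(memorylessness)},
\]
% and p(x) is monotonically decreasing with its mode at x = 0 ("peakless").
```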
{"title":"A hierarchical Bayesian inference model for volatile multivariate exponentially distributed signals.","authors":"Changbo Zhu, Ke Zhou, Fengzhen Tang, Yandong Tang, Xiaoli Li, Bailu Si","doi":"10.3389/fncom.2025.1408836","DOIUrl":"https://doi.org/10.3389/fncom.2025.1408836","url":null,"abstract":"<p><p>Brain activities often follow an exponential family of distributions. The exponential distribution is the maximum entropy distribution of continuous random variables in the presence of a mean. The memoryless and peakless properties of an exponential distribution impose difficulties for data analysis methods. To estimate the rate parameter of multivariate exponential distribution from a time series of sensory inputs (i.e., observations), we constructed a hierarchical Bayesian inference model based on a variant of general hierarchical Brownian filter (GHBF). To account for the complex interactions among multivariate exponential random variables, the model estimates the second-order interaction of the rate intensity parameter in logarithmic space. Using variational Bayesian scheme, a family of closed-form and analytical update equations are introduced. These update equations also constitute a complete predictive coding framework. The simulation study shows that our model has the ability to evaluate the time-varying rate parameters and the underlying correlation structure of volatile multivariate exponentially distributed signals. The proposed hierarchical Bayesian inference model is of practical utility in analyzing high-dimensional neural activities.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1408836"},"PeriodicalIF":2.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12648510/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145631558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex.
Pub Date: 2025-11-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1704350
Song Xie, Ke Zuo, Silvia De Rubeis, Giorgio Bonollo, Giorgio Colombo, Paolo Ruggerone, Paolo Carloni
Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of this disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and R87C, A455P, and Q725R WRC disease-linked variants. Here, by extending MD simulations to I664M, E665K, and D724H WRC, we suggest that all of the mutations weaken the interactions and affect intra-complex allosteric communication between the WAVE1 active C-terminal region (ACR) and the rest of the complex. This might contribute to an abnormal complex activation, a hallmark of WRC-linked ASD. In addition, all mutants but I664M destabilize the ACR V-helix and increase the participation of ACR in large-scale movements. All these features may also abnormally influence the inactive WRC toward a dysfunctional state. We hypothesize that small-molecule ligands counteracting these effects may help restore normal WRC regulation in ASD-related variants.
{"title":"Common characteristics of variants linked to autism spectrum disorder in the WAVE regulatory complex.","authors":"Song Xie, Ke Zuo, Silvia De Rubeis, Giorgio Bonollo, Giorgio Colombo, Paolo Ruggerone, Paolo Carloni","doi":"10.3389/fncom.2025.1704350","DOIUrl":"https://doi.org/10.3389/fncom.2025.1704350","url":null,"abstract":"<p><p>Six variants associated with autism spectrum disorder (ASD) abnormally activate the WASP-family Verprolin-homologous protein (WAVE) regulatory complex (WRC), a critical regulator of actin dynamics. This abnormal activation may contribute to the pathogenesis of this disorder. Using molecular dynamics (MD) simulations, we recently investigated the structural dynamics of wild-type (WT) WRC and R87C, A455P, and Q725R WRC disease-linked variants. Here, by extending MD simulations to I664M, E665K, and D724H WRC, we suggest that <i>all</i> of the mutations weaken the interactions and affect intra-complex allosteric communication between the WAVE1 active C-terminal region (ACR) and the rest of the complex. This might contribute to an abnormal complex activation, a hallmark of WRC-linked ASD. In addition, all mutants but I664M destabilize the ACR V-helix and increase the participation of ACR in large-scale movements. All these features may also abnormally influence the inactive WRC toward a dysfunctional state. We hypothesize that small-molecule ligands counteracting these effects may help restore normal WRC regulation in ASD-related variants.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1704350"},"PeriodicalIF":2.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12647093/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145631587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time delays in computational models of neuronal and synaptic dynamics.
Pub Date: 2025-11-10 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1700144
Mojtaba Madadi Asl
{"title":"Time delays in computational models of neuronal and synaptic dynamics.","authors":"Mojtaba Madadi Asl","doi":"10.3389/fncom.2025.1700144","DOIUrl":"10.3389/fncom.2025.1700144","url":null,"abstract":"","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1700144"},"PeriodicalIF":2.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12640968/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145603444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms.
Pub Date: 2025-11-07 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1691017
Lingli Gan, Shuqin Yuan, Min Guo, Qian Wang, Zongfang Deng, Bin Jia
The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies requires efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered and flexible multi-sensing devices, offer a promising avenue for capturing neural-related biophysical signals such as electroencephalography (EEG), electromyography (EMG), and cardiorespiratory dynamics. Their low-power and wearable characteristics make them suitable for long-term health and neurocognitive monitoring. When combined with deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs), TENG-generated signals can be efficiently decoded, enabling insights into neural states, cognitive functions, and disease progression. Furthermore, neuromorphic computing paradigms provide an energy-efficient and biologically inspired framework that naturally aligns with the event-driven characteristics of TENG outputs. This mini review highlights the convergence of TENG-based sensing, deep learning algorithms, and neuromorphic systems for neural data interpretation. We discuss recent progress, challenges, and future perspectives, with an emphasis on applications in computational neuroscience, neurorehabilitation, and elderly health care.
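The review names CNNs, RNNs, and SNNs as candidate decoders without fixing an architecture; as a hedged illustration, a minimal PyTorch 1D CNN for classifying windowed multichannel sensor signals could look as follows (channel count, window length, and class count are assumptions):

```python
import torch
import torch.nn as nn

class SensorSignalCNN(nn.Module):
    """Minimal 1D CNN for classifying windowed multichannel sensor signals."""
    def __init__(self, in_channels=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Hypothetical batch: 8 windows, 3 sensor channels, 1024 samples each.
logits = SensorSignalCNN()(torch.randn(8, 3, 1024))   # shape (8, 4)
```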
{"title":"Triboelectric nanogenerators for neural data interpretation: bridging multi-sensing interfaces with neuromorphic and deep learning paradigms.","authors":"Lingli Gan, Shuqin Yuan, Min Guo, Qian Wang, Zongfang Deng, Bin Jia","doi":"10.3389/fncom.2025.1691017","DOIUrl":"10.3389/fncom.2025.1691017","url":null,"abstract":"<p><p>The rapid growth of computational neuroscience and brain-computer interface (BCI) technologies require efficient, scalable, and biologically compatible approaches for neural data acquisition and interpretation. Traditional sensors and signal processing pipelines often struggle with the high dimensionality, temporal variability, and noise inherent in neural signals, particularly in elderly populations where continuous monitoring is essential. Triboelectric nanogenerators (TENGs), as self-powered and flexible multi-sensing devices, offer a promising avenue for capturing neural-related biophysical signals such as electroencephalography (EEG), electromyography (EMG), and cardiorespiratory dynamics. Their low-power and wearable characteristics make them suitable for long-term health and neurocognitive monitoring. When combined with deep learning models-including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs)-TENG-generated signals can be efficiently decoded, enabling insights into neural states, cognitive functions, and disease progression. Furthermore, neuromorphic computing paradigms provide an energy-efficient and biologically inspired framework that naturally aligns with the event-driven characteristics of TENG outputs. This mini review highlights the convergence of TENG-based sensing, deep learning algorithms, and neuromorphic systems for neural data interpretation. We discuss recent progress, challenges, and future perspectives, with an emphasis on applications in computational neuroscience, neurorehabilitation, and elderly health care.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1691017"},"PeriodicalIF":2.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12634569/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145586393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks.
Pub Date: 2025-11-07 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1661070
Fudong Zhang, Jingjing Cui
The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest, a comprehensive understanding of how heterogeneity influences computation across neural levels and learning methods is still lacking. In this work, we systematically examine how three key sources of neural heterogeneity (external, network, and intrinsic) shape computation in spiking neural networks (SNNs). We evaluate their impact using three distinct learning methods on tasks ranging from simple curve fitting to complex network reconstruction and real-world applications. Our results show that while the different types of neural heterogeneity contribute in distinct ways, they consistently improve learning accuracy and robustness. These findings suggest that neural heterogeneity across multiple levels enhances the learning capacity and robustness of neural computation, and should be considered a core design principle in the optimization of SNNs.
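To make one of the three sources concrete: intrinsic heterogeneity can be modeled as per-neuron variation of membrane time constants in a leaky integrate-and-fire population. The NumPy toy below is illustrative, not the paper's model (all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, T = 100, 1e-3, 0.5                  # neurons, step (s), duration (s)
tau = rng.uniform(0.01, 0.05, size=n)      # intrinsic heterogeneity:
                                           # per-neuron time constants (10-50 ms)
v_th, v_reset = 1.0, 0.0
v = np.zeros(n)
spikes = []

for step in range(int(T / dt)):
    I = 1.2 + 0.3 * rng.standard_normal(n)     # noisy external drive
    v += dt / tau * (-v + I)                   # leaky integration
    fired = v >= v_th
    v[fired] = v_reset                         # spike-and-reset
    spikes.append(fired.copy())

rates = np.mean(spikes, axis=0) / dt           # per-neuron firing rates (Hz)
```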
{"title":"Neural heterogeneity as a unifying mechanism for efficient learning in spiking neural networks.","authors":"Fudong Zhang, Jingjing Cui","doi":"10.3389/fncom.2025.1661070","DOIUrl":"10.3389/fncom.2025.1661070","url":null,"abstract":"<p><p>The brain is a highly diverse and heterogeneous network, yet the functional role of this neural heterogeneity remains largely unclear. Despite growing interest in neural heterogeneity, a comprehensive understanding of how it influences computation across different neural levels and learning methods is still lacking. In this work, we systematically examine the neural computation of spiking neural networks (SNNs) in three key sources of neural heterogeneity: external, network, and intrinsic heterogeneity. We evaluate their impact using three distinct learning methods, which can carry out tasks ranging from simple curve fitting to complex network reconstruction and real-world applications. Our results show that while different types of neural heterogeneity contribute in distinct ways, they consistently improve learning accuracy and robustness. These findings suggest that neural heterogeneity across multiple levels improves learning capacity and robustness of neural computation, and should be considered a core design principle in the optimization of SNNs.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1661070"},"PeriodicalIF":2.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12634501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145586461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks.
Pub Date: 2025-11-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1692418
Mahbod Nouri, David Rotermund, Alberto Garcia-Ortiz, Klaus R Pawelzik
Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While non-negative matrix factorization (NMF) captures the biological constraint of positive long-range interactions, deep convolutional neural networks with NMF modules do not match the performance of conventional neural networks (CNNs) of a similar size. This work shows that introducing intermediate modules that combine the NMF's positive activities, analogous to the processing in cortical columns, leads to improved performance on benchmark data that exceeds that of vanilla deep convolutional networks. This demonstrates that including positive long-range signaling together with local interactions of both signs, in analogy to cortical hyper-columns, has the potential to enhance the performance of deep networks.
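For context, the non-negativity at the core of NMF is typically enforced with the classic Lee-Seung multiplicative updates; the sketch below shows plain NMF of a non-negative matrix, not the paper's interleaved deep architecture:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W non-negative
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((64, 32)))
W, H = nmf(V, rank=8)   # all factor entries remain >= 0 by construction
```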
{"title":"Interleaving cortex-analog mixing improves deep non-negative matrix factorization networks.","authors":"Mahbod Nouri, David Rotermund, Alberto Garcia-Ortiz, Klaus R Pawelzik","doi":"10.3389/fncom.2025.1692418","DOIUrl":"10.3389/fncom.2025.1692418","url":null,"abstract":"<p><p>Considering biological constraints in artificial neural networks has led to dramatic improvements in performance. Nevertheless, to date, the positivity of long-range signals in the cortex has not been shown to yield improvements. While Non-negative matrix factorization (NMF) captures biological constraints of positive long-range interactions, deep convolutional neural networks with NMF modules do not match the performance of conventional neural networks (CNNs) of a similar size. This work shows that introducing intermediate modules that combine the NMF's positive activities, analogous to the processing in cortical columns, leads to improved performance on benchmark data that exceeds that of vanilla deep convolutional networks. This demonstrates that including positive long-range signaling together with local interactions of both signs in analogy to cortical hyper-columns has the potential to enhance the performance of deep networks.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1692418"},"PeriodicalIF":2.3,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12626930/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145563432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Universal differential equations as a unifying modeling language for neuroscience.
Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1677930
Ahmed El-Gazzar, Marcel van Gerven
The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths: mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy. Each, however, has limitations when applied in isolation. Universal differential equations (UDEs) offer a unifying framework that integrates these complementary approaches. By treating differential equations as parameterizable, differentiable objects that can be combined with modern deep learning techniques, UDEs enable hybrid models that balance interpretability with predictive power. We provide a systematic overview of the UDE framework, covering its mathematical foundations, training methodologies, and recent innovations. We argue that UDEs fill a critical gap between mechanistic, phenomenological, and data-driven models, with the potential to advance applications in neural computation, neural control, neural decoding, and normative modeling in neuroscience.
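A minimal UDE instance helps fix ideas: a known linear decay term plus a small neural-network correction, trained by unrolling a differentiable Euler integrator. The toy target dynamics and all names below are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Hybrid vector field: du/dt = -a*u + f_theta(u), with the decay rate a known
# and f_theta a small network standing in for the unknown mechanism.
a = 1.0
f_theta = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def rollout(u0, n_steps=50, dt=0.05):
    """Differentiable Euler unroll of the hybrid vector field."""
    u, traj = u0, [u0]
    for _ in range(n_steps):
        u = u + dt * (-a * u + f_theta(u))
        traj.append(u)
    return torch.stack(traj)

# Toy ground truth: du/dt = -u + sin(u); the network must recover sin(u).
with torch.no_grad():
    u, targets = torch.tensor([[2.0]]), [torch.tensor([[2.0]])]
    for _ in range(50):
        u = u + 0.05 * (-u + torch.sin(u))
        targets.append(u)
    target_traj = torch.stack(targets)

u0 = torch.tensor([[2.0]])
opt = torch.optim.Adam(f_theta.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((rollout(u0) - target_traj) ** 2).mean()
    loss.backward()
    opt.step()
```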
{"title":"Universal differential equations as a unifying modeling language for neuroscience.","authors":"Ahmed El-Gazzar, Marcel van Gerven","doi":"10.3389/fncom.2025.1677930","DOIUrl":"10.3389/fncom.2025.1677930","url":null,"abstract":"<p><p>The rapid growth of large-scale neuroscience datasets has spurred diverse modeling strategies, ranging from mechanistic models grounded in biophysics, to phenomenological descriptions of neural dynamics, to data-driven deep neural networks (DNNs). Each approach offers distinct strengths as mechanistic models provide interpretability, phenomenological models capture emergent dynamics, and DNNs excel at predictive accuracy but this also comes with limitations when applied in isolation. Universal differential equations (UDEs) offer a unifying modeling framework that integrates these complementary approaches. By treating differential equations as parameterizable, differentiable objects that can be combined with modern deep learning techniques, UDEs enable hybrid models that balance interpretability with predictive power. We provide a systematic overview of the UDE framework, covering its mathematical foundations, training methodologies, and recent innovations. We argue that UDEs fill a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience, with potential to advance applications in neural computation, neural control, neural decoding, and normative modeling in neuroscience.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1677930"},"PeriodicalIF":2.3,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12611869/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145539805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing.
Pub Date: 2025-10-24 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1618191
Juan M Tenti, Monserrat Pallares Di Nunzio, Marisa A Bab, Osvaldo Anibal Rosso, Fernando Montani, Marcelo J F Arlego
Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal correlations, quantified via detrended fluctuation analysis, during N2 and N3 sleep. In contrast, REM sleep and wakefulness show reduced temporal persistence and fewer large-scale cascades, suggesting a shift toward more fragmented and flexible dynamics. These signatures vary across cortical regions, with distinctive patterns emerging in medial temporal and frontal areas, regions implicated in memory processing. Rather than providing direct evidence of consolidation, our results point to a functional neural landscape that may favor both stabilization and reconfiguration of internal representations during sleep. Overall, our findings highlight the utility of iEEG in revealing the multiscale spatio-temporal structure of sleep-related brain dynamics, offering insights into the physiological conditions that support memory-related processing.
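Detrended fluctuation analysis follows a standard recipe: integrate the signal, detrend it in windows of increasing size, and read the scaling exponent off a log-log fit. A NumPy sketch under the usual first-order definition (alpha near 0.5 for white noise; alpha above 0.5 indicates persistence):

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Standard first-order DFA: slope of log F(s) vs. log s."""
    y = np.cumsum(x - x.mean())              # integrated (profile) signal
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))  # RMS fluctuation at scale s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

alpha = dfa_exponent(np.random.default_rng(0).standard_normal(4096))
# White noise gives alpha ~ 0.5; persistent signals give alpha > 0.5.
```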
{"title":"Multiscale intracranial EEG dynamics across sleep-wake states: toward memory-related processing.","authors":"Juan M Tenti, Monserrat Pallares Di Nunzio, Marisa A Bab, Osvaldo Anibal Rosso, Fernando Montani, Marcelo J F Arlego","doi":"10.3389/fncom.2025.1618191","DOIUrl":"10.3389/fncom.2025.1618191","url":null,"abstract":"<p><p>Sleep is known to support memory consolidation through a complex interplay of neural dynamics across multiple timescales. Using intracranial EEG (iEEG) recordings from patients undergoing clinical monitoring, we characterize spectral activity, neuronal avalanche dynamics, and temporal correlations across sleep-wake states, with a focus on their spatial distribution and potential functional relevance. We observe increased low-frequency power, larger avalanches, and enhanced long-range temporal correlations-quantified via Detrended Fluctuation Analysis-during N2 and N3 sleep. In contrast, REM sleep and wakefulness show reduced temporal persistence and fewer large-scale cascades, suggesting a shift toward more fragmented and flexible dynamics. These signatures vary across cortical regions, with distinctive patterns emerging in medial temporal and frontal areas-regions implicated in memory processing. Rather than providing direct evidence of consolidation, our results point to a functional neural landscape that may favor both stabilization and reconfiguration of internal representations during sleep. Overall, our findings highlight the utility of iEEG in revealing the multiscale spatio-temporal structure of sleep-related brain dynamics, offering insights into the physiological conditions that support memory-related processing.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1618191"},"PeriodicalIF":2.3,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12592051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145481350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}