Pub Date: 2024-08-27 | DOI: 10.3389/fncom.2024.1449364
Jessica L Verpeut, Marlies Oostland
The significance of cerebellar contributions in early-life through aging.
Pub Date: 2024-08-26 | DOI: 10.3389/fncom.2024.1335130
Rawan El-Zghir, Natasha Gabay, Peter Robinson
Unified theory of alpha, mu, and tau rhythms via eigenmodes of brain activity
A compact description of the frequency structure and topography of human alpha-band rhythms is obtained by use of the first four brain activity eigenmodes previously derived from corticothalamic neural field theory. Just two eigenmodes that overlap in frequency are found to reproduce the observed topography of the classical alpha rhythm for subjects with a single, occipitally concentrated alpha peak in their electroencephalograms. Alpha frequency splitting and relative amplitudes of double alpha peaks are explored analytically and numerically within this four-mode framework using eigenfunction expansion and perturbation methods. These effects are found to result primarily from the different eigenvalues and corticothalamic gains corresponding to the eigenmodes. Three modes with two non-overlapping frequencies suffice to reproduce the observed topography for subjects with a double alpha peak, where the appearance of a distinct second alpha peak requires an increase of the corticothalamic gain of higher eigenmodes relative to the first. Conversely, alpha blocking is inferred to be linked to a relatively small attention-dependent reduction of the gain of the relevant eigenmodes, whose effect is enhanced by the near-critical state of the brain and whose sign is consistent with inferences from neural field theory. The topographies and blocking of the mu and tau rhythms within the alpha-band are explained analogously via eigenmodes. Moreover, the observation of three rhythms in the alpha band is due to there being exactly three members of the first family of spatially nonuniform modes. These results thus provide a simple, unified description of alpha band rhythms and enable experimental observations of spectral structure and topography to be linked directly to theory and underlying physiology.
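As a hedged illustration of the eigenfunction-expansion picture described above (the notation here is generic and assumed, not taken from the paper), the corticothalamic activity field can be written as a sum over the first few spatial eigenmodes, with the alpha-band power at each scalp location built from mode-specific resonances:

    % Illustrative sketch only; symbols are placeholders, not the paper's notation.
    \phi(\mathbf{r},t) \;\approx\; \sum_{j} a_j(t)\, Y_j(\mathbf{r}),
    \qquad
    P(\mathbf{r},\omega) \;\approx\; \sum_{j} \big|Y_j(\mathbf{r})\big|^{2}\, \big|A_j(\omega)\big|^{2},

where each modal spectrum |A_j(omega)|^2 peaks near a frequency set by that mode's eigenvalue and corticothalamic gain, so overlapping resonances yield a single occipital alpha peak while well-separated resonances of higher modes yield a split (double) peak.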
Pub Date: 2024-08-26 | DOI: 10.3389/fncom.2024.1434421
Anik Das, Kaue Duarte, Catherine Lebel, Mariana Bento
Deep learning for detecting prenatal alcohol exposure in pediatric brain MRI: a transfer learning approach with explainability insights
Prenatal alcohol exposure (PAE) refers to exposure of the developing fetus to alcohol through consumption during pregnancy and can have life-long consequences for learning, behavior, and health. Understanding the impact of PAE on the developing brain presents challenges due to the brain's complex structural and functional attributes, which can be addressed by leveraging machine learning (ML) and deep learning (DL) approaches. While most ML and DL models have been tailored to adult-centric problems, this work focuses on applying DL to detect PAE in the pediatric population. This study integrates the pre-trained simple fully convolutional network (SFCN), used as a transfer learning approach for feature extraction, with a newly trained classifier to distinguish between unexposed and PAE participants based on T1-weighted structural brain magnetic resonance (MR) scans of individuals aged 2–8 years. Among the dataset sizes and augmentation strategies explored during training, the classifier achieved its highest sensitivity of 88.47% with 85.04% average accuracy on the test data when using a balanced dataset with augmentation for both classes. We also performed a preliminary explainability analysis using the Grad-CAM method, which highlighted brain regions such as the corpus callosum, cerebellum, pons, and white matter as the most important features in the model's decision-making process. Despite the challenges of constructing DL models for pediatric populations due to the brain's rapid development, motion artifacts, and insufficient data, this work highlights the potential of transfer learning in situations where data are limited. Furthermore, this study underscores the importance of preserving a balanced dataset for fair classification and of clarifying the rationale behind the model's predictions using explainability analysis.
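As a rough, hedged sketch of the transfer-learning pattern described in this abstract (a frozen, pre-trained 3D feature extractor plus a newly trained binary classifier), the PyTorch-style snippet below is illustrative only: the `backbone` module standing in for the SFCN, its feature dimension, and all layer sizes are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class PAEClassifier(nn.Module):
        """Frozen pre-trained feature extractor + newly trained binary head (illustrative)."""
        def __init__(self, backbone: nn.Module, feat_dim: int = 64):
            super().__init__()
            self.backbone = backbone              # stands in for a pre-trained SFCN-like 3D CNN (assumed)
            for p in self.backbone.parameters():  # freeze the transferred weights
                p.requires_grad = False
            self.head = nn.Sequential(            # only this classifier head is trained
                nn.Linear(feat_dim, 32), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(32, 2),                 # unexposed vs. PAE
            )

        def forward(self, t1_volume: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():
                feats = self.backbone(t1_volume)  # features from a T1-weighted MR volume
            return self.head(feats.flatten(1))

Grad-CAM-style explanations would then typically be computed with respect to the last convolutional layer of the frozen backbone.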
Pub Date: 2024-08-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/fncom.2024.1386841
Samuele Carli, Luigi Brugnano, Daniele Caligiore
Simulating combined monoaminergic depletions in a PD animal model through a bio-constrained differential equations system.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378529/pdf/
Introduction: Historically, Parkinson's Disease (PD) research has focused on the dysfunction of dopamine-producing cells in the substantia nigra pars compacta, which is linked to motor regulation in the basal ganglia. Therapies have mainly aimed at restoring dopamine (DA) levels, showing effectiveness but with variable outcomes and side effects. Recent evidence indicates that the complexity of PD involves disruptions across the DA, noradrenaline (NA), and serotonin (5-HT) systems, which may underlie the variation in therapy effects.
Methods: We present a system-level bio-constrained computational model that comprehensively investigates the dynamic interactions between these neurotransmitter systems. The model was designed to replicate experimental data demonstrating the impact of NA and 5-HT depletion in a PD animal model, providing insights into the causal relationships between basal ganglia regions and neuromodulator release areas.
Results: The model successfully replicates experimental data and generates predictions regarding changes in unexplored brain regions, suggesting avenues for further investigation. It highlights the potential efficacy of alternative treatments targeting the locus coeruleus and dorsal raphe nucleus, though these preliminary findings require further validation. Sensitivity analysis identifies critical model parameters, offering insights into the key factors influencing brain area activity. A stability analysis underscores the robustness of our mathematical formulation, bolstering the model's validity.
Discussion: Our holistic approach emphasizes that PD is a multifactorial disorder and opens promising avenues for early diagnostic tools that harness the intricate interactions among monoaminergic systems. Investigating NA and 5-HT systems alongside the DA system may yield more effective, subtype-specific therapies. The exploration of multisystem dysregulation in PD is poised to revolutionize our understanding and management of this complex neurodegenerative disorder.
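To make the "bio-constrained system of differential equations" idea concrete, here is a deliberately toy sketch of coupled rate equations in which DA, NA, and 5-HT tone modulate a single basal ganglia population; every variable, coupling constant, and depletion factor below is invented for illustration and does not reproduce the authors' model.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy coupled rate model (illustrative only): DA, NA, and 5-HT tone modulating
    # one basal ganglia population rate r; depletion enters as scaling factors.
    def monoamine_rhs(t, y, depletion):
        da, na, ht, r = y
        d_da, d_na, d_ht = depletion              # 1.0 = intact source, <1.0 = depleted
        tau = 0.1
        dda = (-da + 1.0 * d_da) / tau            # tonic DA drive (e.g., from SNc; assumed)
        dna = (-na + 0.8 * d_na) / tau            # tonic NA drive (e.g., from locus coeruleus; assumed)
        dht = (-ht + 0.6 * d_ht) / tau            # tonic 5-HT drive (e.g., from dorsal raphe; assumed)
        drive = 2.0 * da + 0.5 * na - 0.8 * ht    # arbitrary signs and weights, for illustration
        dr = (-r + np.tanh(drive)) / tau          # basal ganglia population rate
        return [dda, dna, dht, dr]

    # Compare an intact system with combined NA + 5-HT depletion.
    intact = solve_ivp(monoamine_rhs, (0, 2), [0, 0, 0, 0], args=((1.0, 1.0, 1.0),))
    depleted = solve_ivp(monoamine_rhs, (0, 2), [0, 0, 0, 0], args=((1.0, 0.2, 0.2),))
    print(intact.y[3, -1], depleted.y[3, -1])     # steady-state rates differ under depletion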
Pub Date: 2024-08-08 | DOI: 10.3389/fncom.2024.1395901
Darren J. Edwards
A functional contextual, observer-centric, quantum mechanical, and neuro-symbolic approach to solving the alignment problem of artificial general intelligence: safe AI through intersecting computational psychological neuroscience and LLM architecture for emergent theory of mind
There have been impressive advancements in the field of natural language processing (NLP) in recent years, largely driven by innovations in transformer-based large language models (LLMs) that utilize "attention." This approach employs masked self-attention to relate (via similarity) tokens (words) at different positions within an input sequence and compute the most appropriate response based on the training corpus. However, there is speculation as to whether this approach alone can be scaled up to develop emergent artificial general intelligence (AGI), and whether it can address the alignment of AGI values with human values (the alignment problem). Some researchers exploring the alignment problem highlight three aspects that AGI (or AI) requires to help resolve it: (1) an interpretable values specification; (2) a utility function; and (3) a dynamic contextual account of behavior. Here, a neurosymbolic model is proposed to help resolve these issues of human value alignment in AI. It expands the transformer-based model for NLP with symbolic reasoning that may allow AGI to perform perspective-taking reasoning (i.e., resolving the need for a dynamic contextual account of behavior through deictics), as defined by a multilevel evolutionary and neurobiological framework, within a functional contextual post-Skinnerian model of human language called "Neurobiological and Natural Selection Relational Frame Theory" (N-Frame). It is argued that this approach may also help establish a comprehensible value scheme, a utility function (by expanding the expected utility equation of behavioral economics to consider functional contextualism), and even an observer- (or witness-) centric model of consciousness. Evolutionary theory, subjective quantum mechanics, and neuroscience are further drawn upon to help explain consciousness and a possible implementation within an LLM through correspondence to an interface, as suggested by N-Frame. This argument is supported by a computational level of hypergraphs, relational density clusters, a conscious quantum level defined by QBism, and a real-world applied level (human user feedback). It is argued that this approach could enable AI to achieve consciousness and develop deictic perspective-taking abilities, thereby attaining human-level self-awareness, empathy, and compassion toward others. Importantly, this consciousness hypothesis can be tested directly at approximately 5-sigma significance (a roughly 1 in 3.5 million probability that any identified AI-conscious observations, in the form of a collapsed waveform, are due to chance) through double-slit intent-type experimentation and visualization procedures for derived perspective-taking relational frames. Ultimately, this could provide a solution to the alignment problem and contribute to the emergence of a theory of mind (ToM) within AI.
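For reference, the quoted "1 in 3.5 million" figure is the one-sided tail probability of a 5-sigma deviation under a standard normal null distribution:

    P(Z > 5) \;=\; 1 - \Phi(5) \;\approx\; 2.87 \times 10^{-7} \;\approx\; \frac{1}{3.5 \times 10^{6}}.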
Pub Date: 2024-08-06 | DOI: 10.3389/fncom.2024.1432593
Federico Tesler, Roberta Maria Lorenzi, Adam Ponzi, Claudia Casellato, Fulvia Palesi, Daniela Gandolfi, Claudia A. M. Gandini Wheeler Kingshott, Jonathan Mapelli, Egidio D'Angelo, Michele Migliore, Alain Destexhe
Multiscale modeling of neuronal dynamics in hippocampus CA1
The development of biologically realistic models of brain microcircuits and regions is currently a highly relevant topic in computational neuroscience. One of the main challenges for such models is the passage between scales, going from the microscale (cellular) to the mesoscale (microcircuit) and macroscale (region or whole-brain level), while keeping the demand for computational resources constrained. In this paper we introduce a multiscale modeling framework for the hippocampal CA1, a region of the brain that plays a key role in functions such as learning, memory consolidation, and navigation. Our modeling framework goes from the single-cell level to the macroscale and makes use of a novel mean-field model of CA1, introduced in this paper, to bridge the gap between the micro and macro scales. We test and validate the model by analyzing the response of the system to the main brain rhythms observed in the hippocampus and comparing our results with those of the corresponding spiking network model of CA1. We then analyze the implementation of synaptic plasticity within our framework, a key aspect for studying the role of the hippocampus in learning and memory consolidation, and demonstrate the capability of our framework to incorporate variations at the synaptic level. Finally, we present an example application of our model to stimulus propagation at the macroscale and show that the results of our framework capture the dynamics obtained in the corresponding spiking network model of the whole CA1 area.
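As a hedged illustration of what a mean-field description at the mesoscale looks like (generic notation; this is not the specific CA1 mean-field model introduced in the paper), the firing rates of excitatory and inhibitory populations evolve through population transfer functions of their mean inputs:

    \tau \frac{d\nu_e}{dt} = -\nu_e + F_e\!\big(\mu_e(\nu_e,\nu_i) + I_{\mathrm{ext}}\big),
    \qquad
    \tau \frac{d\nu_i}{dt} = -\nu_i + F_i\!\big(\mu_i(\nu_e,\nu_i)\big),

where F_e and F_i are transfer functions constrained by the underlying single-cell models, so that such a module can stand in for a full spiking microcircuit when the region is embedded in a larger-scale simulation.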
Pub Date: 2024-08-05 | DOI: 10.3389/fncom.2024.1421458
Duho Sihn, Sung-Phil Kim
A neural basis for learning sequential memory in brain loop structures
Introduction: Behaviors often involve a sequence of events, and learning and reproducing such sequences is essential for sequential memory. Brain loop structures refer to loop-shaped inter-regional connection structures in the brain, such as the cortico-basal ganglia-thalamic and cortico-cerebellar loops. They are thought to play a crucial role in supporting sequential memory, but it is unclear which properties of the loop structure are important and why.
Methods: In this study, we investigated the conditions necessary for the learning of sequential memory in brain loop structures via computational modeling. We assumed that sequential memory emerges due to delayed information transmission in loop structures, presented a basic neural activity model, and validated our theoretical considerations with spiking neural network simulations.
Results: Based on this model, we described the factors for the learning of sequential memory: first, the information transmission delay should decrease as the size of the loop structure increases; and second, the likelihood of learning sequential memory increases with the size of the loop structure and soon saturates. Combining these factors, we showed that moderate-sized brain loop structures are advantageous for the learning of sequential memory due to physiological restrictions on information transmission delay.
Discussion: Our results will help us better understand the relationship between sequential memory and brain loop structures.
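The role of transmission delay can be illustrated with a deliberately simple toy loop (not the authors' spiking model): activity injected into the loop re-enters after a fixed delay, so an input sequence is replayed once per loop traversal, attenuated by the loop gain.

    import numpy as np

    # Toy loop with transmission delay (illustrative only, not the paper's model).
    def run_loop(inputs, delay_steps, n_steps, feedback=0.9):
        buffer = np.zeros(delay_steps)               # activity traveling around the loop
        activity = np.zeros(n_steps)
        for t in range(n_steps):
            external = inputs[t] if t < len(inputs) else 0.0
            recurrent = buffer[t % delay_steps]      # what entered the loop delay_steps ago
            activity[t] = external + feedback * recurrent
            buffer[t % delay_steps] = activity[t]    # send the current activity around the loop
        return activity

    out = run_loop([1.0, 0.5, 0.25], delay_steps=5, n_steps=20)
    print(np.round(out, 2))   # the 3-element sequence reappears every 5 steps, scaled by the loop gain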
Pub Date: 2024-07-24 | DOI: 10.3389/fncom.2024.1388166
Haiping Huang
Eight challenges in developing theory of intelligence
A good theory of mathematical beauty is more practical than any current observation, as new predictions about physical reality can be self-consistently verified. This belief applies to the current status of understanding deep neural networks, including large language models, and even biological intelligence. Toy models provide a metaphor of physical reality, allowing the reality to be formulated mathematically (i.e., the so-called theory), which can be updated as more conjectures are justified or refuted. One does not need to present all details in a model; rather, more abstract models are constructed, as complex systems such as brains or deep networks have many sloppy dimensions but far fewer stiff dimensions that strongly impact macroscopic observables. This type of bottom-up mechanistic modeling is still promising in the modern era of understanding natural or artificial intelligence. Here, we shed light on eight challenges in developing a theory of intelligence following this theoretical paradigm. These challenges are representation learning, generalization, adversarial robustness, continual learning, causal learning, the internal model of the brain, next-token prediction, and the mechanics of subjective experience.
Pub Date: 2024-07-19 | DOI: 10.3389/fncom.2024.1416494
Wei Chen, Yuan Liao, Rui Dai, Yuanlin Dong, Liya Huang
EEG-based emotion recognition using graph convolutional neural network with dual attention mechanism
EEG-based emotion recognition is becoming crucial in brain-computer interfaces (BCI). Currently, most research focuses on improving accuracy while neglecting the interpretability of models; we are committed to analyzing the impact of different brain regions and signal frequency bands on emotion generation based on graph structure. Therefore, this paper proposes a method named Dual Attention Mechanism Graph Convolutional Neural Network (DAMGCN). Specifically, we utilize graph convolutional neural networks to model the brain network as a graph and extract representative spatial features. Furthermore, we employ the self-attention mechanism of the Transformer model, which allocates larger electrode-channel weights and signal-frequency-band weights to important brain regions and frequency bands. Visualization of the attention mechanism clearly demonstrates the weight allocation learned by DAMGCN. In the performance evaluation of our model on the DEAP, SEED, and SEED-IV datasets, we achieved the best results on the SEED dataset, with an accuracy of 99.42% in subject-dependent experiments and 73.21% in subject-independent experiments. The results are demonstrably superior to the accuracies of most existing models in the realm of EEG-based emotion recognition.
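A minimal, hedged sketch of the two ingredients named above, a graph convolution over EEG electrodes plus attention weights over channels and frequency bands, is given below; the layer sizes, the learnable adjacency, and the class/feature dimensions are assumptions for illustration and do not reproduce the published DAMGCN architecture.

    import torch
    import torch.nn as nn

    class TinyGraphEmotionNet(nn.Module):
        """Illustrative graph convolution over electrodes with channel- and band-level
        attention (not the published DAMGCN implementation)."""
        def __init__(self, n_channels=62, n_bands=5, hidden=32, n_classes=3):
            super().__init__()
            self.adj = nn.Parameter(torch.eye(n_channels))       # learnable electrode graph (assumed)
            self.gc = nn.Linear(n_bands, hidden)                 # per-node graph-convolution weights
            self.chan_attn = nn.Linear(hidden, 1)                # attention score per electrode
            self.band_attn = nn.Parameter(torch.ones(n_bands))   # attention weight per frequency band
            self.cls = nn.Linear(hidden, n_classes)

        def forward(self, x):                                    # x: (batch, n_channels, n_bands)
            x = x * torch.softmax(self.band_attn, dim=0)         # re-weight frequency bands
            h = torch.relu(self.gc(torch.matmul(self.adj, x)))   # propagate over the electrode graph
            a = torch.softmax(self.chan_attn(h).squeeze(-1), -1) # electrode (channel) attention
            pooled = (h * a.unsqueeze(-1)).sum(dim=1)            # attention-weighted readout
            return self.cls(pooled)

    logits = TinyGraphEmotionNet()(torch.randn(8, 62, 5))        # e.g., band-power features (assumed shape)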
Pub Date: 2024-07-18 | DOI: 10.3389/fncom.2024.1398851
Takeshi Nakashima, Shunsuke Otake, Akira Taniguchi, Katsuyoshi Maeyama, Lotfi El Hafi, Tadahiro Taniguchi, Hiroshi Yamakawa
Hippocampal formation-inspired global self-localization: quick recovery from the kidnapped robot problem from an egocentric perspective
It remains difficult for mobile robots to continue accurate self-localization when they are suddenly teleported to a location that is different from their beliefs during navigation. Incorporating insights from neuroscience into developing a spatial cognition model for mobile robots may make it possible to acquire the ability to respond appropriately to changing situations, similar to living organisms. Recent neuroscience research has shown that during teleportation in rat navigation, neural populations of place cells in the cornu ammonis-3 region of the hippocampus, which are sparse representations of each other, switch discretely. In this study, we construct a spatial cognition model using brain reference architecture-driven development, a method for developing brain-inspired software that is functionally and structurally consistent with the brain. The spatial cognition model was realized by integrating the recurrent state-space model, a world model, with Monte Carlo localization to infer allocentric self-positions within the framework of neuro-symbol emergence in the robotics toolkit. The spatial cognition model, which models the cornu ammonis-1 and -3 regions with each latent variable, demonstrated improved self-localization performance of mobile robots during teleportation in a simulation environment. Moreover, it was confirmed that sparse neural activity could be obtained for the latent variables corresponding to cornu ammonis-3. These results suggest that spatial cognition models incorporating neuroscience insights can contribute to improving the self-localization technology for mobile robots. The project website is https://nakashimatakeshi.github.io/HF-IGL/.
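For readers unfamiliar with the classical component, here is a minimal Monte Carlo localization (particle filter) update of the kind the abstract says is integrated with the learned recurrent state-space model; the motion and observation models and all parameters are illustrative assumptions, and the learned world-model part is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def mcl_step(particles, weights, control, observation, obs_model, motion_noise=0.05):
        # 1. Motion update: move every particle by the control input plus noise.
        particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
        # 2. Measurement update: re-weight particles by the observation likelihood.
        weights = weights * obs_model(particles, observation)
        weights = weights / weights.sum()
        # 3. Resample when the effective sample size collapses. Plain resampling recovers
        #    poorly from sudden "teleportation" unless new particles are injected, which is
        #    the failure mode this line of work targets.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    def gaussian_obs_model(particles, observation, sigma=0.2):
        d = np.linalg.norm(particles - observation, axis=1)
        return np.exp(-0.5 * (d / sigma) ** 2) + 1e-12

    particles = rng.uniform(-1, 1, size=(500, 2))        # 2-D positions drawn from a uniform prior
    weights = np.full(500, 1 / 500)
    particles, weights = mcl_step(particles, weights,
                                  control=np.array([0.1, 0.0]),
                                  observation=np.array([0.4, 0.1]),
                                  obs_model=gaussian_obs_model)
    print(np.average(particles, axis=0, weights=weights))  # weighted estimate of the robot's position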