Multiscale modeling of neuronal dynamics in hippocampus CA1
Pub Date: 2024-08-06 · DOI: 10.3389/fncom.2024.1432593
Federico Tesler, Roberta Maria Lorenzi, Adam Ponzi, Claudia Casellato, Fulvia Palesi, Daniela Gandolfi, Claudia A. M. Gandini Wheeler Kingshott, Jonathan Mapelli, Egidio D'Angelo, Michele Migliore, Alain Destexhe
The development of biologically realistic models of brain microcircuits and regions is currently a highly relevant topic in computational neuroscience. One of the main challenges for such models is bridging scales, from the microscale (cellular) to the mesoscale (microcircuit) and macroscale (region or whole-brain level), while keeping the demand on computational resources within bounds. In this paper we introduce a multiscale modeling framework for the hippocampal CA1, a brain region that plays a key role in functions such as learning, memory consolidation, and navigation. Our modeling framework spans from the single-cell level to the macroscale and uses a novel mean-field model of CA1, introduced in this paper, to bridge the gap between the micro and macro scales. We test and validate the model by analyzing the response of the system to the main brain rhythms observed in the hippocampus and comparing our results with those of the corresponding spiking network model of CA1. We then analyze the implementation of synaptic plasticity within our framework, a key aspect for studying the role of the hippocampus in learning and memory consolidation, and demonstrate that the framework can incorporate variations at the synaptic level. Finally, we present an example application of our model to study stimulus propagation at the macroscale, and show that our framework captures the dynamics obtained in the corresponding spiking network model of the whole CA1 area.
Frontiers in Computational Neuroscience (IF 3.2) · Journal Article
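The mean-field step described above can be illustrated with a generic population-rate model. The sketch below is not the paper's CA1 mean-field model; it is a minimal Wilson-Cowan-style two-population (excitatory/inhibitory) rate model with assumed coupling values, showing the general idea of replacing a spiking microcircuit with a few coupled population rates.

```python
import numpy as np

def simulate_mean_field(T=1.0, dt=1e-4, ext_drive=2.0):
    """Toy two-population (E/I) mean-field rate model, Euler-integrated.

    NOT the paper's CA1 mean field: a minimal Wilson-Cowan-style sketch
    of the same idea, where population rates relax toward a nonlinear
    function of their summed input. All parameter values are assumed.
    """
    tau = 0.01                                        # population time constant (s)
    w_ee, w_ei, w_ie, w_ii = 1.5, -2.0, 1.2, -1.0     # coupling weights (assumed)
    f = lambda x: 10.0 / (1.0 + np.exp(-x))           # sigmoidal transfer, max 10 Hz

    n = int(T / dt)
    re, ri = 0.0, 0.0
    trace = np.empty((n, 2))
    for k in range(n):
        ie = w_ee * re + w_ei * ri + ext_drive        # input to excitatory pool
        ii = w_ie * re + w_ii * ri + ext_drive        # input to inhibitory pool
        re += dt / tau * (-re + f(ie))
        ri += dt / tau * (-ri + f(ii))
        trace[k] = (re, ri)
    return trace

rates = simulate_mean_field()
```

Because each population is reduced to a single rate variable, the cost of simulating a whole region scales with the number of populations rather than the number of neurons, which is the computational payoff the abstract refers to.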
A neural basis for learning sequential memory in brain loop structures
Pub Date: 2024-08-05 · DOI: 10.3389/fncom.2024.1421458
Duho Sihn, Sung-Phil Kim
Introduction: Behaviors often involve a sequence of events, and learning and reproducing such sequences is essential for sequential memory. Brain loop structures are loop-shaped inter-regional connection structures in the brain, such as the cortico-basal ganglia-thalamic and cortico-cerebellar loops. They are thought to play a crucial role in supporting sequential memory, but it is unclear which properties of the loop structure matter, and why.
Methods: In this study, we investigated the conditions necessary for learning sequential memory in brain loop structures via computational modeling. We assumed that sequential memory emerges from delayed information transmission in loop structures, presented a basic neural activity model, and validated our theoretical considerations with spiking neural network simulations.
Results: Based on this model, we identified two factors governing the learning of sequential memory: first, the information transmission delay should decrease as the size of the loop structure increases; second, the likelihood of learning sequential memory increases with the size of the loop structure and soon saturates. Combining these factors, we showed that moderate-sized brain loop structures are advantageous for learning sequential memory, owing to physiological restrictions on information transmission delay.
Discussion: Our results will help us better understand the relationship between sequential memory and brain loop structures.
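The role of transmission delay in a loop can be illustrated with a toy simulation: a pulse injected into a ring of regions recirculates with a period equal to the number of regions times the per-connection delay. This is a hypothetical minimal sketch of the delayed-loop idea, not the paper's spiking model; all sizes are assumed.

```python
import numpy as np

def loop_replay(n_regions=4, delay_steps=5, t_total=200):
    """A pulse propagating around a loop of regions, each connection
    having a fixed transmission delay (toy illustration only)."""
    act = np.zeros((t_total, n_regions))
    act[0, 0] = 1.0                            # seed a pulse in region 0
    for t in range(1, t_total):
        for r in range(n_regions):
            src = (r - 1) % n_regions          # upstream region in the ring
            if t - delay_steps >= 0:
                act[t, r] = act[t - delay_steps, src]
    return act

act = loop_replay()
period = 4 * 5   # n_regions * delay_steps: one full trip around the loop
```

The pulse revisits region 0 every 20 steps, so the total loop delay sets the timescale of the replayed sequence; making the loop larger without shrinking each delay stretches that timescale, consistent with the trade-off described in the Results.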
Eight challenges in developing theory of intelligence
Pub Date: 2024-07-24 · DOI: 10.3389/fncom.2024.1388166
Haiping Huang
A good theory of mathematical beauty is more practical than any current observation, as new predictions about physical reality can be self-consistently verified. This belief applies to the current status of understanding deep neural networks, including large language models, and even biological intelligence. Toy models provide a metaphor of physical reality, allowing the reality to be formulated mathematically (i.e., as a so-called theory), which can be updated as more conjectures are justified or refuted. One does not need to capture every detail in a model; rather, more abstract models are constructed, since complex systems such as brains or deep networks have many sloppy dimensions but far fewer stiff dimensions that strongly impact macroscopic observables. This type of bottom-up mechanistic modeling remains promising in the modern era of understanding natural and artificial intelligence. Here, we shed light on eight challenges in developing a theory of intelligence within this theoretical paradigm. These challenges are representation learning, generalization, adversarial robustness, continual learning, causal learning, the internal model of the brain, next-token prediction, and the mechanics of subjective experience.
EEG-based emotion recognition using graph convolutional neural network with dual attention mechanism
Pub Date: 2024-07-19 · DOI: 10.3389/fncom.2024.1416494
Wei Chen, Yuan Liao, Rui Dai, Yuanlin Dong, Liya Huang
EEG-based emotion recognition is becoming crucial in brain-computer interfaces (BCI). Most current research focuses on improving accuracy while neglecting the interpretability of models; we instead set out to analyze, on the basis of graph structure, how different brain regions and signal frequency bands contribute to emotion generation. To this end, this paper proposes a method named Dual Attention Mechanism Graph Convolutional Neural Network (DAMGCN). Specifically, we use graph convolutional neural networks to model the brain network as a graph and extract representative spatial features. Furthermore, we employ the self-attention mechanism of the Transformer model, which allocates larger electrode-channel and frequency-band weights to the most important brain regions and bands. Visualization of the attention mechanism clearly demonstrates the weight allocation learned by DAMGCN. Evaluating our model on the DEAP, SEED, and SEED-IV datasets, we achieved the best results on the SEED dataset, with an accuracy of 99.42% in subject-dependent experiments and 73.21% in subject-independent experiments. These results surpass the accuracies of most existing models for EEG-based emotion recognition.
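The two ingredients the abstract names — a graph convolution over the electrode graph, plus attention weights over channels and frequency bands — can be sketched in a few lines of numpy. All sizes, weights, and the way the pieces are combined below are assumptions for illustration; the actual DAMGCN architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_dual_attention(X, A, W, w_chan, w_band):
    """One graph-convolution layer with per-channel and per-band attention
    (a minimal sketch of the idea, not the DAMGCN code).

    X: (channels, bands) features; A: (channels, channels) adjacency;
    W: (bands, bands) layer weights; w_chan, w_band: attention logits."""
    a_chan = softmax(w_chan)                     # weight per electrode channel
    a_band = softmax(w_band)                     # weight per frequency band
    Xw = X * a_chan[:, None] * a_band[None, :]   # dual attention reweighting
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))     # symmetric D^-1/2 A D^-1/2
    return np.maximum(A_norm @ Xw @ W, 0.0)      # graph conv + ReLU

C, B = 6, 5                                      # 6 channels, 5 bands (toy sizes)
X = rng.standard_normal((C, B))
A = (rng.random((C, C)) > 0.5).astype(float)
A = (A + A.T) / 2                                # symmetrized random adjacency
H = gcn_dual_attention(X, A, rng.standard_normal((B, B)),
                       rng.standard_normal(C), rng.standard_normal(B))
```

Because `a_chan` and `a_band` are explicit probability vectors, they can be read off after training as per-region and per-band importance scores, which is the interpretability angle the abstract emphasizes.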
Hippocampal formation-inspired global self-localization: quick recovery from the kidnapped robot problem from an egocentric perspective
Pub Date: 2024-07-18 · DOI: 10.3389/fncom.2024.1398851
Takeshi Nakashima, Shunsuke Otake, Akira Taniguchi, Katsuyoshi Maeyama, Lotfi El Hafi, Tadahiro Taniguchi, Hiroshi Yamakawa
It remains difficult for mobile robots to maintain accurate self-localization when they are suddenly teleported to a location that differs from their beliefs during navigation. Incorporating insights from neuroscience into the development of a spatial cognition model for mobile robots may make it possible to respond appropriately to changing situations, as living organisms do. Recent neuroscience research has shown that during teleportation in rat navigation, neural populations of place cells in the cornu ammonis-3 region of the hippocampus, which are sparse representations of each other, switch discretely. In this study, we construct a spatial cognition model using brain reference architecture-driven development, a method for developing brain-inspired software that is functionally and structurally consistent with the brain. The spatial cognition model was realized by integrating the recurrent state-space model, a world model, with Monte Carlo localization to infer allocentric self-positions within the framework of neuro-symbol emergence in the robotics toolkit. The spatial cognition model, which models the cornu ammonis-1 and -3 regions with separate latent variables, demonstrated improved self-localization performance of mobile robots during teleportation in a simulation environment. Moreover, we confirmed that sparse neural activity emerges in the latent variables corresponding to cornu ammonis-3. These results suggest that spatial cognition models incorporating neuroscience insights can help improve self-localization technology for mobile robots. The project website is https://nakashimatakeshi.github.io/HF-IGL/.
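The Monte Carlo localization component the model builds on can be illustrated with a 1-D particle filter that recovers from a "kidnap" jump because a small fraction of particles is re-injected uniformly at every step. This toy (the world size, noise levels, and re-injection rate are all assumed) shows only the generic MCL ingredient, not the paper's hippocampus-inspired model.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, world=100.0):
    """1-D Monte Carlo localization toy: particles track a position from
    noisy position observations; 5% of particles are re-injected uniformly
    each step so the filter can recover after a sudden 'kidnap' jump."""
    p = rng.uniform(0, world, n_particles)
    for z in observations:
        p += rng.normal(0, 1.0, n_particles)          # motion diffusion
        w = np.exp(-0.5 * ((p - z) / 2.0) ** 2)       # observation likelihood
        w += 1e-12                                    # avoid all-zero weights
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        p = p[idx]                                    # importance resampling
        k = n_particles // 20
        p[:k] = rng.uniform(0, world, k)              # 5% uniform re-injection
    return p.mean()

obs = [20.0] * 30 + [80.0] * 30                       # robot kidnapped at step 30
est = particle_filter(obs)
```

Without the re-injection step, all particles would collapse around the pre-kidnap belief and the filter could never jump to the new location; the re-injection plays a role loosely analogous to the discrete remapping the abstract describes.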
A three-step, “brute-force” approach toward optimized affine spatial normalization
Pub Date: 2024-07-08 · DOI: 10.3389/fncom.2024.1367148
Marko Wilke
The first step in spatial normalization of magnetic resonance (MR) images is commonly an affine transformation, which may be vulnerable to image imperfections (such as inhomogeneities or “unusual” heads). Additionally, common software solutions use internal starting estimates to allow for more efficient computation, which may pose a problem for datasets that do not conform to these assumptions (such as those from children). This technical note addresses three main questions. One, does the affine spatial normalization step implemented in SPM12 benefit from an initial inhomogeneity correction? Two, does using a complexity-reduced image version improve robustness when matching “unusual” images? And three, can a blind “brute-force” application of a wide range of parameter combinations improve the affine fit, for unusual datasets in particular? A large database of 2081 image datasets was used, covering the full age range from birth to old age. All analyses were performed in MATLAB. Results demonstrate that an initial removal of image inhomogeneities improved the affine fit, particularly when more inhomogeneity was present. Further, using a complexity-reduced input image also improved the affine fit and was especially beneficial in younger children. Finally, blindly exploring a very wide parameter space resulted in a better fit for the vast majority of subjects, again particularly in infants and young children. In summary, the suggested modifications improved the affine transformation in the large majority of datasets in general, and in children in particular. The changes can easily be implemented in SPM12.
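The "brute-force" idea — trying a wide grid of starting parameters and keeping the best-fitting one rather than trusting a single internal starting estimate — can be sketched on a toy 2-D problem with one rotation parameter. This is an illustrative stand-in, not the SPM12 implementation; the image model and cost function are assumptions.

```python
import numpy as np

def make_image(angle, size=32):
    """Toy 'head': an anisotropic Gaussian blob rotated by `angle` (radians)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    c, s = np.cos(angle), np.sin(angle)
    xr, yr = c * x + s * y, -s * x + c * y
    return np.exp(-(xr ** 2 / 40.0 + yr ** 2 / 8.0))

def brute_force_rotation(target, candidates):
    """Evaluate every candidate rotation and keep the one whose image best
    matches the target under sum-of-squared-differences — a toy analogue
    of blindly exploring a wide affine parameter space."""
    errs = [((make_image(a) - target) ** 2).sum() for a in candidates]
    return candidates[int(np.argmin(errs))]

true_angle = 0.6
target = make_image(true_angle)
grid = np.linspace(-np.pi / 2, np.pi / 2, 91)   # wide grid of starting estimates
best = brute_force_rotation(target, grid)
```

The cost is one registration per grid point, which is why such a search is usually reserved for datasets (e.g., pediatric images) where default starting estimates fail.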
A spatial map: a propitious choice for constraining the binding problem
Pub Date: 2024-07-02 · DOI: 10.3389/fncom.2024.1397819
Zhixian Han, Anne B. Sereno
Many studies have shown that the human visual system has two major, functionally distinct cortical visual pathways: a ventral pathway, thought to be important for object recognition, and a dorsal pathway, thought to be important for spatial cognition. According to our and others' previous studies, artificial neural networks with two segregated pathways can determine objects' identities and locations more accurately and efficiently than one-pathway networks. In addition, we have shown that these two segregated artificial cortical visual pathways can each process the identity and spatial information of visual objects independently and differently. However, when such networks process multiple objects' identities and locations, a binding problem arises because the networks may not associate each object's identity with its location correctly. In a previous study, we constrained the binding problem by training the artificial identity pathway to retain the relative location information of objects; this design uses a location map to constrain the binding problem. One limitation of that study was that we considered only two object attributes (identity and location) and only one possible map (location) for binding. Typically, however, the brain must process and bind many attributes of an object, and any of these attributes could be used to constrain the binding problem. In the current study, using visual objects with multiple attributes (identity, luminance, orientation, and location) that must be recognized, we sought the best map (among an identity map, a luminance map, an orientation map, and a location map) for constraining the binding problem. We found that, in our simulations, when visual attributes are independent of each other, a location map is always a better choice than the other kinds of maps examined. Our findings agree with previous neurophysiological findings showing that the organization, or map, in many visual cortical areas is primarily retinotopic or spatial.
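The binding-by-location idea can be made concrete in a few lines: if each attribute pathway also reports a location on a shared spatial map, then attributes landing on the same map cell are grouped into one object. The data structures below are hypothetical illustrations of that grouping step, not the paper's trained networks.

```python
def bind_by_location(identity_preds, orientation_preds):
    """Bind attributes of multiple objects by the location each attribute
    pathway reports, mirroring the idea that a shared spatial map resolves
    the binding problem (toy illustration only).

    Each prediction is a (attribute_value, location_cell) pair."""
    objects = {}
    for ident, loc in identity_preds:
        objects.setdefault(loc, {})["identity"] = ident
    for orient, loc in orientation_preds:
        objects.setdefault(loc, {})["orientation"] = orient
    return objects

# Two objects: attributes from separate pathways are paired via their map cell.
scene = bind_by_location(
    identity_preds=[("cat", (0, 1)), ("cup", (3, 2))],
    orientation_preds=[("upright", (3, 2)), ("tilted", (0, 1))],
)
```

Replacing the location key with, say, a luminance key would bind attributes of equally bright but different objects together, which gives an intuition for why a spatial map is the more robust choice when attributes are independent.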
Knowledge graph construction for heart failure using large language models with prompt engineering
Pub Date: 2024-07-02 · DOI: 10.3389/fncom.2024.1389475
Tianhan Xu, Yixun Gu, Mantian Xue, Renjie Gu, Bin Li, Xiang Gu
Introduction: Constructing an accurate and comprehensive knowledge graph of a specific disease is critical for practical clinical diagnosis and treatment, reasoning and decision support, rehabilitation, and health management. For knowledge graph construction tasks (such as named entity recognition and relation extraction), classical BERT-based methods require a large amount of training data to ensure model performance. However, real-world medical annotation data, especially disease-specific annotated samples, are very limited. In addition, existing models do not perform well at recognizing out-of-distribution entities and relations not seen during training.
Method: In this study, we present a novel and practical pipeline for constructing a heart failure knowledge graph using large language models and medical expert refinement. We apply prompt engineering to the three phases of construction: schema design, information extraction, and knowledge completion. The best performance is achieved by designing task-specific prompt templates combined with the TwoStepChat approach.
Results: Experiments on two datasets show that the TwoStepChat method outperforms the vanilla prompt and the fine-tuned BERT-based baselines. Moreover, our method saves 65% of the time compared to manual annotation and is better suited to extracting out-of-distribution information in the real world.
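The general two-step extraction pattern can be sketched as follows, with a stub standing in for the LLM call: step one asks for the entities, step two asks for relations restricted to those entities. The prompts and responses here are invented for illustration; the paper's actual TwoStepChat templates are not reproduced.

```python
def llm(prompt):
    """Stub standing in for a real LLM API call; replace with an actual
    client. The canned responses below are hypothetical."""
    if "List the entities" in prompt:
        return "heart failure; ACE inhibitor"
    return "(ACE inhibitor, treats, heart failure)"

def two_step_extract(text):
    """Two-step chat-style extraction: step 1 finds entities, step 2 asks
    for (head, relation, tail) triples only among those entities — a
    sketch of the general pattern, not the paper's templates."""
    ents = llm(f"List the entities in this clinical text: {text}")
    entities = [e.strip() for e in ents.split(";")]
    triples = llm(f"Given the entities {entities}, extract (head, relation, "
                  f"tail) triples from: {text}")
    return entities, triples

entities, triples = two_step_extract(
    "ACE inhibitors are a first-line treatment for heart failure.")
```

Constraining the second prompt to the entities found in the first step is what keeps the relation-extraction output well-formed enough to load directly into a graph, with expert refinement as a final filter.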
Pub Date : 2024-06-28DOI: 10.3389/fncom.2024.1425008
Jyoti Arora, Ghadir Altuwaijri, Ali Nauman, Meena Tushir, Tripti Sharma, Deepali Gupta, Sung Won Kim
In clinical research, it is crucial to segment magnetic resonance (MR) brain images to study the internal tissues of the brain. To address this challenge in a sustainable manner, a novel approach has been proposed that leverages the power of unsupervised clustering while integrating conditional spatial properties of the image into an intuitionistic clustering technique for segmenting MR brain scans. In the proposed technique, an intuitionistic clustering approach incorporates a nuanced understanding of the uncertainty inherent in the image data. Uncertainty is quantified through the calculation of a hesitation degree. The approach introduces a conditional spatial function alongside the intuitionistic membership matrix, enabling the consideration of spatial relationships within the image. Furthermore, by calculating a weighted intuitionistic membership matrix, the algorithm gains the ability to adapt its smoothing behavior based on the local context. The main advantages are enhanced robustness with homogeneous segments, lower sensitivity to noise and intensity inhomogeneity, and accommodation of the degree of hesitation or uncertainty that may exist in real-world datasets. A comparative analysis on synthetic and real MR brain image datasets demonstrates the efficiency of the suggested approach over competing algorithms. The paper investigates how the suggested methodology performs in medical settings under different conditions, using both qualitative and quantitative measures such as segmentation accuracy, similarity index, true positive ratio, and false positive ratio. The experimental outcomes demonstrate that the suggested algorithm outperforms competing methods in retaining image details and achieving segmentation accuracy.
{"title":"Conditional spatial biased intuitionistic clustering technique for brain MRI image segmentation","authors":"Jyoti Arora, Ghadir Altuwaijri, Ali Nauman, Meena Tushir, Tripti Sharma, Deepali Gupta, Sung Won Kim","doi":"10.3389/fncom.2024.1425008","DOIUrl":"https://doi.org/10.3389/fncom.2024.1425008","url":null,"abstract":"In clinical research, it is crucial to segment the magnetic resonance (MR) brain image for studying the internal tissues of the brain. To address this challenge in a sustainable manner, a novel approach has been proposed leveraging the power of unsupervised clustering while integrating conditional spatial properties of the image into intuitionistic clustering technique for segmenting MRI images of brain scans. In the proposed technique, an Intuitionistic-based clustering approach incorporates a nuanced understanding of uncertainty inherent in the image data. The measure of uncertainty is achieved through calculation of hesitation degree. The approach introduces a conditional spatial function alongside the intuitionistic membership matrix, enabling the consideration of spatial relationships within the image. Furthermore, by calculating weighted intuitionistic membership matrix, the algorithm gains the ability to adapt its smoothing behavior based on the local context. The main advantages are enhanced robustness with homogenous segments, lower sensitivity to noise, intensity inhomogeneity and accommodation of degree of hesitation or uncertainty that may exist in the real-world datasets. A comparative analysis of synthetic and real datasets of MR brain images proves the efficiency of the suggested approach over different algorithms. The paper investigates how the suggested research methodology performs in medical industry under different circumstances including both qualitative and quantitative parameters such as segmentation accuracy, similarity index, true positive ratio, false positive ratio. 
The experimental outcomes demonstrate that the suggested algorithm outperforms in retaining image details and achieving segmentation accuracy.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"193 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141508683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
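The ingredients the abstract names, fuzzy memberships, a hesitation degree, and a conditional spatial function averaged over a pixel neighbourhood, can be sketched on a 1-D toy signal. The hesitation degree below uses a Sugeno-type negation, a common choice in the intuitionistic fuzzy c-means literature; the paper's exact formulas and parameters may differ, so this is an illustrative sketch, not the authors' algorithm.

```python
def memberships(pixels, centers, m=2.0):
    """Standard fuzzy c-means membership of each pixel in each cluster."""
    out = []
    for x in pixels:
        raw = [1.0 / ((abs(x - c) + 1e-9) ** (2.0 / (m - 1.0))) for c in centers]
        s = sum(raw)
        out.append([r / s for r in raw])
    return out

def hesitation(u, lam=2.0):
    """Hesitation degree pi = 1 - u - nu, with Sugeno negation nu = (1-u)/(1+lam*u)."""
    return [[1 - v - (1 - v) / (1 + lam * v) for v in row] for row in u]

def spatial(u, radius=1):
    """Conditional spatial function: mean membership over a 1-D neighbourhood."""
    n, k = len(u), len(u[0])
    h = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        h.append([sum(u[j][c] for j in range(lo, hi)) / (hi - lo) for c in range(k)])
    return h

def weighted_membership(pixels, centers, p=1, q=1):
    """Combine intuitionistic membership (u + pi) with the spatial term, then renormalize."""
    u = memberships(pixels, centers)
    pi = hesitation(u)
    u_star = [[uv + pv for uv, pv in zip(ur, pr)] for ur, pr in zip(u, pi)]
    h = spatial(u_star)
    w = [[(uv ** p) * (hv ** q) for uv, hv in zip(ur, hr)] for ur, hr in zip(u_star, h)]
    return [[v / sum(row) for v in row] for row in w]
```

Because the spatial term averages memberships over neighbours, an isolated noisy pixel inside a homogeneous region is pulled toward its neighbours' cluster, which is the smoothing behaviour the abstract attributes to the weighted intuitionistic membership matrix.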
Pub Date : 2024-06-26DOI: 10.3389/fncom.2024.1379368
Liming Cheng, Jiaqi Xiong, Junwei Duan, Yuhang Zhang, Chun Chen, Jingxin Zhong, Zhiguo Zhou, Yujuan Quan
Introduction: Epilepsy is a common neurological condition that affects a large number of individuals worldwide. One of the primary challenges in epilepsy is the accurate and timely detection of seizures. Recently, the graph regularized broad learning system (GBLS) has achieved superior performance with its flat structure and less time-consuming training process compared to deep neural networks. Nevertheless, the number of feature and enhancement nodes in GBLS is predetermined. These node settings are also randomly selected and remain unchanged throughout the training process. This randomness makes it more likely that non-optimal nodes are generated, which cannot contribute significantly to solving the optimization problem. Methods: To obtain more optimal nodes for optimization and achieve superior automatic detection performance, we propose a novel broad neural network named self-adaptive evolutionary graph regularized broad learning system (SaE-GBLS). A self-adaptive evolutionary algorithm, which constructs mutation strategies in a strategy pool based on the experience of producing solutions for selecting network parameters, is incorporated into the SaE-GBLS model to optimize the node parameters. Epileptic seizures are automatically detected by the proposed SaE-GBLS model on three publicly available EEG datasets and one private clinical EEG dataset. Results and discussion: The experimental results indicate that our suggested strategy has the potential to perform as well as current machine learning approaches.
{"title":"Frontiers | SaE-GBLS: an effective self-adaptive evolutionary optimized graph-broad model for EEG-based automatic epileptic seizure detection","authors":"Liming Cheng, Jiaqi Xiong, Junwei Duan, Yuhang Zhang, Chun Chen, Jingxin Zhong, Zhiguo Zhou, Yujuan Quan","doi":"10.3389/fncom.2024.1379368","DOIUrl":"https://doi.org/10.3389/fncom.2024.1379368","url":null,"abstract":"IntroductionEpilepsy is a common neurological condition that affects a large number of individuals worldwide. One of the primary challenges in epilepsy is the accurate and timely detection of seizure. Recently, the graph regularized broad learning system (GBLS) has achieved superior performance improvement with its flat structure and less time-consuming training process compared to deep neural networks. Nevertheless, the number of feature and enhancement nodes in GBLS is predetermined. These node settings are also randomly selected and remain unchanged throughout the training process. The characteristic of randomness is thus more easier to make non-optimal nodes generate, which cannot contribute significantly to solving the optimization problem.MethodsTo obtain more optimal nodes for optimization and achieve superior automatic detection performance, we propose a novel broad neural network named self-adaptive evolutionary graph regularized broad learning system (SaE-GBLS). Self-adaptive evolutionary algorithm, which can construct mutation strategies in the strategy pool based on the experience of producing solutions for selecting network parameters, is incorporated into SaE-GBLS model for optimizing the node parameters. 
The epilepsy seizure is automatic detected by our proposed SaE-GBLS model based on three publicly available EEG datasets and one private clinical EEG dataset.Results and discussionThe experimental results indicate that our suggested strategy has the potential to perform as well as current machine learning approaches.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"16 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
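The self-adaptive strategy selection that gives SaE-type optimizers their name can be illustrated on a toy problem: mutation strategies are drawn from a pool with probabilities updated from their success history, in the spirit of self-adaptive differential evolution. The sphere objective below stands in for tuning GBLS node parameters, crossover is omitted for brevity, and all names and constants are illustrative assumptions rather than the paper's configuration.

```python
import random

def sphere(x):
    """Toy objective standing in for the real node-parameter fitness."""
    return sum(v * v for v in x)

def rand1(pop, i, F=0.5):
    """DE/rand/1 mutation: a + F * (b - c) from three other individuals."""
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]

def best1(pop, i, F=0.5):
    """DE/best/1 mutation: best + F * (a - b)."""
    best = min(pop, key=sphere)
    a, b = random.sample([p for j, p in enumerate(pop) if j != i], 2)
    return [xi + F * (ai - bi) for xi, ai, bi in zip(best, a, b)]

def sae_de(dim=3, pop_size=10, gens=60, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    pool = [rand1, best1]
    success = [1.0, 1.0]  # Laplace-smoothed success counts per strategy
    for _ in range(gens):
        # Selection probabilities reflect each strategy's past success.
        probs = [s / sum(success) for s in success]
        for i in range(pop_size):
            k = random.choices(range(len(pool)), weights=probs)[0]
            trial = pool[k](pop, i)
            if sphere(trial) < sphere(pop[i]):  # greedy survivor selection
                pop[i] = trial
                success[k] += 1  # reward the strategy that produced a winner
    return min(pop, key=sphere)
```

The key mechanism is the `success` vector: strategies that keep producing improving trials are sampled more often, so the optimizer adapts its own search behaviour instead of committing to one mutation scheme in advance.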