Electroencephalogram (EEG) plays a pivotal role in the detection and analysis of epileptic seizures, which affect over 70 million people worldwide. Nonetheless, visual interpretation of EEG signals for epilepsy detection is laborious and time-consuming. To tackle this open challenge, we introduce a straightforward yet efficient hybrid deep learning approach, named ResBiLSTM, for detecting epileptic seizures using EEG signals. First, a one-dimensional residual neural network (ResNet) is tailored to adeptly extract the local spatial features of EEG signals. Subsequently, the acquired features are fed into a bidirectional long short-term memory (BiLSTM) layer to model temporal dependencies. These output features are further processed through two fully connected layers to achieve the final epileptic seizure detection. The performance of ResBiLSTM is assessed on the epileptic seizure datasets provided by the University of Bonn and Temple University Hospital (TUH). The ResBiLSTM model achieves epileptic seizure detection accuracy rates of 98.88-100% in binary and ternary classifications on the Bonn dataset. Experimental outcomes for seizure recognition across seven seizure types on the TUH seizure corpus (TUSZ) dataset indicate that the ResBiLSTM model attains a classification accuracy of 95.03% and a weighted F1 score of 95.03% with 10-fold cross-validation. These findings illustrate that ResBiLSTM outperforms several recent state-of-the-art deep learning approaches.
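The weighted F1 score reported above averages per-class F1 scores with each class's support (number of true samples) as the weight. A minimal sketch of the computation, using toy labels rather than values from the paper:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 scores averaged with class support as weights."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in sorted(support):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score

# Toy three-class example (hypothetical labels, not dataset values)
print(round(weighted_f1([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2]), 3))  # → 0.822
```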
Residual and bidirectional LSTM for epileptic seizure detection. Wei Zhao, Wen-Feng Wang, Lalit Mohan Patnaik, Bao-Can Zhang, Su-Jun Weng, Shi-Xiao Xiao, De-Zhi Wei, Hai-Feng Zhou. Frontiers in Computational Neuroscience 18:1415967 (2024-06-17). DOI: 10.3389/fncom.2024.1415967. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11215953/pdf/
Pub Date: 2024-05-30 DOI: 10.3389/fncom.2024.1355855
Akito Fukunishi, Kyo Kutsuzawa, Dai Owaki, Mitsuhiro Hayashibe
How our central nervous system efficiently controls our complex musculoskeletal system is still debated. The muscle synergy hypothesis has been proposed to simplify this complex system by assuming the existence of functional neural modules that coordinate several muscles. Modularity based on muscle synergies can facilitate motor learning without compromising task performance. However, the effectiveness of modularity in motor control remains debated. This ambiguity can, in part, stem from overlooking that the performance of modularity depends on the mechanical aspects of the modules of interest, such as the torque the modules exert. To address this issue, this study introduces two criteria to evaluate the quality of module sets based on commonly used performance metrics in motor learning studies: the accuracy of torque production and learning speed. One evaluates the regularity in the direction of the mechanical torque the modules exert, while the other evaluates the evenness of its magnitude. To verify our criteria, we simulated motor learning of torque production tasks in a realistic musculoskeletal system of the upper arm using feed-forward neural networks while changing the control conditions. We found that the proposed criteria successfully explain the tendency of learning performance under various control conditions. These results suggest that regularity in the direction of, and evenness in the magnitude of, the mechanical torque of the utilized modules are significant factors in determining learning performance. Although the criteria were originally conceived for an error-based learning scheme, the approach of pursuing which set of modules is better for motor control can have significant implications for other studies of modularity in general.
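The abstract does not give the exact formulas for the two criteria, but one plausible reading is a direction-regularity score (how evenly the modules' torque directions are spaced) and a magnitude-evenness score (how similar the torque magnitudes are). A hedged sketch of that interpretation for planar torque vectors:

```python
import math

def magnitude_evenness(torques):
    """Evenness of torque magnitudes as a min/max ratio (1.0 = perfectly even).
    Illustrative criterion only; the paper's exact formulation may differ."""
    mags = [math.hypot(tx, ty) for tx, ty in torques]
    return min(mags) / max(mags)

def direction_regularity(torques):
    """Regularity of torque directions: 1 minus the normalized deviation of
    angular gaps from perfectly even spacing (again, an illustrative proxy)."""
    angles = sorted(math.atan2(ty, tx) % (2 * math.pi) for tx, ty in torques)
    n = len(angles)
    gaps = [(angles[(i + 1) % n] - angles[i]) % (2 * math.pi) for i in range(n)]
    ideal = 2 * math.pi / n
    dev = sum(abs(g - ideal) for g in gaps) / (2 * math.pi)
    return max(0.0, 1.0 - dev)

# Four equal-magnitude modules spaced 90° apart score 1.0 on both criteria.
even_set = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
```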
Synergy quality assessment of muscle modules for determining learning performance using a realistic musculoskeletal model. Akito Fukunishi, Kyo Kutsuzawa, Dai Owaki, Mitsuhiro Hayashibe. Frontiers in Computational Neuroscience (2024-05-30). DOI: 10.3389/fncom.2024.1355855
The spiking convolutional neural network (SCNN) is a kind of spiking neural network (SNN) that offers high accuracy on visual tasks and power efficiency on neuromorphic hardware, which is attractive for edge applications. However, it is challenging to implement SCNNs on resource-constrained edge devices because of the large number of convolutional operations and the membrane potential (Vm) storage needed. Previous works have focused on timestep reduction, network pruning, and network quantization to realize SCNN implementation on edge devices. However, they overlooked similarities between spiking feature maps (SFmaps), which contain significant redundancy and cause unnecessary computation and storage. This work proposes a dual-threshold spiking convolutional neural network (DT-SCNN) that decreases the number of operations and memory accesses by exploiting similarities between SFmaps. The DT-SCNN employs dual firing thresholds to derive two similar SFmaps from one Vm map, reducing the number of convolutional operations and halving the volume of Vms and convolutional weights. We propose a variant spatio-temporal back-propagation (STBP) training method with a two-stage strategy to train DT-SCNNs and decrease the inference timestep to 1. The experimental results show that the dual-threshold mechanism achieves a 50% reduction in operations and data storage for the convolutional layers compared to conventional SCNNs, with no more than a 0.4% accuracy loss on the CIFAR10, MNIST, and Fashion MNIST datasets. Due to the lightweight network and single-timestep inference, the DT-SCNN has the fewest operations compared to previous works, paving the way for low-latency and power-efficient edge applications.
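The core dual-threshold idea, deriving two similar spiking feature maps from a single membrane-potential (Vm) map, can be sketched as below. The threshold values are arbitrary illustrations, not values from the paper:

```python
def dual_threshold_spikes(vm_map, theta_low, theta_high):
    """Derive two binary spiking feature maps from one Vm map by applying two
    firing thresholds (a sketch of the DT-SCNN idea; thresholds are made up)."""
    low = [[1 if v >= theta_low else 0 for v in row] for row in vm_map]
    high = [[1 if v >= theta_high else 0 for v in row] for row in vm_map]
    return low, high

# One stored Vm map yields two SFmaps; the high-threshold map is a subset of
# the low-threshold one, which is the similarity the DT-SCNN exploits.
vm = [[0.2, 0.7], [1.1, 0.4]]
low, high = dual_threshold_spikes(vm, 0.3, 0.9)
```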
DT-SCNN: dual-threshold spiking convolutional neural network with fewer operations and memory access for edge applications. Fuming Lei, Xu Yang, Jian Liu, Runjiang Dou, Nanjian Wu. Frontiers in Computational Neuroscience (2024-05-30). DOI: 10.3389/fncom.2024.1418115
Pub Date: 2024-05-28 DOI: 10.3389/fncom.2024.1398898
Bernard A. Pailthorpe
Network analysis of the marmoset cortical connectivity data indicates a significant 3D cluster in and around the pre-frontal cortex. A multi-node, heterogeneous neural mass model of this six-node cluster was constructed. Its parameters were informed by available experimental and simulation data so that each neural mass oscillated in a characteristic frequency band. Nodes were connected with directed, weighted links derived from the marmoset structural connectivity data. Heterogeneity arose from the different link weights and model parameters for each node. Stimulation of the cluster with an incident pulse train modulated in the standard frequency bands induced a variety of dynamical state transitions that lasted in the range of 5–10 s, suggestive of timescales relevant to short-term memory. A short gamma burst rapidly reset the beta-induced transition. The theta-induced transition state showed a spontaneous, delayed reset to the resting state. An additional, continuous gamma wave stimulus induced a new beating oscillatory state. Longer or repeated gamma bursts were phase-aligned with the beta oscillation, delivering increasing energy input and causing shorter transition times. The relevance of these results to working memory is yet to be established, but they suggest interesting opportunities.
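A multi-node cluster of nodes with characteristic frequencies and directed, weighted links can be caricatured with phase oscillators. This is only a generic Kuramoto-style toy, not the paper's heterogeneous neural mass model, and all parameter values are made up:

```python
import math

def simulate_phases(freqs_hz, weights, t_end=1.0, dt=0.001, coupling=0.5):
    """Euler-integrate directed, weighted phase coupling among nodes, each with
    its own characteristic frequency (toy analogue of a multi-node cluster)."""
    n = len(freqs_hz)
    phases = [0.0] * n
    for _ in range(int(t_end / dt)):
        new = []
        for i in range(n):
            # weights[i][j]: strength of the directed link j -> i
            drive = sum(weights[i][j] * math.sin(phases[j] - phases[i])
                        for j in range(n))
            new.append(phases[i] + dt * (2 * math.pi * freqs_hz[i] + coupling * drive))
        phases = new
    return phases

# Uncoupled sanity check: a 1 Hz node advances by 2*pi radians in 1 s.
final = simulate_phases([1.0], [[0.0]])
```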
Simulated dynamical transitions in a heterogeneous marmoset pFC cluster. Bernard A. Pailthorpe. Frontiers in Computational Neuroscience (2024-05-28). DOI: 10.3389/fncom.2024.1398898
Pub Date: 2024-05-16 DOI: 10.3389/fncom.2024.1240348
Kyle Daruwalla, Mikko Lipasti
Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware. Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible. Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains intangible. In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit due to the layers' feedforward connectivity. These rules take the form of a three-factor Hebbian update: a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, whereas the brain only sees a single sample at a time. We propose a new three-factor update rule in which the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network. We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates.
To the best of our knowledge, this is the first rule to make this link explicit. We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning where each layer balances memory-informed compression against task performance. This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
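The general shape of a three-factor Hebbian update, a global (layer-wide) modulatory signal scaling the local pre/post product at each synapse, can be sketched as follows. This shows only the rule's generic form, not the paper's IB-derived global signal:

```python
def three_factor_update(w, pre, post, global_signal, lr=0.01):
    """Three-factor Hebbian rule: delta_w[i][j] = lr * global_signal * pre[j] * post[i].
    The global signal is the third factor modulating the local Hebbian term."""
    return [[w[i][j] + lr * global_signal * pre[j] * post[i]
             for j in range(len(pre))]
            for i in range(len(post))]

# One update step on a 1x2 weight matrix with a unit global signal.
w_new = three_factor_update([[0.0, 0.0]], pre=[1.0, 2.0], post=[1.0],
                            global_signal=1.0, lr=0.1)
```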
Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates. Kyle Daruwalla, Mikko Lipasti. Frontiers in Computational Neuroscience (2024-05-16). DOI: 10.3389/fncom.2024.1240348
Introduction: Novel technologies based on virtual reality (VR) are creating attractive virtual environments with high ecological value, used both in basic/clinical neuroscience and modern medical practice. The study aimed to evaluate the effects of VR-based training in an elderly population. Materials and methods: The study included 36 women over the age of 60, who were randomly divided into two groups subjected to balance-strength and balance-cognitive training. The research applied conventional clinical tests, such as (a) the Timed Up and Go test, (b) the five-times sit-to-stand test, and (c) the posturographic exam with the Romberg test with eyes open and closed. Training in both groups was conducted for 10 sessions and embraced exercises on a bicycle ergometer and exercises using non-immersive VR created by the ActivLife platform. Machine learning methods with a k-nearest neighbors classifier, which are very effective and popular, were proposed to statistically evaluate the differences in training effects between the two groups. Results and conclusion: The study showed that training using VR brought beneficial improvement in clinical tests, and changes in the pattern of posturographic trajectories were observed. An important finding of the research was a statistically significant reduction in the risk of falls in the study population. The use of virtual environments in exercise/training has great potential in promoting healthy aging and preventing balance loss and falls among seniors.
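A k-nearest neighbors classifier of the kind used in the study simply votes among the closest training points. A minimal sketch with made-up feature vectors and labels (not the study's data):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance); a bare-bones sketch of the k-NN family."""
    dists = sorted((math.dist(p, x), label) for p, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features for two training groups, then a query point.
X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
y = ["pre", "pre", "post", "post"]
label = knn_predict(X, y, [0.2, 0.5], k=3)  # → "pre"
```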
A machine learning approach to evaluate the impact of virtual balance/cognitive training on fall risk in older women. Beata Sokołowska, Wiktor Świderski, Edyta Smolis-Bąk, Ewa Sokołowska, Teresa Sadura-Sieklucka. Frontiers in Computational Neuroscience (2024-05-14). DOI: 10.3389/fncom.2024.1390208
Pub Date: 2024-05-09 DOI: 10.3389/fncom.2024.1327986
Peter Kan, Yong Fang Zhu, Junling Ma, Gurmit Singh
Objective: Nav1.8 expression is restricted to sensory neurons; it was hypothesized that aberrant expression and function of this channel at the site of injury contribute to pathological pain. However, the specific contributions of Nav1.8 to neuropathic pain are not as clear as its role in inflammatory pain. The aim of this study is to understand how Nav1.8 in peripheral sensory neurons regulates neuronal excitability and induces various electrophysiological features of neuropathic pain. Methods: To study the effect of changes in sodium channel Nav1.8 kinetics, Hodgkin–Huxley-type conductance-based models of spiking neurons were constructed using the NEURON v8.2 simulation software. We constructed a single-compartment model of a neuronal soma containing Nav1.8 channels, with ionic mechanisms adapted from existing small DRG neuron models. We then validated and compared the model against our experimental data from in vivo recordings of the somata of small dorsal root ganglion (DRG) sensory neurons in animal models of neuropathic pain (NEP). Results: We show that Nav1.8 is an important parameter for the generation and maintenance of abnormal neuronal electrogenesis and hyperexcitability. The typical increase in excitability is dominated by a left shift in the steady state of activation of this channel and is further modulated by the channel's maximum conductance and steady state of inactivation. Therefore, the modified action potential shape, decreased threshold, and increased repetitive firing of sensory neurons in our neuropathic animal models may be orchestrated by these modulations of Nav1.8. Conclusion: Computational modeling is a novel strategy for understanding the generation of chronic pain. In this study, we highlight that changes to the channel functions of Nav1.8 within small DRG neurons may contribute to neuropathic pain.
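The left shift in the steady state of activation can be illustrated with a Boltzmann curve m_inf(V): moving the half-activation voltage to more negative values raises activation at any given membrane voltage, favoring hyperexcitability. The parameter values below are illustrative, not fitted Nav1.8 kinetics:

```python
import math

def steady_state_activation(v, v_half, slope):
    """Boltzmann steady-state activation m_inf(V) = 1 / (1 + exp(-(V - v_half)/slope)).
    A left shift corresponds to a more negative v_half (illustrative values)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

# At V = -30 mV, a 10 mV left shift of v_half increases the open fraction.
control = steady_state_activation(-30.0, v_half=-20.0, slope=6.0)
shifted = steady_state_activation(-30.0, v_half=-30.0, slope=6.0)
```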
Computational modeling to study the impact of changes in Nav1.8 sodium channel on neuropathic pain. Peter Kan, Yong Fang Zhu, Junling Ma, Gurmit Singh. Frontiers in Computational Neuroscience (2024-05-09). DOI: 10.3389/fncom.2024.1327986
Pub Date: 2024-05-09 DOI: 10.3389/fncom.2024.1365727
Aaron Kujawa, Reuben Dorent, Steve Connor, Suki Thomson, Marina Ivory, Ali Vahedi, Emily Guilhem, Navodini Wijethilake, Robert Bradford, Neil Kitchen, Sotirios Bisdas, Sebastien Ourselin, Tom Vercauteren, Jonathan Shapey
Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has the potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardized MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy. We acquired and publicly released a curated multi-center routine clinical (MC-RC) dataset of 160 patients, each with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w) (n = 124) and T2-weighted (T2w) (n = 363) images were included and the VS manually annotated. Segmentations were produced and verified in an iterative process: (1) initial segmentations by a specialized company; (2) review by one of three trained radiologists; and (3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset. The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. Dice similarity coefficients (DSC) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2 (9.5) for ceT1w, 89.4 (7.0) for T2w, and 86.4 (8.6) for combined ceT1w+T2w input images.
On another public dataset, acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3(2.9), 92.8(3.8), and 95.5(3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalize well, as illustrated by significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models. The MC-RC dataset and all trained deep learning models were made available online.
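The study's headline metric, the Dice similarity coefficient, measures the overlap between a predicted mask A and a reference mask B as DSC = 2|A∩B| / (|A| + |B|). A minimal sketch of the computation on binary masks (the toy 4×4 arrays are invented for illustration; this is not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks: prediction covers 4 voxels, reference covers 3, overlap is 3.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 3))  # 2*3/(4+3) ≈ 0.857
```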
Deep learning for automatic segmentation of vestibular schwannoma: a retrospective study from multi-center routine MRI. Frontiers in Computational Neuroscience, published 2024-05-09, DOI: 10.3389/fncom.2024.1365727.
Pub Date: 2024-05-02 DOI: 10.3389/fncom.2024.1340019
Bozhi Qiu, Sheng Li, Lei Wang
Harnessing the remarkable ability of the human brain to recognize and process complex data is a significant challenge for researchers, particularly in the domain of point cloud classification—a technology that aims to replicate the neural structure of the brain for spatial recognition. Raw 3D point cloud data often suffer from noise, sparsity, and disorder, making accurate classification a formidable task, especially when extracting local features. In this study, we therefore propose a novel attention-based end-to-end point cloud downsampling classification method, termed PointAS, an experimental algorithm designed to be adaptable to various downstream tasks. PointAS consists of two primary modules: the adaptive sampling module and the attention module. Specifically, the attention module aggregates global features with the input point cloud data, while the adaptive module extracts local features. In the point cloud classification task, our method surpasses existing downsampling methods by a significant margin, allowing edge points to be extracted more precisely so that overall contour features are captured accurately. The classification accuracy of PointAS consistently exceeds 80% across a wide range of sampling ratios and remains at 75.37% even at ultra-high sampling ratios. Moreover, our method is robust, maintaining classification accuracies of 72.50% or higher under different noise disturbances.
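The abstract describes scoring points against aggregated global features and sampling accordingly. A toy, untrained sketch of that idea, with a fixed random projection standing in for PointAS's learned attention and adaptive modules (the function name and every parameter below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def attention_downsample(points: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Toy attention-style downsampling: score each point by the similarity of
    its embedding to a mean-pooled global feature, then keep the top-k points.
    PointAS's modules are trained end to end; here the projection is random."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((points.shape[1], 16))  # stand-in for a learned embedding
    feats = np.tanh(points @ W)                     # per-point features, shape (N, 16)
    global_feat = feats.mean(axis=0)                # mean-pooled global descriptor
    scores = feats @ global_feat                    # attention-like relevance scores
    idx = np.argsort(scores)[-k:]                   # indices of the k highest scores
    return points[idx]

cloud = np.random.default_rng(1).standard_normal((1024, 3))  # synthetic point cloud
sampled = attention_downsample(cloud, k=128)
print(sampled.shape)  # (128, 3)
```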
PointAS: an attention based sampling neural network for visual perception. Frontiers in Computational Neuroscience, DOI: 10.3389/fncom.2024.1340019.
Pub Date: 2024-05-02 DOI: 10.3389/fncom.2024.1385047
Hui Tian, Xin Su, Yanfang Hou
Background: As an important mathematical model, the finite state machine (FSM) has been used in many fields, such as manufacturing systems and health care. This paper analyzes the current state of FSM research and points out that traditional methods are often inconvenient for analysis and design, or run into high computational complexity, when studying FSMs.
Method: The deep Q-network (DQN) technique, a model-free optimization method, is introduced to solve the stabilization problem of probabilistic finite state machines (PFSMs). To make the technique easier to follow, some preliminaries are recalled, including the Markov decision process, the ϵ-greedy strategy, and DQN itself.
Results: First, a necessary and sufficient stabilizability condition for PFSMs is derived. Next, the feedback stabilization problem of PFSMs is transformed into an optimization problem. Finally, by using the stabilizability condition and the deep Q-network, an algorithm for solving the optimization problem (equivalently, computing a state-feedback stabilizer) is provided.
Discussion: Compared with traditional Q-learning, DQN avoids the limited-capacity problem, so our method can handle high-dimensional complex systems efficiently. The effectiveness of our method is further demonstrated through an illustrative example.
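To illustrate the stabilization setup, here is a toy sketch in which reinforcement learning with an ϵ-greedy strategy learns a state-feedback law driving a small PFSM to a target state. Tabular Q-learning stands in for the paper's DQN, and the three-state machine with its transition probabilities is invented for illustration:

```python
import random

# Toy PFSM: states {0, 1, 2}, inputs {0, 1}; P[s][a] lists (next_state, prob).
# Goal: learn a state-feedback law u(s) that drives the machine to state 2.
P = {
    0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 0.9), (2, 0.1)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(2, 0.9), (1, 0.1)]},
    2: {0: [(2, 1.0)],           1: [(2, 0.7), (0, 0.3)]},
}
TARGET = 2

def step(s, a):
    """Sample the next state from the probabilistic transition structure."""
    r, acc = random.random(), 0.0
    for nxt, p in P[s][a]:
        acc += p
        if r <= acc:
            return nxt
    return P[s][a][-1][0]

random.seed(0)
Q = {s: [0.0, 0.0] for s in P}      # tabular Q-values (the paper uses a DQN here)
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate
for episode in range(2000):
    s = random.choice([0, 1])
    for _ in range(20):
        # ϵ-greedy action selection
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2 = step(s, a)
        reward = 1.0 if s2 == TARGET else 0.0
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy = the learned state-feedback law, mapping state -> input
policy = {s: max((0, 1), key=lambda a: Q[s][a]) for s in P}
print(policy)
```

With these transition probabilities, the learned law keeps the machine at the target with input 0 once it arrives, and steers toward it with input 1 from state 1.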
Feedback stabilization of probabilistic finite state machines based on deep Q-network. Frontiers in Computational Neuroscience, DOI: 10.3389/fncom.2024.1385047.