Spinal health forms the cornerstone of overall human body functionality, with the lumbar spine playing a critical role and being prone to various injuries arising from inflammation and disease, including lumbar vertebral fractures. This paper proposes an automated method for segmentation of the lumbar vertebral body (VB) using image processing techniques such as shape features and morphological operations. The pipeline begins with image preprocessing, followed by detection and localization of vertebral regions. The vertebrae are then segmented and labeled, and each is classified as normal or fractured using two classification techniques: k-nearest neighbors (KNN) and support vector machines (SVM). The methodology leverages distinctive vertebral characteristics such as gray levels, shape features, and texture through a range of machine learning methods. The approach is assessed and validated on a clinical spine dataset, achieving an average Dice score of 95% for segmentation and an average accuracy of 97.01% for classification.
"Analytical computation for segmentation and classification of lumbar vertebral fractures." Roseline Nyange, Hemachandran Kannan, Channabasava Chola, Saurabh Singh, Jaejeung Kim, Anil Audumbar Pise. Frontiers in Computational Neuroscience, 19:1536441. Pub Date: 2025-07-10 | DOI: 10.3389/fncom.2025.1536441
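The Dice score reported for segmentation measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch, using a toy 1-D binary mask rather than the paper's clinical data:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: equal-length sequences of 0/1 pixel labels.
    Returns 2*|A∩B| / (|A| + |B|), or 1.0 if both masks are empty.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 1-D "masks": predicted vs. ground-truth vertebral-body pixels.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_score(pred, truth))  # 0.8
```

The same formula extends to 2-D or 3-D masks by flattening them first; an average Dice of 95% means predicted vertebral bodies overlap the ground truth almost completely.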
Pub Date: 2025-07-09 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1627819
Fufeng Wang, Zihe Luo, Wei Lv, XiaoLin Zhu
ECoG signals are widely used in Brain-Computer Interfaces (BCIs) due to their high spatial resolution and superior signal quality, particularly in the field of neural control. ECoG enables more accurate decoding of brain activity than traditional EEG. Because cortical ECoG signals are obtained directly from the cerebral cortex, complex motor commands, such as finger movement trajectories, can be decoded more efficiently. However, existing studies still face significant challenges in accurately decoding finger movement trajectories. Specifically, current models tend to confuse the movement information of different fingers and fail to fully exploit the dependencies within time series when predicting long sequences, resulting in limited decoding performance. To address these challenges, this paper proposes a novel decoding method that transforms 2D ECoG data samples into 3D spatio-temporal spectrograms with time-stamped features via the wavelet transform. The method then decodes finger bending accurately using a 1D convolutional network built from dilated and transposed convolutions, which extract channel-band features and temporal variations in tandem. The proposed method achieved the best performance on the three subjects of BCI Competition IV. Compared with existing studies, our method is the first to push the correlation coefficient between the predicted and actual multi-finger motion trajectories above 80%, with the highest correlation coefficient reaching 82%. This approach provides new insights and solutions for high-precision decoding of brain-machine signals, particularly in precise command-control tasks, and advances the application of BCI systems in real-world neuroprosthetic control.
"DTCNet: finger flexion decoding with three-dimensional ECoG data." Frontiers in Computational Neuroscience, 19:1627819. DOI: 10.3389/fncom.2025.1627819
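The headline metric, the correlation coefficient between predicted and actual trajectories, is the standard Pearson r. A self-contained sketch with hypothetical flexion traces (the data here are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical finger-flexion traces (arbitrary units per time step).
actual = [0.0, 0.2, 0.5, 0.9, 0.6, 0.1]
predicted = [0.1, 0.25, 0.45, 0.8, 0.55, 0.2]
r = pearson_r(actual, predicted)
assert 0.9 < r <= 1.0  # closely tracking traces correlate near 1
```

A reported correlation above 0.80 therefore means the predicted trace rises and falls almost in lockstep with the measured finger movement.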
Pub Date: 2025-07-02 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1578135
Henrique Oyama, Takazumi Matsumoto, Jun Tani
Mind-wandering reflects a dynamic interplay between focused attention and off-task mental states. Despite its relevance to fundamental cognitive processes such as attention regulation, decision-making, and creativity, previous models have not yet provided an account of the neural mechanisms for autonomous shifts between the focus state (FS) and mind-wandering (MW). To address this, we conduct model simulation experiments employing predictive coding as a theoretical framework of perception to investigate possible neural mechanisms underlying these autonomous shifts between the two states. In particular, we modeled perception of continuous sensory sequences using our previously proposed variational RNN model under free energy minimization. The current study extends this model by introducing an online adaptation mechanism for a meta-level parameter, referred to as the meta-prior w, which regulates the complexity term in the free energy minimization. Our simulation experiments demonstrated that autonomous shifts between FS and MW take place when w switches between low and high values in response to decreases and increases in the average reconstruction error over a past time window. In particular, high w prioritized top-down predictions while low w emphasized bottom-up sensations. We speculate that self-awareness of MW may occur when the error signal accumulated over time exceeds a certain threshold. Finally, this paper explores how our experimental results align with existing studies and highlights their potential for future research.
"Modeling autonomous shifts between focus state and mind-wandering using a predictive-coding-inspired variational recurrent neural network." Frontiers in Computational Neuroscience, 19:1578135. DOI: 10.3389/fncom.2025.1578135
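The windowed-error rule for adapting the meta-prior can be sketched as below. The two w values, the window length, the threshold, and the direction of the switch are all illustrative assumptions, not the paper's settings:

```python
def adapt_meta_prior(errors, window=5, low=0.2, high=2.0, thresh=0.5):
    """Toy version of the online meta-prior adaptation: choose a low
    or high w from the average reconstruction error over the last
    `window` steps. All constants are placeholders, not the paper's.
    """
    recent = errors[-window:]
    avg = sum(recent) / len(recent)
    # High average error -> low w: down-weight the complexity term so
    # bottom-up sensory evidence dominates; low average error -> high
    # w: top-down predictions dominate (this mapping is an assumption).
    return low if avg > thresh else high

assert adapt_meta_prior([0.9, 0.8, 0.9, 1.0, 0.7]) == 0.2
assert adapt_meta_prior([0.1, 0.2, 0.1, 0.0, 0.1]) == 2.0
```

In the full model, w would multiply the KL (complexity) term of the free energy at each inference step rather than being read off as a bare value.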
Pub Date: 2025-07-02 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1571109
Peter Cariani, Janet M Baker
Here we present evidence that fine spike timing and temporal coding are broadly observed across sensory systems and widely conserved across diverse phyla, spanning invertebrates and vertebrates. A taxonomy of basic neural coding types includes channel activation patterns, temporal patterns of spikes, and patterns of spike latencies. Various examples and types of combined temporal-channel codes are discussed, including firing-sequence codes. Multiplexing of temporal codes and mixed channel-temporal codes is considered. Neurophysiological and perceptual evidence for temporal coding is surveyed across many sensory modalities: audition, mechanoreception, electroreception, vision, gustation, olfaction, the cutaneous senses, proprioception, and the vestibular sense. Precise phase-locked, phase-triggered, and spike-latency codes can be found in many sensory systems. Temporal resolutions on millisecond and submillisecond scales are common. General correlation-based representations and operations are discussed. In almost every modality there is some role for temporal coding, often in surprising places, such as color vision and taste. Further investigation of temporal coding is well warranted.
"Survey of temporal coding of sensory information." Frontiers in Computational Neuroscience, 19:1571109. DOI: 10.3389/fncom.2025.1571109
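The correlation-based operations the survey discusses can be illustrated with a minimal coincidence count between two spike-time lists; the coincidence window and the spike times below are illustrative, not drawn from any dataset in the survey:

```python
def spike_coincidences(train_a, train_b, window=0.002):
    """Count spikes in train_a that have a partner in train_b within
    +/- window seconds: a minimal correlation-style comparison of two
    temporal codes (a single-lag slice of a cross-correlogram).
    """
    return sum(1 for a in train_a
               if any(abs(a - b) <= window for b in train_b))

# Illustrative spike times in seconds; 2 ms coincidence window.
a = [0.010, 0.055, 0.120, 0.200]
b = [0.011, 0.090, 0.1205, 0.300]
print(spike_coincidences(a, b))  # 2
```

Sweeping the relative lag between the two trains and counting coincidences at each lag yields the full cross-correlogram commonly used to quantify fine spike-timing relationships.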
Pub Date: 2025-06-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1612928
Fawad Khan, Syed Yaseen Shah, Jawad Ahmad, Alanoud Al Mazroa, Adnan Zahid, Muhammed Ilyas, Qammer Hussain Abbasi, Syed Aziz Shah
Contactless Human Activity Recognition (HAR) plays a critical role in smart healthcare and elderly care homes, monitoring patient behavior and detecting falls or abnormal activities in real time. The effectiveness of non-invasive HAR is often hindered by location-centric variations in Channel State Information (CSI). These variations limit the ability of HAR models to generalize to new, unseen cross-domain environments; for instance, a model trained in one location may not perform well in another physical location. To address this challenge, we present a novel federated learning (FL) algorithm designed to train a robust global model from local datasets collected at different locations. The proposed Federated Weighted Averaging for HAR (Fed-WAHAR) algorithm mitigates location-induced disparities, including heterogeneity and non-Independent and Identically Distributed (non-IID) data distributions. Fed-WAHAR employs a dynamic weighting scheme based on each local model's accuracy to improve global classification accuracy and reduce convergence time. We evaluated Fed-WAHAR using accuracy, precision, recall, F1 score, confusion matrices, and convergence analysis. Experimental results demonstrate that Fed-WAHAR achieves 85% accuracy in recognizing human activities across different locations, enhancing the model's ability to generalize to new, unseen locations.
"Generalizing location-centric variations to enhance contactless human activity recognition." Frontiers in Computational Neuroscience, 19:1612928. DOI: 10.3389/fncom.2025.1612928
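The accuracy-weighted aggregation step at the heart of the approach can be sketched as below; the exact weighting rule in Fed-WAHAR may differ, so treat this as a minimal stand-in where each client's parameters are weighted by its local validation accuracy:

```python
def accuracy_weighted_average(models, accuracies):
    """Aggregate client parameter vectors into a global model,
    weighting each client by its local accuracy (a sketch of
    accuracy-based dynamic weighting; not the paper's exact rule).

    models: list of parameter vectors (lists of floats), one per client.
    accuracies: local validation accuracy per client.
    """
    total = sum(accuracies)
    weights = [a / total for a in accuracies]
    n_params = len(models[0])
    return [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(n_params)]

# Two clients: the more accurate one pulls the global model toward it.
global_params = accuracy_weighted_average([[1.0, 0.0], [0.0, 1.0]],
                                          [0.9, 0.1])
print(global_params)  # ~ [0.9, 0.1]
```

Plain FedAvg weights clients by dataset size instead; weighting by accuracy down-ranks clients whose local (location-specific) models generalize poorly.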
Pub Date: 2025-06-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1474860
Pegah Ramezani, Achim Schilling, Patrick Krauss
Understanding how language and linguistic constructions are processed in the brain is a fundamental question in cognitive computational neuroscience. This study builds directly on our previous work analyzing Argument Structure Constructions (ASCs) in the BERT language model, extending the investigation to a simpler, brain-constrained architecture: a recurrent neural language model. Specifically, we explore the representation and processing of four ASCs-transitive, ditransitive, caused-motion, and resultative-in a Long Short-Term Memory (LSTM) network. We trained the LSTM on a custom GPT-4-generated dataset of 2,000 syntactically balanced sentences. We then analyzed the internal hidden layer activations using Multidimensional Scaling (MDS) and t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize sentence representations. The Generalized Discrimination Value (GDV) was calculated to quantify cluster separation. Our results show distinct clusters for the four ASCs across all hidden layers, with the strongest separation observed in the final layer. These findings are consistent with our earlier study based on a large language model and demonstrate that even relatively simple RNNs can form abstract, construction-level representations. This supports the hypothesis that hierarchical linguistic structure can emerge through prediction-based learning. In future work, we plan to compare these model-derived representations with neuroimaging data from continuous speech perception, further bridging computational and biological perspectives on language processing.
"Analysis of argument structure constructions in a deep recurrent language model." Frontiers in Computational Neuroscience, 19:1474860. DOI: 10.3389/fncom.2025.1474860
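The cluster-separation idea behind the GDV can be illustrated with a simplified proxy: the gap between mean inter-class and mean intra-class distances. The actual GDV adds z-scaling and dimension normalization, so this is an assumption-laden sketch, not the paper's formula:

```python
import itertools
import math

def cluster_separation(points, labels):
    """Mean inter-class distance minus mean intra-class distance over
    all point pairs. Positive values indicate separated clusters.
    A simplified proxy for the GDV, which also z-scales the data and
    normalizes by dimensionality.
    """
    intra, inter = [], []
    for (p, lp), (q, lq) in itertools.combinations(zip(points, labels), 2):
        (intra if lp == lq else inter).append(math.dist(p, q))
    return sum(inter) / len(inter) - sum(intra) / len(intra)

# Two tight, well-separated 2-D clusters standing in for hidden-layer
# activations of two construction types.
pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
labs = ["transitive", "transitive", "ditransitive", "ditransitive"]
assert cluster_separation(pts, labs) > 0
```

Applied to hidden-layer activations, a growing gap across layers matches the paper's finding that construction clusters separate most strongly in the final layer.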
Pub Date: 2025-06-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1589247
Shaohua Zhang, Yan Feng, Ruzhen Chen, Song Huang, Qianchu Wang
EEG emotion recognition has important applications in human-computer interaction and mental health assessment, but existing models are limited in capturing the complex spatial and temporal features of EEG signals. To overcome this problem, we propose an innovative model that combines CNN-BiLSTM and DC-IGN branches and fuses their outputs for emotion classification via a fully connected layer. In addition, we use a piecewise exponential decay strategy to optimize the training process. We conducted comprehensive comparative experiments on the SEED and DEAP datasets, covering traditional models, existing advanced models, and combined models (such as CNN+LSTM and CNN+LSTM+DC-IGN). The results show that our model achieves 94.35% accuracy on the SEED dataset, 89.84% on DEAP-valence, and 90.31% on DEAP-arousal, significantly outperforming the other models. We further verified the model's superiority through subject-independent experiments and a comparison of learning-rate scheduling strategies. These results not only improve the performance of EEG emotion recognition but also provide new ideas and methods for related research, demonstrating the model's advantages in capturing complex features and improving classification accuracy.
"CNN-BiLSTM and DC-IGN fusion model and piecewise exponential attenuation optimization: an innovative approach to improve EEG emotion recognition performance." Frontiers in Computational Neuroscience, 19:1589247. DOI: 10.3389/fncom.2025.1589247
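The piecewise exponential decay schedule mentioned above is a staircase decay: the learning rate is multiplied by a fixed factor once per interval and held constant in between. A minimal sketch, with placeholder hyperparameters rather than the paper's values:

```python
def piecewise_exp_decay(step, base_lr=1e-3, decay_rate=0.9, decay_steps=100):
    """Staircase exponential decay: multiply the learning rate by
    decay_rate once every decay_steps optimizer steps, holding it
    constant within each interval. Hyperparameters are placeholders.
    """
    return base_lr * decay_rate ** (step // decay_steps)

# Constant within an interval, dropping at each boundary.
for step in (0, 99, 100, 250):
    print(step, piecewise_exp_decay(step))
```

Compared with smooth exponential decay, the flat segments let the optimizer settle at each rate before the next drop, which is the usual motivation for the piecewise variant.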
Pub Date: 2025-06-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1565552
Mustafa Zeki, Tamer Dag
Mathematical analysis of biological neural networks, specifically inhibitory networks with all-to-all connections, is challenging due to their complexity and non-linearity. In examining the dynamics of individual neurons, many fast currents are involved solely in spike generation, while slower currents play a significant role in shaping a neuron's behavior. We propose a discrete map approach to analyze the behavior of inhibitory neurons that exhibit bursting modulated by slow calcium currents, leveraging the time-scale differences among neural currents. This discrete map tracks the number of spikes per burst for individual neurons. We compared the map's predictions for the number of spikes per burst and the long-term system behavior to data obtained from the continuous system. Our findings demonstrate that the discrete map can accurately predict the canonical behavioral signatures of bursting performance observed in the continuous system. Specifically, we show that the proposed map a) accounts for the dependence of the number of spikes per burst on initial calcium levels, b) explains the roles of individual currents in shaping the system's behavior, and c) can be explicitly analyzed to determine fixed points and assess their stability.
"Reductionist modeling of calcium-dependent dynamics in recurrent neural networks." Frontiers in Computational Neuroscience, 19:1565552. DOI: 10.3389/fncom.2025.1565552
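Finding fixed points of a 1-D discrete map and checking their stability, as described for the calcium map, follows a standard recipe: iterate the map to convergence, then test whether the magnitude of the derivative at the fixed point is below 1. The contraction below is a toy stand-in, not the paper's calcium map:

```python
def find_fixed_point(f, x0, tol=1e-10, max_iter=10_000):
    """Iterate a 1-D discrete map x_{n+1} = f(x_n) until successive
    values agree to within tol; returns the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("map did not converge to a fixed point")

def is_stable(f, x_star, h=1e-6):
    """A fixed point of a 1-D map is stable when |f'(x*)| < 1;
    the derivative is estimated by a central difference."""
    deriv = (f(x_star + h) - f(x_star - h)) / (2 * h)
    return abs(deriv) < 1

# Toy linear contraction with fixed point x* = 2 (not the calcium map).
f = lambda x: 0.5 * x + 1.0
x_star = find_fixed_point(f, 0.0)
assert abs(x_star - 2.0) < 1e-8
assert is_stable(f, x_star)
```

For the paper's map the state would be the calcium level (or spikes-per-burst count) carried from one burst to the next, and stability of a fixed point corresponds to the network settling into a repeating burst pattern.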
Pub Date: 2025-06-09 | eCollection Date: 2025-01-01 | DOI: 10.3389/fncom.2025.1537284
Nojod M Alotaibi, Areej M Alhothali, Manar S Ali
Major depressive disorder (MDD) is one of the most common mental disorders globally and the second leading cause of disability, with significant impacts on daily activities and quality of life. The current diagnostic approach for MDD relies primarily on clinical observations and patient-reported symptoms, overlooking the diverse underlying causes and pathophysiological factors contributing to depression. Therefore, researchers and clinicians must gain a deeper understanding of the pathophysiological mechanisms involved in MDD. There is growing evidence in neuroscience that depression is a brain network disorder, and neuroimaging techniques such as magnetic resonance imaging (MRI) play a significant role in identifying and treating MDD. Resting-state functional MRI (rs-fMRI) is among the most popular neuroimaging techniques used to study MDD. Deep learning techniques have been widely applied to neuroimaging data to support early detection of mental health disorders. Recent years have seen rising interest in graph neural networks (GNNs), deep neural architectures specifically designed to handle graph-structured data such as rs-fMRI connectivity. This research aimed to develop an ensemble-based GNN model capable of detecting discriminative features from rs-fMRI images for diagnosing MDD. Specifically, we constructed an ensemble model that combines functional connectivity features from multiple brain-region segmentation atlases to capture brain complexity and detect distinctive features more accurately than single-atlas models. Further, the effectiveness of our model is demonstrated by assessing its performance on a large multi-site MDD dataset. We applied the synthetic minority over-sampling technique (SMOTE) to handle class imbalance across sites.
Using stratified 10-fold cross-validation, the best-performing model achieved an accuracy of 75.80%, a sensitivity of 88.89%, a specificity of 61.84%, a precision of 71.29%, and an F1-score of 79.12%. These results indicate that the proposed multi-atlas ensemble GNN model provides a reliable and generalizable solution for accurately detecting MDD.
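The five reported metrics all derive from the binary confusion matrix, with MDD taken as the positive class. A small helper makes the definitions explicit (the `tp`/`fp`/`tn`/`fn` names are ours, assuming the standard definitions, not code from the paper):

```python
# Standard binary-classification metrics from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)     # recall on the positive (MDD) class
    specificity = tn / (tn + fp)     # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Example: 8 true positives, 2 false positives, 6 true negatives, 4 false negatives.
acc, sens, spec, prec, f1 = classification_metrics(8, 2, 6, 4)
```

Note that a high sensitivity with lower specificity, as reported above, means the model misses few MDD cases but flags a fair share of healthy controls.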
{"title":"Multi-atlas ensemble graph neural network model for major depressive disorder detection using functional MRI data.","authors":"Nojod M Alotaibi, Areej M Alhothali, Manar S Ali","doi":"10.3389/fncom.2025.1537284","DOIUrl":"10.3389/fncom.2025.1537284","url":null,"abstract":"<p><p>Major depressive disorder (MDD) is one of the most common mental disorders, with significant impacts on many daily activities and quality of life. It stands as one of the most common mental disorders globally and ranks as the second leading cause of disability. The current diagnostic approach for MDD primarily relies on clinical observations and patient-reported symptoms, overlooking the diverse underlying causes and pathophysiological factors contributing to depression. Therefore, scientific researchers and clinicians must gain a deeper understanding of the pathophysiological mechanisms involved in MDD. There is growing evidence in neuroscience that depression is a brain network disorder, and the use of neuroimaging, such as magnetic resonance imaging (MRI), plays a significant role in identifying and treating MDD. Rest-state functional MRI (rs-fMRI) is among the most popular neuroimaging techniques used to study MDD. Deep learning techniques have been widely applied to neuroimaging data to help with early mental health disorder detection. Recent years have seen a rise in interest in graph neural networks (GNNs), which are deep neural architectures specifically designed to handle graph-structured data like rs-fMRI. This research aimed to develop an ensemble-based GNN model capable of detecting discriminative features from rs-fMRI images for the purpose of diagnosing MDD. Specifically, we constructed an ensemble model by combining functional connectivity features from multiple brain region segmentation atlases to capture brain complexity and detect distinct features more accurately than single atlas-based models. 
Further, the effectiveness of our model is demonstrated by assessing its performance on a large multi-site MDD dataset. We applied the synthetic minority over-sampling technique (SMOTE) to handle class imbalance across sites. Using stratified 10-fold cross-validation, the best performing model achieved an accuracy of 75.80%, a sensitivity of 88.89%, a specificity of 61.84%, a precision of 71.29%, and an F1-score of 79.12%. The results indicate that the proposed multi-atlas ensemble GNN model provides a reliable and generalizable solution for accurately detecting MDD.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1537284"},"PeriodicalIF":2.1,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12183270/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144474463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-06; eCollection Date: 2025-01-01; DOI: 10.3389/fncom.2025.1591972
Fatima Asiri, Wajdan Al Malwi, Tamara Zhukabayeva, Ibtehal Nafea, Abdullah Aziz, Nadhmi A Gazem, Abdullah Qayyum
Introduction: Preserving privacy is a critical concern in medical imaging, especially in resource-limited settings such as smart devices connected to the Internet of Things (IoT). To address this, we develop a novel encryption method for medical images that operates at the bit-plane level and is tailored for IoT environments.
Methods: The approach begins by hashing the original image with the Secure Hash Algorithm (SHA) to derive the initial conditions for the Chen chaotic map. The Chen chaotic system then generates three random number vectors. The first two vectors shuffle each bit plane of the plaintext image by rearranging its rows and columns. The third vector is used to create a random matrix, which further diffuses the permuted bit planes. Finally, the bit planes are combined to produce the ciphertext image. For further security enhancement, this ciphertext is embedded into a carrier image, resulting in a visually secured output.
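The pipeline described in the Methods can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Chen-system parameters, the hash-to-initial-condition mapping, the Euler step size, and the XOR diffusion step are all assumptions.

```python
import hashlib
import numpy as np

def chen_sequences(x0, y0, z0, n, a=35.0, b=3.0, c=28.0, dt=0.001):
    """Euler-integrate the Chen system; return the three coordinate sequences."""
    xs, ys, zs = np.empty(n), np.empty(n), np.empty(n)
    x, y, z = x0, y0, z0
    for i in range(n):
        x, y, z = (x + dt * a * (y - x),
                   y + dt * ((c - a) * x - x * z + c * y),
                   z + dt * (x * y - b * z))
        xs[i], ys[i], zs[i] = x, y, z
    return xs, ys, zs

def keystream(shape, digest):
    """Row/column permutations and a diffusion matrix from a SHA-256 digest."""
    h, w = shape
    x0, y0, z0 = (1 + int.from_bytes(digest[i:i + 8], "big") / 2**64
                  for i in (0, 8, 16))
    xs, ys, zs = chen_sequences(x0, y0, z0, max(h, w) + h * w)
    row_perm = np.argsort(xs[:h])          # vector 1: shuffles rows
    col_perm = np.argsort(ys[:w])          # vector 2: shuffles columns
    key = (np.floor(np.abs(zs[-h * w:]) * 1e6) % 256).astype(np.uint8)
    return row_perm, col_perm, key.reshape(h, w)  # vector 3: diffusion matrix

def encrypt(img):
    digest = hashlib.sha256(img.tobytes()).digest()  # SHA seeds the chaos
    row_perm, col_perm, key = keystream(img.shape, digest)
    out = np.zeros_like(img)
    for bit in range(8):                   # shuffle each bit plane, then recombine
        plane = (img >> bit) & 1
        out |= plane[row_perm][:, col_perm] << bit
    return out ^ key, digest               # XOR diffusion; digest serves as the key

def decrypt(enc, digest):
    row_perm, col_perm, key = keystream(enc.shape, digest)
    inv_r, inv_c = np.argsort(row_perm), np.argsort(col_perm)
    mid = enc ^ key                        # undo diffusion, then the permutation
    out = np.zeros_like(enc)
    for bit in range(8):
        out |= (((mid >> bit) & 1)[inv_r][:, inv_c]) << bit
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc, digest = encrypt(img)
```

In this sketch the plaintext hash doubles as the decryption key, so it would have to be transmitted securely alongside the ciphertext; the carrier-image embedding step is omitted.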
Results: To evaluate the effectiveness of our algorithm, various tests are conducted, including correlation coefficient analysis (adjacent-pixel correlations close to zero or negative), histogram analysis, key-space analysis [(10^90)^8] and sensitivity assessments, entropy evaluation [E(S) > 7.98], and occlusion analysis.
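Two of the statistical tests above are easy to make concrete. The helpers below are our own sketch (names and details are not from the paper): Shannon entropy of the cipher image, and the correlation between horizontally adjacent pixel pairs.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy in bits; 8.0 is the ideal for an 8-bit cipher image."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size
    return float(-np.sum(p * np.log2(p)))

def adjacent_correlation(img):
    """Pearson correlation between each pixel and its right-hand neighbour."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

# A good cipher image behaves like uniform noise: entropy near 8 and
# adjacent-pixel correlation near zero, matching the reported E(S) > 7.98.
noise = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
```

Natural medical images score very differently (entropy well below 8, adjacent correlation near 1), which is what makes these two measures useful sanity checks.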
Conclusion: Extensive evaluations show that the designed scheme is highly resilient to attacks, making it particularly suitable for small IoT devices with limited processing power and memory.
{"title":"Enhancing medical image privacy in IoT with bit-plane level encryption using chaotic map.","authors":"Fatima Asiri, Wajdan Al Malwi, Tamara Zhukabayeva, Ibtehal Nafea, Abdullah Aziz, Nadhmi A Gazem, Abdullah Qayyum","doi":"10.3389/fncom.2025.1591972","DOIUrl":"10.3389/fncom.2025.1591972","url":null,"abstract":"<p><strong>Introduction: </strong>Preserving privacy is a critical concern in medical imaging, especially in resource limited settings like smart devices connected to the IoT. To address this, a novel encryption method for medical images that operates at the bit plane level, tailored for IoT environments, is developed.</p><p><strong>Methods: </strong>The approach initializes by processing the original image through the Secure Hash Algorithm (SHA) to derive the initial conditions for the Chen chaotic map. Using the Chen chaotic system, three random number vectors are generated. The first two vectors are employed to shuffle each bit plane of the plaintext image, rearranging rows and columns. The third vector is used to create a random matrix, which further diffuses the permuted bit planes. Finally, the bit planes are combined to produce the ciphertext image. 
For further security enhancement, this ciphertext is embedded into a carrier image, resulting in a visually secured output.</p><p><strong>Results: </strong>To evaluate the effectiveness of our algorithm, various tests are conducted, including correlation coefficient analysis (<i>C</i>.<i>C</i> < or negative), histogram analysis, key space [(10<sup>90</sup>)<sup>8</sup>] and sensitivity assessments, entropy evaluation [<i>E</i>(<i>S</i>) > 7.98], and occlusion analysis.</p><p><strong>Conclusion: </strong>Extensive evaluations have proven that the designed scheme exhibits a high degree of resilience to attacks, making it particularly suitable for small IoT devices with limited processing power and memory.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"19 ","pages":"1591972"},"PeriodicalIF":2.1,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12179213/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144474434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}