Electroencephalography (EEG) can objectively reflect an individual's emotional state. However, due to significant inter-subject differences, existing methods generalize poorly when recognizing emotions across individuals. We therefore propose an EEG emotion classification framework based on deep feature aggregation and multi-source domain adaptation. First, we design a deep feature aggregation module that introduces a novel approach for extracting EEG hemisphere asymmetry features and integrates these features with the frequency and spatiotemporal characteristics of the EEG signals. Additionally, a multi-source domain adaptation strategy is proposed in which multiple independent feature extraction sub-networks process each domain separately, extracting discriminative features and thereby alleviating the feature shift between domains. A domain adaptation strategy then aligns multiple source domains with the target domain, reducing inter-domain distribution discrepancies and facilitating effective cross-domain knowledge transfer. Simultaneously, to enhance learning on target samples near the decision boundary, pseudo-labels are dynamically generated for the unlabeled samples in the target domain: leveraging predictions from multiple classifiers, we calculate the average confidence of each pseudo-label group and select the set with the highest confidence as the final label for the target sample. Finally, the mean of the outputs from multiple classifiers is used as the model's final prediction. A comprehensive set of experiments was performed on the publicly available SEED and SEED-IV datasets. The findings indicate that the proposed method outperforms alternative methods.
EEG emotion recognition across subjects based on deep feature aggregation and multi-source domain adaptation. Kunqiang Lin, Ying Li, Yiren He, Zihan Jiang, Renjie He, Xianzhe Wang, Hongxu Guo, Lei Guo. Cognitive Neurodynamics 20(1):8 (2026). DOI: 10.1007/s11571-025-10379-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644276/pdf/
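The pseudo-label selection and classifier-averaging steps described in this abstract can be sketched as follows. This is a minimal illustration with made-up shapes, not the authors' implementation: `probs` stands for the softmax outputs of the source-specific classifiers.

```python
import numpy as np

def select_pseudo_labels(probs):
    """probs: (n_classifiers, n_samples, n_classes) softmax outputs,
    one slice per source-specific classifier (illustrative shapes)."""
    # Candidate label set proposed by each classifier
    labels = probs.argmax(axis=2)                  # (C, N)
    # Average confidence of each classifier's label set
    conf = probs.max(axis=2).mean(axis=1)          # (C,)
    best = int(conf.argmax())
    pseudo = labels[best]                          # highest-confidence set
    # Final prediction: mean of the classifier outputs
    final = probs.mean(axis=0).argmax(axis=1)
    return pseudo, final

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(3), size=(4, 5))         # 4 classifiers, 5 samples
pseudo, final = select_pseudo_labels(p)
```

Selecting a whole label set by its mean confidence, rather than per-sample, matches the abstract's phrasing of choosing "the pseudo-label set with the highest confidence"; the per-sample variant would be an equally plausible reading.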
Biometric traits are unique characteristics of an individual's body or behavior that can be used for identification and authentication. Biometric authentication uses unique physiological and behavioral traits for secure identity verification. Traditional unimodal biometric authentication systems often suffer from spoofing attacks, sensor noise, forgery, and environmental dependencies. To overcome these limitations, our work presents a multimodal biometric authentication system that integrates characteristics of electroencephalograph (EEG) signals and handwritten signatures to enhance security, efficiency, and robustness. EEG-based authentication exploits the intrinsic, hard-to-forge nature of brainwave patterns, while signature recognition contributes an additional behavioral trait. Our system processes an individual's 14-channel EEG readings together with signature images, ensuring a seamless fusion of both modalities. By combining physiological and behavioral biometrics, our approach significantly reduces the risks of unimodal authentication, including forgery, spoofing, and sensor failures. Our system, evaluated on a dataset of 30 subjects with genuine and forged data, achieves 97% accuracy. Designed for small organizations, its modular structure, low-computation algorithms, and simple hardware promote deployment scalability.
Multimodal biometric authentication systems: exploring EEG and signature. Banee Bandana Das, Chinthala Varnitha Reddy, Ujwala Matha, Chinni Yandapalli, Saswat Kumar Ram. Cognitive Neurodynamics 20(1):17 (2026). DOI: 10.1007/s11571-025-10389-w. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12681505/pdf/
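The abstract does not specify the fusion rule, so the following is only one plausible reading: a weighted score-level fusion of the two modality match scores. The weight and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_scores(s_eeg, s_sig, w=0.6, threshold=0.5):
    """Weighted score-level fusion of EEG and signature match scores.
    w and threshold are hypothetical; the paper's rule is unspecified."""
    s = w * np.asarray(s_eeg, dtype=float) + (1 - w) * np.asarray(s_sig, dtype=float)
    return s, s >= threshold   # fused score and accept/reject decision

# Two probe attempts: a genuine user and a likely impostor
scores, accept = fuse_scores([0.9, 0.2], [0.8, 0.4])
```

Feature-level fusion (concatenating EEG and signature embeddings before a single classifier) would be the other common design; score-level fusion is shown here because it keeps the two pipelines independent, which suits the low-computation deployment the authors target.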
Pub Date: 2026-12-01. Epub Date: 2025-12-26. DOI: 10.1007/s11571-025-10398-9
Victor J Barranca
Recent experiments have revealed that the inter-regional connectivity of the cerebral cortex exhibits strengths spanning several orders of magnitude and decaying with distance. We demonstrate this to be a fundamental organizing feature that fosters high complexity in both connectivity structure and network dynamics, achieving an advantageous balance between integration and differentiation of information. This is verified through analysis of a multi-scale neuronal network model with nonlinear integrate-and-fire dynamics, incorporating inter-regional connection strengths decaying exponentially with spatial separation at the macroscale as well as small-world local connectivity at the microscale. Through numerical simulation and optimization over the model parameter space, we show that inter-regional connectivity over intermediate spatial scales naturally facilitates maximally heterogeneous connection strengths, agreeing well with experimental measurements. In addition, we formulate complementary notions of structural and dynamical complexity, which are computationally feasible to calculate for large multi-scale networks, and we show that high complexity manifests for each over a similar parameter regime. We expect this work may help explain the link between distance-dependence in brain connectivity and the richness of neuronal network dynamics in achieving robust brain computations and effective information processing.
Distance-dependent connectivity in the brain facilitates high dynamical and structural complexity. Victor J Barranca. Cognitive Neurodynamics 20(1):23 (2026). DOI: 10.1007/s11571-025-10398-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743046/pdf/
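The macroscale ingredient of the model above, connection strengths decaying exponentially with spatial separation, can be sketched in a few lines. This is a toy one-dimensional version with illustrative parameters (`n`, `L`, `lam` are assumptions, not the paper's values); it only shows how such a rule produces strengths spanning orders of magnitude.

```python
import numpy as np

def exp_decay_connectivity(n=100, L=1.0, lam=0.2, seed=0):
    """Toy macroscale connectivity: strengths decay exponentially with
    pairwise distance for n regions placed uniformly on a segment of
    length L. lam is the spatial decay constant."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, L, n)
    d = np.abs(pos[:, None] - pos[None, :])   # pairwise distances
    W = np.exp(-d / lam)                       # exponential decay rule
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

W = exp_decay_connectivity()
# With lam well below the system size, strengths span orders of magnitude
span = W[W > 0].max() / W[W > 0].min()
```

Nearby regions receive strengths near 1 while the most distant pairs fall to roughly exp(-L/lam), so the ratio `span` grows rapidly as `lam` shrinks relative to the system size.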
Pub Date: 2026-12-01. Epub Date: 2026-02-03. DOI: 10.1007/s11571-026-10411-9
Uma Jaishankar, Jagannath H Nirmal, Girish Gidaye
Depression detection is a crucial method for assessing a person's mental health and degree of depression. A number of sophisticated methods and questionnaires have been created to identify depression through speech or conversation. Current systems are constrained by reduced effectiveness due to poor feature selection and extraction, problems with interpretability, and the difficulty of identifying depression across languages. The proposed model is therefore presented to offer improved accuracy and efficient performance. Adaptive threshold-based pre-processing (AdaT) eliminates silent and unnecessary segments, while the twinned Savitzky-Golay filter (TSaG) minimizes noise in the dataset. A Synchro-Squeezed Adaptive Wavelet Transform Algorithm (SSawT) converts the signal into an image. The Singular Empirical Decomposition and Sparse Autoencoder (SiFE) model extracts linear and deep features. The input's deep, linear, and statistical features are combined using the Weighted Soft Attention-based Fusion (WSAttF) model. From the fused features, the Chaotic Mud Ring Optimization algorithm (ChMR) selects the best features. A Dilated Convolutional Neural Network (CNN)-based Bidirectional Long Short-Term Memory (DiCBiL) detects different stages of depression, lowering error rates and increasing detection accuracy. The proposed method achieves an F1-score of 93.22%, precision of 93.11%, recall of 93.12%, and accuracy of 93.31% on the DAIC-WOZ original test set. At test time, two additional datasets, AVEC 2019 and MELD, are used to validate performance, attaining accuracies of 93.91% and 85.34%, respectively.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-026-10411-9.
A novel dilated Bi-LSTM framework for depression detection from speech signals through feature fusion. Uma Jaishankar, Jagannath H Nirmal, Girish Gidaye. Cognitive Neurodynamics 20(1):44 (2026). DOI: 10.1007/s11571-026-10411-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12868482/pdf/
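The Savitzky-Golay denoising step in the pipeline above can be illustrated from first principles. The abstract does not define "twinned", so applying the filter twice (`passes=2`) is an assumption; the coefficient derivation itself is the standard least-squares construction.

```python
import numpy as np

def savgol_coeffs(window, order):
    """Central-point Savitzky-Golay smoothing coefficients, derived as the
    row of the design-matrix pseudo-inverse that evaluates the fitted
    polynomial at x = 0."""
    m = window // 2
    A = np.vander(np.arange(-m, m + 1), order + 1, increasing=True)
    return np.linalg.pinv(A)[0]

def smooth(x, window=11, order=3, passes=2):
    """Apply the filter 'passes' times; passes=2 is one reading of the
    paper's 'twinned' filter (an assumption, not the authors' spec)."""
    c = savgol_coeffs(window, order)
    for _ in range(passes):
        x = np.convolve(x, c[::-1], mode="same")
    return x

# Demo on a noisy sine (illustrative signal, not the speech data)
t = np.linspace(0, 4 * np.pi, 500)
clean = np.sin(t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(500)
denoised = smooth(noisy)
```

In practice `scipy.signal.savgol_filter` does the same job with proper edge handling; the explicit pseudo-inverse here just makes the least-squares origin of the coefficients visible.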
Pub Date: 2026-12-01. Epub Date: 2025-11-14. DOI: 10.1007/s11571-025-10383-2
Belle Krubitski, Cesar Ceballos, Ty Roachford, Rodrigo F O Pena
Co-transmission, the release of multiple neurotransmitters from a single neuron, is an increasingly recognized phenomenon in the nervous system. A particularly interesting combination exhibiting co-transmission is glutamate and GABA, which, when co-released from neurons, produce complex biphasic activity patterns that vary depending on the time or amplitude differences between the excitatory (AMPA) and inhibitory (GABA-A) signals. Naively, the outcome signal produced by these differences can be functionally interpreted as simple mechanisms that only add or remove spikes by excitation or inhibition. However, the complex interaction of multiple time-scales and amplitudes may deliver a richer temporal code, which is experimentally difficult to access and interpret. In this work, we employ an extensive computational approach to distinguish these postsynaptic co-transmission patterns and how they interact with dendritic filtering and ionic currents. We specifically focus on modeling the summation patterns and the flexible dynamics that arise from the many combinations of temporal and amplitude co-transmission differences. Our results indicate a number of summation patterns that excite, inhibit, and act transiently, which have previously been attributed to the interplay between the intrinsic active and passive electrical properties of the postsynaptic dendritic membrane. Our computational framework provides insight into the complex interplay between co-transmission and dendritic filtering, allowing a mechanistic understanding of the integration and processing of co-transmitted signals in neural circuits.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10383-2.
Synaptic summation shapes information transfer in GABA-glutamate co-transmission. Belle Krubitski, Cesar Ceballos, Ty Roachford, Rodrigo F O Pena. Cognitive Neurodynamics 20(1):6 (2026). DOI: 10.1007/s11571-025-10383-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618799/pdf/
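The biphasic summation the abstract describes can be reproduced with a deliberately minimal model: one excitatory and one delayed inhibitory exponential conductance driving currents of opposite sign. All time constants, amplitudes, and reversal potentials below are illustrative textbook-style assumptions, not values from the paper.

```python
import numpy as np

def psc(t, dt_gaba=5.0, g_e=1.0, g_i=1.0,
        tau_e=2.0, tau_i=8.0, E_e=0.0, E_i=-70.0, V=-60.0):
    """Toy postsynaptic current (ms, arbitrary units): an AMPA-like
    exponential conductance plus a GABA-A-like conductance delayed by
    dt_gaba, each scaled by its driving force (E - V)."""
    exc = g_e * np.exp(-t / tau_e) * (t >= 0) * (E_e - V)
    ti = t - dt_gaba
    inh = g_i * np.exp(-ti / tau_i) * (ti >= 0) * (E_i - V)
    return exc + inh   # biphasic when the delayed inhibition takes over

t = np.arange(0.0, 50.0, 0.1)
i_total = psc(t)
```

With these parameters the net current is first positive (fast excitation leads) and then negative once the slower, delayed inhibition dominates; sweeping `dt_gaba` and the amplitude ratio `g_i/g_e` is the kind of temporal/amplitude exploration the paper's simulations perform at much greater biophysical detail.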
As healthcare text data becomes increasingly complex, it is vital for sentiment analysis to capture both local patterns and global contextual dependencies. In this paper, we propose a hybrid Swin Transformer-BiLSTM-Spatial MLP (Swin-MLP) model that leverages hierarchical attention, shifted-window mechanisms, and spatial MLP layers to better extract features from domain-specific healthcare text. The framework is tested on domain-specific Drug Review and Medical Text datasets, and performance is assessed against baseline models (BERT, LSTM, and GRU). Our findings show that the Swin-MLP model performs significantly better overall, achieving superior accuracy, precision, recall, F1-score, and AUC and improving mean accuracy by 1-2% over BERT. Significance tests (McNemar's test and paired t-test) indicate that the improvements are statistically significant (p < 0.05), supporting the efficacy of the architectural innovations. The results indicate that the model is robust, converges efficiently, and is potentially helpful for a wide range of domain-specific sentiment analyses in healthcare. Future research directions include lightweight attention mechanisms, cross-domain multimodal sentiment analysis, privacy-preserving federated learning, and hardware implications for rapid training and inference.
Leveraging Swin Transformer for advanced sentiment analysis: a new paradigm. Gaurav Kumar Rajput, Saurabh Kumar Srivastava, Namit Gupta. Cognitive Neurodynamics 20(1):13 (2026). DOI: 10.1007/s11571-025-10378-z. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660549/pdf/
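McNemar's test, used above to compare paired classifier predictions, reduces to a binomial test on the discordant counts. A minimal exact version (the variant usually recommended for small counts; whether the authors used the exact or chi-square form is not stated):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on discordant counts:
    b = samples only model A classified correctly,
    c = samples only model B classified correctly.
    Under H0 (equal error rates) each discordant sample is a fair coin."""
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail probability at p = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

For example, 1 vs 15 discordant wins gives a p-value well below 0.05, while a 5 vs 5 split is maximally non-significant. `statsmodels.stats.contingency_tables.mcnemar` provides the same test with continuity-corrected options.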
Pub Date: 2026-12-01. Epub Date: 2026-02-03. DOI: 10.1007/s11571-026-10410-w
Huan Zhao, Junxiao Xie, Guowu Wei, Anmin Liu, Richard Jones, Qiumin Qu, Hongmei Cao, Junyi Cao
Parkinson's disease (PD) is a neurodegenerative disorder that affects both motor and cognitive functions. An objective and easily measurable digital marker is crucial for improving the diagnosis and monitoring of PD. Since gait is a complex activity that requires both motor control and cognitive input, this study assumes that kinetic parameters of the foot sensitive to cognitive load (dual-tasking) in healthy adults can be used to diagnose PD. In this study, healthy subjects walked while performing a cognitive task, and the kinetic parameters were calculated with inverse-dynamics algorithms in OpenSim. Subsequently, the moment-related variables, including the bend and force of the plantar surface, were collected from 13 patients with PD and 32 healthy controls using the wearable system. Statistical analysis of the selected kinetic parameters indicates that the moment of the metatarsophalangeal joint differs significantly between dual-task and single-task walking. The experimental results demonstrate that features extracted from the bend and force signals of the plantar surface can diagnose PD with an average accuracy of 95.55% under 5-fold cross validation. This demonstrates that kinetic data from the foot captured by wearable sensors can serve as an objective digital marker for PD.
Kinetic parameters sensitive to cognitive activity during walking for diagnosis of Parkinson's disease. Huan Zhao, Junxiao Xie, Guowu Wei, Anmin Liu, Richard Jones, Qiumin Qu, Hongmei Cao, Junyi Cao. Cognitive Neurodynamics 20(1):40 (2026). DOI: 10.1007/s11571-026-10410-w. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12868486/pdf/
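The 5-fold cross-validated accuracy reported above is a standard protocol and easy to sketch. The nearest-centroid classifier and the synthetic features below are stand-ins for the paper's model and plantar-surface features, chosen only to keep the example self-contained.

```python
import numpy as np

def kfold_accuracy(X, y, k=5, seed=0):
    """Average accuracy of a toy nearest-centroid classifier under
    k-fold cross validation (illustrative, not the paper's classifier)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for f in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[f] = False                       # hold out this fold
        Xtr, ytr, Xte, yte = X[mask], y[mask], X[f], y[f]
        cents = np.stack([Xtr[ytr == c].mean(0) for c in np.unique(ytr)])
        pred = np.argmin(((Xte[:, None] - cents) ** 2).sum(-1), axis=1)
        accs.append((pred == yte).mean())
    return float(np.mean(accs))

# Synthetic two-class "feature" data: 100 samples per class, well separated
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((100, 4)) - 3,
               rng.standard_normal((100, 4)) + 3])
y = np.array([0] * 100 + [1] * 100)
acc = kfold_accuracy(X, y)
```

One caveat worth noting for gait data: samples from the same subject should be kept in the same fold (grouped cross validation), otherwise the accuracy estimate is optimistic.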
Pub Date: 2026-12-01. Epub Date: 2025-12-26. DOI: 10.1007/s11571-025-10363-6
Sang-Yoon Kim, Woochang Lim
[This corrects the article DOI: 10.1007/s11571-024-10119-8.].
Correction: Quantifying harmony between direct and indirect pathways in the basal ganglia: healthy and Parkinsonian states. Sang-Yoon Kim, Woochang Lim. Cognitive Neurodynamics 20(1):28 (2026). DOI: 10.1007/s11571-025-10363-6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743037/pdf/
Emotion recognition generally involves identifying a person's current mental state or psychological condition while interacting with others. Among the various modalities, electroencephalography is one of the most reliable for emotion recognition because of its ability to characterize brain activity accurately. Several emotion recognition methods have been designed using deep learning approaches on EEG signals. Yet their inability to capture complex features, together with overfitting and increased computational complexity, has limited their wide application. Therefore, this research proposes the Cross-Connected Distributive Learning-enabled Graph Convolutional Network (C2DGCN) for effective emotion recognition. Specifically, the cross-connected distributive learning in the C2DGCN enables extensive feature sharing and integration, reducing computational complexity and improving accuracy. Further, a Statistical Time-Frequency Signal descriptor aids the extraction of complex features and mitigates overfitting. Experimental validation shows the effectiveness of the C2DGCN, achieving an accuracy of 97.73%, sensitivity of 98.32%, specificity of 98.22%, and precision of 98.32% with 90% training on the SEED-IV dataset. On the DEAP dataset, the proposed C2DGCN model reaches an accuracy of 97.66%, precision of 97.98%, sensitivity of 97.25%, and specificity of 98.07%.
{"title":"C2DGCN: cross-connected distributive learning-enabled graph convolutional network for human emotion recognition using electroencephalography signal.","authors":"Puja Cholke, Shailaja Uke, Jyoti Jayesh Chavhan, Ashutosh Madhukar Kulkarni, Neelam Chandolikar, Rajashree Tukaram Gadhave","doi":"10.1007/s11571-025-10399-8","DOIUrl":"https://doi.org/10.1007/s11571-025-10399-8","url":null,"abstract":"<p><p>Emotion recognition generally involves identifying a person's current mental or psychological state during interaction with others. Among the various modalities, electroencephalography (EEG) is one of the most reliable for emotion recognition because of its ability to characterize brain activity accurately. Several emotion recognition methods have been designed using deep learning approaches on EEG signals. Yet their inability to capture complex features, together with overfitting and increased computational complexity, has limited their widespread application. Therefore, this research proposes the Cross-Connected Distributive Learning-enabled Graph Convolutional Network (C2DGCN) for effective emotion recognition. Specifically, the cross-connected distributive learning in the C2DGCN enables extensive feature sharing and integration, reducing computational complexity and improving accuracy. Further, a Statistical Time-Frequency Signal descriptor aids the extraction of complex features and mitigates overfitting. Experimental validation demonstrated the effectiveness of the C2DGCN, which achieved an accuracy of 97.73%, sensitivity of 98.32%, specificity of 98.22%, and precision of 98.32% with 90% training data on the SEED-IV dataset.
On the DEAP dataset, the proposed C2DGCN model reaches an accuracy of 97.66%, precision of 97.98%, sensitivity of 97.25%, and specificity of 98.07%.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"21"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743052/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145848952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
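The abstract does not include the paper's code or exact layer definitions, but the basic operation a graph convolutional network applies to EEG channel features can be sketched minimally. Everything below (the function name, the 4-channel adjacency matrix, and the feature sizes) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: aggregate each EEG channel's
    features from its graph neighbours, then apply a linear map.

    X : (n_channels, n_features) node features (e.g. band powers)
    A : (n_channels, n_channels) channel adjacency (1 = connected)
    W : (n_features, n_out) weight matrix
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # symmetric normalisation
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Toy example: 4 EEG channels in a chain, 3 spectral features each
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.standard_normal((3, 2))
H = gcn_layer(X, A, W)
print(H.shape)
```

Each output row mixes a channel's own features with those of its neighbours, which is the "feature sharing and integration" a cross-connected variant would build on.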
Pub Date: 2026-12-01; Epub Date: 2025-11-24; DOI: 10.1007/s11571-025-10368-1
Vivekanandan N, Rajeswari K, Yuvraj Kanna Nallu Vivekanandan
Vertigo, a prevalent neurovestibular disorder, arises from dysfunction in the vestibular system and often lacks precise, personalized treatments. This study proposes a bio-inspired spiking neural network (SNN) model that simulates vestibular dysfunction and adaptive recovery using Leaky Integrate-and-Fire (LIF) neurons with spike-timing-dependent plasticity (STDP). The architecture mimics the vestibular pathway through biologically plausible layers (hair cells, afferents, and cerebellar integrators) and models pathological states such as hair cell hypofunction and synaptic disruption. A reinforcement-based feedback mechanism enables the simulation of therapy-induced plasticity, producing a 48-62% drop and a 38% recovery in cerebellar spike activity during adaptation epochs. The model demonstrates real-time feasibility, with an average simulation runtime of 4 s per epoch on standard hardware. Its design is scalable and well suited to future deployment on neuromorphic platforms (e.g., Loihi, SpiNNaker). Its modular, interpretable design enables in silico testing of rehabilitation strategies, real-time monitoring of dysfunction, and future personalization using clinical datasets. This work establishes a computational foundation for AI-driven vestibular therapy that is adaptive, explainable, and hardware-compatible.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10368-1.
{"title":"Bio-inspired spiking neural network for modeling and optimizing adaptive vertigo therapy.","authors":"Vivekanandan N, Rajeswari K, Yuvraj Kanna Nallu Vivekanandan","doi":"10.1007/s11571-025-10368-1","DOIUrl":"https://doi.org/10.1007/s11571-025-10368-1","url":null,"abstract":"<p><p>Vertigo, a prevalent neurovestibular disorder, arises from dysfunction in the vestibular system and often lacks precise, personalized treatments. This study proposes a bio-inspired spiking neural network (SNN) model that simulates vestibular dysfunction and adaptive recovery using Leaky Integrate-and-Fire (LIF) neurons with spike-timing-dependent plasticity (STDP). The architecture mimics the vestibular pathway through biologically plausible layers: hair cells, afferents, and cerebellar integrators, and models pathological states such as hair cell hypofunction and synaptic disruption. A reinforcement-based feedback mechanism enables the simulation of therapy-induced plasticity, resulting in a 48-62% drop and 38% recovery in cerebellar spike activity during adaptation epochs. The model demonstrates real-time feasibility, with an average simulation runtime of 4 s per epoch on standard hardware. Its design is scalable and well-suited for future deployment on neuromorphic platforms (e.g., Loihi, SpiNNaker). Its modular and interpretable design enables in silico testing of rehabilitation strategies, real-time monitoring of dysfunction, and future personalization using clinical datasets. 
This work establishes a computational foundation for AI-driven vestibular therapy that is adaptive, explainable, and hardware compatible.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s11571-025-10368-1.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"11"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644390/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145630750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
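The two core mechanisms named in the abstract above, LIF membrane dynamics and pair-based STDP, can be sketched in a few lines. All parameter values, the weight bounds, and the input spike train below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def simulate_lif_stdp(input_spikes, w=0.5, dt=1.0, tau_m=20.0,
                      v_th=1.0, a_plus=0.01, a_minus=0.012, tau_s=20.0):
    """Leaky Integrate-and-Fire neuron driven by one binary input spike
    train, with pair-based STDP updating the input weight w.
    Returns the output spike times and the final weight."""
    v = 0.0
    trace_pre = 0.0   # presynaptic eligibility trace
    trace_post = 0.0  # postsynaptic eligibility trace
    out_spikes = []
    for t, s in enumerate(input_spikes):
        # exponential decay of membrane potential and both traces
        v *= np.exp(-dt / tau_m)
        trace_pre *= np.exp(-dt / tau_s)
        trace_post *= np.exp(-dt / tau_s)
        if s:                          # presynaptic spike arrives
            v += w
            trace_pre += 1.0
            w -= a_minus * trace_post  # depress: post fired before pre
        if v >= v_th:                  # postsynaptic spike
            out_spikes.append(t)
            v = 0.0                    # reset membrane potential
            trace_post += 1.0
            w += a_plus * trace_pre    # potentiate: pre fired before post
        w = min(max(w, 0.0), 1.0)      # keep weight bounded
    return out_spikes, w

spikes = [1, 0, 1, 0, 1, 1, 0, 0, 1, 1]
out, w_final = simulate_lif_stdp(spikes)
print(out, round(w_final, 3))
```

Hypofunction could then be modelled by scaling down `w` for a subset of inputs, and "therapy" by letting the STDP updates (here, a reward-free pair rule) restore it over repeated epochs.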