Electroencephalography (EEG) can objectively reflect an individual's emotional state. However, owing to large inter-subject differences, existing methods generalize poorly when recognizing emotions across individuals. We therefore propose an EEG emotion classification framework based on deep feature aggregation and multi-source domain adaptation. First, we design a deep feature aggregation module that introduces a novel approach for extracting EEG hemisphere-asymmetry features and integrates them with the frequency and spatiotemporal characteristics of the EEG signals. Second, we propose a multi-source domain adaptation strategy in which multiple independent feature-extraction sub-networks process each domain separately, extracting discriminative features and thereby alleviating feature shift between domains. A domain adaptation strategy then aligns the multiple source domains with the target domain, reducing inter-domain distribution discrepancies and enabling effective cross-domain knowledge transfer. To strengthen learning on target samples near the decision boundary, pseudo-labels are dynamically generated for the unlabeled target-domain samples: using the predictions of multiple classifiers, we compute the average confidence of each pseudo-label group and select the set with the highest confidence as the final label for each target sample. Finally, the mean of the classifier outputs serves as the model's prediction. A comprehensive set of experiments on the publicly available SEED and SEED-IV datasets shows that the proposed method outperforms competing methods.
Kunqiang Lin, Ying Li, Yiren He, Zihan Jiang, Renjie He, Xianzhe Wang, Hongxu Guo, Lei Guo. "EEG emotion recognition across subjects based on deep feature aggregation and multi-source domain adaptation." Cognitive Neurodynamics 20(1):8. DOI: 10.1007/s11571-025-10379-y
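The confidence-based pseudo-labelling step described in the abstract can be sketched as follows. This is a minimal illustration, assuming each classifier emits softmax probabilities; the function name and threshold are ours, not the authors':

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Pick pseudo-labels for unlabeled target samples.

    probs: (K, N, C) softmax outputs of K domain-specific classifiers
    for N target samples over C emotion classes.
    Returns the argmax of the averaged prediction per sample and a
    boolean mask keeping only samples whose mean confidence clears
    the threshold.
    """
    mean_probs = probs.mean(axis=0)         # (N, C) ensemble average
    confidence = mean_probs.max(axis=1)     # mean confidence per sample
    labels = mean_probs.argmax(axis=1)      # candidate pseudo-label
    return labels, confidence >= threshold
```

Averaging before the argmax matches the paper's final-prediction rule (mean of classifier outputs); the mask implements the "highest-confidence set" selection.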
Biometric traits are unique physical or behavioral characteristics that can be used for identification and authentication. Traditional unimodal biometric systems often suffer from spoofing attacks, sensor noise, forgery, and environmental dependencies. To overcome these limitations, this work presents a multimodal biometric authentication system that integrates electroencephalograph (EEG) signals with handwritten signatures to enhance security, efficiency, and robustness. EEG-based authentication exploits the intrinsic, hard-to-forge nature of brainwave patterns, while signature recognition contributes a complementary behavioral trait. The system processes 14-channel EEG readings together with signature images, ensuring a seamless fusion of both modalities. By combining physiological and behavioral biometrics, the approach significantly reduces the risks of unimodal authentication, including forgery, spoofing, and sensor failure. Evaluated on a dataset of 30 subjects containing genuine and forged samples, the system achieves 97% accuracy. Designed for small organizations, its modular structure, low-computation algorithms, and simple hardware support scalable deployment.
Banee Bandana Das, Chinthala Varnitha Reddy, Ujwala Matha, Chinni Yandapalli, Saswat Kumar Ram. "Multimodal biometric authentication systems: exploring EEG and signature." Cognitive Neurodynamics 20(1):17. DOI: 10.1007/s11571-025-10389-w
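The abstract does not specify at which level the two modalities are fused; as one hedged possibility, a weighted score-level fusion of the per-modality match scores could look like this (weights and threshold are illustrative):

```python
def fuse_scores(eeg_score, sig_score, w_eeg=0.6):
    """Weighted score-level fusion of the two modality match scores,
    each assumed normalized to [0, 1]."""
    return w_eeg * eeg_score + (1.0 - w_eeg) * sig_score

def authenticate(eeg_score, sig_score, threshold=0.5, w_eeg=0.6):
    """Accept the claimed identity only if the fused score clears
    the decision threshold."""
    return fuse_scores(eeg_score, sig_score, w_eeg) >= threshold
```

A forged signature with a low match score then drags the fused score down even when the EEG score is moderate, which is the intuition behind the claimed robustness to single-modality attacks.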
Pub Date: 2026-12-01. Epub Date: 2025-12-26. DOI: 10.1007/s11571-025-10398-9
Victor J Barranca
Recent experiments have revealed that the inter-regional connectivity of the cerebral cortex exhibits strengths spanning several orders of magnitude and decaying with distance. We demonstrate this to be a fundamental organizing feature that fosters high complexity in both connectivity structure and network dynamics, achieving an advantageous balance between integration and differentiation of information. This is verified through analysis of a multi-scale neuronal network model with nonlinear integrate-and-fire dynamics, incorporating inter-regional connection strengths that decay exponentially with spatial separation at the macroscale as well as small-world local connectivity at the microscale. Through numerical simulation and optimization over the model parameter space, we show that inter-regional connectivity over intermediate spatial scales naturally facilitates maximally heterogeneous connection strengths, agreeing well with experimental measurements. In addition, we formulate complementary notions of structural and dynamical complexity that are computationally feasible to calculate for large multi-scale networks, and we show that high complexity manifests for each over a similar parameter regime. We expect this work may help explain the link between distance dependence in brain connectivity and the richness of neuronal network dynamics in achieving robust brain computations and effective information processing.
"Distance-dependent connectivity in the brain facilitates high dynamical and structural complexity." Cognitive Neurodynamics 20(1):23.
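The macroscale coupling rule described above — strengths decaying exponentially with spatial separation — can be sketched numerically. This is a minimal illustration assuming Euclidean distances and a single decay constant; the names and parameters are ours, not the paper's:

```python
import numpy as np

def distance_decay_weights(positions, w0=1.0, lam=1.0):
    """Inter-regional coupling matrix w_ij = w0 * exp(-d_ij / lam),
    where d_ij is the Euclidean distance between region centroids.
    Self-coupling is zeroed out."""
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diff, axis=-1)        # pairwise distances (R, R)
    w = w0 * np.exp(-d / lam)                # exponential distance decay
    np.fill_diagonal(w, 0.0)                 # no self-connections
    return w
```

Because the decay is exponential, a modest spread of inter-regional distances already produces weights spanning several orders of magnitude, which is the structural heterogeneity the paper links to high complexity.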
Pub Date: 2026-12-01. Epub Date: 2026-02-03. DOI: 10.1007/s11571-026-10411-9
Uma Jaishankar, Jagannath H Nirmal, Girish Gidaye
Depression detection is crucial for determining a person's mental health and assessing the degree of depression. A number of sophisticated methods and questionnaires have been developed to identify depression from speech or conversation. Current systems are constrained by reduced effectiveness due to poor feature selection and extraction, limited interpretability, and the difficulty of identifying depression across languages. The proposed model is therefore presented to offer improved accuracy and efficient performance. Adaptive threshold-based pre-processing (AdaT) eliminates silence and unnecessary information, while the twinned Savitzky-Golay filter (TSaG) minimizes noise in the dataset. A Synchro-Squeezed Adaptive Wavelet Transform algorithm (SSawT) converts the signal into an image. The Singular Empirical Decomposition and Sparse Autoencoder (SiFE) model extracts linear and deep features. The input's deep, linear, and statistical properties are combined by the Weighted Soft Attention-based Fusion (WSAttF) model, and the Chaotic Mud Ring Optimization algorithm (ChMR) selects the best features from the fused set. A Dilated CNN-based Bidirectional Long Short-Term Memory network (DiCBiL) detects the different stages of depression, lowering error rates and increasing detection accuracy. The proposed method achieves an F1-score of 93.22%, precision of 93.11%, recall of 93.12%, and accuracy of 93.31% on the DAIC-WOZ original test set. At test time, two additional datasets, AVEC 2019 and MELD, are used to validate performance, attaining accuracies of 93.91% and 85.34%, respectively.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-026-10411-9.
"A novel dilated Bi-LSTM framework for depression detection from speech signals through feature fusion." Cognitive Neurodynamics 20(1):44.
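The abstract does not specify the "twinned" TSaG variant, so as a hedged illustration, here is the underlying Savitzky-Golay smoother it builds on, derived from a least-squares polynomial fit in plain NumPy:

```python
import numpy as np

def savgol_kernel(window, polyorder):
    """Savitzky-Golay coefficients: least-squares fit of a polynomial
    over the window, evaluated at the centre sample. The first row of
    the pseudo-inverse of the Vandermonde design recovers the constant
    term of the fit, i.e. the smoothed centre value."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, polyorder + 1, increasing=True)
    return np.linalg.pinv(A)[0]

def savgol_smooth(signal, window=11, polyorder=3):
    """Apply the kernel with edge padding so output length matches."""
    kernel = savgol_kernel(window, polyorder)
    pad = window // 2
    padded = np.pad(signal, pad, mode="edge")
    # The centred kernel is symmetric, so plain convolution applies it.
    return np.convolve(padded, kernel, mode="valid")
```

Unlike a moving average, this filter preserves polynomial trends up to `polyorder` exactly, which is why it denoises speech envelopes without flattening their peaks.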
Pub Date: 2026-12-01. Epub Date: 2026-02-10. DOI: 10.1007/s11571-026-10422-6
Soodeh Moallemian, Abolfazl Saghafi, Rutvik Deshpande, Jose M Perez, Miray Budak, Bernadette A Fausto, Fanny M Elahi, Mark A Gluck
Alzheimer's disease (AD) pathology begins years before symptoms appear, and dynamic flexibility of the medial temporal lobe (MTL) may serve as an early functional biomarker. Using data from 656 older adults in the Rutgers Aging and Brain Health Alliance study, we evaluated whether cognitive, genetic, biochemical, and demographic predictors could estimate MTL dynamic flexibility, despite substantial missingness (1,866 missing values; 25.86%). Only 42 participants (6.40%) had complete data; therefore, we compared case deletion with five imputation strategies (MICE, GAIN, MissForest, MIWAE, ReMasker) and eight regression models, assessing prediction accuracy using repeated 5-fold cross-validation. Complete-case analysis yielded limited performance (average [Formula: see text], [Formula: see text]). After imputation, all methods improved accuracy, with MissForest paired with Bagging Trees or Random Forest achieving the lowest prediction error ([Formula: see text]). The greatest improvement in concordance occurred when GAIN was combined with Bagging Trees/Random Forest ([Formula: see text]), representing a 57% gain over the best complete-case model. A Scheirer-Ray-Hare ANOVA confirmed significant differences across imputation strategies ([Formula: see text]). Runtime analyses showed GAIN and MissForest to be both accurate and computationally efficient, while deep generative imputers were slower. These findings demonstrate that robust imputation is essential for maximizing data utility and predictive reliability in high-missingness neuroimaging studies and highlight the potential of ensemble tree models combined with advanced imputation techniques for estimating MTL dynamic flexibility in aging populations.
"Machine learning for missing data imputation in Alzheimer's research: predicting medial temporal lobe dynamic flexibility." Cognitive Neurodynamics 20(1):51.
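None of the paper's five imputers is reproduced here; as a hedged, self-contained sketch of why complete-case deletion is costly under heavy missingness, the toy below contrasts row deletion with simple column-mean imputation on a synthetic regression (all data and parameters are illustrative — stronger imputers such as MICE or MissForest aim to recover the signal that mean filling attenuates):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# MCAR missingness: knock out ~25% of the feature entries
miss = rng.random((n, p)) < 0.25
X_miss = np.where(miss, np.nan, X)

def fit_r2(X, y):
    """Ordinary least squares; returns in-sample R^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Complete-case analysis: drop every row with any missing value
complete = ~np.isnan(X_miss).any(axis=1)
r2_cc = fit_r2(X_miss[complete], y[complete])

# Column-mean imputation: fill gaps, keep all n rows
X_imp = np.where(np.isnan(X_miss), np.nanmean(X_miss, axis=0), X_miss)
r2_imp = fit_r2(X_imp, y)
```

With three features at 25% missingness roughly 40% of rows survive deletion; at the paper's scale (25.86% missing across many predictors, 6.40% complete cases) the surviving sample becomes too small to fit reliably, which is the motivation for imputation.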
Pub Date: 2026-12-01. Epub Date: 2025-11-14. DOI: 10.1007/s11571-025-10383-2
Belle Krubitski, Cesar Ceballos, Ty Roachford, Rodrigo F O Pena
Co-transmission, the release of multiple neurotransmitters from a single neuron, is an increasingly recognized phenomenon in the nervous system. A particularly interesting combination exhibiting co-transmission is glutamate and GABA, which, when co-released from neurons, produce complex biphasic activity patterns that vary with the timing or amplitude differences between the excitatory (AMPA) and inhibitory (GABA-A) signals. Naively, the outcome signal produced by these differences can be functionally interpreted as simple mechanisms that only add or remove spikes by excitation or inhibition. However, the complex interaction of multiple time scales and amplitudes may deliver a more complex temporal code, which is experimentally difficult to access and interpret. In this work, we employ an extensive computational approach to distinguish these postsynaptic co-transmission patterns and how they interact with dendritic filtering and ionic currents. We specifically focus on modeling the summation patterns and the flexible dynamics that arise from the many combinations of temporal and amplitude co-transmission differences. Our results indicate a number of summation patterns that excite, inhibit, and act transiently, which have been previously attributed to the interplay between the intrinsic active and passive electrical properties of the postsynaptic dendritic membrane. Our computational framework provides insight into the complex interplay between co-transmission and dendritic filtering, allowing a mechanistic understanding of the integration and processing of co-transmitted signals in neural circuits.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10383-2.
"Synaptic summation shapes information transfer in GABA-glutamate co-transmission." Cognitive Neurodynamics 20(1):6.
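The biphasic excitatory-then-inhibitory summation described above can be illustrated with two single-exponential conductance transients. The time constants, delays, and amplitudes below are illustrative choices, not fitted values from the paper:

```python
import numpy as np

def psp(t, onset, tau, amp):
    """Single-exponential postsynaptic transient starting at `onset`
    (ms) with decay constant `tau` (ms) and signed amplitude `amp`."""
    decay = np.exp(-np.clip(t - onset, 0.0, None) / tau)
    return np.where(t >= onset, amp * decay, 0.0)

t = np.arange(0.0, 100.0, 0.1)                  # time axis, ms
ampa = psp(t, onset=10.0, tau=2.0, amp=+1.0)    # fast excitatory component
gaba = psp(t, onset=12.0, tau=8.0, amp=-0.8)    # slower, delayed inhibition
net = ampa + gaba   # biphasic: early depolarizing, late hyperpolarizing
```

Shifting the GABA onset or amplitude relative to the AMPA component moves the zero-crossing of `net`, which is one way the timing/amplitude differences in the abstract translate into distinct summation patterns.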
As healthcare text data becomes increasingly complex, it is vital for sentiment analysis to capture both local patterns and global contextual dependencies. In this paper, we propose a hybrid Swin Transformer-BiLSTM-Spatial MLP (Swin-MLP) model that leverages hierarchical attention, shifted-window mechanisms, and spatial MLP layers to better extract features from domain-specific healthcare text. The framework is tested on domain-specific Drug Review and Medical Text datasets, and performance is assessed against baseline models (BERT, LSTM, and GRU). Our findings show that the Swin-MLP model performs significantly better overall, achieving superior accuracy, precision, recall, F1-score, and AUC, and improving mean accuracy by 1-2% over BERT. Significance tests (McNemar's test and a paired t-test) indicate that the improvements are statistically significant (p < 0.05), supporting the efficacy of the architectural innovations. The results indicate that the model is robust, converges efficiently, and is potentially useful for a wide range of domain-specific sentiment analyses in healthcare. Future work will explore lightweight attention mechanisms, cross-domain multimodal sentiment analysis, privacy-preserving federated learning, and hardware implications for rapid training and inference.
Gaurav Kumar Rajput, Saurabh Kumar Srivastava, Namit Gupta. "Leveraging Swin Transformer for advanced sentiment analysis: a new paradigm." Cognitive Neurodynamics 20(1):13. DOI: 10.1007/s11571-025-10378-z
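As an aside on the significance testing mentioned above: the exact (binomial) form of McNemar's test needs only the two discordant counts and the standard library. This is the textbook test, not necessarily the exact procedure the paper ran:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test.

    b: cases model A classified correctly and model B incorrectly;
    c: the reverse. Under H0 the b discordant wins follow
    Binomial(b + c, 0.5); the p-value doubles the smaller tail.
    """
    n = b + c
    k = min(b, c)
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(p, 1.0)
```

Only disagreements between the two classifiers carry information: if one model wins 10 of 10 discordant cases, p ≈ 0.002, while an even 5-5 split gives p = 1.0.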
Pub Date: 2026-12-01. Epub Date: 2026-02-03. DOI: 10.1007/s11571-026-10410-w
Huan Zhao, Junxiao Xie, Guowu Wei, Anmin Liu, Richard Jones, Qiumin Qu, Hongmei Cao, Junyi Cao
Parkinson's disease (PD) is a neurodegenerative disorder that affects both motor and cognitive functions. An objective, easily measurable digital marker is crucial for improving the diagnosis and monitoring of PD. Since gait is a complex activity requiring both motor control and cognitive input, this study hypothesizes that foot kinetic parameters sensitive to cognitive load (dual-tasking) in healthy adults can be used to diagnose PD. Walking experiments with a concurrent cognitive task were conducted on healthy subjects, and the kinetic parameters were calculated with inverse-dynamics algorithms in OpenSim. Subsequently, moment-related variables, including the bend and force of the plantar surface, were collected from 13 patients with PD and 32 healthy controls using a wearable system. Statistical analysis of the selected kinetic parameters indicates that the moment of the metatarsophalangeal joint differs significantly between dual-task and single-task walking. Experiments demonstrate that features extracted from the bend and force signals of the plantar surface can diagnose PD with an average accuracy of 95.55% under 5-fold cross-validation. These results demonstrate that foot kinetic data captured by wearable sensors can serve as an objective digital marker for PD.
{"title":"Kinetic parameters sensitive to cognitive activity during walking for diagnosis of Parkinson's disease.","authors":"Huan Zhao, Junxiao Xie, Guowu Wei, Anmin Liu, Richard Jones, Qiumin Qu, Hongmei Cao, Junyi Cao","doi":"10.1007/s11571-026-10410-w","DOIUrl":"https://doi.org/10.1007/s11571-026-10410-w","url":null,"abstract":"<p><p>Parkinson's disease (PD) is a neurodegenerative disorder that affects both motor and cognitive functions. An objective and easily measurable digital marker is crucial for improving the diagnosis and monitoring of PD. Since gait is a complex activity that requires both motor control and cognitive input, this study assumes that kinetic parameters of the foot sensitive to the cognitive load (dual-tasking) for healthy adults can be used to diagnose PD. In this study, walking with a cognitive task has been conducted on healthy subjects, the kinetic parameters have been calculated with algorithms of inverse dynamics in Opensim. Subsequently, the moment-related variables, including the bend and force of the plantar surface, were collected from 13 patients with PD and 32 healthy controls using the wearable system. Statistical analysis of the focused kinetic parameters indicates that the moment of the metatarsophalangeal joint has a significant difference between dual-task walking and single walking. The experimental results demonstrate that features extracted from the bend and force signal of the plantar surface can diagnose PD with an average accuracy of 95.55% with 5-fold cross validation. 
It demonstrates that kinetic data from the foot captured by wearable sensors can serve as an objective digital marker for PD.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"40"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12868486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146123987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
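The 5-fold cross-validation protocol reported above can be sketched as follows. This is a hedged illustration, not the study's pipeline: a minimal nearest-centroid classifier stands in for the actual model, and the helper name `five_fold_accuracy` is hypothetical.

```python
import numpy as np

def five_fold_accuracy(X, y, seed=0):
    """Shuffle the samples, split them into 5 folds, train a
    nearest-centroid classifier on 4 folds, test on the held-out
    fold, and return the mean accuracy across the 5 folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    accs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        # one centroid per class, computed on the training folds only
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        # assign each test sample to the nearest class centroid
        pred = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                for x in X[test]]
        accs.append(np.mean(np.array(pred) == y[test]))
    return float(np.mean(accs))
```

In the study itself, `X` would hold the plantar bend/force features and `y` the PD-versus-control labels.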
Pub Date: 2026-12-01 | Epub Date: 2025-12-26 | DOI: 10.1007/s11571-025-10363-6
Sang-Yoon Kim, Woochang Lim
[This corrects the article DOI: 10.1007/s11571-024-10119-8.].
{"title":"Correction: Quantifying harmony between direct and indirect pathways in the basal ganglia: healthy and Parkinsonian states.","authors":"Sang-Yoon Kim, Woochang Lim","doi":"10.1007/s11571-025-10363-6","DOIUrl":"https://doi.org/10.1007/s11571-025-10363-6","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.1007/s11571-024-10119-8.].</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"28"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743037/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145848926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion recognition generally involves identifying a person's current mental state or psychological condition while they interact with others. Among the various modalities, electroencephalography is one of the most reliable emotion recognition techniques because of its ability to characterize brain activity accurately. Several emotion recognition methods based on deep learning have been designed for EEG signals. Yet their inability to capture complex features, together with overfitting and high computational complexity, has limited their widespread application. Therefore, this research proposes the Cross-Connected Distributive Learning-enabled Graph Convolutional Network (C2DGCN) for effective emotion recognition. Specifically, the cross-connected distributive learning in the C2DGCN enables extensive feature sharing and integration, reducing computational complexity and improving accuracy. Further, the Statistical Time-Frequency Signal descriptor aids the extraction of complex features and mitigates the overfitting issue. Experimental validation showed the effectiveness of the C2DGCN, which achieved a high accuracy of 97.73%, sensitivity of 98.32%, specificity of 98.22%, and precision of 98.32% with 90% of the data used for training on the SEED-IV dataset. On the DEAP dataset, the proposed C2DGCN model reaches an accuracy of 97.66%, precision of 97.98%, sensitivity of 97.25%, and specificity of 98.07%.
{"title":"C2DGCN: cross-connected distributive learning-enabled graph convolutional network for human emotion recognition using electroencephalography signal.","authors":"Puja Cholke, Shailaja Uke, Jyoti Jayesh Chavhan, Ashutosh Madhukar Kulkarni, Neelam Chandolikar, Rajashree Tukaram Gadhave","doi":"10.1007/s11571-025-10399-8","DOIUrl":"https://doi.org/10.1007/s11571-025-10399-8","url":null,"abstract":"<p><p>Emotion Recognition generally involves the identification of the present mental state or psychological conditions of the human while interacting with others. Among the various modalities, Electroencephalography is the most deceptive emotion recognition technique because of its ability to characterize brain activities accurately. Several emotion recognition methods have been designed utilizing Deep Learning approaches from EEG signals. Yet, their inability to capture the complex features and the occurrence of the overfitting problems with increased computational complexity affected their extensive application. Therefore, this research proposes the Cross-Connected Distributive Learning-enabled Graph Convolutional Network (C2DGCN) for effective emotion recognition. Specifically, the cross-connected distributive learning in the C2DGCN enables extensive feature sharing and integration, thus reducing the computation complexity and improving the accuracy. Further, the application of the Statistical Time-Frequency Signal descriptor aids in the extraction of complex features and mitigates the overfitting issue. The experimental validation revealed the effectiveness of the C2DGCN by achieving a high accuracy of 97.73%, sensitivity of 98.32%, specificity of 98.22%, and precision of 98.32% with 90% of training using the SEED-IV dataset. 
For the evaluation using the DEAP dataset, the proposed C2DGCN model reaches an accuracy of 97.66%, precision of 97.98%, sensitivity of 97.25%, and specificity of 98.07%.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"21"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743052/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145848952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
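Graph convolutional networks like the one in the C2DGCN abstract build on a standard propagation rule: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), where A is the adjacency over EEG channels. The sketch below shows that generic rule in NumPy — it is not the C2DGCN architecture itself, and the function name `gcn_layer` is hypothetical.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic graph-convolution step.
    A: (n, n) adjacency over EEG channels,
    H: (n, f) per-channel features, W: (f, f_out) weights."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalisation
    # propagate features over the graph, project, then apply ReLU
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

With no edges (A = 0) and an identity weight matrix, the layer reduces to ReLU(H), which makes the normalisation easy to sanity-check.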