Pub Date: 2026-12-01 | Epub Date: 2025-11-14 | DOI: 10.1007/s11571-025-10376-1
Md Al Emran, Md Ariful Islam, Md Obaydullahn Khan, Md Jewel Rana, Saida Tasnim Adrita, Md Ashik Ahmed, Mahmoud M A Eid, Ahmed Nabih Zaki Rashed
Traffic accidents frequently result from driver inattention, drowsiness, and distraction, posing a substantial danger to road safety worldwide. Advances in computer vision and artificial intelligence (AI) have opened new prospects for real-time driver monitoring systems that mitigate these risks. In this paper, we assess four well-known deep learning models (MobileNetV2, DenseNet201, NASNetMobile, and VGG19) and propose a novel Hybrid CNN-Transformer architecture reinforced with Efficient Channel Attention (ECA) for multi-class driver activity classification. The framework distinguishes seven driving behaviors: Closed Eye, Open Eye, Dangerous Driving, Distracted Driving, Drinking, Yawning, and Safe Driving. Among the baselines, DenseNet201 (99.40%) and MobileNetV2 (99.31%) achieved the highest validation accuracies. The proposed Hybrid CNN-Transformer with ECA attained a near-perfect validation accuracy of 99.72% and reached 100% accuracy on the independent test set. Confusion-matrix analysis reveals only a handful of misclassifications, confirming the model's strong generalization capacity. By merging CNN-based local feature extraction, attention-driven feature refinement, and Transformer-based global context modeling, the system provides both robustness and efficiency. These findings demonstrate the practicality of deploying the proposed approach in real-time intelligent transportation applications, offering a viable route toward reducing traffic accidents and improving overall road safety.
{"title":"Real-time driver activity detection using advanced deep learning models.","authors":"Md Al Emran, Md Ariful Islam, Md Obaydullahn Khan, Md Jewel Rana, Saida Tasnim Adrita, Md Ashik Ahmed, Mahmoud M A Eid, Ahmed Nabih Zaki Rashed","doi":"10.1007/s11571-025-10376-1","DOIUrl":"https://doi.org/10.1007/s11571-025-10376-1","url":null,"abstract":"<p><p>Traffic accidents usually result from driver's inattention, sleepiness, and distraction, posing a substantial danger to worldwide road safety. Advances in computer vision and artificial intelligence (AI) have provided new prospects for designing real-time driver monitoring systems to reduce these dangers. In this paper, we assessed four known deep learning models, MobileNetV2, DenseNet201, NASNetMobile, and VGG19, and offer a unique Hybrid CNN-Transformer architecture reinforced with Efficient Channel Attention (ECA) for multi-class driver activity categorization. The framework defines seven important driving behaviors: Closed Eye, Open Eye, Dangerous Driving, Distracted Driving, Drinking, Yawning, and Safe Driving. Among the baseline models, DenseNet201 (99.40%) and MobileNetV2 (99.31%) achieved the highest validation accuracies. In contrast, the proposed Hybrid CNN-Transformer with ECA attained a near-perfect validation accuracy of 99.72% and further demonstrated flawless generalization with 100% accuracy on the independent test set. Confusion matrix studies further indicate a few misclassifications, verifying the model's high generalization capacity. By merging CNN-based local feature extraction, attention-driven feature refinement, and Transformer-based global context modeling, the system provides both robustness and efficiency. 
These findings show the practicality of using the suggested technology in real-time intelligent transportation applications, presenting a viable avenue toward reducing traffic accidents and boosting overall road safety.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"7"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618750/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145538985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
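The ECA step named in this abstract can be sketched in a few lines. The toy below is illustrative only: it shows ECA's squeeze (global average pooling), cross-channel 1-D convolution, and sigmoid gating stages, with an untrained averaging kernel standing in for the learned convolution weights; it is not the authors' implementation.

```python
import numpy as np

def eca_attention(feature_map, kernel_size=3):
    """Efficient Channel Attention over an (H, W, C) feature map.

    Sketch: per-channel global average pooling, a 1-D convolution
    across neighboring channels (averaging kernel here, learned
    weights in the real module), a sigmoid gate, channel rescaling.
    """
    h, w, c = feature_map.shape
    squeezed = feature_map.mean(axis=(0, 1))           # (C,) channel descriptors
    pad = kernel_size // 2
    padded = np.pad(squeezed, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size        # stand-in for learned weights
    conv = np.array([np.dot(padded[i:i + kernel_size], kernel) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))                 # (C,) values in (0, 1)
    return feature_map * gate                          # broadcast over H, W

x = np.random.default_rng(0).normal(size=(8, 8, 16))
y = eca_attention(x)
```

Because the gate lies strictly in (0, 1), the output is a per-channel attenuation of the input, which is the cheap channel-recalibration effect ECA is designed to provide.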
Pub Date: 2026-12-01 | Epub Date: 2025-11-12 | DOI: 10.1007/s11571-025-10375-2
Junjun Huang, Shuang Liu, Mengjie Lv, John W Schwieter, Huanhuan Liu
Little is known about whether direct and vicarious rewards affect bilingual language control in social learning. We used dual-electroencephalography (EEG) to simultaneously record the effects of direct and vicarious rewards on language control as bilinguals switched between their two languages. Both direct and vicarious rewards elicited more switching behavior. At the electrophysiological level, although both reward types elicited reward-positivity and feedback-P3 components when reward outcomes were received, direct rewards induced greater reward effects than vicarious rewards. In addition to an N2 effect in language switching, vicarious rewards elicited more pronounced late positive components (LPCs) than direct rewards. More importantly, in the alpha band, behavior predicted rewards, binding vicarious rewards to language-switching activity. These findings demonstrate that both direct and vicarious rewards influence language control during language selection.
{"title":"A dual brain EEG examination of the effects of direct and vicarious rewards on bilingual Language control.","authors":"Junjun Huang, Shuang Liu, Mengjie Lv, John W Schwieter, Huanhuan Liu","doi":"10.1007/s11571-025-10375-2","DOIUrl":"https://doi.org/10.1007/s11571-025-10375-2","url":null,"abstract":"<p><p>Little is known about whether direct and vicarious rewards affect bilingual language control in social learning. We used a dual-electroencephalogram (EEG) to simultaneously record the effects of direct and vicarious rewards on language control when bilinguals switched between their two languages. We found that both direct and vicarious rewards elicited more switch behavior. On an electrophysiological level, although both direct and vicarious rewards elicited Reward-positivity and Feedback-P3 when receiving reward outcomes, direct rewards induced greater reward effects than vicarious rewards. In addition to an N2 effect in language switching, vicarious rewards elicited more pronounced LPCs relative to direct rewards. More important, in the alpha band, there was a predictive effect of behaviors on rewards in binding vicarious rewards and language switching activities. These findings demonstrate that both direct and vicarious rewards influence language control during language selection.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"2"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12612500/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145539388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate assessment of mental workload (MWL) from electroencephalography (EEG) signals is crucial for real-time cognitive monitoring in safety-critical domains such as aviation and human-computer interaction. Although various computational approaches have been proposed, most suffer from limited robustness or interpretability, or fail to fully exploit both temporal and non-linear neural dynamics. This article introduces a hybrid deep learning and XGBoost stacking ensemble framework for reliable and interpretable MWL classification from EEG. The pipeline preprocesses raw EEG, performs comprehensive feature extraction (time-domain, frequency-domain, wavelet-based, entropy, and fractal-dimension features), and then selects discriminative features using ANOVA F-values, yielding a compact set of 200 highly informative features. The architecture consists of two processing branches: a CNN-BiLSTM-Attention deep learning branch that automatically learns spatiotemporal dynamics, and an XGBoost branch that classifies robustly from the engineered features. Predictions from both branches are integrated by a logistic-regression stacking ensemble, exploiting their complementary strengths and improving generalization. Experiments were conducted on the STEW (simultaneous workload) and EEGMAT (mental arithmetic task) datasets. The proposed model achieves 96.87% and 99.40% classification accuracy, outperforming 16 and 7 previously published state-of-the-art techniques on STEW and EEGMAT, respectively. Attention heatmaps and SHAP value analysis provide intuitive visual explanations of the model's decision-making, while systematic ablation studies validate the contribution of each architectural module. This work demonstrates that a carefully engineered stacking ensemble, informed by both deep learning and classical machine learning, can deliver not only improved performance but also enhanced interpretability for EEG-based MWL assessment in real-world applications.
{"title":"Attention-guided deep learning-machine learning and statistical feature fusion for interpretable mental workload classification from EEG.","authors":"Sukanta Majumder, Dibyendu Patra, Subhajit Gorai, Anindya Halder, Utpal Biswas","doi":"10.1007/s11571-025-10392-1","DOIUrl":"https://doi.org/10.1007/s11571-025-10392-1","url":null,"abstract":"<p><p>Accurate assessment of mental workload (MWL) from electroencephalography (EEG) signals is crucial for real-time cognitive monitoring in safety-critical domains such as aviation and human-computer interaction. Although various computational approaches have been proposed, those mostly suffer from limited robustness, interpretability, or fail to fully exploit both temporal and non-linear neural dynamics. This article introduces a novel hybrid deep learning and XGBoost stacking ensemble framework for reliable and interpretable MWL classification from EEG. The proposed pipeline systematically includes preprocessing of raw EEGs, followed by comprehensive feature extraction (time-domain, frequency-domain, wavelet-based, entropy, and fractal dimension features), and subsequent discriminative feature selection phase using ANOVA F-values, yielding a compact set of 200 highly informative features. The proposed architecture consists of dual processing branches: a CNN-BiLSTM-Attention based deep learning branch for automatic learning of spatiotemporal dynamics, and an XGBoost branch for robust classification from engineered features. Predictions from both branches are integrated using a logistic regression stacking ensemble, maximizing complementary strengths and improving generalization. Experiments are conducted on the STEW (simultaneous workload) and EEGMAT (mental arithmetic task) dataset. Proposed model yields 96.87% and 99.40% of classification accuracy by outperforming 16 and 7 previously published state-of-the-art techniques on STEW and EEGMAT dataset respectively. 
Attention heatmaps and SHAP value analysis provide intuitive visual explanations and interpretability of the model's decision making, while systematic ablation studies validate the contribution of each architectural module. This work demonstrates that a carefully engineered stacking ensemble, informed by both deep learning and classical machine learning, capable of delivering not only improved performance but also enhanced interpretability for EEG-based MWL assessment in real-world applications.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"18"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12681509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145707567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
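The logistic-regression stacking step described above can be illustrated with a small toy. Everything below (the branch names, synthetic data, learning rate, epoch count) is hypothetical and only demonstrates the general idea of fitting a meta-learner on two branches' predicted probabilities; it is not the paper's pipeline.

```python
import numpy as np

def fit_stacker(p_deep, p_xgb, y, lr=0.5, epochs=500):
    """Train a binary logistic-regression meta-learner whose two inputs
    are the probability outputs of a 'deep' and an 'XGBoost' branch."""
    X = np.column_stack([p_deep, p_xgb])          # (n, 2) meta-features
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)        # log-loss gradient step
        b -= lr * np.mean(p - y)
    return w, b

def predict_stacker(w, b, p_deep, p_xgb):
    z = np.column_stack([p_deep, p_xgb]) @ w + b
    return (1.0 / (1.0 + np.exp(-z)) >= 0.5).astype(int)

# Toy demo: branch 1 is informative, branch 2 is pure noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
p_deep = np.clip(y * 0.7 + 0.15 + rng.normal(0, 0.1, 200), 0, 1)
p_xgb = rng.uniform(0, 1, 200)
w, b = fit_stacker(p_deep, p_xgb, y)
acc = np.mean(predict_stacker(w, b, p_deep, p_xgb) == y)
```

The meta-learner learns to weight the informative branch heavily, which is the complementary-strengths effect the abstract attributes to stacking.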
Electroencephalography (EEG) can objectively reflect an individual's emotional state. However, owing to substantial inter-subject differences, existing methods generalize poorly when recognizing emotions across individuals. We therefore propose an EEG emotion classification framework based on deep feature aggregation and multi-source domain adaptation. First, we design a deep feature aggregation module that introduces a novel approach for extracting EEG hemispheric asymmetry features and integrates them with the frequency and spatiotemporal characteristics of the EEG signals. Second, we propose a multi-source domain adaptation strategy in which multiple independent feature extraction sub-networks process each domain separately, extracting discriminative features and thereby alleviating feature shift between domains. A domain adaptation strategy then aligns the multiple source domains with the target domain, reducing inter-domain distribution discrepancies and enabling effective cross-domain knowledge transfer. To strengthen learning for target samples near the decision boundary, pseudo-labels are dynamically generated for the unlabeled target-domain samples: using the predictions of multiple classifiers, we compute the average confidence of each pseudo-label group and select the set with the highest confidence as the final label for each target sample. Finally, the mean of the classifiers' outputs serves as the model's prediction. Comprehensive experiments on the publicly available SEED and SEED-IV datasets show that the proposed method outperforms alternative approaches.
{"title":"EEG emotion recognition across subjects based on deep feature aggregation and multi-source domain adaptation.","authors":"Kunqiang Lin, Ying Li, Yiren He, Zihan Jiang, Renjie He, Xianzhe Wang, Hongxu Guo, Lei Guo","doi":"10.1007/s11571-025-10379-y","DOIUrl":"https://doi.org/10.1007/s11571-025-10379-y","url":null,"abstract":"<p><p>Electroencephalography (EEG) can objectively reflect an individual's emotional state. However, due to significant inter-subject differences, existing methods exhibit low generalization performance in emotion recognition across different individuals. Therefore, an EEG emotion classification framework based on deep feature aggregation and multi-source domain adaptation is proposed by us. First, we design a deep feature aggregation module that introduces a novel approach for extracting EEG hemisphere asymmetry features and integrates these features with the frequency and spatiotemporal characteristics of the EEG signals. Additionally, a multi-source domain adaptation strategy is proposed, where multiple independent feature extraction sub-networks are employed to process each domain separately, extracting discriminative features and thereby alleviating the feature shift problem between domains. Then, a domain adaptation strategy is employed to align multiple source domains with the target domain, thereby reducing inter-domain distribution discrepancies and facilitating effective cross-domain knowledge transfer. Simultaneously, to enhance the learning ability of target samples near the decision boundary, pseudo-labels are dynamically generated for the unlabeled samples in the target domain. By leveraging predictions from multiple classifiers, we calculate the average confidence of each pseudo-label group and select the pseudo-label set with the highest confidence as the final label for the target sample. Finally, the mean of the outputs from multiple classifiers is used as the model's final prediction. 
A comprehensive set of experiments was performed using the publicly available SEED and SEED-IV datasets. The findings indicate that the method we proposed outperforms alternative methods.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"8"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644276/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145630828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
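The confidence-based pseudo-labeling step described above reduces to averaging the classifiers' probability outputs and taking the most confident class. The shapes and names below are illustrative, not the paper's code:

```python
import numpy as np

def select_pseudo_labels(probs_per_classifier):
    """Average class probabilities over classifiers, then pseudo-label.

    `probs_per_classifier`: (n_classifiers, n_samples, n_classes) array of
    softmax outputs. The argmax of the mean becomes each sample's
    pseudo-label; the mean probability of that class is its confidence.
    """
    mean_probs = probs_per_classifier.mean(axis=0)   # (n_samples, n_classes)
    labels = mean_probs.argmax(axis=1)
    confidence = mean_probs.max(axis=1)
    return labels, confidence

# Two classifiers, two target samples, two classes.
probs = np.array([
    [[0.9, 0.1], [0.4, 0.6]],    # classifier 1
    [[0.8, 0.2], [0.3, 0.7]],    # classifier 2
])
labels, conf = select_pseudo_labels(probs)
```

A threshold on `confidence` can then keep only high-confidence pseudo-labels for the target-domain samples near the decision boundary.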
Biometric traits are unique physiological or behavioral characteristics of an individual that can be used for identification and authentication. Traditional unimodal biometric systems often suffer from spoofing attacks, sensor noise, forgery, and environmental dependencies. To overcome these limitations, we present a multimodal biometric authentication system that combines electroencephalograph (EEG) signals with handwritten signatures to enhance security, efficiency, and robustness. EEG-based authentication exploits the intrinsic, hard-to-forge nature of brainwave patterns, while signature recognition contributes a complementary behavioral trait. The system processes 14-channel EEG recordings together with signature images, ensuring a seamless fusion of both modalities. By combining physiological and behavioral biometrics, the approach significantly reduces the risks of unimodal authentication, including forgery, spoofing, and sensor failure. Evaluated on a dataset of 30 subjects containing genuine and forged samples, the system achieves 97% accuracy. Designed for small organizations, its modular structure, low-computation algorithms, and simple hardware promote scalable deployment.
{"title":"Multimodal biometric authentication systems: exploring EEG and signature.","authors":"Banee Bandana Das, Chinthala Varnitha Reddy, Ujwala Matha, Chinni Yandapalli, Saswat Kumar Ram","doi":"10.1007/s11571-025-10389-w","DOIUrl":"https://doi.org/10.1007/s11571-025-10389-w","url":null,"abstract":"<p><p>Biometric traits are unique characteristics of an individual's body or behavior that can be used for identification and authentication. Biometric authentication uses unique physiological and behavioral traits for secure identity verification. Traditional unimodal biometric authentication systems often suffer from spoofing attacks, sensor noise, forgery, and environmental dependencies. To overcome these limitations, our work presents multimodal biometric authentication integrated with the characteristics of electroencephalograph (EEG) signals and handwritten signatures to enhance security, efficiency, and robustness. EEG-based authentication uses the brainwave patterns' intrinsic and unforgeable nature, while signature recognition demonstrates an additional behavioral trait for effectiveness. Our system processes EEG data of an individual with 14-channel readings, and the signature with the images ensures a seamless fusion of both modalities.Combining physiological and behavioral biometrics, our approach will significantly decrease the risk of unimodal authentication, including forgery, spoofing, and sensor failures. Our system, evaluated on a dataset of 30 subjects with genuine and forged data, demonstrates a 97% accuracy. 
Designed for small organizations, the modular structure, low computation algorithms, and simplicity of the hardware promote deployment scalability.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"17"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12681505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145707485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
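The abstract does not state how the two modalities are fused; one common option is score-level fusion, where each modality's match score is combined by a convex weighting and compared against a decision threshold. The sketch below is purely illustrative of that option, with hypothetical weights and threshold:

```python
def fused_decision(eeg_score, sig_score, w_eeg=0.5, threshold=0.6):
    """Weighted score-level fusion of two modality match scores in [0, 1].

    Illustrative only: `w_eeg` balances EEG against signature evidence,
    and `threshold` sets the accept/reject boundary. Neither value is
    from the paper.
    """
    fused = w_eeg * eeg_score + (1.0 - w_eeg) * sig_score
    return fused, fused >= threshold

# A strong EEG match can compensate for a mediocre signature match.
score, accepted = fused_decision(0.9, 0.8)
```

Feature-level fusion (concatenating modality embeddings before a single classifier) is the main alternative design; score-level fusion keeps the two pipelines independent, which suits the low-computation deployment the abstract describes.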
Pub Date: 2026-12-01 | Epub Date: 2025-11-14 | DOI: 10.1007/s11571-025-10383-2
Belle Krubitski, Cesar Ceballos, Ty Roachford, Rodrigo F O Pena
Co-transmission, the release of multiple neurotransmitters from a single neuron, is an increasingly recognized phenomenon in the nervous system. A particularly interesting co-transmitted pair is glutamate and GABA, which, when co-released, produce complex biphasic activity patterns that vary with the timing and amplitude differences between the excitatory (AMPA) and inhibitory (GABA-A) signals. Naively, the resulting signal can be interpreted as a simple mechanism that merely adds or removes spikes through excitation or inhibition. However, the interaction of multiple time scales and amplitudes may deliver a richer temporal code that is experimentally difficult to access and interpret. In this work, we employ an extensive computational approach to distinguish these postsynaptic co-transmission patterns and to examine how they interact with dendritic filtering and ionic currents. We specifically model the summation patterns and the flexible dynamics that arise from the many combinations of temporal and amplitude differences in co-transmission. Our results reveal a range of summation patterns that excite, inhibit, or act transiently, which have previously been attributed to the interplay between the intrinsic active and passive electrical properties of the postsynaptic dendritic membrane. Our computational framework provides insight into the interplay between co-transmission and dendritic filtering, enabling a mechanistic understanding of how co-transmitted signals are integrated and processed in neural circuits.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10383-2.
{"title":"Synaptic summation shapes information transfer in GABA-glutamate co-transmission.","authors":"Belle Krubitski, Cesar Ceballos, Ty Roachford, Rodrigo F O Pena","doi":"10.1007/s11571-025-10383-2","DOIUrl":"https://doi.org/10.1007/s11571-025-10383-2","url":null,"abstract":"<p><p>Co-transmission, the release of multiple neurotransmitters from a single neuron, is an increasingly recognized phenomenon in the nervous system. A particularly interesting combination of neurotransmitters exhibiting co-transmission is glutamate and GABA, which, when co-released from neurons, demonstrate complex biphasic activity patterns that vary depending on the time or amplitude differences from the excitatory (AMPA) or inhibitory (GABA<sub>A</sub>) signals. Naively, the outcome signal produced by these differences can be functionally interpreted as simple mechanisms that only add or remove spikes by excitation or inhibition. However, the complex interaction of multiple time-scales and amplitudes may deliver a more complex temporal coding, which is experimentally difficult to access and interpret. In this work, we employ an extensive computational approach to distinguish these postsynaptic co-transmission patterns and how they interact with dendritic filtering and ionic currents. We specifically focus on modeling the summation patterns and their flexible dynamics that arise from the many combinations of temporal and amplitude co-transmission differences. Our results indicate a number of summation patterns that excite, inhibit, and act transiently, which have been previously attributed to the interplay between the intrinsic active and passive electrical properties of the postsynaptic dendritic membrane. 
Our computational framework provides an insight into the complex interplay that arises between co-transmission and dendritic filtering, allowing for a mechanistic understanding underlying the integration and processing of co-transmitted signals in neural circuits.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s11571-025-10383-2.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"6"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618799/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145539075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
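The biphasic summation the abstract describes can be reproduced with a minimal model: two double-exponential postsynaptic potentials of opposite sign, offset in time. All amplitudes, time constants, and the 5 ms onset delay below are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def psp(t, onset, amplitude, tau_rise, tau_decay):
    """Double-exponential postsynaptic potential, zero before `onset`."""
    s = np.clip(t - onset, 0.0, None)
    shape = np.exp(-s / tau_decay) - np.exp(-s / tau_rise)
    return amplitude * shape * (t >= onset)

t = np.arange(0.0, 100.0, 0.1)                                 # time in ms
# Fast depolarizing AMPA component, then a slower hyperpolarizing
# GABA-A component released 5 ms later (illustrative parameters).
ampa = psp(t, onset=10.0, amplitude=1.0, tau_rise=0.5, tau_decay=5.0)
gabaa = psp(t, onset=15.0, amplitude=-0.8, tau_rise=1.0, tau_decay=20.0)
compound = ampa + gabaa                                        # biphasic sum

early_peak = compound[t < 15.0].max()    # excitatory phase before GABA onset
late_trough = compound[t > 20.0].min()   # inhibitory phase after AMPA decays
```

Because the AMPA component decays faster than the GABA-A component, the compound response is first depolarizing and later hyperpolarizing, the biphasic pattern whose timing and amplitude dependence the paper explores systematically.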
As healthcare text data becomes increasingly complex, sentiment analysis must capture both local patterns and global contextual dependencies. In this paper, we propose a hybrid Swin Transformer-BiLSTM-Spatial MLP (Swin-MLP) model that leverages hierarchical attention, shifted-window mechanisms, and spatial MLP layers to better extract features from domain-specific healthcare text. The framework is tested on domain-specific Drug Review and Medical Text datasets, and its performance is assessed against baseline models (BERT, LSTM, and GRU). Our findings show that the Swin-MLP model performs significantly better overall, achieving superior accuracy, precision, recall, F1-score, and AUC, and improving mean accuracy by 1-2% over BERT. Significance tests (McNemar's test and a paired t-test) indicate that the improvements are statistically significant (p < 0.05), supporting the efficacy of the architectural innovations. The results indicate that the model is robust, converges efficiently, and is potentially useful for a wide range of domain-specific sentiment analysis tasks in healthcare. Future work will explore lightweight attention mechanisms, cross-domain multimodal sentiment analysis, privacy-preserving federated learning, and hardware considerations for fast training and inference.
{"title":"Leveraging Swin Transformer for advanced sentiment analysis: a new paradigm.","authors":"Gaurav Kumar Rajput, Saurabh Kumar Srivastava, Namit Gupta","doi":"10.1007/s11571-025-10378-z","DOIUrl":"https://doi.org/10.1007/s11571-025-10378-z","url":null,"abstract":"<p><p>As healthcare text data becomes increasingly complex, it is vital for sentiment analysis to capture local patterns and global contextual dependencies. In this paper, we propose a hybrid Swin Transformer-BiLSTM-Spatial MLP (Swin-MLP) model that leverages hierarchical attention, shifted-window mechanisms, and spatial MLP layers to extract features from domain-specific healthcare text better. The framework is tested on domain-specific datasets for Drug Review and Medical Text, and performance is assessed against baseline models (BERT, LSTM, and GRU). Our findings show that the Swin-MLP model performs significantly better overall, achieving superior metrics (accuracy, precision, recall, F1-score, and AUC) and improving mean accuracy by 1-2% over BERT. Statistical tests to assess significance (McNemar's test and paired t-test) indicate that improvements are statistically significant (p < 0.05), suggesting the efficacy of the architectural innovations. The results' implications indicate that the model is robust, efficiently converges to classification, and is potentially helpful for a wide range of domain-specific sentiment analyses in healthcare. 
We will examine future research directions into exploring lightweight attention mechanisms, cross-domain multimodal sentiment analysis, federated learning to protect privacy, and hardware implications for rapid training and inference.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"13"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660549/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145647175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
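McNemar's test, used above to compare classifiers, needs only the two discordant counts from the paired predictions. A self-contained sketch with the standard continuity correction (the example counts are made up):

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar's test with continuity correction.

    b = samples model A classified correctly and model B incorrectly;
    c = the reverse. Returns the chi-square statistic (1 df) and an
    approximate p-value via the normal CDF.
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    z = sqrt(chi2)
    # Survival function of chi-square(1 df): P(X > chi2) = 2 * (1 - Phi(z)).
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
    return chi2, p

# Hypothetical counts: 40 cases only model A got right, 10 only model B.
chi2, p = mcnemar(40, 10)
```

An asymmetric discordant split like 40 vs. 10 yields a small p-value, i.e., the two models' error patterns differ significantly; a balanced split (e.g., 25 vs. 25) would not.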
Pub Date: 2026-12-01 | Epub Date: 2025-11-24 | DOI: 10.1007/s11571-025-10368-1
Vivekanandan N, Rajeswari K, Yuvraj Kanna Nallu Vivekanandan
Vertigo, a prevalent neurovestibular disorder, arises from dysfunction of the vestibular system and often lacks precise, personalized treatments. This study proposes a bio-inspired spiking neural network (SNN) model that simulates vestibular dysfunction and adaptive recovery using Leaky Integrate-and-Fire (LIF) neurons with spike-timing-dependent plasticity (STDP). The architecture mimics the vestibular pathway through biologically plausible layers (hair cells, afferents, and cerebellar integrators) and models pathological states such as hair-cell hypofunction and synaptic disruption. A reinforcement-based feedback mechanism simulates therapy-induced plasticity, producing a 48-62% drop and a 38% recovery in cerebellar spike activity during adaptation epochs. The model demonstrates real-time feasibility, with an average simulation runtime of 4 s per epoch on standard hardware, and its scalable design is well suited for future deployment on neuromorphic platforms (e.g., Loihi, SpiNNaker). Its modular, interpretable structure enables in silico testing of rehabilitation strategies, real-time monitoring of dysfunction, and future personalization using clinical datasets. This work establishes a computational foundation for AI-driven vestibular therapy that is adaptive, explainable, and hardware-compatible.
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10368-1.
{"title":"Bio-inspired spiking neural network for modeling and optimizing adaptive vertigo therapy.","authors":"Vivekanandan N, Rajeswari K, Yuvraj Kanna Nallu Vivekanandan","doi":"10.1007/s11571-025-10368-1","DOIUrl":"https://doi.org/10.1007/s11571-025-10368-1","url":null,"abstract":"<p><p>Vertigo, a prevalent neurovestibular disorder, arises from dysfunction in the vestibular system and often lacks precise, personalized treatments. This study proposes a bio-inspired spiking neural network (SNN) model that simulates vestibular dysfunction and adaptive recovery using Leaky Integrate-and-Fire (LIF) neurons with spike-timing-dependent plasticity (STDP). The architecture mimics the vestibular pathway through biologically plausible layers: hair cells, afferents, and cerebellar integrators, and models pathological states such as hair cell hypofunction and synaptic disruption. A reinforcement-based feedback mechanism enables the simulation of therapy-induced plasticity, resulting in a 48-62% drop and 38% recovery in cerebellar spike activity during adaptation epochs. The model demonstrates real-time feasibility, with an average simulation runtime of 4 s per epoch on standard hardware. Its design is scalable and well-suited for future deployment on neuromorphic platforms (e.g., Loihi, SpiNNaker). Its modular and interpretable design enables in silico testing of rehabilitation strategies, real-time monitoring of dysfunction, and future personalization using clinical datasets. 
This work establishes a computational foundation for AI-driven vestibular therapy that is adaptive, explainable, and hardware compatible.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s11571-025-10368-1.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"11"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12644390/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145630750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
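The abstract above builds its vestibular pathway from Leaky Integrate-and-Fire (LIF) neurons with spike-timing-dependent plasticity (STDP). The following is a minimal illustrative sketch of those two standard components, not the authors' implementation: all parameter values (membrane time constant, thresholds, STDP amplitudes) are generic textbook choices, and the function names are our own.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron driven by an input-current array (A).

    Integrates dv/dt = (-(v - v_rest) + R*I) / tau with Euler steps of dt
    seconds; emits a spike and resets whenever v crosses v_thresh.
    Returns the list of spike times in seconds.
    """
    v = v_rest
    spikes = []
    for step, i_t in enumerate(i_input):
        v += dt * (-(v - v_rest) + r_m * i_t) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

def stdp_update(w, dt_pre_post, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, w_max=1.0):
    """Pair-based STDP rule: potentiate when the presynaptic spike precedes
    the postsynaptic one (dt_pre_post > 0), depress otherwise; weights are
    clipped to [0, w_max]."""
    if dt_pre_post > 0:
        w += a_plus * np.exp(-dt_pre_post / tau_plus)
    else:
        w -= a_minus * np.exp(dt_pre_post / tau_minus)
    return float(np.clip(w, 0.0, w_max))
```

In this reading, "hair cell hypofunction" would correspond to scaling down `i_input`, and "synaptic disruption" to perturbing the weights that `stdp_update` then adapts during the therapy-feedback epochs.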
Pub Date : 2026-12-01Epub Date: 2025-11-28DOI: 10.1007/s11571-025-10346-7
Changsoo Shin
Modern AI systems excel at pattern recognition and task execution, but they often fall short of replicating the layered, self-referential structure of human thought that unfolds over time. In this paper, we present a mathematically grounded and conceptually simple framework based on smoothed step functions (sigmoid approximations of Heaviside functions) to model the recursive development of mental activity. Each cognitive layer becomes active at a specific temporal threshold, with the abruptness or gradualness of activation governed by an impressiveness parameter [Formula: see text], which we interpret as a measure of emotional salience or situational impact. Small values of [Formula: see text] represent intense or traumatic experiences, producing sharp and impulsive responses, while large values correspond to persistent background stress, yielding slow but sustained cognitive activation. We formulate the recursive dynamics of these cognitive layers and demonstrate how they give rise to layered cognition, time-based attention, and adaptive memory reinforcement. Unlike conventional memory models, our approach captures thoughts and recall events through a recursive, impressiveness-sensitive pathway, leading to context-dependent memory traces. This recursive structure offers a new perspective on how awareness and memory evolve over time, and provides a promising foundation for designing artificial systems capable of simulating recursive, temporally grounded consciousness.
{"title":"Irreversibility of recursive Heaviside memory functions: a distributional perspective on structural cognition.","authors":"Changsoo Shin","doi":"10.1007/s11571-025-10346-7","DOIUrl":"10.1007/s11571-025-10346-7","url":null,"abstract":"<p><p>Modern AI systems excel at pattern recognition and task execution, but they often fall short of replicating the layered, self-referential structure of human thought that unfolds over time. In this paper, we present a mathematically grounded and conceptually simple framework based on smoothed step functions-sigmoid approximations of Heaviside functions-to model the recursive development of mental activity. Each cognitive layer becomes active at a specific temporal threshold, with the abruptness or gradualness of activation governed by an impressiveness parameter [Formula: see text], which we interpret as a measure of emotional salience or situational impact. Small values of [Formula: see text] represent intense or traumatic experiences, producing sharp and impulsive responses, while large values correspond to persistent background stress, yielding slow but sustained cognitive activation. We formulate the recursive dynamics of these cognitive layers and demonstrate how they give rise to layered cognition, time-based attention, and adaptive memory reinforcement. Unlike conventional memory models, our approach captures thoughts and recall events through a recursive, impressiveness-sensitive pathway, leading to context-dependent memory traces. 
This recursive structure offers a new perspective on how awareness and memory evolve over time, and provides a promising foundation for designing artificial systems capable of simulating recursive, temporally grounded consciousness.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"14"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12662915/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145647188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
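The core object in the abstract above is a sigmoid approximation of the Heaviside step, gated recursively across layers and shaped by an impressiveness parameter. A minimal sketch of that idea follows; the symbol `eps` stands in for the paper's unnamed impressiveness parameter, and the product-of-gates form of `layered_activation` is our illustrative reading of "recursive" layering, not the paper's exact formulation.

```python
import math

def smooth_step(t, t0, eps):
    """Sigmoid approximation of the Heaviside step H(t - t0).

    eps plays the role of the impressiveness parameter: small eps gives a
    sharp, near-instantaneous activation (intense or traumatic experience),
    while large eps gives a slow, sustained rise (background stress).
    """
    return 1.0 / (1.0 + math.exp(-(t - t0) / eps))

def layered_activation(t, thresholds, eps):
    """Layered cognition as gated steps: layer k only becomes fully active
    once every earlier layer has crossed its own temporal threshold, so the
    overall activation is the product of the per-layer smooth steps."""
    a = 1.0
    for t0 in thresholds:
        a *= smooth_step(t, t0, eps)
    return a
```

For example, with thresholds [1, 3, 5] and a small `eps`, the composite activation stays near zero until t passes the last threshold and then switches on sharply, whereas a large `eps` spreads the same transition over a long interval.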
Pub Date : 2026-12-01Epub Date: 2025-11-14DOI: 10.1007/s11571-025-10377-0
Yuki Tomoda, Ichiro Tsuda, Yutaka Yamaguti
Functional differentiation in the brain emerges as distinct regions specialize and is key to understanding brain function as a complex system. Previous research has modeled this process using artificial neural networks with specific constraints. Here, we propose a novel approach that induces functional differentiation in recurrent neural networks by minimizing mutual information between neural subgroups via mutual information neural estimation. We apply our method to a 2-bit working memory task and a chaotic signal separation task involving Lorenz and Rössler time series. Analysis of network performance, correlation patterns, and weight matrices reveals that mutual information minimization yields high task performance alongside clear functional modularity and moderate structural modularity. Importantly, our results show that functional differentiation, which is measured through correlation structures, emerges earlier than structural modularity defined by synaptic weights. This suggests that functional specialization precedes and probably drives structural reorganization within developing neural networks. Our findings provide new insights into how information-theoretic principles may govern the emergence of specialized functions and modular structures during artificial and biological brain development.
{"title":"Emergence of functionally differentiated structures via mutual information minimization in recurrent neural networks.","authors":"Yuki Tomoda, Ichiro Tsuda, Yutaka Yamaguti","doi":"10.1007/s11571-025-10377-0","DOIUrl":"10.1007/s11571-025-10377-0","url":null,"abstract":"<p><p>Functional differentiation in the brain emerges as distinct regions specialize and is key to understanding brain function as a complex system. Previous research has modeled this process using artificial neural networks with specific constraints. Here, we propose a novel approach that induces functional differentiation in recurrent neural networks by minimizing mutual information between neural subgroups via mutual information neural estimation. We apply our method to a 2-bit working memory task and a chaotic signal separation task involving Lorenz and Rössler time series. Analysis of network performance, correlation patterns, and weight matrices reveals that mutual information minimization yields high task performance alongside clear functional modularity and moderate structural modularity. Importantly, our results show that functional differentiation, which is measured through correlation structures, emerges earlier than structural modularity defined by synaptic weights. This suggests that functional specialization precedes and probably drives structural reorganization within developing neural networks. 
Our findings provide new insights into how information-theoretic principles may govern the emergence of specialized functions and modular structures during artificial and biological brain development.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"20 1","pages":"5"},"PeriodicalIF":3.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12618794/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145538935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
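The abstract above reports that functional differentiation is "measured through correlation structures" across neural subgroups. As a rough sketch of that kind of measure (a simplified stand-in for the paper's analysis, not their method, and without the MINE-based mutual-information estimator itself), one can score modularity as mean within-group minus mean between-group absolute correlation of unit activity:

```python
import numpy as np

def functional_modularity(x, groups):
    """Correlation-based functional modularity score.

    x: (timesteps, units) activity matrix; groups: group label per unit.
    Returns mean |corr| within subgroups minus mean |corr| between them,
    so strongly differentiated subgroups score near 1 and unstructured
    activity scores near 0.
    """
    c = np.abs(np.corrcoef(x.T))
    groups = np.asarray(groups)
    same = groups[:, None] == groups[None, :]
    off_diag = ~np.eye(len(groups), dtype=bool)
    within = c[same & off_diag].mean()
    between = c[~same].mean()
    return within - between

# Toy demonstration: two subgroups, each driven by its own latent signal,
# should show high within-group and low between-group correlation.
rng = np.random.default_rng(0)
t = 500
drivers = rng.standard_normal((t, 2))
activity = np.hstack([
    drivers[:, [0]] + 0.1 * rng.standard_normal((t, 4)),
    drivers[:, [1]] + 0.1 * rng.standard_normal((t, 4)),
])
labels = [0] * 4 + [1] * 4
score = functional_modularity(activity, labels)
```

Under the paper's account, this correlation-based score would rise early in training, while a structural score computed the same way from the recurrent weight matrix would lag behind it.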