Decoding motor imagery electroencephalogram (MI-EEG) signals is fundamental to the development of brain-computer interface (BCI) systems. However, robust decoding remains a challenge due to the inherent complexity and variability of MI-EEG signals. This study proposes the Temporal Convolutional Attention Network (TCANet), a novel end-to-end model that hierarchically captures spatiotemporal dependencies by progressively integrating local, fused, and global features. Specifically, TCANet employs a multi-scale convolutional module to extract local spatiotemporal representations across multiple temporal resolutions. A temporal convolutional module then fuses and compresses these multi-scale features while modeling both short- and long-term dependencies. Subsequently, a stacked multi-head self-attention mechanism refines the global representations, followed by a fully connected layer that performs MI-EEG classification. The proposed model was systematically evaluated on the BCI IV-2a and IV-2b datasets under both subject-dependent and subject-independent settings. In subject-dependent classification, TCANet achieved accuracies of 83.06% and 88.52% on BCI IV-2a and IV-2b respectively, with corresponding Kappa values of 0.7742 and 0.7703, outperforming multiple representative baselines. In the more challenging subject-independent setting, TCANet achieved competitive performance on IV-2a and demonstrated potential for improvement on IV-2b. The code is available at https://github.com/snailpt/TCANet.
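The Kappa values quoted above are Cohen's kappa, which corrects raw accuracy for chance agreement. A minimal sketch of the computation from a confusion matrix (the matrix values below are invented for illustration, not taken from the paper):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n                                       # accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2  # chance agreement
    return (observed - expected) / (1.0 - expected)

# Toy 4-class confusion matrix (values invented for illustration).
cm = np.array([[50,  5,  3,  2],
               [ 4, 48,  5,  3],
               [ 3,  4, 49,  4],
               [ 2,  3,  4, 51]])
print(round(cohens_kappa(cm), 4))   # → 0.7667
```

Here the raw accuracy is 0.825, but chance agreement of 0.25 across four balanced classes pulls kappa down to about 0.77, close to the range the abstract reports.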
"TCANet: a temporal convolutional attention network for motor imagery EEG decoding." Wei Zhao, Haodong Lu, Baocan Zhang, Xinwang Zheng, Wenfeng Wang, Haifeng Zhou. Cognitive Neurodynamics 19(1):91 (2025). DOI: 10.1007/s11571-025-10275-5.
Pub Date: 2025-12-01. Epub Date: 2025-06-24. DOI: 10.1007/s11571-025-10283-5
Xuefen Lin, Linhui Fan, Yifan Gu, Zhixian Wu
In recent years, emotion recognition, particularly EEG-based emotion recognition, has found widespread application across various domains. Enhancing EEG data processing and emotion recognition models remains a key research focus in this field. This paper presents an emotion recognition framework combining the CUSUM algorithm-based adaptive window selection technique with the convolutional attention-enhanced Kolmogorov-Arnold Networks (CA-KAN). The improved CUSUM algorithm effectively extracts the most emotion-relevant segments from raw EEG data. Furthermore, by enhancing the KAN network, the CA-KAN model achieves both high accuracy and efficiency in emotion recognition. The proposed framework achieved peak classification accuracies of 94.63% and 94.73% on the SEED and SEED-IV datasets, respectively. Additionally, the framework offers a lightweight advantage, demonstrating significant potential for real-world applications, including medical emotion monitoring and driver emotion detection.
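The CUSUM (cumulative sum) algorithm underlying the window selection accumulates deviations from a reference level and flags a segment once the sum crosses a threshold. A generic two-sided sketch on a synthetic signal (the `drift` and `threshold` parameters are illustrative; the paper uses an improved variant):

```python
import numpy as np

def cusum_changepoints(x, target=None, drift=0.5, threshold=4.0):
    """Two-sided CUSUM: return indices where the cumulative deviation
    from `target` (default: mean of x) exceeds `threshold`."""
    x = np.asarray(x, dtype=float)
    if target is None:
        target = x.mean()
    pos = neg = 0.0
    alarms = []
    for i, v in enumerate(x):
        pos = max(0.0, pos + (v - target) - drift)   # accumulates upward shifts
        neg = max(0.0, neg - (v - target) - drift)   # accumulates downward shifts
        if pos > threshold or neg > threshold:
            alarms.append(i)
            pos = neg = 0.0                          # restart after an alarm
    return alarms

# Synthetic signal: mean shifts from 0 to 2 at sample 50.
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 0.5, 50), rng.normal(2, 0.5, 50)])
print(cusum_changepoints(sig, target=0.0))
```

The detector stays quiet on the first half and fires shortly after the mean shift, which is the behavior an adaptive window selector exploits to keep only emotion-relevant segments.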
"Emotion recognition framework based on adaptive window selection and CA-KAN." Cognitive Neurodynamics 19(1):100 (2025). DOI: 10.1007/s11571-025-10283-5.
Pub Date: 2025-12-01. Epub Date: 2025-08-19. DOI: 10.1007/s11571-025-10315-0
Kuo-Shou Chiu, Jyh-Cheng Jeng, Tongxing Li, Fernando Córdova-Lepe
This paper investigates the global exponential stability and periodicity of the Cohen-Grossberg neural network model with generalized piecewise constant delay. By applying Schaefer's fixed-point theorem, a sufficient condition for the existence of periodic solutions in the model is established. Additionally, by constructing appropriate differential inequalities with generalized piecewise constant delay, sufficient conditions for the global exponential stability of the model are obtained. Finally, computer simulations are conducted to illustrate a globally exponentially stable periodic Cohen-Grossberg neural network model, thereby confirming the feasibility and effectiveness of the proposed results.
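As a numerical illustration of the kind of result proved here, a small Cohen-Grossberg network with the piecewise constant argument γ(t) = ⌊t⌋ can be simulated directly. The parameters below are arbitrary choices satisfying typical stability conditions, not the paper's examples:

```python
import numpy as np

# Arbitrary stable parameters (not from the paper): amplification a(x) > 0,
# linear rate b(x) = 2x, bounded activation f = tanh, small weights W.
a = lambda x: 1.0 + 0.5 * np.tanh(x) ** 2
b = lambda x: 2.0 * x
f = np.tanh
W = np.array([[0.3, -0.2],
              [0.1,  0.2]])

def simulate(x0, T=20.0, dt=0.001):
    """Euler integration of x' = a(x) * (-b(x) + W f(x(gamma(t)))) with the
    piecewise constant argument gamma(t) = floor(t)."""
    x = np.array(x0, dtype=float)
    hist = {0: x.copy()}                 # state sampled at integer times
    for k in range(int(T / dt)):
        t = k * dt
        xd = hist[int(t)]                # delayed state x(floor(t))
        x = x + dt * a(x) * (-b(x) + W @ f(xd))
        key = int((k + 1) * dt)
        if key not in hist:
            hist[key] = x.copy()
    return x

# Different initial states converge to the same (zero) equilibrium.
print(simulate([1.5, -1.0]), simulate([-2.0, 0.5]))
```

With these weights the decay term dominates the delayed coupling, so trajectories from different initial conditions contract toward the origin, mirroring the global exponential stability the paper establishes analytically.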
"Global exponential stability of periodic solutions for Cohen-Grossberg neural networks involving generalized piecewise constant delay." Cognitive Neurodynamics 19(1):129 (2025). DOI: 10.1007/s11571-025-10315-0.
Pub Date: 2025-12-01. Epub Date: 2025-06-17. DOI: 10.1007/s11571-025-10285-3
Baolong Sun, Yihong Wang, Xuying Xu, Xiaochuan Pan
The visual system can automatically and implicitly learn the statistical regularities (temporal and/or spatial) that characterize a visual scene, an ability referred to as visual statistical learning (VSL). VSL can group several objects with fixed statistical properties into a chunk. This complex process relies on the coordinated involvement of multiple brain regions. Although behavioral experiments have explored the cognitive functions of VSL, its computational mechanisms remain poorly understood. To address this issue, this study proposes a coupled shape-position recurrent neural network model based on the anatomical structure of the visual system to explain how chunk information is learned and represented in neural networks. The model comprises three core modules: the position network, which encodes object position information; the shape network, which encodes object shape information; and the decision network, which integrates neuronal activity in the position and shape networks to make decisions. The model successfully simulates the results of a classic spatial VSL experiment. The distribution of neural firing rates in the decision network differs significantly between chunk and non-chunk conditions: neurons in the chunk condition exhibit stronger firing rates than those in the non-chunk condition. Furthermore, after the model learns a scene containing both chunk and non-chunk stimuli, neurons in the position network selectively encode far and near stimuli, respectively, whereas neurons in the shape network distinguish between chunks and non-chunks. The chunk-encoding neurons respond selectively to specific chunks. These results indicate that the proposed model learns spatial regularities of the stimuli to discriminate chunks from non-chunks, and that neurons in the shape network respond selectively to chunk and non-chunk information.
These findings offer important theoretical insights into the representation mechanisms of chunk information in neural networks and propose a new framework for modeling spatial VSL.
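The intuition that coherent statistics across pathways can separate chunks from non-chunks can be caricatured in a few lines. This is a loose sketch under invented assumptions, not the authors' recurrent network:

```python
import numpy as np

# Loose sketch, not the authors' model: a decision unit multiplicatively
# combines a "shape" stream and a "position" stream. A chunk drives the two
# streams coherently; a non-chunk drives them independently.
def decision_drive(coherent, n=2000, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    shape = 1.0 + noise * rng.standard_normal(n)
    position = shape if coherent else 1.0 + noise * rng.standard_normal(n)
    return float(np.mean(shape * position))   # mean input to the decision unit

print(decision_drive(True), decision_drive(False))   # chunk drive > non-chunk drive
```

Because the product of two coherent streams picks up their shared variance, the chunk condition yields a higher mean drive, echoing the stronger decision-network firing rates reported for chunks.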
"Visual statistical learning based on a coupled shape-position recurrent neural network model." Cognitive Neurodynamics 19(1):96 (2025). DOI: 10.1007/s11571-025-10285-3.
Pub Date: 2025-12-01. Epub Date: 2025-09-20. DOI: 10.1007/s11571-025-10337-8
Rashmi Mishra, R K Agrawal, Jyoti Singh Kirar
Motor imagery classification is an essential component of brain-computer interface systems, interpreting and recognizing the brain signals a subject generates while visualizing motor imagery tasks. The objective of this work is to develop a novel DL model that extracts discriminative features for better generalization performance in recognizing motor imagery tasks. This paper presents a novel multi-scale spatio-temporal network (MSST-EEGNet) that extracts discriminative temporal, spectral, and spatial features for motor imagery task classification. The proposed MSST-EEGNet model includes three modules: the inception module with dilated convolution, the temporal pyramid pooling module, and the classification module. Multi-scale temporal features along with spatial features are extracted using the inception block with the dilated convolution module. A set of multi-level fine-grained and coarse-grained features is extracted using a temporal pyramid pooling module. Further, categorical cross-entropy combined with center loss is used as the loss function. Experiments are carried out on three benchmark datasets: the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset. The evaluation results show that the proposed MSST-EEGNet model outperforms eight existing DL models in classification accuracy for subject-specific and cross-session settings. It also outperforms eight existing DL models and six existing transfer-learning models for the cross-subject setting. For subject-specific classification, the proposed MSST-EEGNet model achieved accuracies of 0.8426 ± 0.1061, 0.7779 ± 0.0938, and 0.7365 ± 0.1477 on the BCI Competition IV-2a, BCI Competition IV-2b, and OpenBMI datasets, respectively. For the cross-session setting, it achieved accuracies of 0.7709 ± 0.1098, 0.7524 ± 0.1017, and 0.6860 ± 0.0990 on the same three datasets, respectively. For the cross-subject setting, it achieved accuracies of 0.7288 ± 0.0730, 0.8161 ± 0.963, and 0.7075 ± 0.0746, respectively. Furthermore, a non-parametric Friedman statistical test demonstrates statistically significant superior performance of the proposed MSST-EEGNet model over the existing models.
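Temporal pyramid pooling, one of the modules named above, pools a variable-length feature map at several temporal scales and concatenates the results into a fixed-length vector. A minimal numpy sketch (the bin counts and average-pooling choice are illustrative, not the paper's configuration):

```python
import numpy as np

def temporal_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Average-pool a (channels, time) map into n bins for each n in `levels`
    and concatenate: output length = channels * sum(levels), for any time length."""
    c, t = feature_map.shape
    pooled = []
    for n_bins in levels:
        edges = np.linspace(0, t, n_bins + 1).astype(int)   # near-equal time bins
        for lo, hi in zip(edges[:-1], edges[1:]):
            pooled.append(feature_map[:, lo:hi].mean(axis=1))
    return np.concatenate(pooled)

x = np.arange(2 * 8, dtype=float).reshape(2, 8)   # toy (2 channels, 8 time steps)
print(temporal_pyramid_pool(x).shape)             # → (14,)
```

The coarsest level (one bin) captures a global summary while the finer levels keep coarse temporal order, which is how the module mixes fine-grained and coarse-grained features.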
"MSST-EEGNet: multi-scale spatio-temporal feature extraction using inception and temporal pyramid pooling for motor imagery classification." Cognitive Neurodynamics 19(1):150 (2025). DOI: 10.1007/s11571-025-10337-8.
Pub Date: 2025-12-01. Epub Date: 2025-09-27. DOI: 10.1007/s11571-025-10318-x
Tao Liang, Junxiao Yu, Keke Shi, Yihao Yao, Jie Li, Bin Liu, Wei Wang, Chengyu Liu, Liangcheng Qu, Kuiying Yin, Wentao Xiang, Jianqing Li
This work aimed to develop and validate an emotion-inducing video dataset for the Chinese elderly. The dataset was constructed through video collection, psychological evaluation, and elderly examination. Eighteen videos across six emotions (neutrality, sadness, anger, happiness, boredom, and tension) were selected for emotional induction. The effectiveness of the dataset was evaluated in 37 subjects in two groups, 21 healthy controls (HC group) and 16 individuals with mild cognitive impairment (MCI group), assessed in a three-session experiment. Each session comprised one pretest and six emotion-inducing videos. Electrocardiogram (ECG) and electroencephalography (EEG) signals were recorded synchronously. After viewing each video, the subjects provided self-reports of discrete emotion labels, valence, and arousal scores using a modified Self-Assessment Manikin scale. Discrete emotion analysis, valence/arousal analysis, and ECG feature analysis were conducted with ANOVA; EEG feature analysis was assessed with a linear mixed-effects model. Discrete emotion analysis confirmed that happiness and sadness induced by the dataset show high agreement rates (e.g., happiness: HC 0.79, MCI 0.85; sadness: HC 0.81, MCI 0.71), whereas boredom (HC 0.38, MCI 0.29) showed comparatively lower consistency. Valence/arousal analysis revealed significant group differences for the tension and boredom emotions. ECG feature analysis revealed significant differences in baseline-normalized mean heart rate between the HC and MCI groups in specific sessions. EEG feature analysis revealed that the MCI group exhibited higher relative band power than the HC group in the δ and θ bands.
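Relative band power, the EEG feature analyzed above, is the fraction of spectral power falling in a target band. A generic periodogram-based sketch on a synthetic trace (not the authors' exact preprocessing):

```python
import numpy as np

# Generic relative band power via an rFFT periodogram; band edges follow
# common EEG conventions, not necessarily the authors' exact definitions.
def relative_band_power(sig, fs, band, total=(0.5, 45.0)):
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total[0]) & (freqs < total[1])
    return psd[in_band].sum() / psd[in_total].sum()

fs = 250
t = np.arange(0, 4, 1 / fs)
# Synthetic trace: strong 2 Hz (delta) plus a weaker 10 Hz (alpha) component.
sig = 2.0 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
delta = relative_band_power(sig, fs, band=(0.5, 4.0))
theta = relative_band_power(sig, fs, band=(4.0, 8.0))
print(delta, theta)
```

Here the dominant 2 Hz component drives the δ fraction toward one while the θ band, which contains no signal energy, stays near zero; the group comparison in the study applies a mixed-effects model over features of this kind.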
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10318-x.
"Construction and evaluation of an emotion-inducing video dataset towards Chinese elderly healthy controls and individuals with mild cognitive impairment." Cognitive Neurodynamics 19(1):154 (2025). DOI: 10.1007/s11571-025-10318-x.
Seizure prediction based on electroencephalogram (EEG) for people with epilepsy, a common brain disorder worldwide, has great potential for life quality improvement. To alleviate the high degree of heterogeneity among patients, several works have attempted to learn common seizure feature distributions based on the idea of domain adaptation to enhance the generalization ability of the model. However, existing methods ignore the inherent inter-patient discrepancy within the source patients, resulting in disjointed distributions that impede effective domain alignment. To eliminate this effect, we introduce the concept of multi-source domain adaptation (MSDA), considering each source patient as a separate domain. To avoid additional model complexity from MSDA, we propose a continuous domain adaptation approach for seizure prediction based on the convolutional neural network (CNN), which performs sequential training on multiple source domains. To relieve the model catastrophic forgetting during sequential training, we replay similar samples from each source domain, while learning common feature representations based on subdomain alignment. Evaluated on a publicly available epilepsy dataset, our proposed method attains a sensitivity of 85.0% and a false alarm rate (FPR) of 0.224/h. Compared to the prevailing domain adaptation paradigm and existing domain adaptation works in the field, the proposed method can efficiently capture the knowledge of different patients, extract better common seizure representations, and achieve state-of-the-art performance.
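The similar-sample replay idea can be sketched as a buffer that, when training moves to a new source domain, returns the stored samples most similar to the current batch. This toy version (cosine similarity over plain feature vectors, invented domain names) is illustrative only; the paper operates on CNN features with subdomain alignment:

```python
import numpy as np

class ReplayBuffer:
    """Stores (feature, label, domain) triples from already-seen source domains."""
    def __init__(self):
        self.samples = []

    def store(self, feats, labels, domain_id):
        for f, y in zip(feats, labels):
            self.samples.append((np.asarray(f, dtype=float), y, domain_id))

    def replay_similar(self, batch_feats, k=2):
        """Return the k stored samples most cosine-similar to the batch mean."""
        if not self.samples:
            return []
        q = np.mean(batch_feats, axis=0)
        q = q / (np.linalg.norm(q) + 1e-12)
        def sim(entry):
            f = entry[0]
            return float(q @ (f / (np.linalg.norm(f) + 1e-12)))
        return sorted(self.samples, key=sim, reverse=True)[:k]

buf = ReplayBuffer()
buf.store([[1, 0], [0.9, 0.1]], labels=[0, 0], domain_id="patient_A")
buf.store([[0, 1], [0.1, 0.9]], labels=[1, 1], domain_id="patient_B")
picked = buf.replay_similar(np.array([[1.0, 0.05]]), k=2)
print([d for *_, d in picked])   # → ['patient_A', 'patient_A']
```

Replaying samples that resemble the current domain is what counteracts catastrophic forgetting during the sequential pass over source patients.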
"Cross-patient seizure prediction via continuous domain adaptation and similar sample replay." Ziye Zhang, Aiping Liu, Yikai Gao, Ruobing Qian, Xun Chen. Cognitive Neurodynamics 19(1):26 (2025). DOI: 10.1007/s11571-024-10216-8.
Pub Date: 2025-12-01. Epub Date: 2025-02-20. DOI: 10.1007/s11571-025-10222-4
Weixiong Jiang, Lin Li, Yulong Xia, Sajid Farooq, Gang Li, Shuaiqi Li, Jinhua Xu, Sailing He, Xiangyu Wu, Shoujun Huang, Jing Yuan, Dexing Kong
Deception is a complex behavior that requires greater cognitive effort than truth-telling, with brain states dynamically adapting to external stimuli and cognitive demands. Investigating these brain states provides valuable insights into the brain's temporal and spatial dynamics. In this study, we designed an experimental paradigm to efficiently simulate lying and constructed a temporal network of brain states. We applied the Louvain community clustering algorithm to identify characteristic brain states associated with lie-telling, inverse-telling, and truth-telling. Our analysis revealed six representative brain states with unique spatial characteristics. Notably, two distinct states, termed truth-preferred and lie-preferred, exhibited significant differences in fractional occupancy and average dwelling time. The truth-preferred state showed higher occupancy and dwelling time during truth-telling, while the lie-preferred state showed these characteristics during lie-telling. Using the average z-scored BOLD signals of these two states, we applied generalized linear models with elastic net regularization, achieving a classification accuracy of 88.46%, with a sensitivity of 92.31% and a specificity of 84.62% in distinguishing deception from truth-telling. These findings reveal representative brain states for lie-telling, inverse-telling, and truth-telling, highlighting two states specifically associated with truthful and deceptive behaviors. The spatial characteristics and dynamic attributes of these brain states indicate their potential as biomarkers of cognitive engagement in deception.
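Fractional occupancy and average dwelling time, the two state metrics compared above, can be computed directly from a discrete state sequence:

```python
import numpy as np

def occupancy_and_dwell(states, state):
    """Fractional occupancy and average dwelling time (in samples) of `state`
    within a discrete brain-state sequence."""
    states = np.asarray(states)
    occ = np.mean(states == state)
    # Collect the lengths of consecutive runs of `state`.
    runs, current = [], 0
    for s in states:
        if s == state:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    dwell = float(np.mean(runs)) if runs else 0.0
    return float(occ), dwell

# Toy state sequence (labels are arbitrary, not the paper's six states).
seq = [1, 1, 2, 2, 2, 1, 3, 1, 1, 1]
occ, dwell = occupancy_and_dwell(seq, state=1)
print(occ, dwell)   # → 0.6 2.0 (state 1 fills 6/10 samples in runs of 2, 1, 3)
```

Occupancy measures how much total time a state claims; dwelling time measures how long it persists once entered, so the two can dissociate, which is why the study reports both.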
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10222-4.
"Neural dynamics of deception: insights from fMRI studies of brain states." Cognitive Neurodynamics 19(1):42 (2025). DOI: 10.1007/s11571-025-10222-4.
This study investigated the effects of physical activity on cognitive and motor function in patients with Alzheimer's disease. Randomized controlled trials (RCTs) were searched in the PubMed, EMBASE, Science Direct, and Web of Science databases up to October 2024. The main evaluation tools were the Mini-Mental State Examination (MMSE), the Timed Up and Go test (TUG), the 6-minute walk test (6MWT), and the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-cog). Mean differences (MDs) with 95% confidence intervals (CIs) were calculated. A total of 25 RCTs involving 2213 participants were included. The MMSE score in the exercise group was higher than that in the control group (MD = 2.24, p = 0.002). Aerobic exercise (MD = 2.83, p = 0.01) and combined exercise (MD = 3.09, p = 0.03) were significantly better than the control condition, whereas strength exercise showed no significant difference between the two groups (MD = 0.54, p = 0.48). At low intensity (MD = 5.75, p < 0.001) and moderate intensity (MD = 1.74, p = 0.008), MMSE scores in the exercise group were higher than those in the control group, whereas high-intensity exercise showed no benefit (MD = 0, p = 0.99). On the 6MWT, aerobic exercise scores were higher in the exercise group (MD = 51.55, p = 0.03), while combined exercise showed no significant difference between the two groups (MD = 62.76, p = 0.45). The TUG (MD = -0.76, p = 0.06) and the ADAS-cog (MD = -1.99, p = 0.23) showed no significant difference between the two groups. Low-intensity aerobic exercise improved cognitive and motor function in patients with Alzheimer's disease, while strength exercise and high-intensity exercise had little effect.
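The pooled MDs and 95% CIs reported in such meta-analyses are typically obtained by inverse-variance weighting; a minimal sketch of random-effects pooling via the standard DerSimonian-Laird estimator, with hypothetical per-study inputs (this is the conventional method, not necessarily the exact software pipeline used here):

```python
import numpy as np

def pooled_md(md, se):
    """Random-effects pooled mean difference (DerSimonian-Laird) with 95% CI."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect inverse-variance weights
    md_fixed = np.sum(w * md) / np.sum(w)
    q = np.sum(w * (md - md_fixed)**2)               # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(md) - 1)) / c)         # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical MMSE mean differences and standard errors from three trials
est, (lo, hi) = pooled_md([2.1, 2.8, 1.6], [0.6, 0.9, 0.7])
print(round(est, 2), round(lo, 2), round(hi, 2))
```

When the studies disagree more than chance predicts (Q exceeds its degrees of freedom), tau2 becomes positive and the CI widens accordingly.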
Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10326-x.
Effects of physical exercise on cognitive and motor function in patients with Alzheimer's disease: a meta-analysis based on randomized controlled trials. Yuxin Gai, Xuelian Dai, Mengyi Qian, Guojian Lin, Piaorou Pan, Tianfu Dai, Yuedan Luo, Lijing Su. Cognitive Neurodynamics 19(1):133 (2025). DOI: 10.1007/s11571-025-10326-x. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373596/pdf/
Pub Date: 2025-12-01. Epub Date: 2025-09-16. DOI: 10.1007/s11571-025-10334-x
Yuhua Xu, Ying Du, Xuying Xu, Yihong Wang
The human brain constitutes a highly complex nonlinear network, comprising billions of interconnected neurons capable of rapid and precise responses to diverse internal and external perturbations. Disruptions in neural connectivity or functional impairments within this network can lead to neurological disorders, including epilepsy. In this study, we propose an improved double-column neural model, derived from the Jansen-Rit (JR) framework, to investigate the effects of external stimuli on epileptiform electroencephalogram (EEG) activity across multiple cortical regions. Our model specifically targets the signal transmission delays and dynamic synaptic interactions within and between cortical columns. Simulations demonstrate that the improved double-column model successfully reproduces diverse EEG phenomena, including alpha rhythms and epileptiform discharges, across distinct cortical layers. When configured within the same cortical region, the model exhibits symmetric dynamics governed by two connection constants, behavior that is predictable within the symmetry framework of the system and supports the model's plausibility. Notably, in inter-cortical double-column simulations, parametric modulation of coupling strengths generated varied prefrontal cortical epileptiform discharge patterns. Most significantly, applying targeted external stimuli to visual cortex columns induced a state transition in prefrontal cortex column activity, shifting from epileptic-like discharges to a stable alpha rhythm, a transition that did not occur in the single-column experiment. These findings suggest that focal neuromodulation of specific cortical regions could serve as a potential therapeutic strategy for suppressing pathological activity in epilepsy.
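For context, the standard single-column Jansen-Rit equations on which the double-column model builds can be simulated in a few lines. The sketch below uses the classic parameter values from the original JR formulation and a simple Euler integrator; the inter-column coupling, transmission delays, and stimulation protocol described above are deliberately not included, and the function names and noise level are illustrative choices:

```python
import numpy as np

# Classic Jansen-Rit parameters
A, B = 3.25, 22.0        # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0       # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def S(v):
    """Sigmoid converting mean membrane potential to mean firing rate."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def simulate(p=120.0, dt=1e-4, T=2.0, seed=0):
    """Euler integration of one JR column driven by noisy input p (pulses/s)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(6)                      # y0..y2 and their derivatives y3..y5
    eeg = []
    for _ in range(int(T / dt)):
        y0, y1, y2, y3, y4, y5 = y
        pt = p + rng.normal(0.0, 10.0)   # stochastic external input
        dy = np.array([
            y3, y4, y5,
            A * a * S(y1 - y2) - 2 * a * y3 - a**2 * y0,
            A * a * (pt + C2 * S(C1 * y0)) - 2 * a * y4 - a**2 * y1,
            B * b * C4 * S(C3 * y0) - 2 * b * y5 - b**2 * y2,
        ])
        y = y + dt * dy                  # forward-Euler step
        eeg.append(y1 - y2)              # model EEG: net pyramidal-cell potential
    return np.array(eeg)

sig = simulate()
```

With input rates in the usual physiological range the single column settles into alpha-band-like oscillation, the baseline regime against which the epileptiform discharges discussed above are contrasted.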
Dynamics study of double-column model and its application in epilepsy EEG. Yuhua Xu, Ying Du, Xuying Xu, Yihong Wang. Cognitive Neurodynamics 19(1):148 (2025). DOI: 10.1007/s11571-025-10334-x. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12440850/pdf/