
Latest articles in Cognitive Neurodynamics

Current status and challenges in electroencephalography (EEG)-based driver fatigue detection: a comprehensive survey.
IF 3.9 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-09-01 | DOI: 10.1007/s11571-025-10320-3
Jahid Hassan, Shekh Naziullah, Mamunur Rashid, Thamina Islam, Md Nahidul Islam, Md Shofiqul Islam, Shoyeb Mahmud

Driver fatigue is a major contributor to traffic accidents, and fatigue-related incidents tend to cause higher fatality rates and more severe damage than those involving alert drivers. Electroencephalography (EEG) has emerged as a widely used method for detecting driver fatigue because it directly captures brain activity patterns. This survey provides a thorough analysis of EEG-based driver fatigue detection, examining existing methodologies, challenges, and future research directions. The study was carried out according to PRISMA criteria. Relevant studies were retrieved from SpringerLink, Web of Science, IEEE Xplore, Scopus, and ScienceDirect, covering research published up to February 16, 2025. Of the 267 publications initially identified, 87 scientific papers were analyzed in full based on their relevance and contribution to EEG-based driver fatigue identification. The review describes the article selection process, followed by an in-depth discussion of driver fatigue detection systems across various domains. Applications of machine learning (ML) in EEG-based fatigue evaluation are reviewed in detail, covering data collection, preprocessing, feature extraction, classification techniques, and performance assessment. Additionally, a comparative evaluation of cutting-edge research provides a comprehensive visualization of current research trends. This survey highlights the advantages, limitations, and future prospects of EEG-based driver fatigue detection, offering valuable insights for improving road safety. The findings contribute to the development of more reliable, real-time fatigue detection systems by addressing existing challenges and recommending potential solutions.

Cognitive Neurodynamics, vol. 19, no. 1, p. 142. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401835/pdf/
Citations: 0
A stacking classifier for distinguishing stages of Alzheimer's disease from a subnetwork perspective.
IF 3.1 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-02-05 | DOI: 10.1007/s11571-025-10221-5
Gaoxuan Li, Bo Chen, Weigang Sun, Zhenbing Liu

Accurately distinguishing the stages of Alzheimer's disease (AD) is crucial for diagnosis and treatment. In this paper, we introduce a method that combines six single classifiers into a stacking classifier. Using brain network models and network metrics, we apply t-tests to identify abnormal brain regions, construct a subnetwork from them, and extract its features to form the training dataset. The method is then applied to the ADNI (Alzheimer's Disease Neuroimaging Initiative) datasets, with subjects categorized into four groups: Alzheimer's disease, mild cognitive impairment (MCI), mixed Alzheimer's mild cognitive impairment (ADMCI), and healthy controls (HCs). We investigate four classification tasks: AD-HCs, AD-MCI, HCs-ADMCI, and HCs-MCI. Finally, we compare the classification accuracy of each single classifier with that of our stacking classifier, demonstrating the superior accuracy of the stacking classifier from a subnetwork-based viewpoint.
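As a rough illustration of the stacking idea described above, the sketch below combines six common base classifiers under a logistic-regression meta-learner using scikit-learn's StackingClassifier; the particular base learners and the synthetic stand-in for the subnetwork features are assumptions for demonstration, not the authors' configuration.

```python
# Hypothetical stacking-ensemble sketch; features are synthetic stand-ins
# for subnetwork features extracted from abnormal brain regions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Six single classifiers, as in the paper's general setup (choices assumed).
base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("nb", GaussianNB()),
]
# The meta-learner is trained on the base learners' cross-validated outputs.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner sees out-of-fold predictions of the base models, which is what typically lets a stack outperform any single member.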

Cognitive Neurodynamics, vol. 19, no. 1, p. 38. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799466/pdf/
Citations: 0
TCANet: a temporal convolutional attention network for motor imagery EEG decoding.
IF 3.1 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-06-14 | DOI: 10.1007/s11571-025-10275-5
Wei Zhao, Haodong Lu, Baocan Zhang, Xinwang Zheng, Wenfeng Wang, Haifeng Zhou

Decoding motor imagery electroencephalogram (MI-EEG) signals is fundamental to the development of brain-computer interface (BCI) systems. However, robust decoding remains a challenge due to the inherent complexity and variability of MI-EEG signals. This study proposes the Temporal Convolutional Attention Network (TCANet), a novel end-to-end model that hierarchically captures spatiotemporal dependencies by progressively integrating local, fused, and global features. Specifically, TCANet employs a multi-scale convolutional module to extract local spatiotemporal representations across multiple temporal resolutions. A temporal convolutional module then fuses and compresses these multi-scale features while modeling both short- and long-term dependencies. Subsequently, a stacked multi-head self-attention mechanism refines the global representations, followed by a fully connected layer that performs MI-EEG classification. The proposed model was systematically evaluated on the BCI IV-2a and IV-2b datasets under both subject-dependent and subject-independent settings. In subject-dependent classification, TCANet achieved accuracies of 83.06% and 88.52% on BCI IV-2a and IV-2b respectively, with corresponding Kappa values of 0.7742 and 0.7703, outperforming multiple representative baselines. In the more challenging subject-independent setting, TCANet achieved competitive performance on IV-2a and demonstrated potential for improvement on IV-2b. The code is available at https://github.com/snailpt/TCANet.
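The stacked multi-head self-attention stage mentioned above can be sketched in plain NumPy as scaled dot-product attention; the toy shapes and random projection weights below are illustrative assumptions, not TCANet's published parameters.

```python
# Minimal multi-head self-attention sketch (random weights stand in for
# learned parameters; shapes are toy assumptions).
import numpy as np

def multi_head_self_attention(x, n_heads, rng):
    """x: (seq_len, d_model) -> (seq_len, d_model)."""
    seq_len, d_model = x.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                      for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)       # softmax over keys
        heads.append(attn @ v[:, sl])                  # weighted sum of values
    return np.concatenate(heads, axis=-1) @ Wo         # merge heads, project

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))   # 16 time steps, 32 fused features
out = multi_head_self_attention(x, n_heads=4, rng=rng)
```

Each head attends over all time steps of the fused feature sequence, which is how such a stage refines global representations after local convolution.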

Cognitive Neurodynamics, vol. 19, no. 1, p. 91. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167204/pdf/
Citations: 0
Emotion recognition framework based on adaptive window selection and CA-KAN.
IF 3.1 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-06-24 | DOI: 10.1007/s11571-025-10283-5
Xuefen Lin, Linhui Fan, Yifan Gu, Zhixian Wu

In recent years, emotion recognition, particularly EEG-based emotion recognition, has found widespread application across various domains. Improving EEG data processing and emotion recognition models remains a key research focus in this field. This paper presents an emotion recognition framework that combines a CUSUM-based adaptive window selection technique with convolutional attention-enhanced Kolmogorov-Arnold Networks (CA-KAN). The improved CUSUM algorithm effectively extracts the most emotion-relevant segments from raw EEG data, and by enhancing the KAN architecture, the CA-KAN model achieves both high accuracy and high efficiency in emotion recognition. The proposed framework achieved peak classification accuracies of 94.63% and 94.73% on the SEED and SEED-IV datasets, respectively. The framework is also lightweight, giving it significant potential for real-world applications such as medical emotion monitoring and driver emotion detection.
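A one-sided CUSUM change detector of the kind such adaptive window selection builds on can be sketched as follows; the drift and threshold values and the synthetic "EEG feature" signal are assumptions for illustration, not the paper's improved algorithm.

```python
# One-sided CUSUM sketch: flag the first point where the cumulative positive
# deviation from a target level (minus a drift allowance) exceeds a threshold.
import numpy as np

def cusum_alarm(signal, target, drift=0.5, threshold=5.0):
    """Return the index of the first alarm, or -1 if none is raised."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target) - drift)  # accumulate upward deviations
        if s > threshold:
            return i
    return -1

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 200)   # stationary segment
shifted = rng.normal(3.0, 1.0, 200)    # mean shift, e.g. emotion onset
sig = np.concatenate([baseline, shifted])
change_idx = cusum_alarm(sig, target=0.0)
```

Detected change points like `change_idx` would then bound the windows passed on to the classifier.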

Cognitive Neurodynamics, vol. 19, no. 1, p. 100. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12187633/pdf/
Citations: 0
Global exponential stability of periodic solutions for Cohen-Grossberg neural networks involving generalized piecewise constant delay.
IF 3.9 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-08-19 | DOI: 10.1007/s11571-025-10315-0
Kuo-Shou Chiu, Jyh-Cheng Jeng, Tongxing Li, Fernando Córdova-Lepe

This paper investigates the global exponential stability and periodicity of the Cohen-Grossberg neural network model with generalized piecewise constant delay. By applying Schaefer's fixed-point theorem, a sufficient condition for the existence of periodic solutions in the model is established. Additionally, by constructing appropriate differential inequalities with generalized piecewise constant delay, sufficient conditions for the global exponential stability of the model are obtained. Finally, computer simulations are conducted to illustrate a globally exponentially stable periodic Cohen-Grossberg neural network model, thereby confirming the feasibility and effectiveness of the proposed results.
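A toy Euler simulation of a two-neuron Cohen-Grossberg network with the piecewise constant delay argument gamma(t) = floor(t), one simple special case of a generalized piecewise constant delay, might look like the following; the amplification functions, weights, and inputs are illustrative assumptions, not the paper's conditions.

```python
# Toy Euler integration of x_i' = -a_i(x_i)[b_i(x_i) - sum_j c_ij f(x_j(t))
#                                  - sum_j d_ij f(x_j(floor(t))) - I_i].
# All parameter choices below are illustrative assumptions.
import math

def simulate(T=10.0, dt=0.01):
    a = lambda x: 1.0 + 0.5 / (1.0 + x * x)   # amplification a_i(x) > 0
    b = lambda x: 2.0 * x                      # well-behaved function b_i
    f = math.tanh                              # bounded activation f_j
    C = [[0.3, -0.2], [0.1, 0.25]]             # instantaneous weights c_ij
    D = [[0.15, 0.05], [-0.1, 0.2]]            # delayed weights d_ij
    I = [0.5, -0.3]                            # external inputs I_i
    x = [0.4, -0.6]
    history = {0: list(x)}                     # states at integer times
    steps_per_unit = int(round(1.0 / dt))
    for step in range(int(round(T / dt))):
        xd = history[step // steps_per_unit]   # x(floor(t)): piecewise constant
        new = []
        for i in range(2):
            inst = sum(C[i][j] * f(x[j]) for j in range(2))
            dely = sum(D[i][j] * f(xd[j]) for j in range(2))
            new.append(x[i] + dt * (-a(x[i]) * (b(x[i]) - inst - dely - I[i])))
        x = new
        if (step + 1) % steps_per_unit == 0:   # record state at integer times
            history[(step + 1) // steps_per_unit] = list(x)
    return x

x_final = simulate()
```

The delayed term is held constant over each unit interval, which is the defining feature of this delay class; the paper's stability conditions concern when such trajectories converge to a periodic solution.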

Cognitive Neurodynamics, vol. 19, no. 1, p. 129. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12364798/pdf/
Citations: 0
Visual statistical learning based on a coupled shape-position recurrent neural network model.
IF 3.1 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-06-17 | DOI: 10.1007/s11571-025-10285-3
Baolong Sun, Yihong Wang, Xuying Xu, Xiaochuan Pan

The visual system can automatically and implicitly learn the statistical regularities (temporal and/or spatial) that characterize a visual scene, an ability referred to as visual statistical learning (VSL). VSL can group several objects with fixed statistical properties into a chunk. This complex process relies on the collaborative involvement of multiple brain regions that work together to learn the chunk. Although behavioral experiments have explored the cognitive functions of VSL, its computational mechanisms remain poorly understood. To address this issue, this study proposes a coupled shape-position recurrent neural network model based on the anatomical structure of the visual system to explain how chunk information is learned and represented in neural networks. The model comprises three core modules: the position network, which encodes object position information; the shape network, which encodes object shape information; and the decision network, which integrates neuronal activity in the position and shape networks to make decisions. The model successfully simulates the results of a classic spatial VSL experiment. The distribution of neural firing rates in the decision network differs significantly between chunk and non-chunk conditions; specifically, neurons in the chunk condition exhibit higher firing rates than those in the non-chunk condition. Furthermore, after the model learns a scene containing both chunk and non-chunk stimuli, neurons in the position network selectively encode far and near stimuli, respectively, whereas neurons in the shape network distinguish between chunk and non-chunk, with chunk-encoding neurons responding selectively to specific chunks. These results indicate that the proposed model learns the spatial regularities of the stimuli to discriminate chunks from non-chunks, and that neurons in the shape network respond selectively to chunk and non-chunk information. These findings offer important theoretical insights into the representation mechanisms of chunk information in neural networks and propose a new framework for modeling spatial VSL.

Cognitive Neurodynamics, vol. 19, no. 1, p. 96. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12174023/pdf/
Citations: 0
MSST-EEGNet: multi-scale spatio-temporal feature extraction using inception and temporal pyramid pooling for motor imagery classification.
IF 3.9 | CAS Zone 3, Engineering & Technology | Q2 NEUROSCIENCES | Pub Date: 2025-12-01 | Epub Date: 2025-09-20 | DOI: 10.1007/s11571-025-10337-8
Rashmi Mishra, R K Agrawal, Jyoti Singh Kirar

Motor imagery classification is an essential component of brain-computer interface systems, interpreting and recognizing the brain signals generated while a subject visualizes motor imagery tasks. The objective of this work is to develop a novel DL model that extracts discriminative features with better generalization performance for recognizing motor imagery tasks. This paper presents a novel Multi-Scale Spatio-Temporal network (MSST-EEGNet) that extracts discriminative temporal, spectral, and spatial features for motor imagery task classification. The proposed MSST-EEGNet model comprises three modules: an inception module with dilated convolution, a temporal pyramid pooling module, and a classification module. Multi-scale temporal features, along with spatial features, are extracted by the inception block with the dilated convolution module, and a set of multi-level fine-grained and coarse-grained features is extracted by the temporal pyramid pooling module. Further, categorical cross-entropy combined with center loss is used as the loss function. Experiments are carried out on three benchmark datasets: the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset. The evaluation results show that the proposed MSST-EEGNet model outperforms eight existing DL models in classification accuracy for subject-specific and cross-session settings, and outperforms eight existing DL models and six existing transfer-learning models for the cross-subject setting. For subject-specific classification, the proposed MSST-EEGNet model achieved accuracies of 0.8426 ± 0.1061, 0.7779 ± 0.0938, and 0.7365 ± 0.1477 on the BCI Competition IV-2a, BCI Competition IV-2b, and OpenBMI datasets, respectively. For the cross-session setting, it achieved accuracies of 0.7709 ± 0.1098, 0.7524 ± 0.1017, and 0.6860 ± 0.0990, and for the cross-subject setting, accuracies of 0.7288 ± 0.0730, 0.8161 ± 0.963, and 0.7075 ± 0.0746, on the same three datasets respectively. Furthermore, a non-parametric Friedman statistical test demonstrates the statistically significant superiority of the proposed MSST-EEGNet model over the existing models.
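The temporal pyramid pooling module described above can be sketched as average-pooling a feature sequence at several pyramid levels and concatenating the results into a fixed-length vector; the level choices and toy input below are assumptions, not the exact MSST-EEGNet configuration.

```python
# Temporal pyramid pooling sketch: pool the time axis at multiple resolutions
# so coarse-grained and fine-grained summaries coexist in one feature vector.
import numpy as np

def temporal_pyramid_pool(x, levels=(1, 2, 4)):
    """x: (channels, time) -> (channels, sum(levels)) fixed-size features."""
    c, t = x.shape
    pooled = []
    for n_bins in levels:
        # Split the time axis into n_bins (nearly) equal bins, mean-pool each.
        edges = np.linspace(0, t, n_bins + 1).astype(int)
        for b in range(n_bins):
            pooled.append(x[:, edges[b]:edges[b + 1]].mean(axis=1))
    return np.stack(pooled, axis=1)

x = np.arange(2 * 12, dtype=float).reshape(2, 12)  # 2 channels, 12 time steps
feats = temporal_pyramid_pool(x)                   # shape (2, 1 + 2 + 4)
```

Because the output size depends only on the pyramid levels, the classifier head receives a fixed-length input regardless of the sequence length.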

Cognitive Neurodynamics, vol. 19, no. 1, p. 150. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12450197/pdf/
Citations: 0
Construction and evaluation of an emotion-inducing video dataset towards Chinese elderly healthy controls and individuals with mild cognitive impairment.
IF 3.9 CAS Tier 3 (Engineering & Technology) Q2 NEUROSCIENCES Pub Date: 2025-12-01 Epub Date: 2025-09-27 DOI: 10.1007/s11571-025-10318-x
Tao Liang, Junxiao Yu, Keke Shi, Yihao Yao, Jie Li, Bin Liu, Wei Wang, Chengyu Liu, Liangcheng Qu, Kuiying Yin, Wentao Xiang, Jianqing Li

This work aimed to develop and validate an emotion-inducing video dataset for the Chinese elderly. The dataset was constructed through video collection, psychological evaluation, and examination of elderly participants. Eighteen videos across six emotions (neutrality, sadness, anger, happiness, boredom, and tension) were selected for emotional induction. The effectiveness of the dataset was evaluated in 37 subjects in two groups, 21 healthy controls (HC group) and 16 individuals with mild cognitive impairment (MCI group), who were assessed in a three-session experiment. Each session comprised one pretest and six emotion-inducing videos. Electrocardiogram (ECG) and electroencephalography (EEG) signals were recorded synchronously. After viewing each video, the subjects provided self-reports of discrete emotion labels and of valence and arousal scores using a modified Self-Assessment Manikin scale. Discrete emotion analysis, valence/arousal analysis, and ECG feature analysis were conducted with ANOVA. EEG feature analysis was assessed with a linear mixed-effects model. Discrete emotion analysis confirmed that happiness and sadness induced by the dataset show high agreement rates (e.g., happiness: HC 0.79, MCI 0.85; sadness: HC 0.81, MCI 0.71), whereas boredom (HC 0.38, MCI 0.29) showed comparatively lower consistency. Valence/arousal analysis revealed significant group differences for the tension and boredom emotions. ECG feature analysis revealed significant differences in the baseline-normalized mean heart rate between the HC and MCI groups in specific sessions. EEG feature analysis revealed that the MCI group exhibited higher relative band power values than the HC group in the δ and θ bands.

Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10318-x.
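The δ and θ relative band power feature used in the EEG analysis can be illustrated with a minimal sketch. The plain periodogram estimator, the 0.5-45 Hz normalization range, and the exact band edges in `BANDS` are assumptions here; the abstract does not specify the authors' choices.

```python
import numpy as np

BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0)}  # assumed band edges, in Hz

def relative_band_power(signal, fs, band, total_range=(0.5, 45.0)):
    """Band power from a rectangular-window periodogram, normalized by broadband power.

    `fs` is the sampling rate in Hz; `band` and `total_range` are (low, high) tuples.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2          # one-sided power spectrum
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total_range[0]) & (freqs < total_range[1])
    return psd[in_band].sum() / psd[in_total].sum()
```

For example, a pure 6 Hz sine should place essentially all of its relative power in the θ band and almost none in δ.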

Citations: 0
Cross-patient seizure prediction via continuous domain adaptation and similar sample replay.
IF 3.1 CAS Tier 3 (Engineering & Technology) Q2 NEUROSCIENCES Pub Date: 2025-12-01 Epub Date: 2025-01-15 DOI: 10.1007/s11571-024-10216-8
Ziye Zhang, Aiping Liu, Yikai Gao, Ruobing Qian, Xun Chen

For people with epilepsy, a common brain disorder worldwide, seizure prediction based on electroencephalogram (EEG) has great potential to improve quality of life. To alleviate the high degree of heterogeneity among patients, several works have attempted to learn common seizure feature distributions based on the idea of domain adaptation to enhance the generalization ability of the model. However, existing methods ignore the inherent inter-patient discrepancy within the source patients, resulting in disjointed distributions that impede effective domain alignment. To eliminate this effect, we introduce the concept of multi-source domain adaptation (MSDA), considering each source patient as a separate domain. To avoid additional model complexity from MSDA, we propose a continuous domain adaptation approach for seizure prediction based on the convolutional neural network (CNN), which performs sequential training on multiple source domains. To relieve catastrophic forgetting during sequential training, we replay similar samples from each source domain while learning common feature representations based on subdomain alignment. Evaluated on a publicly available epilepsy dataset, our proposed method attains a sensitivity of 85.0% and a false alarm rate (FPR) of 0.224/h. Compared to the prevailing domain adaptation paradigm and existing domain adaptation works in the field, the proposed method can efficiently capture the knowledge of different patients, extract better common seizure representations, and achieve state-of-the-art performance.
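The sequential multi-source training with sample replay described above can be sketched as a schedule builder. This is an illustrative skeleton only: `replay_per_domain` is an assumed parameter, and replay samples are picked at random here rather than by the similarity criterion the paper uses.

```python
import random

def sequential_schedule(domains, replay_per_domain=2, seed=0):
    """Build the order in which batches are visited under continuous domain adaptation.

    `domains` maps a source-patient id to its list of training samples. Domains
    are trained on in sequence; while training on domain k, a few replayed
    samples from every earlier domain are mixed in to counter catastrophic
    forgetting.
    """
    rng = random.Random(seed)
    schedule = []
    seen = []  # sample lists of already-trained domains
    for pid, samples in domains.items():
        replay = [s for prev in seen
                  for s in rng.sample(prev, min(replay_per_domain, len(prev)))]
        schedule.append({"train_on": pid, "batch": list(samples) + replay})
        seen.append(samples)
    return schedule
```

The first domain is trained without replay; each later domain's batch grows by `replay_per_domain` samples per previously seen patient.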

Citations: 0
Neural dynamics of deception: insights from fMRI studies of brain states.
IF 3.1 CAS Tier 3 (Engineering & Technology) Q2 NEUROSCIENCES Pub Date: 2025-12-01 Epub Date: 2025-02-20 DOI: 10.1007/s11571-025-10222-4
Weixiong Jiang, Lin Li, Yulong Xia, Sajid Farooq, Gang Li, Shuaiqi Li, Jinhua Xu, Sailing He, Xiangyu Wu, Shoujun Huang, Jing Yuan, Dexing Kong

Deception is a complex behavior that requires greater cognitive effort than truth-telling, with brain states dynamically adapting to external stimuli and cognitive demands. Investigating these brain states provides valuable insights into the brain's temporal and spatial dynamics. In this study, we designed an experimental paradigm to efficiently simulate lying and constructed a temporal network of brain states. We applied the Louvain community clustering algorithm to identify characteristic brain states associated with lie-telling, inverse-telling, and truth-telling. Our analysis revealed six representative brain states with unique spatial characteristics. Notably, two distinct states, termed truth-preferred and lie-preferred, exhibited significant differences in fractional occupancy and average dwelling time. The truth-preferred state showed higher occupancy and dwelling time during truth-telling, while the lie-preferred state demonstrated these characteristics during lie-telling. Using the average z-score BOLD signals of these two states, we applied generalized linear models with elastic net regularization, achieving a classification accuracy of 88.46%, with a sensitivity of 92.31% and a specificity of 84.62% in distinguishing deception from truth-telling. These findings revealed representative brain states for lie-telling, inverse-telling, and truth-telling, highlighting two states specifically associated with truthful and deceptive behaviors. The spatial characteristics and dynamic attributes of these brain states indicate their potential as biomarkers of cognitive engagement in deception.

Supplementary information: The online version contains supplementary material available at 10.1007/s11571-025-10222-4.
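The classifier described above, a generalized linear model with elastic-net regularization, can be sketched as a small logistic regression fitted by (sub)gradient descent. The hyperparameters `alpha`, `l1_ratio`, `lr`, and `n_iter` are illustrative, and the study's actual solver is not specified in the abstract.

```python
import numpy as np

def fit_elastic_net_logreg(X, y, alpha=0.1, l1_ratio=0.5, lr=0.1, n_iter=2000):
    """Binary logistic regression with an elastic-net penalty on the weights.

    The penalty mixes L1 (sparsity) and L2 (shrinkage) terms via `l1_ratio`;
    the intercept `b` is left unpenalized, as is conventional.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)                  # data term
        grad_w += alpha * (l1_ratio * np.sign(w) + (1 - l1_ratio) * w)  # penalty term
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    # Threshold the linear score at zero to get 0/1 class labels.
    return (X @ w + b > 0).astype(int)
```

On well-separated inputs (here standing in for the two states' mean z-scored BOLD signals), the fitted model recovers the class boundary despite the shrinkage from the penalty.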

Citations: 0