Emotion recognition of EEG signals based on contrastive learning graph convolutional model.

Yiling Zhang, Yuan Liao, Wei Chen, Xiruo Zhang, Liya Huang
Journal of neural engineering, published 2024-08-29.
DOI: 10.1088/1741-2552/ad7060
Citations: 0

Abstract


Objective. Electroencephalogram (EEG) signals offer invaluable insights into the complexities of emotion generation within the brain. Yet the variability of EEG signals across individuals presents a formidable obstacle to practical applications. Our research addresses these challenges by focusing on the commonalities across distinct subjects' EEG data.

Approach. We introduce a novel approach named the Contrastive Learning Graph Convolutional Network (CLGCN), which captures the distinctive features and crucial channel nodes related to individuals' emotional states. Specifically, CLGCN merges the dual benefits of contrastive learning's synchronous multi-subject data learning and the GCN's proficiency in deciphering brain connectivity matrices. Because CLGCN generates a standardized brain-network learning matrix while learning from a dataset, it offers insight into multifaceted brain functions and their information-exchange processes.

Main results. Our model underwent rigorous testing on the Database for Emotion Analysis using Physiological Signals (DEAP) and the SEED datasets. In five-fold cross-validation under the subject-dependent experimental setting, it achieved an accuracy of 97.13% on the DEAP dataset and surpassed 99% on the SEED and SEED_IV datasets. In incremental-learning experiments on the SEED dataset, merely 5% of the data was sufficient to fine-tune the model, yielding an accuracy of 92.8% for the new subject. These findings validate the model's efficacy.

Significance. This work combines contrastive learning with GCNs, improving the accuracy of decoding emotional states from EEG signals and offering valuable insights into the underlying mechanisms of emotional processes in the brain.
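The abstract describes two building blocks: a graph convolution over EEG channel nodes (using a brain connectivity matrix as the adjacency) and a contrastive objective that aligns representations across views or subjects. The paper's actual architecture and loss are not given here, so the following is only a minimal illustrative sketch, assuming a standard symmetrically normalized GCN layer and an NT-Xent-style contrastive loss; all function names and shapes are hypothetical.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 @ X @ W).

    adj:     (n, n) channel connectivity matrix (hypothetical brain network)
    feats:   (n, d_in) per-channel EEG features
    weights: (d_in, d_out) learnable projection
    """
    a = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt               # symmetric normalization
    return np.maximum(a_hat @ feats @ weights, 0.0)   # ReLU activation

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss for two batches of paired embeddings (n, d).

    Row i of z1 and row i of z2 are treated as a positive pair; all other
    rows in the concatenated batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity space
    sim = z @ z.T / tau
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

In a CLGCN-style pipeline one would encode two views of the same trial (or trials from different subjects sharing a label) with the GCN encoder and minimize the contrastive loss on the resulting embeddings; this sketch omits training loops, the learned adjacency, and the classifier head.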
