Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-Based Emotion Recognition

IF 9.8 · CAS Zone 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · IEEE Transactions on Affective Computing · Pub Date: 2024-07-25 · DOI: 10.1109/TAFFC.2024.3433470
Weishan Ye;Zhiguo Zhang;Fei Teng;Min Zhang;Jianhong Wang;Dong Ni;Fali Li;Peng Xu;Zhen Liang
IEEE Transactions on Affective Computing, vol. 16, no. 1, pp. 290-305. Impact Factor: 9.8. Published online 2024-07-25. Available at: https://ieeexplore.ieee.org/document/10609510/
Citations: 0

Abstract

Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-Based Emotion Recognition
Electroencephalography (EEG) is an objective tool for emotion recognition with promising applications. However, the scarcity of labeled data remains a major challenge in this field, limiting the widespread use of EEG-based emotion recognition. In this paper, a semi-supervised Dual-stream Self-attentive Adversarial Graph Contrastive learning framework (termed as DS-AGC) is proposed to tackle the challenge of limited labeled data in cross-subject EEG-based emotion recognition. The DS-AGC framework includes two parallel streams for extracting non-structural and structural EEG features. The non-structural stream incorporates a semi-supervised multi-domain adaptation method to alleviate distribution discrepancy among labeled source domain, unlabeled source domain, and unknown target domain. The structural stream develops a graph contrastive learning method to extract effective graph-based feature representation from multiple EEG channels in a semi-supervised manner. Further, a self-attentive fusion module is developed for feature fusion, sample selection, and emotion recognition, which highlights EEG features more relevant to emotions and data samples in the labeled source domain that are closer to the target domain. Extensive experiments are conducted on four benchmark databases (SEED, SEED-IV, SEED-V, and FACED) using a semi-supervised cross-subject leave-one-subject-out cross-validation evaluation protocol. The results show that the proposed model outperforms existing methods under different incomplete label conditions with an average improvement of 2.17%, which demonstrates its effectiveness in addressing the label scarcity problem in cross-subject EEG-based emotion recognition.
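The abstract does not specify the structural stream's contrastive objective; a common choice in graph contrastive learning frameworks of this kind is the NT-Xent (normalized temperature-scaled cross-entropy) loss computed over two augmented views of each sample. The NumPy sketch below is a hypothetical illustration of that loss, not the authors' implementation; the function name `nt_xent_loss`, its signature, and the temperature value are assumptions for illustration only.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two views of the same batch.

    z1, z2 : (n, d) arrays of embeddings for two augmented views;
    row i of z1 and row i of z2 form a positive pair, all other
    rows in the combined batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = (z @ z.T) / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs

    # For row i in the first half, the positive is row i + n (and vice versa).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])

    # Numerically stable log-softmax over each row.
    m = np.max(sim, axis=1, keepdims=True)
    log_prob = sim - m - np.log(np.exp(sim - m).sum(axis=1, keepdims=True))

    # Average negative log-likelihood of the positive pair.
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

As a sanity check, two identical views yield a much lower loss than two independent random views, since each positive pair then has maximal cosine similarity relative to the negatives.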
Source journal: IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00
Self-citation rate: 6.20%
Articles per year: 174
About the journal: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. It also welcomes surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal:
- Multi-Level Relation-Aware Knowledge Distillation With Hierarchical Fusion for Incomplete Multimodal Sentiment Analysis
- UCSM-TG: Utterance, Conversation and Speaker-level Speech Emotion Tracking Model in Conversations Using Transformer-GRU
- Strength in Numbers, Power in Subjectivity: Scalable Modeling of Individual Annotators for Emotion Recognition Within and Across Corpora
- LPM-Aug: Latent Pathology-Informed Multimodal Augmentation for Generalized Cognitive Decline Detection Via Speech
- MA-DLE: Speech-based Automatic Depression Level Estimation via Memory Augmentation