Bridge Graph Attention Based Graph Convolution Network With Multi-Scale Transformer for EEG Emotion Recognition

IEEE Transactions on Affective Computing · IF 9.6 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-04-30 · DOI: 10.1109/TAFFC.2024.3394873
Huachao Yan;Kailing Guo;Xiaofen Xing;Xiangmin Xu
Volume 15, Issue 4, pp. 2042-2054. Available at: https://ieeexplore.ieee.org/document/10510577/
Citations: 0

Abstract

In multichannel electroencephalograph (EEG) emotion recognition, most graph-based studies employ shallow graph models for spatial characteristics learning, because node over-smoothing arises as network depth increases. To address over-smoothing, we propose the bridge graph attention-based graph convolution network (BGAGCN). It bridges previous graph convolution layers to the attention coefficients of the final layer by adaptively combining each graph convolution output based on the graph attention network, thereby enhancing feature distinctiveness. Considering that graph-based networks primarily focus on local EEG channel relationships, we introduce a transformer to capture global dependencies. Inspired by the neuroscience finding that neural activities at different timescales reflect distinct spatial connectivities, we modify the transformer into a multi-scale transformer (MT) by applying multi-head attention to multichannel EEG signals after 1D convolutions at different scales. MT learns spatial features more elaborately to enhance feature representation ability. By combining BGAGCN and MT, our model BGAGCN-MT achieves state-of-the-art accuracy under subject-dependent and subject-independent protocols across three benchmark EEG emotion datasets (SEED, SEED-IV, and DREAMER). Notably, our model effectively addresses over-smoothing in graph neural networks and provides an efficient solution for learning spatial relationships of EEG features at different scales.
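The "bridge" idea in the abstract, combining every graph-convolution layer's output through attention coefficients so that early, less-smoothed features survive to the final representation, can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the adjacency matrix, layer weights, and attention vector below are random stand-ins for parameters that would be learned, and the channel/feature sizes merely mimic a 62-channel EEG setup.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
C, F = 62, 5                               # EEG channels (graph nodes), feature dim
A = rng.random((C, C)); A = (A + A.T) / 2  # symmetric adjacency (learned in the paper)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation matrix

X = rng.standard_normal((C, F))
layers = [X]
H = X
for _ in range(3):                         # stack several graph-conv layers
    W = rng.standard_normal((F, F)) * 0.1
    H = np.tanh(A_hat @ H @ W)             # simple graph convolution step
    layers.append(H)

# "Bridge": attention coefficients over every layer's output, so early-layer
# (less over-smoothed) features still reach the final representation.
a = rng.standard_normal((F,))              # attention vector (learned in practice)
scores = np.array([h @ a for h in layers]) # (L+1, C): per-node score per layer
alpha = softmax(scores, axis=0)            # normalize across layers
H_out = sum(alpha[l][:, None] * layers[l] for l in range(len(layers)))
```

With deeper plain GCN stacks, node features converge toward each other; weighting all intermediate outputs lets the network keep whichever depth of smoothing is most discriminative per node.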
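The multi-scale transformer (MT) idea, 1D convolutions at several temporal scales followed by multi-head attention over the channels, can likewise be sketched. Again a hedged toy: the moving-average kernels, kernel sizes (3, 7, 15), projection matrices, and single attention head are illustrative assumptions, not the paper's learned components.

```python
import numpy as np

def conv1d_same(x, k):
    """Moving-average 1D convolution of size k per channel, 'same' length."""
    kern = np.ones(k) / k
    return np.stack([np.convolve(ch, kern, mode="same") for ch in x])

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
C, T, d = 62, 200, 16                  # channels, time samples, per-scale dim
eeg = rng.standard_normal((C, T))

feats = []
for k in (3, 7, 15):                   # three temporal scales (sizes illustrative)
    xs = conv1d_same(eeg, k)           # (C, T) series smoothed at scale k
    W = rng.standard_normal((T, d)) * 0.05
    feats.append(xs @ W)               # project each channel's series to d dims
Z = np.concatenate(feats, axis=1)      # (C, 3*d) multi-scale token per channel

# Self-attention across channels (single head here; multi-head in the paper).
Dk = Z.shape[1]
Wq, Wk, Wv = (rng.standard_normal((Dk, Dk)) * 0.05 for _ in range(3))
Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
attn = softmax(Q @ K.T / np.sqrt(Dk), axis=-1)  # (C, C) channel-to-channel weights
out = attn @ V                                  # globally mixed channel features
```

The key point the abstract makes is that attention operates across channels (spatial relationships) on features already separated by timescale, rather than on raw single-scale signals.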
Source Journal
IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Annual publications: 174
About the journal: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.