Multi-scale spatiotemporal representation learning for EEG-based emotion recognition

Xin Zhou, Xiaojing Peng
{"title":"基于脑电图的情绪识别多尺度时空表征学习","authors":"Xin Zhou, Xiaojing Peng","doi":"arxiv-2409.07589","DOIUrl":null,"url":null,"abstract":"EEG-based emotion recognition holds significant potential in the field of\nbrain-computer interfaces. A key challenge lies in extracting discriminative\nspatiotemporal features from electroencephalogram (EEG) signals. Existing\nstudies often rely on domain-specific time-frequency features and analyze\ntemporal dependencies and spatial characteristics separately, neglecting the\ninteraction between local-global relationships and spatiotemporal dynamics. To\naddress this, we propose a novel network called Multi-Scale Inverted Mamba\n(MS-iMamba), which consists of Multi-Scale Temporal Blocks (MSTB) and\nTemporal-Spatial Fusion Blocks (TSFB). Specifically, MSTBs are designed to\ncapture both local details and global temporal dependencies across different\nscale subsequences. The TSFBs, implemented with an inverted Mamba structure,\nfocus on the interaction between dynamic temporal dependencies and spatial\ncharacteristics. The primary advantage of MS-iMamba lies in its ability to\nleverage reconstructed multi-scale EEG sequences, exploiting the interaction\nbetween temporal and spatial features without the need for domain-specific\ntime-frequency feature extraction. Experimental results on the DEAP, DREAMER,\nand SEED datasets demonstrate that MS-iMamba achieves classification accuracies\nof 94.86%, 94.94%, and 91.36%, respectively, using only four-channel EEG\nsignals, outperforming state-of-the-art methods.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"10 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-scale spatiotemporal representation learning for EEG-based emotion recognition\",\"authors\":\"Xin Zhou, Xiaojing Peng\",\"doi\":\"arxiv-2409.07589\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"EEG-based emotion recognition holds significant potential in the field of\\nbrain-computer interfaces. A key challenge lies in extracting discriminative\\nspatiotemporal features from electroencephalogram (EEG) signals. Existing\\nstudies often rely on domain-specific time-frequency features and analyze\\ntemporal dependencies and spatial characteristics separately, neglecting the\\ninteraction between local-global relationships and spatiotemporal dynamics. To\\naddress this, we propose a novel network called Multi-Scale Inverted Mamba\\n(MS-iMamba), which consists of Multi-Scale Temporal Blocks (MSTB) and\\nTemporal-Spatial Fusion Blocks (TSFB). Specifically, MSTBs are designed to\\ncapture both local details and global temporal dependencies across different\\nscale subsequences. The TSFBs, implemented with an inverted Mamba structure,\\nfocus on the interaction between dynamic temporal dependencies and spatial\\ncharacteristics. The primary advantage of MS-iMamba lies in its ability to\\nleverage reconstructed multi-scale EEG sequences, exploiting the interaction\\nbetween temporal and spatial features without the need for domain-specific\\ntime-frequency feature extraction. 
Experimental results on the DEAP, DREAMER,\\nand SEED datasets demonstrate that MS-iMamba achieves classification accuracies\\nof 94.86%, 94.94%, and 91.36%, respectively, using only four-channel EEG\\nsignals, outperforming state-of-the-art methods.\",\"PeriodicalId\":501034,\"journal\":{\"name\":\"arXiv - EE - Signal Processing\",\"volume\":\"10 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07589\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

EEG-based emotion recognition holds significant potential in the field of brain-computer interfaces. A key challenge lies in extracting discriminative spatiotemporal features from electroencephalogram (EEG) signals. Existing studies often rely on domain-specific time-frequency features and analyze temporal dependencies and spatial characteristics separately, neglecting the interaction between local-global relationships and spatiotemporal dynamics. To address this, we propose a novel network called Multi-Scale Inverted Mamba (MS-iMamba), which consists of Multi-Scale Temporal Blocks (MSTB) and Temporal-Spatial Fusion Blocks (TSFB). Specifically, MSTBs are designed to capture both local details and global temporal dependencies across different-scale subsequences. The TSFBs, implemented with an inverted Mamba structure, focus on the interaction between dynamic temporal dependencies and spatial characteristics. The primary advantage of MS-iMamba lies in its ability to leverage reconstructed multi-scale EEG sequences, exploiting the interaction between temporal and spatial features without the need for domain-specific time-frequency feature extraction. Experimental results on the DEAP, DREAMER, and SEED datasets demonstrate that MS-iMamba achieves classification accuracies of 94.86%, 94.94%, and 91.36%, respectively, using only four-channel EEG signals, outperforming state-of-the-art methods.
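To make the "multi-scale" and "inverted" ideas concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: raw EEG is subsampled into subsequences at several temporal scales, each channel of each subsequence is summarized by a small temporal encoder, and the resulting per-channel embeddings are then mixed along the channel dimension (the "inverted" arrangement, where electrodes rather than time steps act as tokens). All module names, sizes, and the simple average used for scale fusion are illustrative assumptions, and a GRU stands in for the Mamba state-space block described in the abstract.

```python
# Hypothetical sketch of the multi-scale + inverted (channel-as-token) idea.
# Not the paper's code; a GRU stands in for the Mamba block, and all sizes are illustrative.
import torch
import torch.nn as nn


class MultiScaleInvertedSketch(nn.Module):
    def __init__(self, n_channels=4, scales=(1, 2, 4), d_model=64, n_classes=2):
        super().__init__()
        self.scales = scales
        # One temporal encoder per scale: summarizes each channel's
        # (subsampled) time series into a fixed-size embedding.
        self.temporal_encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, d_model, kernel_size=7, padding=3),
                nn.GELU(),
                nn.AdaptiveAvgPool1d(1),  # global summary over time
            )
            for _ in scales
        ])
        # "Inverted" mixer: treats the n_channels embeddings as a token
        # sequence and mixes information across electrodes (spatial axis).
        self.channel_mixer = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, seq_len) raw EEG
        b, c, _ = x.shape
        scale_feats = []
        for scale, enc in zip(self.scales, self.temporal_encoders):
            xs = x[:, :, ::scale]                        # subsequence at this scale
            xs = xs.reshape(b * c, 1, -1)                # encode each channel separately
            f = enc(xs).squeeze(-1).reshape(b, c, -1)    # (batch, channels, d_model)
            scale_feats.append(f)
        tokens = torch.stack(scale_feats, dim=0).mean(0)  # fuse scales (simple average)
        mixed, _ = self.channel_mixer(tokens)             # mix across channel tokens
        return self.head(mixed.mean(dim=1))               # pooled logits


if __name__ == "__main__":
    model = MultiScaleInvertedSketch()
    eeg = torch.randn(8, 4, 128)   # 8 trials, 4 channels, 128 samples
    print(model(eeg).shape)        # torch.Size([8, 2])
```

The design point the sketch illustrates is the inversion: after per-channel temporal summarization, the sequence model runs over electrode channels rather than time steps, so cross-channel (spatial) interaction is modeled on top of temporally aggregated features. In the actual MS-iMamba the channel mixer would be a Mamba state-space block and the fusion of scales would be learned rather than a plain average.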