Spatial-Temporal Mamba Network for EEG-based Motor Imagery Classification

Xiaoxiao Yang, Ziyu Jia
{"title":"基于脑电图的运动图像分类时空曼巴网络","authors":"Xiaoxiao Yang, Ziyu Jia","doi":"arxiv-2409.09627","DOIUrl":null,"url":null,"abstract":"Motor imagery (MI) classification is key for brain-computer interfaces\n(BCIs). Until recent years, numerous models had been proposed, ranging from\nclassical algorithms like Common Spatial Pattern (CSP) to deep learning models\nsuch as convolutional neural networks (CNNs) and transformers. However, these\nmodels have shown limitations in areas such as generalizability, contextuality\nand scalability when it comes to effectively extracting the complex\nspatial-temporal information inherent in electroencephalography (EEG) signals.\nTo address these limitations, we introduce Spatial-Temporal Mamba Network\n(STMambaNet), an innovative model leveraging the Mamba state space\narchitecture, which excels in processing extended sequences with linear\nscalability. By incorporating spatial and temporal Mamba encoders, STMambaNet\neffectively captures the intricate dynamics in both space and time,\nsignificantly enhancing the decoding performance of EEG signals for MI\nclassification. Experimental results on BCI Competition IV 2a and 2b datasets\ndemonstrate STMambaNet's superiority over existing models, establishing it as a\npowerful tool for advancing MI-based BCIs and improving real-world BCI systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatial-Temporal Mamba Network for EEG-based Motor Imagery Classification\",\"authors\":\"Xiaoxiao Yang, Ziyu Jia\",\"doi\":\"arxiv-2409.09627\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Motor imagery (MI) classification is key for brain-computer interfaces\\n(BCIs). Until recent years, numerous models had been proposed, ranging from\\nclassical algorithms like Common Spatial Pattern (CSP) to deep learning models\\nsuch as convolutional neural networks (CNNs) and transformers. However, these\\nmodels have shown limitations in areas such as generalizability, contextuality\\nand scalability when it comes to effectively extracting the complex\\nspatial-temporal information inherent in electroencephalography (EEG) signals.\\nTo address these limitations, we introduce Spatial-Temporal Mamba Network\\n(STMambaNet), an innovative model leveraging the Mamba state space\\narchitecture, which excels in processing extended sequences with linear\\nscalability. By incorporating spatial and temporal Mamba encoders, STMambaNet\\neffectively captures the intricate dynamics in both space and time,\\nsignificantly enhancing the decoding performance of EEG signals for MI\\nclassification. 
Experimental results on BCI Competition IV 2a and 2b datasets\\ndemonstrate STMambaNet's superiority over existing models, establishing it as a\\npowerful tool for advancing MI-based BCIs and improving real-world BCI systems.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09627\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09627","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Motor imagery (MI) classification is key for brain-computer interfaces (BCIs). In recent years, numerous models have been proposed, ranging from classical algorithms like Common Spatial Pattern (CSP) to deep learning models such as convolutional neural networks (CNNs) and transformers. However, these models have shown limitations in generalizability, contextuality, and scalability when it comes to effectively extracting the complex spatial-temporal information inherent in electroencephalography (EEG) signals. To address these limitations, we introduce the Spatial-Temporal Mamba Network (STMambaNet), an innovative model leveraging the Mamba state space architecture, which excels at processing long sequences with linear scalability. By incorporating spatial and temporal Mamba encoders, STMambaNet effectively captures the intricate dynamics in both space and time, significantly enhancing the decoding performance of EEG signals for MI classification. Experimental results on the BCI Competition IV 2a and 2b datasets demonstrate STMambaNet's superiority over existing models, establishing it as a powerful tool for advancing MI-based BCIs and improving real-world BCI systems.
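
The abstract does not spell out the layer-by-layer design of STMambaNet, but the core idea of pairing a spatial Mamba encoder (electrodes as the sequence axis) with a temporal Mamba encoder (time samples as the sequence axis) can be illustrated with a minimal sketch. The sketch below assumes the open-source mamba-ssm package's Mamba block and PyTorch; the module name STMambaSketch, the embedding layers, the pooling, and all hyperparameters are hypothetical choices for illustration, not the authors' implementation. Dataset shapes follow BCI Competition IV 2a (22 EEG channels, 4 classes).

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # https://github.com/state-spaces/mamba


class STMambaSketch(nn.Module):
    """Hypothetical spatial-temporal Mamba encoder for EEG motor imagery.

    Input: (batch, channels, time) raw EEG, e.g. 22 channels x 1000 samples
    (4 s at 250 Hz) for BCI Competition IV 2a. This is only an illustration
    of separate spatial and temporal Mamba encoders followed by a classifier,
    not the published STMambaNet architecture.
    """

    def __init__(self, n_channels=22, n_times=1000, d_model=64, n_classes=4):
        super().__init__()
        # Temporal branch: embed each time step's channel vector, then run
        # Mamba along the time axis (sequence length = n_times).
        self.temporal_embed = nn.Linear(n_channels, d_model)
        self.temporal_mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        # Spatial branch: embed each electrode's time course, then run
        # Mamba along the channel axis (sequence length = n_channels).
        self.spatial_embed = nn.Linear(n_times, d_model)
        self.spatial_mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        # Fuse both branches and classify.
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):  # x: (batch, n_channels, n_times)
        # Temporal features: (batch, n_times, d_model) -> mean-pool over time.
        t = self.temporal_mamba(self.temporal_embed(x.transpose(1, 2))).mean(dim=1)
        # Spatial features: (batch, n_channels, d_model) -> mean-pool over channels.
        s = self.spatial_mamba(self.spatial_embed(x)).mean(dim=1)
        return self.classifier(torch.cat([t, s], dim=-1))


# Usage example: a batch of 8 trials. mamba-ssm's selective-scan kernels
# require a CUDA device, so the sketch only runs when a GPU is available.
if torch.cuda.is_available():
    model = STMambaSketch().cuda()
    logits = model(torch.randn(8, 22, 1000, device="cuda"))  # -> (8, 4)
```

The two-branch layout reflects the abstract's description of separate spatial and temporal Mamba encoders; how the paper actually embeds, stacks, or fuses them (and whether it pools as done here) is an assumption of this sketch.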