Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition

IF 3.7 · CAS Tier 3 (Medicine) · Q2 (Engineering, Biomedical) · Journal of Neural Engineering · Pub Date: 2024-04-11 · DOI: 10.1088/1741-2552/ad3986
Wenjie Li, Haoyu Li, Xinlin Sun, Huicong Kang, Shan An, Guoxin Wang, Zhongke Gao
{"title":"基于脑电图的跨主体运动图像识别的自监督对比学习","authors":"Wenjie Li, Haoyu Li, Xinlin Sun, Huicong Kang, Shan An, Guoxin Wang, Zhongke Gao","doi":"10.1088/1741-2552/ad3986","DOIUrl":null,"url":null,"abstract":"<italic toggle=\"yes\">Objective</italic>. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and capability to offer high-resolution data. The acquisition of EEG signals is a straightforward process, but the datasets associated with these signals frequently exhibit data scarcity and require substantial resources for proper labeling. Furthermore, there is a significant limitation in the generalization performance of EEG models due to the substantial inter-individual variability observed in EEG signals. <italic toggle=\"yes\">Approach</italic>. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder combining convolutional neural network and attention mechanism. In the contrastive learning training stage, the network undergoes training with the pretext task of data augmentation to minimize the distance between pairs of homologous transformations while simultaneously maximizing the distance between pairs of heterologous transformations. It enhances the amount of data utilized for training and improves the network’s ability to extract deep features from original signals without relying on the true labels of the data. <italic toggle=\"yes\">Main results</italic>. To evaluate our framework’s efficacy, we conduct extensive experiments on three public MI datasets: BCI IV IIa, BCI IV IIb, and HGD datasets. The proposed method achieves cross-subject classification accuracies of 67.32<inline-formula>\n<tex-math><?CDATA $\\%$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:mrow><mml:mi mathvariant=\"normal\">%</mml:mi></mml:mrow></mml:math>\n<inline-graphic xlink:href=\"jnead3986ieqn1.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula>, 82.34<inline-formula>\n<tex-math><?CDATA $\\%$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:mrow><mml:mi mathvariant=\"normal\">%</mml:mi></mml:mrow></mml:math>\n<inline-graphic xlink:href=\"jnead3986ieqn2.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula>, and 81.13<inline-formula>\n<tex-math><?CDATA $\\%$?></tex-math>\n<mml:math overflow=\"scroll\"><mml:mrow><mml:mi mathvariant=\"normal\">%</mml:mi></mml:mrow></mml:math>\n<inline-graphic xlink:href=\"jnead3986ieqn3.gif\" xlink:type=\"simple\"></inline-graphic>\n</inline-formula> on the three datasets, demonstrating superior performance compared to existing methods. <italic toggle=\"yes\">Significance</italic>. Therefore, this method has great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"108 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition\",\"authors\":\"Wenjie Li, Haoyu Li, Xinlin Sun, Huicong Kang, Shan An, Guoxin Wang, Zhongke Gao\",\"doi\":\"10.1088/1741-2552/ad3986\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<italic toggle=\\\"yes\\\">Objective</italic>. 
The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and capability to offer high-resolution data. The acquisition of EEG signals is a straightforward process, but the datasets associated with these signals frequently exhibit data scarcity and require substantial resources for proper labeling. Furthermore, there is a significant limitation in the generalization performance of EEG models due to the substantial inter-individual variability observed in EEG signals. <italic toggle=\\\"yes\\\">Approach</italic>. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder combining convolutional neural network and attention mechanism. In the contrastive learning training stage, the network undergoes training with the pretext task of data augmentation to minimize the distance between pairs of homologous transformations while simultaneously maximizing the distance between pairs of heterologous transformations. It enhances the amount of data utilized for training and improves the network’s ability to extract deep features from original signals without relying on the true labels of the data. <italic toggle=\\\"yes\\\">Main results</italic>. To evaluate our framework’s efficacy, we conduct extensive experiments on three public MI datasets: BCI IV IIa, BCI IV IIb, and HGD datasets. The proposed method achieves cross-subject classification accuracies of 67.32<inline-formula>\\n<tex-math><?CDATA $\\\\%$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:mrow><mml:mi mathvariant=\\\"normal\\\">%</mml:mi></mml:mrow></mml:math>\\n<inline-graphic xlink:href=\\\"jnead3986ieqn1.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula>, 82.34<inline-formula>\\n<tex-math><?CDATA $\\\\%$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:mrow><mml:mi mathvariant=\\\"normal\\\">%</mml:mi></mml:mrow></mml:math>\\n<inline-graphic xlink:href=\\\"jnead3986ieqn2.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula>, and 81.13<inline-formula>\\n<tex-math><?CDATA $\\\\%$?></tex-math>\\n<mml:math overflow=\\\"scroll\\\"><mml:mrow><mml:mi mathvariant=\\\"normal\\\">%</mml:mi></mml:mrow></mml:math>\\n<inline-graphic xlink:href=\\\"jnead3986ieqn3.gif\\\" xlink:type=\\\"simple\\\"></inline-graphic>\\n</inline-formula> on the three datasets, demonstrating superior performance compared to existing methods. <italic toggle=\\\"yes\\\">Significance</italic>. 
Therefore, this method has great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.\",\"PeriodicalId\":16753,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\"108 1\",\"pages\":\"\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ad3986\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad3986","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and its capability to offer high-resolution data. Acquiring EEG signals is straightforward, but the associated datasets are frequently scarce and require substantial resources for proper labeling. Furthermore, the generalization performance of EEG models is significantly limited by the substantial inter-individual variability observed in EEG signals. Approach. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder that combines a convolutional neural network with an attention mechanism. In the contrastive learning stage, the network is trained on a data-augmentation pretext task that minimizes the distance between pairs of homologous transformations while maximizing the distance between pairs of heterologous transformations. This increases the amount of data available for training and improves the network's ability to extract deep features from the original signals without relying on the data's true labels. Main results. To evaluate the framework's efficacy, we conduct extensive experiments on three public MI datasets: BCI IV IIa, BCI IV IIb, and HGD. The proposed method achieves cross-subject classification accuracies of 67.32%, 82.34%, and 81.13% on the three datasets, demonstrating superior performance compared to existing methods. Significance. This method therefore holds great promise for improving cross-subject transfer learning performance in MI-based BCI systems.
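
The contrastive pretext task described in the Approach section can be made concrete with a short sketch. Below is a minimal, illustrative SimCLR-style implementation in PyTorch: two augmented views of each EEG epoch form a homologous pair whose embeddings are pulled together by an NT-Xent loss, while views of different trials (heterologous pairs) are pushed apart. The `augment` function, the temperature value, and the `encoder` placeholder are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a SimCLR-style contrastive pretext task for EEG epochs.
# The augmentation and temperature here are illustrative assumptions, not
# the paper's exact setup.
import torch
import torch.nn.functional as F

def augment(x: torch.Tensor) -> torch.Tensor:
    """Toy EEG augmentation: additive Gaussian noise plus random amplitude scaling.
    x has shape (batch, channels, time)."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * (2.0 * torch.rand(x.size(0), 1, 1, device=x.device) - 1.0)
    return scale * (x + noise)

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: pulls homologous (same-trial) embedding pairs together
    and pushes heterologous (different-trial) pairs apart."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    # For row i < n the positive is the other view at index i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: x is a batch of EEG epochs, and `encoder` stands in for the
# paper's CNN-plus-attention network mapping epochs to embeddings.
# z1, z2 = encoder(augment(x)), encoder(augment(x))
# loss = nt_xent_loss(z1, z2)
```

Note that no class labels appear anywhere in this objective, which is what lets the encoder be pretrained on unlabeled EEG recordings before any supervised fine-tuning.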
Source journal: Journal of Neural Engineering (Engineering, Biomedical)
CiteScore: 7.80
Self-citation rate: 12.50%
Articles published: 319
Review turnaround: 4.2 months
Journal description: The goal of the Journal of Neural Engineering (JNE) is to act as a forum for the interdisciplinary field of neural engineering, where neuroscientists, neurobiologists, and engineers can publish their work in one periodical that bridges the gap between neuroscience and engineering. The journal publishes articles in the field of neural engineering at the molecular, cellular, and systems levels. The scope of the journal encompasses experimental, computational, theoretical, clinical, and applied aspects of: innovative neurotechnology; brain-machine (computer) interfaces; neural interfacing; bioelectronic medicines; neuromodulation; neural prostheses; neural control; neuro-rehabilitation; neurorobotics; optical neural engineering; neural circuits (artificial and biological); neuromorphic engineering; neural tissue regeneration; neural signal processing; theoretical and computational neuroscience; systems neuroscience; translational neuroscience; and neuroimaging.
Latest articles in this journal
- Building consensus on clinical outcome assessments for BCI devices. A summary of the 10th BCI society meeting 2023 workshop.
- o-CLEAN: a novel multi-stage algorithm for the ocular artifacts' correction from EEG data in out-of-the-lab applications.
- PDMS/CNT electrodes with bioamplifier for practical in-the-ear and conventional biosignal recordings.
- DOCTer: a novel EEG-based diagnosis framework for disorders of consciousness.
- I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks.