Review of deep representation learning techniques for brain-computer interfaces.

Pierre Guetschel, Sara Ahmadi, Michael Tangermann
{"title":"Review of deep representation learning techniques for brain-computer interfaces.","authors":"Pierre Guetschel, Sara Ahmadi, Michael Tangermann","doi":"10.1088/1741-2552/ad8962","DOIUrl":null,"url":null,"abstract":"<p><p>In the field of brain-computer interfaces (BCIs), the potential for leveraging deep learning techniques for representing electroencephalogram (EEG) signals has gained substantial interest.<i>Objective</i>: This review synthesizes empirical findings from a collection of articles using deep representation learning techniques for BCI decoding, to provide a comprehensive analysis of the current state-of-the-art.<i>Approach</i>: Each article was scrutinized based on three criteria: (1) the deep representation learning technique employed, (2) the underlying motivation for its utilization, and (3) the approaches adopted for characterizing the learned representations.<i>Main results</i>: Among the 81 articles finally reviewed in depth, our analysis reveals a predominance of 31 articles using autoencoders. We identified 13 studies employing self-supervised learning (SSL) techniques, among which ten were published in 2022 or later, attesting to the relative youth of the field. However, at the time being, none of these have led to standard foundation models that are picked up by the BCI community. Likewise, only a few studies have introspected their learned representations. We observed that the motivation in most studies for using representation learning techniques is for solving transfer learning tasks, but we also found more specific motivations such as to learn robustness or invariances, as an algorithmic bridge, or finally to uncover the structure of the data.<i>Significance</i>: Given the potential of foundation models to effectively tackle these challenges, we advocate for a continued dedication to the advancement of foundation models specifically designed for EEG signal decoding by using SSL techniques. We also underline the imperative of establishing specialized benchmarks and datasets to facilitate the development and continuous improvement of such foundation models.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad8962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In the field of brain-computer interfaces (BCIs), the potential for leveraging deep learning techniques for representing electroencephalogram (EEG) signals has gained substantial interest. Objective: This review synthesizes empirical findings from a collection of articles using deep representation learning techniques for BCI decoding, to provide a comprehensive analysis of the current state of the art. Approach: Each article was scrutinized based on three criteria: (1) the deep representation learning technique employed, (2) the underlying motivation for its utilization, and (3) the approaches adopted for characterizing the learned representations. Main results: Among the 81 articles ultimately reviewed in depth, our analysis reveals a predominance of autoencoders, used in 31 articles. We identified 13 studies employing self-supervised learning (SSL) techniques, of which ten were published in 2022 or later, attesting to the relative youth of the field. However, at the time of writing, none of these has led to standard foundation models adopted by the BCI community. Likewise, only a few studies have introspected their learned representations. We observed that the motivation in most studies for using representation learning techniques is to solve transfer learning tasks, but we also found more specific motivations, such as learning robustness or invariances, serving as an algorithmic bridge, or uncovering the structure of the data. Significance: Given the potential of foundation models to effectively tackle these challenges, we advocate continued dedication to the advancement of foundation models specifically designed for EEG signal decoding using SSL techniques. We also underline the imperative of establishing specialized benchmarks and datasets to facilitate the development and continuous improvement of such foundation models.
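
For readers unfamiliar with the autoencoder-based approaches that dominate the reviewed literature, the following minimal sketch illustrates the general idea of learning a compact EEG representation by reconstruction. It is not taken from the review or from any surveyed article; the channel count, window length, and layer sizes are illustrative assumptions, and the encoder output would serve as the learned representation passed to a downstream BCI decoder.

```python
# Minimal illustrative sketch (not from the review): a convolutional
# autoencoder that learns a compact representation of multichannel EEG
# windows by reconstruction. All dimensions below are assumptions.
import torch
from torch import nn

N_CHANNELS = 22   # assumed EEG channel count
N_SAMPLES = 256   # assumed window length in samples
LATENT_DIM = 128  # assumed size of the learned representation


class EEGAutoencoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Encoder: temporal convolutions that downsample the window,
        # followed by a linear projection to the latent representation.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (N_SAMPLES // 4), LATENT_DIM),
        )
        # Decoder: mirror of the encoder, reconstructing the raw window.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * (N_SAMPLES // 4)),
            nn.Unflatten(1, (64, N_SAMPLES // 4)),
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, N_CHANNELS, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = self.encoder(x)      # (batch, LATENT_DIM): learned representation
        x_hat = self.decoder(z)  # (batch, N_CHANNELS, N_SAMPLES): reconstruction
        return z, x_hat


if __name__ == "__main__":
    model = EEGAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, N_CHANNELS, N_SAMPLES)  # synthetic EEG batch
    z, x_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x)    # reconstruction objective
    loss.backward()
    optimizer.step()
    print(z.shape, x_hat.shape, float(loss))
```

The self-supervised approaches discussed in the review follow the same overall pattern of pretraining an encoder whose representation is reused downstream, but replace the reconstruction objective with pretext tasks such as contrastive or masked-signal prediction.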
