Relating EEG to continuous speech using deep neural networks: a review.

IF 3.7 | CAS Zone 3 (Medicine) | Q2 (Engineering, Biomedical) | Journal of Neural Engineering | Pub Date: 2023-08-03 | DOI: 10.1088/1741-2552/ace73f
Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van Hamme, Tom Francart
{"title":"利用深度神经网络将脑电图与连续语音联系起来:综述。","authors":"Corentin Puffay,&nbsp;Bernd Accou,&nbsp;Lies Bollens,&nbsp;Mohammad Jalilpour Monesi,&nbsp;Jonas Vanthornhout,&nbsp;Hugo Van Hamme,&nbsp;Tom Francart","doi":"10.1088/1741-2552/ace73f","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech.<i>Approach.</i>This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speakers paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis.<i>Main results.</i>We gathered 29 studies. The main methodological issues we found are biased cross-validations, data leakage leading to over-fitted models, or disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task.<i>Significance.</i>We present a review paper summarizing the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.</p>","PeriodicalId":16753,"journal":{"name":"Journal of neural engineering","volume":"20 4","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Relating EEG to continuous speech using deep neural networks: a review.\",\"authors\":\"Corentin Puffay,&nbsp;Bernd Accou,&nbsp;Lies Bollens,&nbsp;Mohammad Jalilpour Monesi,&nbsp;Jonas Vanthornhout,&nbsp;Hugo Van Hamme,&nbsp;Tom Francart\",\"doi\":\"10.1088/1741-2552/ace73f\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective.</i>When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech.<i>Approach.</i>This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speakers paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis.<i>Main results.</i>We gathered 29 studies. 
The main methodological issues we found are biased cross-validations, data leakage leading to over-fitted models, or disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task.<i>Significance.</i>We present a review paper summarizing the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.</p>\",\"PeriodicalId\":16753,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\"20 4\",\"pages\":\"\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2023-08-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ace73f\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1741-2552/ace73f","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 10

Abstract

Objective. When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech. Approach. This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speakers paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis. Main results. We gathered 29 studies. The main methodological issues we found are biased cross-validations, data leakage leading to over-fitted models, or disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. Significance. We present a review paper summarizing the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.
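
The methodological ingredients mentioned in the abstract can be made concrete with a short sketch: a linear backward (decoding) model that reconstructs the speech envelope from time-lagged EEG, a contiguous train/test split that avoids the temporal data leakage the authors warn about, and a match-mismatch evaluation in which the decoder must rank the truly presented speech segment above an imposter segment. The sketch below is illustrative only and is not taken from the reviewed studies; the sampling rate, lag window, regularization strength, segment length, and mismatch offset are assumed values, and the synthetic random data will of course score near chance.

# Illustrative sketch only (not the review's code): linear envelope
# reconstruction from time-lagged EEG, evaluated with a contiguous split
# and a simple match-mismatch test. All parameters are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel (lags 0..n_lags-1 samples)."""
    n_samples, n_channels = eeg.shape
    out = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        out[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return out

# Synthetic stand-ins for a real recording: 10 min of 64-channel EEG at 64 Hz
# and the temporally aligned speech envelope.
rng = np.random.default_rng(0)
fs = 64
eeg = rng.standard_normal((fs * 600, 64))
envelope = rng.standard_normal(fs * 600)

# Contiguous split in time (first 8 min train, last 2 min test); shuffling
# individual samples across this boundary would leak information.
split = fs * 480
X = lag_matrix(eeg, n_lags=int(0.25 * fs))          # 0-250 ms integration window
decoder = Ridge(alpha=1e3).fit(X[:split], envelope[:split])

# Neural-tracking measure: correlation between reconstructed and real envelope.
reconstruction = decoder.predict(X[split:])
tracking = np.corrcoef(reconstruction, envelope[split:])[0, 1]
print(f"reconstruction correlation: {tracking:.3f}")

# Match-mismatch task on held-out data: for each 5 s window, is the matched
# speech segment scored higher than a segment taken one minute later?
win, offset = 5 * fs, 60 * fs
hits = trials = 0
for start in range(split, len(envelope) - win - offset, win):
    rec = decoder.predict(X[start:start + win])
    score_match = np.corrcoef(rec, envelope[start:start + win])[0, 1]
    score_mismatch = np.corrcoef(rec, envelope[start + offset:start + offset + win])[0, 1]
    hits += score_match > score_mismatch
    trials += 1
print(f"match-mismatch accuracy over {trials} windows (chance = 0.5): {hits / trials:.2f}")

A sketch like this only illustrates the evaluation logic; in the deep-learning studies the review covers, the correlation-based decoder would be replaced by a neural network, but the same discipline applies: test windows and their mismatched counterparts should come from data the model never saw during training, which is the kind of data leakage the review cautions against.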

Source journal
Journal of Neural Engineering (Engineering & Technology - Biomedical Engineering)
CiteScore: 7.80
Self-citation rate: 12.50%
Publication volume: 319
Review time: 4.2 months
Journal description: The goal of Journal of Neural Engineering (JNE) is to act as a forum for the interdisciplinary field of neural engineering where neuroscientists, neurobiologists and engineers can publish their work in one periodical that bridges the gap between neuroscience and engineering. The journal publishes articles in the field of neural engineering at the molecular, cellular and systems levels. The scope of the journal encompasses experimental, computational, theoretical, clinical and applied aspects of: Innovative neurotechnology; Brain-machine (computer) interface; Neural interfacing; Bioelectronic medicines; Neuromodulation; Neural prostheses; Neural control; Neuro-rehabilitation; Neurorobotics; Optical neural engineering; Neural circuits: artificial & biological; Neuromorphic engineering; Neural tissue regeneration; Neural signal processing; Theoretical and computational neuroscience; Systems neuroscience; Translational neuroscience; Neuroimaging.
Latest articles from this journal
Building consensus on clinical outcome assessments for BCI devices. A summary of the 10th BCI society meeting 2023 workshop.
o-CLEAN: a novel multi-stage algorithm for the ocular artifacts' correction from EEG data in out-of-the-lab applications.
PDMS/CNT electrodes with bioamplifier for practical in-the-ear and conventional biosignal recordings.
DOCTer: a novel EEG-based diagnosis framework for disorders of consciousness.
I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks.