Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG: a feasibility study

L. Tøttrup, Kasper Leerskov, J. T. Hadsund, E. Kamavuako, R. L. Kæseler, M. Jochumsen
DOI: 10.1109/ICORR.2019.8779499
Published in: 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), June 2019
Citations: 5

Abstract

For individuals with severe motor deficiencies, controlling external devices such as robotic arms or wheelchairs can be challenging, as many devices require some degree of motor control to be operated, e.g. when controlled with a joystick. A brain-computer interface (BCI) relies only on signals from the brain and may be used as a controller in place of the muscles. Motor imagery (MI) has been used in many studies as a control signal for BCIs. However, MI may not be suitable for all control purposes, and some people cannot attain BCI control with MI. In this study, the aim was to investigate the feasibility of decoding covert speech from single-trial EEG and to compare and combine it with MI. In seven healthy subjects, EEG was recorded with twenty-five channels during six different actions: speaking three words (both covert and overt speech), two arm movements (both motor imagery and execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9% and 75 ± 7% for covert and overt speech, respectively; this was 5–10% lower than the movement classification. The accuracy of the combined movement–speech decoder was 61 ± 9% and 67 ± 7% (covert and overt, respectively), but combining modalities makes more classes available for control. The possibility of using covert speech for controlling a BCI was outlined; this is a step towards a multimodal BCI system with improved usability.
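The pipeline described in the abstract (epochs → temporal and spectral features → random forest) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the channel count (25) and class count (6) follow the paper, while the sampling rate, epoch length, feature choices, and frequency bands are assumptions for the example.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the recordings: 6 classes x 30 epochs each,
# 25 channels, 2 s at 250 Hz (sampling rate and epoch length are
# illustrative assumptions, not taken from the paper).
n_classes, n_epochs, n_channels, fs, n_samples = 6, 30, 25, 250, 500
X_raw = rng.standard_normal((n_classes * n_epochs, n_channels, n_samples))
y = np.repeat(np.arange(n_classes), n_epochs)

def features(epoch):
    """Simple temporal + spectral features, computed per channel."""
    temporal = [epoch.mean(axis=1), epoch.std(axis=1)]
    f, psd = welch(epoch, fs=fs, nperseg=128)      # PSD along last axis
    mu = psd[:, (f >= 8) & (f <= 12)].mean(axis=1)   # mu-band power
    beta = psd[:, (f >= 13) & (f <= 30)].mean(axis=1)  # beta-band power
    return np.concatenate(temporal + [mu, beta])

X = np.array([features(e) for e in X_raw])  # (180 epochs, 100 features)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On pure noise the cross-validated accuracy stays near chance (1/6); on real EEG, class-dependent band power and amplitude structure are what the forest can exploit.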