Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG: a feasibility study
L. Tøttrup, Kasper Leerskov, J. T. Hadsund, E. Kamavuako, R. L. Kæseler, M. Jochumsen
2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), June 2019
DOI: 10.1109/ICORR.2019.8779499
Citations: 5
Abstract
For individuals with severe motor deficiencies, controlling external devices such as robotic arms or wheelchairs can be challenging, as many devices require some degree of motor control to operate, e.g. when controlled with a joystick. A brain-computer interface (BCI) relies only on signals from the brain and may be used as a controller instead of the muscles. Motor imagery (MI) has been used as a control signal for BCIs in many studies. However, MI may not be suitable for all control purposes, and some people cannot achieve BCI control with MI. The aim of this study was to investigate the feasibility of decoding covert speech from single-trial EEG and to compare and combine it with MI. In seven healthy subjects, EEG was recorded from twenty-five channels during six different actions: speaking three words (both covert and overt speech), two arm movements (both motor imagery and motor execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9% and 75 ± 7% for covert and overt speech, respectively; this was 5–10% lower than the movement classification. The performance of the combined movement-speech decoder was 61 ± 9% and 67 ± 7% (covert and overt, respectively), but combining modalities makes more classes available for control. The possibility of using covert speech to control a BCI was outlined; this is a step towards a multimodal BCI system with improved usability.
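To make the described pipeline concrete, below is a minimal sketch (not the authors' code) of the decoding approach the abstract outlines: temporal and spectral features are derived from each EEG epoch and classified with a random forest. The data here is synthetic, and the epoch length, 250 Hz sampling rate, frequency bands, and feature choices are illustrative assumptions, not details from the paper.

```python
# Sketch of the abstract's pipeline: temporal + spectral features per epoch,
# classified with a random forest. All parameters below are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                           # assumed sampling rate (Hz)
n_epochs, n_channels, n_samples = 420, 25, 2 * fs  # 25 channels, assumed 2 s epochs
X_raw = rng.standard_normal((n_epochs, n_channels, n_samples))  # synthetic EEG
y = rng.integers(0, 6, n_epochs)  # six classes: 3 words, 2 movements, 1 idle

def extract_features(epoch):
    """Temporal and spectral features for one (channels x samples) epoch."""
    feats = []
    for ch in epoch:
        # Temporal features: mean, variance, total first-difference magnitude.
        feats += [ch.mean(), ch.var(), np.abs(np.diff(ch)).sum()]
        # Spectral features: mean Welch power in common EEG bands (assumed).
        f, pxx = welch(ch, fs=fs, nperseg=fs)
        for lo, hi in [(4, 8), (8, 13), (13, 30), (30, 45)]:
            feats.append(pxx[(f >= lo) & (f < hi)].mean())
    return np.array(feats)

X = np.array([extract_features(ep) for ep in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")  # ~chance (1/6) on random data
```

On real recordings, per-subject cross-validation as above is the usual way to obtain accuracies comparable to the per-subject figures reported in the abstract; with this synthetic data the classifier can only reach chance level.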