Motor imagery with cues in virtual reality, audio and screen.

Sonal Santosh Baberwal, Luz Alejandra Magre, K R Sanjaya D Gunawardhana, Michael Parkinson, Tomas Ward, Shirley Coyle
{"title":"Motor imagery with cues in virtual reality, audio and screen.","authors":"Sonal Santosh Baberwal, Luz Alejandra Magre, K R Sanjaya D Gunawardhana, Michael Parkinson, Tomas Ward, Shirley Coyle","doi":"10.1088/1741-2552/ad775e","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>Training plays a significant role in motor imagery (MI), particularly in applications such as Motor Imagery-based Brain-Computer Interface (MIBCI) systems and rehabilitation systems. Previous studies have investigated the intricate relationship between cues and MI signals. However, the medium of presentation still remains an emerging area to be explored, as possible factors to enhance Motor Imagery signals..&#xD;Approach: We hypothesise that the medium used for cue presentation can significantly influence both performance and training outcomes in MI tasks. To test this hypothesis, we designed and executed an experiment implementing no- feedback MI. Our investigation focused on three distinct cue presentation mediums -audio, screen, and virtual reality(VR) headsets-all of which have potential implications for BCI use in the Activities of Daily Lives.&#xD;Main Results: The results of our study uncovered notable variations in MI signals depending on the medium of cue presentation, where the analysis is based on 3 EEG channels. To substantiate our findings, we employed a comprehensive approach, utilizing various evaluation metrics including Event- Related Synchronisation(ERS)/Desynchronisation(ERD), Feature Extraction (using Recursive Feature Elimination (RFE)), Machine Learning methodologies (using Ensemble Learning), and participant Questionnaires. All the approaches signify that Motor Imagery signals are enhanced when presented in VR, followed by audio, and lastly screen. Applying a Machine Learning approach across all subjects, the mean cross-validation accuracy (Mean ± Std. Error) was 69.24 ± 3.12, 68.69 ± 3.3 and 66.1±2.59 when for the VR, audio-based, and screen-based instructions respectively.&#xD;Significance: This multi-faceted exploration provides evidence to inform MI- based BCI design and advocates the incorporation of different mediums into the design of MIBCI systems, experimental setups, and user studies. The influence of the medium used for cue presentation may be applied to develop more effective and inclusive MI applications in the realm of human-computer interaction and rehabilitation.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad775e","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Objective: Training plays a significant role in motor imagery (MI), particularly in applications such as motor imagery-based brain-computer interface (MIBCI) systems and rehabilitation systems. Previous studies have investigated the intricate relationship between cues and MI signals. However, the medium of cue presentation remains an emerging area to explore as a possible factor for enhancing MI signals.

Approach: We hypothesise that the medium used for cue presentation can significantly influence both performance and training outcomes in MI tasks. To test this hypothesis, we designed and executed a no-feedback MI experiment. Our investigation focused on three distinct cue presentation media, audio, screen, and virtual reality (VR) headsets, all of which have potential implications for BCI use in activities of daily living.

Main results: Our study uncovered notable variations in MI signals depending on the medium of cue presentation, with the analysis based on three EEG channels. To substantiate our findings, we employed a comprehensive approach using several evaluation methods: event-related synchronisation (ERS)/desynchronisation (ERD), feature extraction using recursive feature elimination (RFE), machine learning using ensemble learning, and participant questionnaires. All of these approaches indicate that motor imagery signals are enhanced most when cues are presented in VR, followed by audio, and lastly the screen. Applying the machine learning approach across all subjects, the mean cross-validation accuracy (mean ± std. error) was 69.24 ± 3.12, 68.69 ± 3.3, and 66.1 ± 2.59 for the VR-, audio-, and screen-based instructions, respectively.

Significance: This multi-faceted exploration provides evidence to inform MI-based BCI design and advocates incorporating different presentation media into the design of MIBCI systems, experimental setups, and user studies. The influence of the cue presentation medium may be applied to develop more effective and inclusive MI applications in human-computer interaction and rehabilitation.
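To make the reported analysis steps concrete, the following is a minimal sketch, in Python, of the kind of pipeline the abstract describes: an ERD/ERS-style band-power comparison, recursive feature elimination, and an ensemble classifier scored with cross-validation. The synthetic data, feature choices, and classifier settings (scikit-learn's RFE, LogisticRegression, and RandomForestClassifier) are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch (assumed, not the authors' code) of the analysis steps
# described in the abstract: an ERD/ERS band-power comparison, recursive
# feature elimination (RFE), and an ensemble classifier evaluated with
# cross-validation. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_trials = 120

# ERD/ERS for one channel: relative band-power change during MI versus a
# pre-cue reference interval; negative values indicate desynchronisation (ERD).
ref_power = rng.normal(loc=10.0, scale=1.0, size=n_trials)  # reference-interval mu power
mi_power = rng.normal(loc=8.0, scale=1.0, size=n_trials)    # MI-interval mu power
erd_percent = (mi_power - ref_power) / ref_power * 100.0
print(f"Mean ERD: {erd_percent.mean():.1f}%")

# Classification: placeholder band-power features (e.g., mu/beta sub-bands
# over three sensorimotor channels such as C3, Cz, C4) and binary MI labels.
X = rng.normal(size=(n_trials, 12))
y = rng.integers(0, 2, size=n_trials)

pipeline = Pipeline([
    # RFE with a linear estimator keeps the most discriminative features.
    ("rfe", RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=6)),
    # Ensemble learner (a random forest here) classifies the selected features.
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Mean cross-validation accuracy ± standard error, analogous to the
# per-medium scores reported in the abstract.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} ± {scores.std(ddof=1) / np.sqrt(len(scores)):.3f}")
```

In a real analysis the feature matrix would be derived from the recorded EEG (for example, band power in the mu and beta rhythms over the three channels) rather than random placeholders, and the cross-validation would be run separately for each cue presentation medium.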
