Motor task-to-task transfer learning for motor imagery brain-computer interfaces

NeuroImage · IF 4.7 · Medicine Tier 2 · Q1 (Neuroimaging) · Publication date: 2024-10-28 · DOI: 10.1016/j.neuroimage.2024.120906
{"title":"运动图像脑机接口的运动任务到任务转移学习","authors":"","doi":"10.1016/j.neuroimage.2024.120906","DOIUrl":null,"url":null,"abstract":"<div><div>Motor imagery (MI) is one of the popular control paradigms in the non-invasive brain-computer interface (BCI) field. MI-BCI generally requires users to conduct the imagination of movement (e.g., left or right hand) to collect training data for generating a classification model during the calibration phase. However, this calibration phase is generally time-consuming and tedious, as users conduct the imagination of hand movement several times without being given feedback for an extended period. This obstacle makes MI-BCI non user-friendly and hinders its use. On the other hand, motor execution (ME) and motor observation (MO) are relatively easier tasks, yield lower fatigue than MI, and share similar neural mechanisms to MI. However, few studies have integrated these three tasks into BCIs. In this study, we propose a new task-to-task transfer learning approach of 3-motor tasks (ME, MO, and MI) for building a better user-friendly MI-BCI. For this study, 28 subjects participated in 3-motor tasks experiment, and electroencephalography (EEG) was acquired. User opinions regarding the 3-motor tasks were also collected through questionnaire survey. The 3-motor tasks showed a power decrease in the alpha rhythm, known as event-related desynchronization, but with slight differences in the temporal patterns. In the classification analysis, the cross-validated accuracy (within-task) was 67.05 % for ME, 65.93 % for MI, and 73.16 % for MO on average. Consistently with the results, the subjects scored MI (3.16) as the most difficult task compared with MO (1.42) and ME (1.41), with <em>p</em> &lt; 0.05. In the analysis of task-to-task transfer learning, where training and testing are performed using different task datasets, the ME–trained model yielded an accuracy of 65.93 % (MI test), which is statistically similar to the within-task accuracy (<em>p</em> &gt; 0.05). The MO–trained model achieved an accuracy of 60.82 % (MI test). On the other hand, combining two datasets yielded interesting results. ME and 50 % of the MI–trained model (50-shot) classified MI with a 69.21 % accuracy, which outperformed the within-task accuracy (<em>p</em> &lt; 0.05), and MO and 50 % of the MI–trained model showed an accuracy of 66.75 %. Of the low performers with a within-task accuracy of 70 % or less, 90 % (<em>n</em> = 21) of the subjects improved in training with ME, and 76.2 % (<em>n</em> = 16) improved in training with MO on the MI test at 50-shot. These results demonstrate that task-to-task transfer learning is possible and could be a promising approach to building a user-friendly training protocol in MI-BCI.</div></div>","PeriodicalId":19299,"journal":{"name":"NeuroImage","volume":null,"pages":null},"PeriodicalIF":4.7000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Motor task-to-task transfer learning for motor imagery brain-computer interfaces\",\"authors\":\"\",\"doi\":\"10.1016/j.neuroimage.2024.120906\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Motor imagery (MI) is one of the popular control paradigms in the non-invasive brain-computer interface (BCI) field. 
MI-BCI generally requires users to conduct the imagination of movement (e.g., left or right hand) to collect training data for generating a classification model during the calibration phase. However, this calibration phase is generally time-consuming and tedious, as users conduct the imagination of hand movement several times without being given feedback for an extended period. This obstacle makes MI-BCI non user-friendly and hinders its use. On the other hand, motor execution (ME) and motor observation (MO) are relatively easier tasks, yield lower fatigue than MI, and share similar neural mechanisms to MI. However, few studies have integrated these three tasks into BCIs. In this study, we propose a new task-to-task transfer learning approach of 3-motor tasks (ME, MO, and MI) for building a better user-friendly MI-BCI. For this study, 28 subjects participated in 3-motor tasks experiment, and electroencephalography (EEG) was acquired. User opinions regarding the 3-motor tasks were also collected through questionnaire survey. The 3-motor tasks showed a power decrease in the alpha rhythm, known as event-related desynchronization, but with slight differences in the temporal patterns. In the classification analysis, the cross-validated accuracy (within-task) was 67.05 % for ME, 65.93 % for MI, and 73.16 % for MO on average. Consistently with the results, the subjects scored MI (3.16) as the most difficult task compared with MO (1.42) and ME (1.41), with <em>p</em> &lt; 0.05. In the analysis of task-to-task transfer learning, where training and testing are performed using different task datasets, the ME–trained model yielded an accuracy of 65.93 % (MI test), which is statistically similar to the within-task accuracy (<em>p</em> &gt; 0.05). The MO–trained model achieved an accuracy of 60.82 % (MI test). On the other hand, combining two datasets yielded interesting results. ME and 50 % of the MI–trained model (50-shot) classified MI with a 69.21 % accuracy, which outperformed the within-task accuracy (<em>p</em> &lt; 0.05), and MO and 50 % of the MI–trained model showed an accuracy of 66.75 %. Of the low performers with a within-task accuracy of 70 % or less, 90 % (<em>n</em> = 21) of the subjects improved in training with ME, and 76.2 % (<em>n</em> = 16) improved in training with MO on the MI test at 50-shot. 
These results demonstrate that task-to-task transfer learning is possible and could be a promising approach to building a user-friendly training protocol in MI-BCI.</div></div>\",\"PeriodicalId\":19299,\"journal\":{\"name\":\"NeuroImage\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2024-10-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"NeuroImage\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1053811924004038\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NEUROIMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"NeuroImage","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1053811924004038","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROIMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Motor imagery (MI) is one of the popular control paradigms in the non-invasive brain-computer interface (BCI) field. MI-BCI generally requires users to imagine a movement (e.g., of the left or right hand) to collect training data for generating a classification model during the calibration phase. However, this calibration phase is generally time-consuming and tedious, as users imagine the hand movement many times without receiving feedback for an extended period. This obstacle makes MI-BCI not user-friendly and hinders its use. On the other hand, motor execution (ME) and motor observation (MO) are relatively easier tasks, induce less fatigue than MI, and share similar neural mechanisms with MI. However, few studies have integrated these three tasks into BCIs. In this study, we propose a new task-to-task transfer learning approach across three motor tasks (ME, MO, and MI) for building a more user-friendly MI-BCI. For this study, 28 subjects participated in a three-motor-task experiment while electroencephalography (EEG) was acquired. User opinions regarding the three motor tasks were also collected through a questionnaire survey. All three motor tasks showed a power decrease in the alpha rhythm, known as event-related desynchronization, but with slight differences in their temporal patterns. In the classification analysis, the cross-validated (within-task) accuracy averaged 67.05 % for ME, 65.93 % for MI, and 73.16 % for MO. Consistent with these results, the subjects rated MI (3.16) as the most difficult task compared with MO (1.42) and ME (1.41), with p < 0.05. In the task-to-task transfer learning analysis, where training and testing are performed on different task datasets, the ME-trained model yielded an accuracy of 65.93 % on the MI test, which is statistically similar to the within-task accuracy (p > 0.05). The MO-trained model achieved an accuracy of 60.82 % on the MI test. On the other hand, combining two datasets yielded interesting results. A model trained on ME plus 50 % of the MI data (50-shot) classified MI with 69.21 % accuracy, which outperformed the within-task accuracy (p < 0.05), and a model trained on MO plus 50 % of the MI data reached 66.75 %. Of the low performers with a within-task accuracy of 70 % or less, 90 % (n = 21) of the subjects improved on the MI test when trained with ME at 50-shot, and 76.2 % (n = 16) improved when trained with MO. These results demonstrate that task-to-task transfer learning is possible and could be a promising approach to building a user-friendly training protocol in MI-BCI.
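To make the three evaluation schemes described above concrete, the sketch below illustrates, under stated assumptions, how within-task cross-validation, ME-to-MI transfer, and 50-shot combined training can be compared. The abstract does not specify the paper's feature extraction or classifier, so this example substitutes synthetic feature arrays and a linear discriminant classifier (a common MI-BCI baseline); all function and variable names here are hypothetical and do not come from the paper.

# Hypothetical Python sketch of the three evaluation schemes in the abstract:
# (1) within-task cross-validation on MI, (2) task-to-task transfer trained on ME
# and tested on MI, (3) 50-shot transfer trained on ME plus half of the MI trials.
# Feature extraction (e.g., CSP log-variance) is replaced by random arrays.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

def fake_features(n_trials=100, n_features=8):
    """Stand-in for per-trial EEG features; not the paper's pipeline."""
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 2, size=n_trials)   # 0 = left hand, 1 = right hand
    X[y == 1, 0] += 1.0                     # inject a weakly separable component
    return X, y

X_me, y_me = fake_features()  # motor execution trials
X_mi, y_mi = fake_features()  # motor imagery trials

# (1) Within-task accuracy: cross-validation on MI data alone.
within = cross_val_score(LinearDiscriminantAnalysis(), X_mi, y_mi, cv=5).mean()

# (2) Task-to-task transfer: train on ME, test on MI.
clf = LinearDiscriminantAnalysis().fit(X_me, y_me)
cross_task = clf.score(X_mi, y_mi)

# (3) 50-shot transfer: ME data plus 50 % of MI trials, tested on the held-out MI half.
X_tr, X_te, y_tr, y_te = train_test_split(X_mi, y_mi, test_size=0.5, random_state=0)
clf50 = LinearDiscriminantAnalysis().fit(np.vstack([X_me, X_tr]),
                                         np.hstack([y_me, y_tr]))
fifty_shot = clf50.score(X_te, y_te)

print(f"within-task: {within:.2f}, ME->MI: {cross_task:.2f}, 50-shot: {fifty_shot:.2f}")

The paper's corresponding results were 65.93 % for within-task MI, 65.93 % for the ME-trained model on the MI test, and 69.21 % for ME plus 50 % of the MI data; the numbers printed by this sketch reflect only the synthetic data and are not a reproduction of those results.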
Source journal: NeuroImage (Medicine / Nuclear Medicine)
CiteScore: 11.30 · Self-citation rate: 10.50 % · Articles published: 809 · Review time: 63 days
About the journal: NeuroImage, a Journal of Brain Function, provides a vehicle for communicating important advances in acquiring, analyzing, and modelling neuroimaging data and in applying these techniques to the study of structure-function and brain-behavior relationships. Though the emphasis is on the macroscopic level of human brain organization, meso- and microscopic neuroimaging across all species will be considered if informative for understanding the aforementioned relationships.