Modelling mPFC Activities in Reinforcement Learning Framework for Brain-Machine Interfaces

Xiang Shen, Xiang Zhang, Yifan Huang, Shuhang Chen, Yiwen Wang
{"title":"Modelling mPFC Activities in Reinforcement Learning Framework for Brain-Machine Interfaces","authors":"Xiang Shen, Xiang Zhang, Yifan Huang, Shuhang Chen, Yiwen Wang","doi":"10.1109/NER.2019.8717162","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) algorithm interprets the movement intentions in Brain-machine interfaces (BMIs) with a reward signal. This reward can be an external reward (food or water) or an internal representation which links the correct movement with the external reward. Medial prefrontal cortex (mPFC) has been demonstrated to be closely related to the reward-guided learning. In this paper, we propose to model mPFC activities as an internal representation of the reward associated with different actions in a RL framework. Support vector machine (SVM) is adopted to analyze mPFC activities to distinguish the rewarded and unrewarded trials based on mPFC signals considering corresponding actions. Then the discrimination result will be utilized to train a RL decoder. Here we introduce the attention-gated reinforcement learning (AGREL) as the decoder to generate a mapping between motor cortex(M1) and action states. To evaluate our approach, we test on in vivo neural physiological data collected from rats when performing a two-lever discrimination task. The RL decoder using the internal action-reward evaluation achieves a prediction accuracy of 94.8%, which is very close to the one using the external reward. This indicates the potentials of modelling mPFC activities as an internal representation to associate the correct action with the reward.","PeriodicalId":356177,"journal":{"name":"2019 9th International IEEE/EMBS Conference on Neural Engineering (NER)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 9th International IEEE/EMBS Conference on Neural Engineering (NER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NER.2019.8717162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Reinforcement learning (RL) algorithms interpret movement intentions in brain-machine interfaces (BMIs) from a reward signal. This reward can be an external reward (food or water) or an internal representation that links the correct movement with the external reward. The medial prefrontal cortex (mPFC) has been shown to be closely involved in reward-guided learning. In this paper, we propose to model mPFC activity as an internal representation of the reward associated with different actions in an RL framework. A support vector machine (SVM) is used to analyze mPFC activity and distinguish rewarded from unrewarded trials, taking the corresponding actions into account. The discrimination result is then used to train an RL decoder. We adopt attention-gated reinforcement learning (AGREL) as the decoder to generate a mapping between motor cortex (M1) activity and action states. To evaluate our approach, we test it on in vivo neurophysiological data collected from rats performing a two-lever discrimination task. The RL decoder using the internal action-reward evaluation achieves a prediction accuracy of 94.8%, very close to that of the decoder using the external reward. This indicates the potential of modelling mPFC activity as an internal representation that associates the correct action with the reward.
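To make the two-stage pipeline concrete, below is a minimal Python sketch: an SVM trained on mPFC features to label trials as rewarded or unrewarded, and a simplified AGREL-style decoder whose updates are gated by that internal reward signal. The data shapes, variable names, synthetic firing-rate features, and the reward-assignment rule inside the training loop are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the recorded data (the paper uses in vivo rat
# recordings from a two-lever discrimination task); shapes are assumptions.
n_trials, n_mpfc, n_m1, n_actions = 200, 16, 32, 2
mpfc_feats = rng.normal(size=(n_trials, n_mpfc))   # mPFC firing-rate features per trial
m1_feats = rng.normal(size=(n_trials, n_m1))       # M1 firing-rate features per trial
rewarded = rng.integers(0, 2, size=n_trials)       # 1 = externally rewarded trial
lever = rng.integers(0, n_actions, size=n_trials)  # lever actually pressed

# Stage 1: SVM learns an internal reward representation from mPFC activity,
# standing in for the external (food/water) reward signal.
svm = SVC(kernel="rbf").fit(mpfc_feats, rewarded)
internal_reward = svm.predict(mpfc_feats)

# Stage 2: simplified AGREL-style decoder mapping M1 activity to action states.
# Only the weights of the selected action are updated, scaled by the reward
# prediction error (a stripped-down version of attention-gated learning).
W = rng.normal(scale=0.01, size=(n_m1, n_actions))
lr = 0.05
for _ in range(20):
    for x, act, r_int in zip(m1_feats, lever, internal_reward):
        logits = x @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # softmax over the two lever actions
        a = rng.choice(n_actions, p=p)      # stochastic action selection
        # Assumed reward rule: the decoder is rewarded only if it reproduces
        # the action taken on a trial the mPFC model labels as rewarded.
        r = 1.0 if (a == act and r_int == 1) else 0.0
        delta = r - p[a]                    # reward prediction error
        W[:, a] += lr * delta * x           # update gated to the chosen action

pred = (m1_feats @ W).argmax(axis=1)
print("decoder/lever agreement:", (pred == lever).mean())
```

On real recordings, the abstract reports that training against the internal reward reaches 94.8% accuracy, nearly matching training against the external reward; the synthetic data here only illustrates the data flow, not that result.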