Hidden Brain State-Based Internal Evaluation Using Kernel Inverse Reinforcement Learning in Brain-Machine Interfaces

Jieyuan Tan; Xiang Zhang; Shenghui Wu; Zhiwei Song; Yiwen Wang

IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 32, pp. 4219-4229. Published 21 November 2024. DOI: 10.1109/TNSRE.2024.3503713. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10759843
Reinforcement learning (RL)-based brain-machine interfaces (BMIs) assist paralyzed people in controlling neural prostheses without requiring real limb movement as a supervisory signal. The design of the reward signal significantly impacts the learning efficiency of RL-based decoders. Existing reward designs in the RL-based BMI framework rely on external rewards or manually labeled internal rewards and therefore cannot accurately extract a subject's internal evaluation. In this paper, we propose a hidden brain state-based kernel inverse reinforcement learning (HBS-KIRL) method to accurately infer the subject-specific internal evaluation from neural activity during a BMI task. A state-space model is applied to project the neural state into a low-dimensional hidden brain state space, which greatly reduces the exploration dimension. The kernel method is then applied to speed up the convergence of the policy, reward, and Q-value networks in a reproducing kernel Hilbert space (RKHS). We tested the proposed algorithm on data collected from the medial prefrontal cortex (mPFC) of rats performing a two-lever-discrimination task. We assessed the state-value estimation performance of the proposed method and compared it with naïve IRL and PCA-based IRL. To validate that the extracted internal evaluation contributes to decoder training, we compared the decoding performance of decoders trained with different reward models: a manually designed reward, naïve IRL, PCA-IRL, and the proposed HBS-KIRL. The results show that HBS-KIRL gives a stable and accurate estimation of the state-value distribution with respect to behavior. Compared with the other methods, the decoder guided by HBS-KIRL achieves consistently better decoding performance across days. This study reveals the potential of IRL methods to better extract subject-specific evaluations and improve BMI decoding performance.
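The abstract compresses two technical steps: projecting high-dimensional neural activity into a low-dimensional hidden brain state, and learning value functions in a reproducing kernel Hilbert space. The sketch below is a minimal illustration of these two ingredients, not the authors' implementation: the state-space model is approximated here by a plain PCA projection (the paper's model also captures temporal dynamics), and the RKHS step is shown as kernel ridge regression of a Q-value via the representer theorem. All function names, array shapes, kernel widths, and regularizers are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's HBS-KIRL implementation.
import numpy as np

def project_to_hidden_state(neural_activity: np.ndarray, dim: int = 3) -> np.ndarray:
    """Project binned neural activity (T x n_neurons) to a T x dim hidden state.

    A PCA projection stands in for the paper's state-space model here; it
    captures the dimensionality-reduction step, not the latent dynamics.
    """
    centered = neural_activity - neural_activity.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T

def rbf_kernel(a: np.ndarray, b: np.ndarray, width: float = 1.0) -> np.ndarray:
    """Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * width^2))."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * width ** 2))

class KernelQ:
    """Kernelized value estimate fit by regularized least squares in an RKHS."""

    def __init__(self, width: float = 1.0, reg: float = 1e-3):
        self.width, self.reg = width, reg

    def fit(self, states: np.ndarray, targets: np.ndarray) -> "KernelQ":
        # By the representer theorem the minimizer lies in the span of the
        # training states, so fitting reduces to solving one linear system
        # for the expansion coefficients alpha.
        gram = rbf_kernel(states, states, self.width)
        self.states = states
        self.alpha = np.linalg.solve(gram + self.reg * np.eye(len(states)), targets)
        return self

    def predict(self, states: np.ndarray) -> np.ndarray:
        # Q(s) = sum_i alpha_i * k(s, s_i)
        return rbf_kernel(states, self.states, self.width) @ self.alpha

# Toy usage: 200 time bins of 50-channel spike counts, synthetic value targets.
rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(200, 50)).astype(float)
hidden = project_to_hidden_state(spikes, dim=3)
values = hidden[:, 0] + 0.1 * rng.standard_normal(200)  # synthetic targets
q = KernelQ(width=1.0).fit(hidden, values)
print(q.predict(hidden[:5]))
```

The representer-theorem expansion is what makes the kernel step fast: fitting amounts to solving a single regularized linear system rather than iterating gradient updates on network weights, which is consistent with the convergence speed-up the abstract attributes to working in an RKHS.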
Journal scope:
Rehabilitative and neural aspects of biomedical engineering, including functional electrical stimulation, acoustic dynamics, human performance measurement and analysis, nerve stimulation, electromyography, motor control and stimulation; and hardware and software applications for rehabilitation engineering and assistive devices.