{"title":"Inverse reinforcement learning by expert imitation for the stochastic linear–quadratic optimal control problem","authors":"Zhongshi Sun , Guangyan Jia","doi":"10.1016/j.neucom.2025.129758","DOIUrl":null,"url":null,"abstract":"<div><div>This article studies inverse reinforcement learning (IRL) for the linear–quadratic stochastic optimal control problem, where two agents are considered. A learner agent lacks knowledge of the expert agent’s cost function, but it reconstructs an underlying cost function by observing the expert agent’s states and controls, thereby imitating the expert agent’s optimal feedback control. We initially present a model-based IRL method, which consists of a policy correction and a policy update from the policy iteration in reinforcement learning, as well as a cost function weight reconstruction informed by the inverse optimal control. Afterward, under this scheme, we propose a model-free off-policy IRL method, which requires no system identification, only collecting behavior data from the learner agent and expert agent once during the iteration process. Moreover, the proofs of the method’s convergence, stability, and non-unique solutions are given. Finally, a numerical example and an inverse mean–variance portfolio optimization example are provided to validate the effectiveness of the presented method.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129758"},"PeriodicalIF":5.5000,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225004308","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This article studies inverse reinforcement learning (IRL) for the linear–quadratic stochastic optimal control problem, in which two agents are considered. A learner agent does not know the expert agent's cost function, but it reconstructs an underlying cost function by observing the expert agent's states and controls, thereby imitating the expert agent's optimal feedback control. We first present a model-based IRL method that combines the policy-correction and policy-update steps of policy iteration in reinforcement learning with a cost-function weight reconstruction informed by inverse optimal control. Building on this scheme, we then propose a model-free off-policy IRL method that requires no system identification and collects behavior data from the learner and expert agents only once during the iteration process. Moreover, we prove the method's convergence and stability and characterize the non-uniqueness of its solutions. Finally, a numerical example and an inverse mean–variance portfolio optimization example are provided to validate the effectiveness of the proposed method.
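For orientation, a standard stochastic linear–quadratic formulation of the kind the abstract describes (the paper's exact setup, e.g. its diffusion structure and horizon, may differ) is

$$
dx_t = (Ax_t + Bu_t)\,dt + (Cx_t + Du_t)\,dW_t, \qquad J(u) = \mathbb{E}\int_0^{\infty} \left(x_t^\top Q x_t + u_t^\top R u_t\right) dt,
$$

whose optimal control is a linear state feedback $u_t = -Kx_t$. The learner observes the expert's $(x_t, u_t)$ pairs and must reconstruct a weight matrix $Q$ under which the expert's gain is optimal.

The following is a minimal sketch of two ingredients the abstract names, estimating the expert's feedback gain from observed states and controls and reconstructing a consistent cost weight via inverse optimal control, worked out on a deterministic LQR surrogate rather than the paper's stochastic setting or its exact iteration. All function names are illustrative, and NumPy/SciPy are assumed.

```python
# Hypothetical sketch (not the paper's algorithm): expert-gain estimation plus
# an inverse-optimal-control cost reconstruction on a deterministic LQR surrogate.
import numpy as np
from scipy.linalg import solve_continuous_are

def estimate_expert_gain(X, U):
    """Least-squares fit of the expert's feedback law u = -K x.
    X: (T, n) observed states, U: (T, m) observed controls."""
    W, *_ = np.linalg.lstsq(X, U, rcond=None)  # U ≈ X @ W with W = -K.T
    return -W.T

def recover_cost_weight(A, B, R, K_e):
    """Reconstruct one state-cost weight Q consistent with the expert gain K_e.
    Optimality gives K_e = R^{-1} B^T P, i.e. B^T P = R K_e; substituting into
    the Riccati equation A^T P + P A + Q - P B R^{-1} B^T P = 0 yields
    Q = K_e^T R K_e - A^T P - P A. P is generally non-unique (cf. the abstract's
    non-uniqueness); we take the minimum-norm symmetric solution."""
    n = A.shape[0]
    basis, cols = [], []
    for i in range(n):                      # symmetric basis for P
        for j in range(i, n):
            S = np.zeros((n, n))
            S[i, j] = S[j, i] = 1.0
            basis.append(S)
            cols.append((B.T @ S).ravel())  # each column is vec(B^T S)
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), (R @ K_e).ravel(), rcond=None)
    P = sum(c * S for c, S in zip(coef, basis))
    Q = K_e.T @ R @ K_e - A.T @ P - P @ A
    return 0.5 * (Q + Q.T)                  # symmetrize against round-off

# Toy round trip: generate expert data from a known Q, recover a consistent Q.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])
Q_true = np.diag([3.0, 1.0])
P_true = solve_continuous_are(A, B, Q_true, R)
K_star = np.linalg.solve(R, B.T @ P_true)         # expert's optimal gain
X = rng.standard_normal((200, 2))
U = X @ (-K_star).T                               # noiseless expert controls
K_e = estimate_expert_gain(X, U)
Q_hat = recover_cost_weight(A, B, R, K_e)
K_hat = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q_hat, R))
print(np.allclose(K_hat, K_star, atol=1e-6))      # gains match even if Q_hat != Q_true
```

Because $B^\top P = R K_e$ is underdetermined whenever the input dimension is smaller than the state dimension, many weight matrices $Q$ reproduce the same expert gain; this is the kind of non-uniqueness of solutions that the abstract highlights.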
Journal Introduction
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.