Optimal tracking control for linear discrete-time systems using reinforcement learning

Bahare Kiumarsi-Khomartash, F. Lewis, M. Naghibi-Sistani, A. Karimpour
{"title":"基于强化学习的线性离散系统最优跟踪控制","authors":"Bahare Kiumarsi-Khomartash, F. Lewis, M. Naghibi-Sistani, A. Karimpour","doi":"10.1109/CDC.2013.6760476","DOIUrl":null,"url":null,"abstract":"This paper presents an online solution to the infinite-horizon linear quadratic tracker (LQT) using reinforcement learning. It is first assumed that the value function for the LQT is quadratic in terms of the reference trajectory and the state of the system. Then, using the quadratic form of the value function, an augmented algebraic Riccati equation (ARE) is derived to solve the LQT. Using this formulation, both feedback and feedforward parts of the optimal control solution are obtained simultaneously by solving the augmented ARE. To find the solution to the augmented ARE online, policy iteration as a class of reinforcement learning algorithms, is employed. This algorithm is implemented on an actor-critic structure by using two neural networks and it does not need the knowledge of the drift system dynamics or the command generator dynamics. A simulation example shows that the proposed algorithm works for a system with partially unknown dynamics.","PeriodicalId":415568,"journal":{"name":"52nd IEEE Conference on Decision and Control","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":"{\"title\":\"Optimal tracking control for linear discrete-time systems using reinforcement learning\",\"authors\":\"Bahare Kiumarsi-Khomartash, F. Lewis, M. Naghibi-Sistani, A. Karimpour\",\"doi\":\"10.1109/CDC.2013.6760476\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an online solution to the infinite-horizon linear quadratic tracker (LQT) using reinforcement learning. It is first assumed that the value function for the LQT is quadratic in terms of the reference trajectory and the state of the system. Then, using the quadratic form of the value function, an augmented algebraic Riccati equation (ARE) is derived to solve the LQT. Using this formulation, both feedback and feedforward parts of the optimal control solution are obtained simultaneously by solving the augmented ARE. To find the solution to the augmented ARE online, policy iteration as a class of reinforcement learning algorithms, is employed. This algorithm is implemented on an actor-critic structure by using two neural networks and it does not need the knowledge of the drift system dynamics or the command generator dynamics. 
A simulation example shows that the proposed algorithm works for a system with partially unknown dynamics.\",\"PeriodicalId\":415568,\"journal\":{\"name\":\"52nd IEEE Conference on Decision and Control\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"26\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"52nd IEEE Conference on Decision and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CDC.2013.6760476\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"52nd IEEE Conference on Decision and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CDC.2013.6760476","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 26

Abstract

This paper presents an online solution to the infinite-horizon linear quadratic tracker (LQT) using reinforcement learning. It is first assumed that the value function for the LQT is quadratic in terms of the reference trajectory and the state of the system. Then, using the quadratic form of the value function, an augmented algebraic Riccati equation (ARE) is derived to solve the LQT. Using this formulation, both the feedback and feedforward parts of the optimal control solution are obtained simultaneously by solving the augmented ARE. To find the solution to the augmented ARE online, policy iteration, a class of reinforcement learning algorithms, is employed. The algorithm is implemented on an actor-critic structure using two neural networks and does not require knowledge of the drift system dynamics or the command generator dynamics. A simulation example shows that the proposed algorithm works for a system with partially unknown dynamics.
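To illustrate the augmented formulation described in the abstract, the sketch below builds the augmented state X = [x; r] from the system dynamics and the command generator, and solves the discounted augmented ARE by offline policy iteration. This is only a minimal, model-based sketch under assumed example matrices (A, B, F, C, Q, R, and the discount factor are hypothetical); the paper's actual algorithm is online and model-free, estimating the value function and policy with two neural networks from measured data rather than from A or F.

```python
import numpy as np

# Hypothetical example system x_{k+1} = A x_k + B u_k and
# command generator r_{k+1} = F r_k (values chosen only for illustration).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[1.0]])          # constant reference
C = np.array([[1.0, 0.0]])     # tracked output y = C x

Q = np.array([[10.0]])         # tracking-error weight
R = np.array([[1.0]])          # control weight
gamma = 0.8                    # discount factor for the infinite-horizon LQT

# Augmented dynamics X_{k+1} = T X_k + B1 u_k with X = [x; r]
n, m = A.shape[0], B.shape[1]
p = F.shape[0]
T = np.block([[A, np.zeros((n, p))],
              [np.zeros((p, n)), F]])
B1 = np.vstack([B, np.zeros((p, m))])
C1 = np.hstack([C, -np.eye(p)])      # tracking error e = C x - r
Q1 = C1.T @ Q @ C1                   # augmented state weight

def policy_iteration(T, B1, Q1, R, gamma, iters=50):
    """Offline policy iteration for the discounted augmented ARE."""
    nX = T.shape[0]
    K = np.zeros((m, nX))            # initial gain (admissible for this example)
    P = np.zeros((nX, nX))
    for _ in range(iters):
        # Policy evaluation: fixed-point iteration on the Lyapunov equation
        # P = Q1 + K'RK + gamma (T - B1 K)' P (T - B1 K)
        Ac = np.sqrt(gamma) * (T - B1 @ K)
        P = Q1 + K.T @ R @ K
        for _ in range(500):
            P = Q1 + K.T @ R @ K + Ac.T @ P @ Ac
        # Policy improvement: K = gamma (R + gamma B1'PB1)^{-1} B1'P T
        K = np.linalg.solve(R + gamma * B1.T @ P @ B1, gamma * B1.T @ P @ T)
    return K, P

K1, P1 = policy_iteration(T, B1, Q1, R, gamma)
print("Combined feedback/feedforward gain K1:", K1)   # u_k = -K1 [x_k; r_k]
```

Because the gain K1 acts on the augmented state [x; r], its first block is the feedback term and its second block is the feedforward term, which is the sense in which the augmented ARE yields both parts of the tracking controller at once.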