Q-Learning Methods for LQR Control of Completely Unknown Discrete-Time Linear Systems

IF 6.4 | CAS Region 2 (Computer Science) | JCR Q1 (Automation & Control Systems)
IEEE Transactions on Automation Science and Engineering
Pub Date: 2024-08-02 | DOI: 10.1109/TASE.2024.3434533
Wenwu Fan; Junlin Xiong
{"title":"Q-Learning Methods for LQR Control of Completely Unknown Discrete-Time Linear Systems","authors":"Wenwu Fan;Junlin Xiong","doi":"10.1109/TASE.2024.3434533","DOIUrl":null,"url":null,"abstract":"This paper focuses on solving the linear quadratic regulator problem for discrete-time linear systems without knowing system matrices. The classical Q-learning methods for linear systems can be divided into Q-learning value iteration and Q-learning policy iteration. Q-learning value iteration converges at a linear convergence rate. Q-learning policy iteration has a second-order convergence rate but requires an initial stabilizing control policy. This paper aims to propose efficient model-free algorithms for solving the optimal control problem without requiring an initial stabilizing control policy. In this paper, we first present an equivalent problem for an auxiliary system with the same optimal control policy as the LQR problem. A Q-learning algorithm is proposed to solve the equivalent problem, which is proven to converge monotonically to the optimal solution. The convergence rate of the Q-learning algorithm is heavily dependent on the auxiliary system, so we introduce a model-free homotopy method based on Q-learning to solve the LQR problem. This homotopy method can achieve the optimal solution in a finite number of iterations by solving an LQR problem in each iteration. Additionally, we propose a Q-learning Lyapunov iteration algorithm to solve the equivalent problem for an auxiliary system and analyze its properties. Finally, two examples are provided to demonstrate our results. Note to Practitioners—This paper proposes several Q-learning methods to solve the linear quadratic regulator problem for discrete-time linear systems. On the one hand, it is difficult to know the exact system dynamics knowledge in actual engineering, so this paper is devoted to developing model-free algorithms. On the other hand, this paper focuses on the LQR problem because it is widely spread in practical applications. We propose several model-free algorithms to solve the LQR problem, which provides the basis for optimal control of actual applications. Similar to policy iteration, our algorithms need to solve the Lyapunov equation. The advantage of our methods is that all of our algorithms do not have strict constraints on initial conditions compared with policy iteration. The properties of every algorithm proposed in this paper are provided. In addition, we focus on the efficiency of algorithms to obtain the optimal control policy faster. Two practical examples are used to verify the effectiveness of our methods. Finally, the applicable situations of each algorithm are summarized in the conclusion.","PeriodicalId":51060,"journal":{"name":"IEEE Transactions on Automation Science and Engineering","volume":"22 ","pages":"5933-5943"},"PeriodicalIF":6.4000,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Automation Science and Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10622003/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

This paper focuses on solving the linear quadratic regulator (LQR) problem for discrete-time linear systems whose system matrices are completely unknown. The classical Q-learning methods for linear systems can be divided into Q-learning value iteration and Q-learning policy iteration. Q-learning value iteration converges at a linear rate. Q-learning policy iteration has a second-order convergence rate but requires an initial stabilizing control policy. This paper aims to propose efficient model-free algorithms that solve the optimal control problem without requiring an initial stabilizing control policy. We first present an equivalent problem for an auxiliary system that has the same optimal control policy as the LQR problem. A Q-learning algorithm is proposed to solve this equivalent problem and is proven to converge monotonically to the optimal solution. Because the convergence rate of this Q-learning algorithm depends heavily on the auxiliary system, we then introduce a model-free homotopy method based on Q-learning for the LQR problem; it reaches the optimal solution in a finite number of iterations, solving one LQR subproblem per iteration. Additionally, we propose a Q-learning Lyapunov iteration algorithm to solve the equivalent problem for an auxiliary system and analyze its properties. Finally, two examples are provided to demonstrate our results.

Note to Practitioners: This paper proposes several Q-learning methods for solving the linear quadratic regulator problem for discrete-time linear systems. On the one hand, exact system dynamics are rarely available in engineering practice, so this paper is devoted to developing model-free algorithms. On the other hand, this paper focuses on the LQR problem because it arises widely in practical applications. The proposed model-free algorithms provide a basis for optimal control in real applications. Like policy iteration, our algorithms need to solve a Lyapunov equation; their advantage is that, unlike policy iteration, none of them places strict constraints on the initial conditions. The properties of every algorithm proposed in this paper are established, and we pay particular attention to algorithmic efficiency so that the optimal control policy is obtained faster. Two practical examples are used to verify the effectiveness of our methods, and the conclusion summarizes the situations in which each algorithm is applicable.
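As a rough illustration of the convergence-rate contrast described in the abstract, the model-based analogues of the two classical schemes can be compared directly: Riccati value iteration converges linearly from P = 0 without any stabilizing policy, while Hewer-style policy iteration converges quadratically but must start from a stabilizing gain. The Python sketch below is only that illustration, not the paper's model-free algorithms; the matrices A, B, Q, R and the initial gain K0 are made-up example data.

# Minimal model-based sketch of the two classical iterations the abstract
# contrasts (not the paper's model-free methods). A, B, Q, R, K0 are
# made-up example data.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # discretized double integrator (example system)
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P_star = solve_discrete_are(A, B, Q, R)   # ground truth for error tracking

def gain(P):
    # Greedy LQR gain for value matrix P: u = -K x, K = (R + B'PB)^{-1} B'PA
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Value iteration: P <- Q + A'PA - A'PB(R + B'PB)^{-1}B'PA, starting at P = 0.
# No stabilizing policy is needed; the error shrinks at a linear rate.
P = np.zeros_like(Q)
for k in range(1, 31):
    P = Q + A.T @ P @ A - A.T @ P @ B @ gain(P)
    if k % 5 == 0:
        print(f"VI iter {k:2d}: ||P - P*|| = {np.linalg.norm(P - P_star):.2e}")

# Policy iteration (Hewer's method): quadratic convergence, but the initial
# gain must stabilize A - B K. K0 here is hand-picked for this example.
K = np.array([[10.0, 15.0]])
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1   # verify K0 stabilizes
for k in range(1, 7):
    F = A - B @ K
    # Policy evaluation: solve P = F'PF + Q + K'RK (a discrete Lyapunov equation)
    P = solve_discrete_lyapunov(F.T, Q + K.T @ R @ K)
    K = gain(P)   # policy improvement
    print(f"PI iter {k:2d}: ||P - P*|| = {np.linalg.norm(P - P_star):.2e}")

In the paper's model-free setting, the analogous updates are carried out from measured input and state data rather than from A and B, which is what makes the methods applicable when the system matrices are completely unknown.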
Source journal: IEEE Transactions on Automation Science and Engineering (Engineering & Technology: Automation & Control Systems)
CiteScore: 12.50
Self-citation rate: 14.30%
Articles per year: 404
Review time: 3.0 months
Journal description: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.
Latest articles from this journal:
- Dynamic Programming based Fractional-order Compound Steering Control for Lateral Stabilization of DDEVs with Closed-loop Game
- Consensus Control for PDE-ODE MASs with Multi Delays: A Dual-Mode Adaptive Event-Triggered Strategy and Novel Stability Analysis Criterion
- Geometry-Aware Physics Informed PointNet (GeoPIPN) for Fast Thermal Distribution Prediction in Additive Manufacturing of Unseen Part Geometries
- Transfer Learning-Based Deep Reinforcement Learning for Adaptive Control of Maglev Trains
- Data-Driven Precision Velocity Control for Maglev Car Systems via Error-Scheduled Model-Free Adaptive Control