Direct Data-Driven Discounted Infinite Horizon Linear Quadratic Regulator with Robustness Guarantees

Ramin Esmzad, Hamidreza Modares
{"title":"具有鲁棒性保证的直接数据驱动贴现无限视距线性二次调节器","authors":"Ramin Esmzad, Hamidreza Modares","doi":"arxiv-2409.10703","DOIUrl":null,"url":null,"abstract":"This paper presents a one-shot learning approach with performance and\nrobustness guarantees for the linear quadratic regulator (LQR) control of\nstochastic linear systems. Even though data-based LQR control has been widely\nconsidered, existing results suffer either from data hungriness due to the\ninherently iterative nature of the optimization formulation (e.g., value\nlearning or policy gradient reinforcement learning algorithms) or from a lack\nof robustness guarantees in one-shot non-iterative algorithms. To avoid data\nhungriness while ensuing robustness guarantees, an adaptive dynamic programming\nformalization of the LQR is presented that relies on solving a Bellman\ninequality. The control gain and the value function are directly learned by\nusing a control-oriented approach that characterizes the closed-loop system\nusing data and a decision variable from which the control is obtained. This\nclosed-loop characterization is noise-dependent. The effect of the closed-loop\nsystem noise on the Bellman inequality is considered to ensure both robust\nstability and suboptimal performance despite ignoring the measurement noise. To\nensure robust stability, it is shown that this system characterization leads to\na closed-loop system with multiplicative and additive noise, enabling the\napplication of distributional robust control techniques. The analysis of the\nsuboptimality gap reveals that robustness can be achieved without the need for\nregularization or parameter tuning. The simulation results on the active car\nsuspension problem demonstrate the superiority of the proposed method in terms\nof robustness and performance gap compared to existing methods.","PeriodicalId":501175,"journal":{"name":"arXiv - EE - Systems and Control","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Direct Data-Driven Discounted Infinite Horizon Linear Quadratic Regulator with Robustness Guarantees\",\"authors\":\"Ramin Esmzad, Hamidreza Modares\",\"doi\":\"arxiv-2409.10703\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a one-shot learning approach with performance and\\nrobustness guarantees for the linear quadratic regulator (LQR) control of\\nstochastic linear systems. Even though data-based LQR control has been widely\\nconsidered, existing results suffer either from data hungriness due to the\\ninherently iterative nature of the optimization formulation (e.g., value\\nlearning or policy gradient reinforcement learning algorithms) or from a lack\\nof robustness guarantees in one-shot non-iterative algorithms. To avoid data\\nhungriness while ensuing robustness guarantees, an adaptive dynamic programming\\nformalization of the LQR is presented that relies on solving a Bellman\\ninequality. The control gain and the value function are directly learned by\\nusing a control-oriented approach that characterizes the closed-loop system\\nusing data and a decision variable from which the control is obtained. This\\nclosed-loop characterization is noise-dependent. The effect of the closed-loop\\nsystem noise on the Bellman inequality is considered to ensure both robust\\nstability and suboptimal performance despite ignoring the measurement noise. 
To\\nensure robust stability, it is shown that this system characterization leads to\\na closed-loop system with multiplicative and additive noise, enabling the\\napplication of distributional robust control techniques. The analysis of the\\nsuboptimality gap reveals that robustness can be achieved without the need for\\nregularization or parameter tuning. The simulation results on the active car\\nsuspension problem demonstrate the superiority of the proposed method in terms\\nof robustness and performance gap compared to existing methods.\",\"PeriodicalId\":501175,\"journal\":{\"name\":\"arXiv - EE - Systems and Control\",\"volume\":\"31 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Systems and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10703\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper presents a one-shot learning approach with performance and robustness guarantees for linear quadratic regulator (LQR) control of stochastic linear systems. Although data-based LQR control has been widely considered, existing results suffer either from data hunger, due to the inherently iterative nature of the optimization formulation (e.g., value learning or policy gradient reinforcement learning algorithms), or from a lack of robustness guarantees in one-shot non-iterative algorithms. To avoid data hunger while ensuring robustness guarantees, an adaptive dynamic programming formulation of the LQR is presented that relies on solving a Bellman inequality. The control gain and the value function are learned directly using a control-oriented approach that characterizes the closed-loop system with data and a decision variable from which the control is obtained. This closed-loop characterization is noise-dependent. The effect of the closed-loop system noise on the Bellman inequality is taken into account to ensure both robust stability and suboptimal performance even though the measurement noise is ignored. To ensure robust stability, it is shown that this system characterization leads to a closed-loop system with multiplicative and additive noise, enabling the application of distributionally robust control techniques. The analysis of the suboptimality gap reveals that robustness can be achieved without regularization or parameter tuning. Simulation results on the active car suspension problem demonstrate the superiority of the proposed method, in terms of robustness and performance gap, over existing methods.
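
For context, the discounted infinite-horizon LQR problem named in the title, and the Bellman inequality the abstract builds on, can be sketched as follows. This is the standard textbook formulation; the notation (A, B, Q, R, gamma, P) is ours and need not match the paper's.

```latex
% Discounted stochastic LQR (standard formulation; notation ours).
% Dynamics: x_{k+1} = A x_k + B u_k + w_k, process noise w_k ~ (0, W).
\[
  \min_{u_0,u_1,\dots}\;
  J = \mathbb{E}\!\left[\,\sum_{k=0}^{\infty} \gamma^{k}
      \left( x_k^{\top} Q x_k + u_k^{\top} R u_k \right)\right],
  \qquad 0 < \gamma < 1 .
\]
% A value function V(x) = x^T P x + c with P > 0 certifies a performance
% bound for the feedback u_k = K x_k if it satisfies the Bellman inequality
\[
  x^{\top} Q x + (Kx)^{\top} R\,(Kx)
  + \gamma\,\mathbb{E}\big[ V\big( (A+BK)x + w \big) \big]
  \le V(x) \qquad \text{for all } x,
\]
% which, after absorbing the noise into the constant term
% (c >= gamma * tr(PW) / (1 - gamma)), reduces to the matrix inequality
\[
  Q + K^{\top} R K + \gamma\,(A+BK)^{\top} P\,(A+BK) \preceq P .
\]
```

Searching over (K, P) subject to such an inequality is what makes a one-shot, optimization-based (LMI) solution possible, in contrast to iterating a Bellman recursion to convergence.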
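To make the one-shot idea concrete, below is a minimal runnable sketch of a direct data-driven design in which a single batch of input-state data parametrizes the closed loop and a stabilizing gain is extracted from one semidefinite program. This follows the well-known noise-free formulation of De Persis and Tesi (2020), not the paper's noise-aware, discounted method; the variable names, the cvxpy modeling layer, and the toy system are all illustrative assumptions.

```python
# Minimal sketch of a one-shot, direct data-driven stabilizing design.
# This is the classical noise-free LMI formulation (De Persis & Tesi, 2020),
# NOT the paper's noise-aware discounted method; names are illustrative.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)

# True system, used only to generate data; the design never reads A or B.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
n, m, T = 2, 1, 20

# One batch ("one shot") of persistently exciting input-state data.
U0 = rng.standard_normal((m, T))
X = np.zeros((n, T + 1))
X[:, 0] = rng.standard_normal(n)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U0[:, k]
X0, X1 = X[:, :T], X[:, 1:]          # states before / after each step

# Decision variables: with P = X0 G, the data identity X1 = A X0 + B U0
# gives A + B K = X1 G P^{-1} for the gain K = U0 G P^{-1}.
G = cp.Variable((T, n))
P = cp.Variable((n, n), symmetric=True)

# Schur complement of the Lyapunov inequality (A+BK) P (A+BK)^T - P < 0.
lmi = cp.bmat([[P, X1 @ G],
               [(X1 @ G).T, P]])
prob = cp.Problem(cp.Minimize(cp.trace(P)),      # any objective would do
                  [X0 @ G == P,
                   lmi >> 1e-6 * np.eye(2 * n)])
prob.solve(solver=cp.SCS)

K = U0 @ G.value @ np.linalg.inv(P.value)        # one-shot control gain
rho = max(abs(np.linalg.eigvals(A + B @ K)))
print(f"closed-loop spectral radius: {rho:.3f}")  # < 1 means stable
```

The paper's contribution sits precisely where this sketch is silent: when the data are noisy, X1 contains noise samples, the resulting closed loop has multiplicative and additive noise, and the Bellman inequality must be robustified accordingly.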