Fixed-Time Stable Gradient Flows for Optimal Adaptive Control of Continuous-Time Nonlinear Systems

IF 5.0 | CAS Tier 2, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | International Journal of Intelligent Systems | Pub Date: 2024-07-12 | DOI: 10.1155/2024/5241035
Mahdi Niroomand, Reihaneh Kardehi Moghaddam, Hamidreza Modares, Mohammad-Bagher Naghibi Sistani
{"title":"用于连续时间非线性系统优化自适应控制的固定时间稳定梯度流","authors":"Mahdi Niroomand,&nbsp;Reihaneh Kardehi Moghaddam,&nbsp;Hamidreza Modares,&nbsp;Mohammad-Bagher Naghibi Sistani","doi":"10.1155/2024/5241035","DOIUrl":null,"url":null,"abstract":"<div>\n <p>This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time from any initial condition to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite duration PE that relies on a rank condition of a dataset. This is crucial for learning-enabled control systems as control systems can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on the PE and rank condition, and can only commit to learning after rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5241035","citationCount":"0","resultStr":"{\"title\":\"Fixed-Time Stable Gradient Flows for Optimal Adaptive Control of Continuous-Time Nonlinear Systems\",\"authors\":\"Mahdi Niroomand,&nbsp;Reihaneh Kardehi Moghaddam,&nbsp;Hamidreza Modares,&nbsp;Mohammad-Bagher Naghibi Sistani\",\"doi\":\"10.1155/2024/5241035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time from any initial condition to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite duration PE that relies on a rank condition of a dataset. 
This is crucial for learning-enabled control systems as control systems can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on the PE and rank condition, and can only commit to learning after rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/5241035\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/2024/5241035\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/2024/5241035","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper introduces an inclusive class of fixed-time stable continuous-time gradient flows (GFs). This class of GFs is then leveraged to learn optimal control solutions for nonlinear systems in fixed time. It is shown that the presented GF guarantees convergence within a fixed time from any initial condition to the exact minimum of functions that satisfy the Polyak–Łojasiewicz (PL) inequality. The presented fixed-time GF is then utilized to design fixed-time optimal adaptive control algorithms. To this end, a fixed-time reinforcement learning (RL) algorithm is developed on the basis of a single network adaptive critic (SNAC) to learn the solution to an infinite-horizon optimal control problem in a fixed-time convergent, online, adaptive, and forward-in-time manner. It is shown that the PL inequality in the presented RL algorithm amounts to a mild inequality condition on a few collected samples. This condition is much weaker than the standard persistence of excitation (PE) and finite duration PE that relies on a rank condition of a dataset. This is crucial for learning-enabled control systems as control systems can commit to learning an optimal controller from the beginning, in sharp contrast to existing results that rely on the PE and rank condition, and can only commit to learning after rich data samples are collected. Simulation results are provided to validate the performance and efficacy of the presented fixed-time RL algorithm.
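For readers who want the key objects in front of them, the display below writes out the PL inequality together with one common two-term fixed-time gradient flow from the fixed-time optimization literature and the Lyapunov step that connects them. This is background notation for the abstract, not a reproduction of the paper's own GF class; c1, c2, alpha, beta are generic design constants.

```latex
% Polyak-Lojasiewicz (PL) inequality: for some mu > 0 and all x,
\[
  \tfrac{1}{2}\,\lVert \nabla f(x) \rVert^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr),
  \qquad f^{*} = \min_{x} f(x).
\]
% One common two-term fixed-time gradient flow (illustrative, not the paper's exact class),
% with c_1, c_2 > 0, 0 < alpha < 1 < beta, and \dot{x} = 0 wherever \nabla f(x) = 0:
\[
  \dot{x} \;=\; -\,c_{1}\,\frac{\nabla f(x)}{\lVert \nabla f(x) \rVert^{1-\alpha}}
            \;-\; c_{2}\,\frac{\nabla f(x)}{\lVert \nabla f(x) \rVert^{1-\beta}}.
\]
% Along this flow, V(t) := f(x(t)) - f^{*} satisfies, by the PL inequality,
\[
  \dot{V} \;=\; -\,c_{1}\,\lVert \nabla f \rVert^{1+\alpha} - c_{2}\,\lVert \nabla f \rVert^{1+\beta}
          \;\le\; -\,c_{1}\,(2\mu V)^{\frac{1+\alpha}{2}} - c_{2}\,(2\mu V)^{\frac{1+\beta}{2}},
\]
% so standard fixed-time stability lemmas give V(t) = 0 for all t beyond
\[
  T_{\max} \;=\; \frac{2}{c_{1}\,(2\mu)^{\frac{1+\alpha}{2}}\,(1-\alpha)}
           \;+\; \frac{2}{c_{2}\,(2\mu)^{\frac{1+\beta}{2}}\,(\beta-1)}.
\]
```

Because this bound does not depend on x(0), the settling time is uniform over all initial conditions; that is what separates fixed-time convergence from merely finite-time convergence, whose settling time can grow with the distance from the minimizer.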

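As a purely numerical illustration of that claim, the sketch below integrates the two-term flow above on a simple strongly convex quadratic (hence PL) objective from two initial conditions that differ by three orders of magnitude. The gains, exponents, and tolerances are arbitrary illustrative choices and do not come from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative objective: f(x) = 0.5 * x^T A x is strongly convex, hence satisfies
# the PL inequality with mu = lambda_min(A); its unique minimizer is the origin.
A = np.diag([1.0, 10.0])

# Gains and exponents of the two-term flow; illustrative values, not from the paper.
C1, C2, ALPHA, BETA = 1.0, 1.0, 0.5, 2.0

def flow(t, x):
    """dx/dt = -c1*g/||g||^(1-alpha) - c2*g/||g||^(1-beta), with g = grad f(x) = A x."""
    g = A @ x
    n = np.linalg.norm(g)
    if n < 1e-15:                 # at the minimizer: avoid 0/0, the flow stays put
        return np.zeros_like(x)
    return -C1 * g / n**(1 - ALPHA) - C2 * g / n**(1 - BETA)

def near_minimum(t, x):
    """Terminal event: stop once the gradient norm has dropped below 1e-6."""
    return np.linalg.norm(A @ x) - 1e-6
near_minimum.terminal = True
near_minimum.direction = -1

if __name__ == "__main__":
    # Initial conditions differing by three orders of magnitude; fixed-time convergence
    # predicts settling times bounded by the same constant for both.
    for x0 in ([1.0, -1.0], [1000.0, -500.0]):
        sol = solve_ivp(flow, (0.0, 10.0), np.array(x0, dtype=float),
                        events=near_minimum, rtol=1e-8, atol=1e-10)
        t_settle = sol.t_events[0][0] if sol.t_events[0].size else sol.t[-1]
        print(f"x0 = {x0}: ||grad f(x(t))|| < 1e-6 at t ~ {t_settle:.3f}")
```

Both runs should report settling times of the same order despite the very different starting points, which is the qualitative signature of fixed-time convergence.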

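To connect the gradient flow to critic learning, here is a schematic, hypothetical policy-evaluation example: a linear plant under a fixed stabilizing feedback, a quadratic critic V_W(x) = W^T phi(x), and a summed squared HJB-type residual over a handful of state samples, minimized by driving W with the same two-term flow. The plant, gain, basis, and sample set are all invented for illustration; the paper's SNAC construction, residual, and update law may differ. The point is only that this residual loss is a convex quadratic in W (hence PL), so a mild nonsingularity condition on a few sampled regressors, rather than persistence of excitation, is what makes its minimizer unique.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Hypothetical policy-evaluation setup (a stand-in, not the paper's SNAC scheme):
# plant dx/dt = A x + B u under a fixed stabilizing feedback u = -K x, stage cost
# x^T Q x + u^T R u, and a quadratic critic V_W(x) = W^T phi(x), phi = [x1^2, x1*x2, x2^2].
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])          # illustrative gain; closed loop has eigenvalues -1 +/- j
Q, R = np.eye(2), np.array([[1.0]])
Acl = A - B @ K                      # closed-loop dynamics matrix
Qcl = Q + K.T @ R @ K                # effective state-cost weight under u = -K x

def phi_grad(x):
    """Jacobian of phi(x) = [x1^2, x1*x2, x2^2], shape (3, 2)."""
    x1, x2 = x
    return np.array([[2 * x1, 0.0], [x2, x1], [0.0, 2 * x2]])

# A handful of sampled states: the only requirement is that the Gram matrix of the
# regressors sigma_i be nonsingular, which is the kind of mild sample condition the
# abstract contrasts with persistence of excitation.
X = rng.uniform(-2.0, 2.0, size=(6, 2))
sigma = np.array([phi_grad(x) @ (Acl @ x) for x in X])   # regressors, shape (6, 3)
rho = np.array([x @ Qcl @ x for x in X])                 # sampled stage costs, shape (6,)

def loss_grad(W):
    """Gradient of E(W) = 0.5 * sum_i (rho_i + sigma_i^T W)^2, a convex quadratic (PL) loss."""
    return sigma.T @ (rho + sigma @ W)

C1, C2, ALPHA, BETA = 1.0, 1.0, 0.5, 2.0                 # same illustrative gains as above

def weight_flow(t, W):
    """Critic weights driven by the two-term fixed-time flow on the residual loss."""
    g = loss_grad(W)
    n = np.linalg.norm(g)
    if n < 1e-15:
        return np.zeros_like(W)
    return -C1 * g / n**(1 - ALPHA) - C2 * g / n**(1 - BETA)

def small_gradient(t, W):
    return np.linalg.norm(loss_grad(W)) - 1e-6
small_gradient.terminal = True
small_gradient.direction = -1

if __name__ == "__main__":
    sol = solve_ivp(weight_flow, (0.0, 100.0), np.zeros(3),
                    events=small_gradient, rtol=1e-8, atol=1e-10)
    W_flow = sol.y[:, -1]
    W_ls, *_ = np.linalg.lstsq(sigma, -rho, rcond=None)  # batch least-squares reference
    t_settle = sol.t_events[0][0] if sol.t_events[0].size else sol.t[-1]
    print("critic weights from fixed-time flow:", np.round(W_flow, 4))
    print("batch least-squares reference      :", np.round(W_ls, 4))
    print(f"settling time (gradient norm < 1e-6): t ~ {t_settle:.3f}")
```

For this linear-quadratic setup the residual vanishes at the true value-function parameters of the fixed policy, so the flow and the batch least-squares reference should agree to printed precision; six generic samples already make the three-parameter Gram matrix nonsingular.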

Source Journal
International Journal of Intelligent Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Articles published: 304
Review time: 9 months
Journal Introduction: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.
Latest articles from this journal
A Novel Self-Attention Transfer Adaptive Learning Approach for Brain Tumor Categorization
A Manifold-Guided Gravitational Search Algorithm for High-Dimensional Global Optimization Problems
PU-GNN: A Positive-Unlabeled Learning Method for Polypharmacy Side-Effects Detection Based on Graph Neural Networks
Real-World Image Deraining Using Model-Free Unsupervised Learning
Complex Question Answering Method on Risk Management Knowledge Graph: Multi-Intent Information Retrieval Based on Knowledge Subgraphs