Relaxed Optimal Control With Self-Learning Horizon for Discrete-Time Stochastic Dynamics

IEEE Transactions on Cybernetics · Impact Factor: 10.5 · JCR Q1 (Automation & Control Systems) · CAS Tier 1 (Computer Science) · Published: 2025-02-04 · DOI: 10.1109/TCYB.2025.3530951
Ding Wang;Jiangyu Wang;Ao Liu;Derong Liu;Junfei Qiao
{"title":"Relaxed Optimal Control With Self-Learning Horizon for Discrete-Time Stochastic Dynamics","authors":"Ding Wang;Jiangyu Wang;Ao Liu;Derong Liu;Junfei Qiao","doi":"10.1109/TCYB.2025.3530951","DOIUrl":null,"url":null,"abstract":"The innovation of optimal learning control methods is profoundly propelled due to the improvement of the learning ability. In this article, we investigate the synthesis of initialization and acceleration for optimal learning control algorithms. This approach contrasts with traditional methods that concentrate solely on either the improvement of initialization or acceleration. Specifically, we establish a novel relaxed policy iteration (PI) algorithm with self-learning horizon for stochastic optimal control. Notably, by suitably utilizing self-learning horizon, we can directly evaluate inadmissible policies to reduce the initialization burden. Meanwhile, the inadmissible policy can be rapidly optimized with few learning iterations. Then, several critical conclusions of relaxed optimal control are established by discussing algorithm convergence and system stability. Furthermore, to provide the convincing application potentials, a class of unconventional problems is effectively solved by the relaxed PI algorithm, including the dynamics with external noises and nonzero equilibrium. Finally, we present a series of nonlinear benchmarks with practical applications to comprehensively evaluate the performance of relaxed PI. 
The experimental results obtained from these diverse benchmarks uniformly highlight the effectiveness of self-learning horizon mechanism.","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 3","pages":"1183-1196"},"PeriodicalIF":10.5000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10870428/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Improvements in learning ability have strongly propelled the development of optimal learning control methods. In this article, we investigate the joint design of initialization and acceleration for optimal learning control algorithms, in contrast with traditional methods that focus solely on improving either initialization or acceleration. Specifically, we establish a novel relaxed policy iteration (PI) algorithm with a self-learning horizon for stochastic optimal control. Notably, by suitably exploiting the self-learning horizon, inadmissible policies can be evaluated directly, which reduces the initialization burden; at the same time, an inadmissible policy can be optimized rapidly within a few learning iterations. Several key conclusions on relaxed optimal control are then established through an analysis of algorithm convergence and system stability. Furthermore, to demonstrate its application potential, the relaxed PI algorithm is used to effectively solve a class of unconventional problems, including dynamics with external noise and a nonzero equilibrium. Finally, we present a series of nonlinear benchmarks with practical applications to comprehensively evaluate the performance of relaxed PI. The experimental results across these diverse benchmarks consistently highlight the effectiveness of the self-learning horizon mechanism.
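To make the abstract's core idea concrete, the sketch below shows a generic "relaxed" policy iteration on a toy finite MDP: policy evaluation is truncated to a finite horizon (rather than solved exactly), which allows iteration to start from an arbitrary, possibly inadmissible policy. This is only an illustrative sketch under assumed toy dynamics — the MDP, rewards, and the simple growing-horizon rule are our own assumptions, not the paper's self-learning horizon mechanism or its stochastic setting.

```python
import numpy as np

# Hypothetical toy MDP (3 states, 2 actions), NOT from the paper.
# P[a][s, s'] is the probability of moving s -> s' under action a.
P = np.array([
    [[0.7, 0.3, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.2, 0.6, 0.2], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]],  # action 1
])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # R[s, a]
gamma = 0.9  # discount factor

def evaluate(policy, V, horizon):
    """Relaxed (truncated) policy evaluation: apply the Bellman backup
    under `policy` only `horizon` times, starting from the current V."""
    for _ in range(horizon):
        V = np.array([R[s, policy[s]] + gamma * P[policy[s], s] @ V
                      for s in range(len(V))])
    return V

def improve(V):
    """Greedy policy improvement with respect to the value estimate V."""
    Q = np.array([[R[s, a] + gamma * P[a, s] @ V for a in range(P.shape[0])]
                  for s in range(R.shape[0])])
    return Q.argmax(axis=1)

def relaxed_pi(max_iters=50, tol=1e-8):
    """Policy iteration with truncated evaluation and a growing horizon
    (a crude stand-in for an adaptively learned horizon)."""
    V = np.zeros(R.shape[0])
    policy = np.zeros(R.shape[0], dtype=int)  # arbitrary initial policy
    horizon = 1
    for _ in range(max_iters):
        V_new = evaluate(policy, V, horizon)
        new_policy = improve(V_new)
        if np.array_equal(new_policy, policy) and np.max(np.abs(V_new - V)) < tol:
            return new_policy, V_new
        policy, V = new_policy, V_new
        horizon += 1  # lengthen the evaluation horizon each outer iteration
    return policy, V
```

Because evaluation is truncated, each outer iteration is cheap, and no admissible initial policy is required — the value estimate is simply refined as the horizon grows, loosely mirroring the initialization-plus-acceleration synthesis the abstract describes.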
Source Journal
IEEE Transactions on Cybernetics (Computer Science, Artificial Intelligence; Computer Science, Cybernetics)
CiteScore: 25.40
Self-citation rate: 11.00%
Articles per year: 1869
Journal description: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or across machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.