Deep Reinforcement Learning or Lyapunov Analysis? A Preliminary Comparative Study on Event-Triggered Optimal Control

IF 15.3 · JCR Region 1 (Computer Science) · Q1 (Automation & Control Systems) · IEEE/CAA Journal of Automatica Sinica · Pub date: 2024-06-12 · DOI: 10.1109/JAS.2024.124434
Jingwei Lu; Lefei Li; Qinglai Wei; Fei-Yue Wang
IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 7, pp. 1702-1704, 2024. Available: https://ieeexplore.ieee.org/document/10555241/
Citations: 0

Abstract

Dear Editor, This letter develops a novel method to implement event-triggered optimal control (ETOC) for discrete-time nonlinear systems using parallel control and deep reinforcement learning (DRL), referred to as Deep-ETOC. The Deep-ETOC method introduces the communication cost into the performance index through parallel control, which enables control systems to learn ETOC policies directly, without triggering conditions. The dueling double deep Q-network (D3QN) is then used to implement the method. In simulations, we present a preliminary comparative study of DRL and Lyapunov analysis for ETOC.
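The abstract's central idea is that folding the communication cost into the performance index lets a DRL agent learn when to transmit a new control, with no hand-designed triggering condition. A minimal sketch of that idea on a scalar system (the penalty form, gains, and threshold policy here are illustrative assumptions, not the paper's actual formulation or its D3QN implementation):

```python
def stage_cost(x, u, triggered, Q=1.0, R=0.1, comm_cost=0.05):
    """Augmented stage cost: the usual quadratic regulation cost plus a
    communication penalty paid only on steps where the control is updated.
    An agent minimizing the sum of these costs is implicitly rewarded for
    triggering sparingly."""
    return Q * x ** 2 + R * u ** 2 + (comm_cost if triggered else 0.0)

def simulate(x0=1.0, steps=50, threshold=0.1, k=0.8):
    """Roll out a simple event-triggered feedback on x_{t+1} = x_t + u_t:
    the control u = -k*x is recomputed only when the state has drifted
    more than `threshold` from the last transmitted state; otherwise the
    previous control is held. Returns the accumulated augmented cost."""
    x, u, x_last = x0, 0.0, None
    total = 0.0
    for _ in range(steps):
        triggered = x_last is None or abs(x - x_last) > threshold
        if triggered:
            u, x_last = -k * x, x  # transmit: update control, pay comm_cost
        total += stage_cost(x, u, triggered)
        x = x + u
    return total
```

In the letter's setting the hand-coded threshold policy above would itself be replaced by the learned D3QN policy; the sketch only shows how a communication term in the stage cost makes "when to trigger" part of the optimization objective.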
Source journal
IEEE/CAA Journal of Automatica Sinica (Engineering: Control and Systems Engineering)
CiteScore: 23.50
Self-citation rate: 11.00%
Annual article count: 880
Journal introduction: The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control. Additionally, the journal is abstracted/indexed in several prominent databases including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.