Reinforcement learning based MPC with neural dynamical models

European Journal of Control · Impact Factor 2.5 · JCR Q2 (Automation & Control Systems) · CAS Zone 3 (Computer Science) · Published: 2024-11-01 · DOI: 10.1016/j.ejcon.2024.101048
Saket Adhau, Sébastien Gros, Sigurd Skogestad
{"title":"Reinforcement learning based MPC with neural dynamical models","authors":"Saket Adhau ,&nbsp;Sébastien Gros ,&nbsp;Sigurd Skogestad","doi":"10.1016/j.ejcon.2024.101048","DOIUrl":null,"url":null,"abstract":"<div><div>This paper presents an end-to-end learning approach to developing a Nonlinear Model Predictive Control (NMPC) policy, which does not require an explicit first-principles model and assumes that the system dynamics are either unknown or partially known. The paper proposes the use of available measurements to identify a nominal Recurrent Neural Network (RNN) model to capture the nonlinear dynamics, which includes constraints on the state variables and inputs. To address the issue of suboptimal control policies resulting from simply fitting the model to the data, this paper uses Reinforcement learning (RL) to tune the NMPC scheme and generate an optimal policy for the real system. The approach’s novelty lies in the use of RL to overcome the limitations of the nominal RNN model and generate a more accurate control policy. The paper discusses the implementation aspects of initial state estimation for RNN models and integration of neural models in MPC. The presented method is demonstrated on a classic benchmark control problem: cascaded two tank system (CTS).</div></div>","PeriodicalId":50489,"journal":{"name":"European Journal of Control","volume":"80 ","pages":"Article 101048"},"PeriodicalIF":2.5000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Control","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0947358024001080","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents an end-to-end learning approach to developing a Nonlinear Model Predictive Control (NMPC) policy that does not require an explicit first-principles model and assumes that the system dynamics are unknown or only partially known. The paper proposes using available measurements to identify a nominal Recurrent Neural Network (RNN) model that captures the nonlinear dynamics, with constraints on the state variables and inputs. To address the suboptimal control policies that result from simply fitting the model to data, the paper uses reinforcement learning (RL) to tune the NMPC scheme and generate an optimal policy for the real system. The approach's novelty lies in using RL to overcome the limitations of the nominal RNN model and produce a more accurate control policy. The paper also discusses implementation aspects of initial state estimation for RNN models and the integration of neural models in MPC. The presented method is demonstrated on a classic benchmark control problem: the cascaded two-tank system (CTS).
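To make the approach concrete, below is a minimal sketch of the pipeline the abstract describes: fit a nominal recurrent model (here a GRU in PyTorch) to input-output measurements, build a predictive controller on top of it, and then use reinforcement learning to tune the controller's cost parameters against the true closed-loop performance. Everything specific in the sketch is an assumption for illustration: the toy two-tank simulator, the sampling-based (random-shooting) MPC used in place of a full NMPC solver, the network sizes, and the finite-difference RL update. None of these are the formulations used in the paper, and the RNN hidden state is simply initialized to zero rather than estimated as the paper discusses.

```python
# Minimal illustrative sketch, under the assumptions stated above
# (toy two-tank simulator, sampling-based MPC, finite-difference RL tuning).
# This is NOT the paper's NMPC/RL formulation.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Toy cascaded two-tank plant: u is the pump input, x = (h1, h2) tank levels.
def plant_step(x, u, dt=1.0, a1=0.3, a2=0.3, k=1.0):
    h1, h2 = x
    h1n = h1 + dt * (k * u - a1 * np.sqrt(max(h1, 0.0)))
    h2n = h2 + dt * (a1 * np.sqrt(max(h1, 0.0)) - a2 * np.sqrt(max(h2, 0.0)))
    return np.array([max(h1n, 0.0), max(h2n, 0.0)])

# 1) Nominal RNN model identified from input-output measurements.
class GRUModel(nn.Module):
    def __init__(self, nx=2, nu=1, hidden=32):
        super().__init__()
        self.gru = nn.GRU(nx + nu, hidden, batch_first=True)
        self.head = nn.Linear(hidden, nx)

    def forward(self, x, u, h=None):            # x: (B,T,nx), u: (B,T,nu)
        z, h = self.gru(torch.cat([x, u], -1), h)
        return self.head(z), h                   # one-step-ahead predictions

def collect_data(episodes=50, T=60):
    X, U, Y = [], [], []
    for _ in range(episodes):
        x, xs, us, ys = np.array([1.0, 1.0]), [], [], []
        for _ in range(T):
            u = rng.uniform(0.0, 1.0)            # random excitation input
            xn = plant_step(x, u)
            xs.append(x); us.append([u]); ys.append(xn)
            x = xn
        X.append(xs); U.append(us); Y.append(ys)
    t = lambda a: torch.tensor(np.array(a), dtype=torch.float32)
    return t(X), t(U), t(Y)

model = GRUModel()
X, U, Y = collect_data()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):                             # fit one-step predictions
    pred, _ = model(X, U)
    loss = nn.functional.mse_loss(pred, Y)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Sampling-based MPC on the learned model; theta parameterizes the cost.
#    The RNN hidden state is naively set to zero here; the paper discusses
#    proper initial state estimation for the RNN model.
def mpc_action(x, theta, x_ref, horizon=8, n_samples=100):
    q1, q2, r = (float(w) for w in np.exp(theta))   # positive cost weights
    u_cand = torch.rand(n_samples, horizon, 1)      # candidate input sequences
    xk = torch.tensor(x, dtype=torch.float32).repeat(n_samples, 1, 1)
    ref = torch.tensor(x_ref, dtype=torch.float32)
    cost, h = torch.zeros(n_samples), None
    with torch.no_grad():
        for k in range(horizon):                    # roll the RNN model forward
            uk = u_cand[:, k:k + 1, :]
            xk, h = model(xk, uk, h)
            err = xk[:, 0, :] - ref
            cost += q1 * err[:, 0]**2 + q2 * err[:, 1]**2 + r * uk[:, 0, 0]**2
    return float(u_cand[int(torch.argmin(cost)), 0, 0])

# 3) RL tuning of the MPC cost parameters against the TRUE closed-loop cost.
def closed_loop_cost(theta, x_ref=np.array([2.0, 2.0]), T=40):
    x, J = np.array([1.0, 1.0]), 0.0
    for _ in range(T):
        u = mpc_action(x, theta, x_ref)
        x = plant_step(x, u)                        # apply to the real plant
        J += np.sum((x - x_ref)**2) + 0.01 * u**2
    return J

theta, lr, eps = np.zeros(3), 0.2, 0.1
for it in range(10):                                # crude finite-difference RL
    grad = np.zeros_like(theta)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        grad[i] = (closed_loop_cost(theta + e) - closed_loop_cost(theta - e)) / (2 * eps)
    theta -= lr * grad / (np.linalg.norm(grad) + 1e-8)
    print(f"iteration {it}: tuned weights {np.exp(theta).round(3)}")
```

The structural point the sketch illustrates is the separation described in the abstract: the RNN model is fitted once from measured data, while the MPC parameters are subsequently adjusted using the cost observed on the real plant, which is the role RL plays in the paper.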
Source journal
European Journal of Control (Engineering & Technology - Automation & Control Systems)
CiteScore: 5.80
Self-citation rate: 5.90%
Articles published: 131
Review time: 1 month
Journal description: The European Control Association (EUCA) has among its objectives to promote the development of the discipline. Apart from the European Control Conferences, the European Journal of Control is the Association's main channel for the dissemination of important contributions in the field. The aim of the Journal is to publish high-quality papers on the theory and practice of control and systems engineering. The scope of the Journal is wide and covers all aspects of the discipline, including methodologies, techniques and applications. Research in control and systems engineering is necessary to develop new concepts and tools which enhance our understanding and improve our ability to design and implement high-performance control systems. Submitted papers should stress the practical motivations and relevance of their results. The design and implementation of a successful control system requires the use of a range of techniques: modelling, robustness analysis, identification, optimization, control law design, numerical analysis, fault detection, and so on.