{"title":"A stabilizing reinforcement learning approach for sampled systems with partially unknown models","authors":"Lukas Beckenbach, Pavel Osinenko, Stefan Streif","doi":"10.1002/rnc.7626","DOIUrl":null,"url":null,"abstract":"<p>Reinforcement learning is commonly associated with training of reward-maximizing (or cost-minimizing) agents, in other words, controllers. It can be applied in model-free or model-based fashion, using a priori or online collected system data to train involved parametric architectures. In general, online reinforcement learning does not guarantee closed loop stability unless special measures are taken, for instance, through learning constraints or tailored training rules. Particularly promising are hybrids of reinforcement learning with classical control approaches. In this work, we suggest a method to guarantee practical stability of the system-controller closed loop in a purely online learning setting, in other words, without offline training. Moreover, we assume only partial knowledge of the system model. To achieve the claimed results, we employ techniques of classical adaptive control. The implementation of the overall control scheme is provided explicitly in a digital, sampled setting. That is, the controller receives the state of the system and computes the control action at discrete, specifically, equidistant moments in time. The method is tested in adaptive traction control and cruise control where it proved to significantly reduce the cost.</p>","PeriodicalId":50291,"journal":{"name":"International Journal of Robust and Nonlinear Control","volume":"34 18","pages":"12389-12412"},"PeriodicalIF":3.2000,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Robust and Nonlinear Control","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/rnc.7626","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Reinforcement learning is commonly associated with the training of reward-maximizing (or cost-minimizing) agents, that is, controllers. It can be applied in a model-free or model-based fashion, using a priori available or online-collected system data to train the involved parametric architectures. In general, online reinforcement learning does not guarantee closed-loop stability unless special measures are taken, for instance, through learning constraints or tailored training rules. Particularly promising are hybrids of reinforcement learning with classical control approaches. In this work, we suggest a method to guarantee practical stability of the system-controller closed loop in a purely online learning setting, that is, without offline training. Moreover, we assume only partial knowledge of the system model. To achieve the claimed results, we employ techniques of classical adaptive control. The implementation of the overall control scheme is provided explicitly in a digital, sampled setting. That is, the controller receives the state of the system and computes the control action at discrete, equidistant moments in time. The method is tested in adaptive traction control and cruise control, where it is shown to significantly reduce the cost.
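To make the sampled, partially-unknown-model setting concrete, the following Python sketch shows the kind of loop the abstract describes: a digital controller that reads the state only at equidistant sampling instants, adapts an estimate of the unknown model part online in the spirit of classical adaptive control, and accumulates a running cost. This is an illustrative sketch only; the scalar plant, the gains, and the gradient-type update are assumptions for illustration, not the algorithm of the paper.

```python
# Minimal sketch (assumptions throughout): scalar plant x_dot = a*x + b*u,
# where the drift coefficient a is unknown to the controller and the input
# gain b is known -- a simple stand-in for "partial model knowledge".

A_TRUE = 0.8       # unknown to the controller
B_KNOWN = 1.0      # assumed-known input gain
DT = 0.05          # equidistant sampling period of the digital controller

def plant_step(x: float, u: float) -> float:
    """Euler-discretized plant; the controller never reads A_TRUE."""
    return x + DT * (A_TRUE * x + B_KNOWN * u)

def run(steps: int = 400) -> tuple[float, float, float]:
    x, a_hat, cost = 2.0, 0.0, 0.0
    gamma = 50.0   # adaptation gain (illustrative value)
    for _ in range(steps):
        # Sampled setting: the state is read only at the sampling instant.
        # Certainty-equivalence control: cancel the estimated drift, then
        # add stabilizing feedback -k*x (k = 2 here, illustrative).
        u = -(a_hat * x) / B_KNOWN - 2.0 * x
        x_next = plant_step(x, u)
        # One-step prediction with the current estimate; the prediction
        # error drives a gradient-type parameter update, as in classical
        # adaptive control.
        pred = x + DT * (a_hat * x + B_KNOWN * u)
        a_hat += gamma * (x_next - pred) * DT * x
        cost += DT * (x * x + 0.1 * u * u)  # running stage cost
        x = x_next
    return x, a_hat, cost

if __name__ == "__main__":
    x_fin, a_est, J = run()
    print(f"final state {x_fin:.4f}, drift estimate {a_est:.3f}, cost {J:.2f}")
```

Even before the estimate converges, the fixed feedback term keeps the loop contractive here, which mirrors the abstract's point that learning proceeds online while stability is enforced by classical control structure rather than by the learned component alone.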
About the journal
Papers that do not include an element of robust or nonlinear control and estimation theory will not be considered by the journal, and all papers will be expected to include significant novel content. The focus of the journal is on model based control design approaches rather than heuristic or rule based methods. Papers on neural networks will have to be of exceptional novelty to be considered for the journal.