Safe, visualizable reinforcement learning for process control with a warm-started actor network based on PI-control

Edward H. Bras, Tobias M. Louw, Steven M. Bradshaw

Journal of Process Control, Volume 144, Article 103340 (16 November 2024)
DOI: 10.1016/j.jprocont.2024.103340
URL: https://www.sciencedirect.com/science/article/pii/S095915242400180X
Abstract
The adoption of reinforcement learning (RL) in the chemical process industries is currently hindered by the use of black-box models that cannot be easily visualized or interpreted, as well as by the challenge of balancing safe control with exploration. Clearly illustrating the similarities between classical control theory and RL, and demonstrating that process safety can be maintained under RL-based control, will go a long way towards bridging the gap between academic research and industrial practice. In this work, a simple approach is introduced for the dynamic online adaptation, through RL, of a non-linear control policy initialised using PI control. The familiar PI controller is represented as a plane in the state-action space, where the states comprise the error and the integral error, and the action is the control input. This plane is recreated using a neural network, and the recreated plane serves as a readily visualizable initial "warm-started" policy for the RL agent. An actor-critic algorithm is then applied to adapt the policy non-linearly during interaction with the controlled process, leveraging the flexibility of the neural network to improve performance. Inherently safe control during training is ensured by introducing a soft active region component in the actor neural network. Finally, the use of cold connections is proposed, whereby the state space can be augmented at any stage of training (e.g., by incorporating additional measurements to facilitate feedforward control) while fully preserving the agent's training progress to date. Because controller safety is ensured, the proposed methods are applicable to the dynamic adaptation of any process for which stable PI control is feasible at nominal initial conditions.
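The abstract describes the warm-starting procedure only at a high level. The following is a minimal, illustrative sketch of the idea, assuming PyTorch, arbitrary PI gains (Kp, Ki), and a small MLP architecture; none of these choices are taken from the paper. It fits an actor network to the PI plane u = Kp·e + Ki·∫e by supervised regression, and includes one possible reading of the soft active region as a smooth blend between the learned actor and the PI baseline.

```python
# Minimal sketch of warm-starting an actor network from a PI-control plane.
# Library (PyTorch), gains, grid ranges, and architecture are all assumptions
# made for illustration; they are not the authors' implementation.

import torch
import torch.nn as nn

Kp, Ki = 2.0, 0.5  # assumed PI gains (illustrative only)

# Sample the PI plane: states are (error e, integral error ie), action is u.
e  = torch.linspace(-1.0, 1.0, 50)
ie = torch.linspace(-1.0, 1.0, 50)
E, IE = torch.meshgrid(e, ie, indexing="ij")
states  = torch.stack([E.flatten(), IE.flatten()], dim=1)          # (2500, 2)
actions = (Kp * states[:, 0] + Ki * states[:, 1]).unsqueeze(1)     # (2500, 1)

# Small actor network; the architecture is a placeholder assumption.
actor = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Supervised pre-training ("warm start"): regress the network onto the plane
# so the initial RL policy reproduces the stable PI controller.
for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(actor(states), actions)
    loss.backward()
    opt.step()

def safe_action(state, center=torch.zeros(2), width=0.5):
    """One plausible interpretation of a 'soft active region' (hypothetical):
    near the nominal operating point the learned actor is active; far from it
    the policy blends smoothly back to the safe PI baseline."""
    pi_u  = Kp * state[:, 0:1] + Ki * state[:, 1:2]
    alpha = torch.exp(-((state - center) ** 2).sum(dim=1, keepdim=True) / width**2)
    return alpha * actor(state) + (1 - alpha) * pi_u
```

In the paper, the warm-started network is subsequently adapted online with an actor-critic algorithm; the gating function above is only one plausible reading of the soft active region component and should not be taken as the authors' method.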
Journal Introduction
This international journal covers the application of control theory, operations research, computer science, and engineering principles to the solution of process control problems. In addition to traditional chemical processing and manufacturing applications, the scope extends to a wide range of areas including energy processes, nanotechnology, systems biology, biomedical engineering, pharmaceutical processing technology, energy storage and conversion, smart grids, and data analytics, among others.
Papers on theory in these areas will also be accepted, provided the theoretical contribution is aimed at applications and the development of process control techniques.
Topics covered include:
• Control applications
• Process monitoring
• Plant-wide control
• Process control systems
• Control techniques and algorithms
• Process modelling and simulation
• Design methods
Advanced design methods exclude well-established and widely studied traditional design techniques such as PID tuning and its many variants. Applications in fields such as control of automotive engines, machinery, and robotics are not deemed suitable unless a clear motivation for their relevance to process control is provided.