Factorial Kernel Dynamic Policy Programming for Vinyl Acetate Monomer Plant Model Control

Yunduan Cui, Lingwei Zhu, Morihiro Fujisaki, H. Kanokogi, Takamitsu Matsubara

2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pp. 304-309, August 2018
DOI: 10.1109/COASE.2018.8560593
Citations: 14
Abstract
This research applies reinforcement learning to chemical plant control problems, aiming to optimize production while maintaining plant stability, without requiring knowledge of the plant model. Since a typical chemical plant has a large number of sensors and actuators, its control problem can be formulated as a Markov decision process with high-dimensional states and a huge number of actions, which previous methods struggle to solve due to computational complexity and sample insufficiency. To overcome these issues, we propose a new reinforcement learning method, Factorial Kernel Dynamic Policy Programming (FKDPP), which employs 1) a factorial policy model and 2) a factor-wise, kernel-based smooth policy update obtained by regularizing with the Kullback-Leibler divergence between the current and updated policies. To validate its effectiveness, FKDPP is evaluated on the Vinyl Acetate Monomer (VAM) plant model, a popular benchmark chemical plant control problem. Compared with previous methods, which cannot directly handle a huge number of actions, the proposed method uses the same number of training samples yet achieves a better control strategy in terms of VAM yield, quality, and plant stability.
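The two ideas named in the abstract, a factorial policy over per-actuator action factors and a Dynamic Policy Programming (DPP)-style smooth update induced by KL regularization, can be illustrated with a minimal sketch. This is not the paper's kernel-based implementation: the tabular preferences `P`, the hyperparameters `eta`, `gamma`, `alpha`, and the problem sizes `D`, `K`, `S` are all assumptions made here for demonstration.

```python
import numpy as np

# Minimal sketch of a factorial, KL-regularized (DPP-style) policy update.
# Assumed setup: D action factors (one per actuator), each with K discrete
# options, and a small discrete state space of size S. The paper's method is
# kernel-based over continuous states; this tabular version only shows the
# structure of the factorial policy and the factor-wise smooth update.

D, K, S = 3, 5, 10            # action factors, options per factor, states
eta, gamma, alpha = 1.0, 0.95, 0.1  # inverse temperature, discount, step size

# One preference table per factor: P[d][s, k]
P = [np.zeros((S, K)) for _ in range(D)]

def factorial_policy(s):
    """Per-factor Boltzmann policies; the joint policy is their product,
    so the K**D joint actions are never enumerated explicitly."""
    pis = []
    for d in range(D):
        w = np.exp(eta * P[d][s])
        pis.append(w / w.sum())
    return pis

def boltzmann_value(d, s):
    """Boltzmann-weighted average of preferences: the smooth max that the
    KL regularization between current and updated policies gives rise to."""
    w = np.exp(eta * P[d][s])
    pi = w / w.sum()
    return float(pi @ P[d][s])

def dpp_update(s, actions, reward, s_next):
    """Factor-wise DPP-style update: each factor's chosen action moves its
    preference toward the TD target, with the Boltzmann value of the current
    state acting as the KL-induced baseline."""
    for d, a in enumerate(actions):
        target = reward + gamma * boltzmann_value(d, s_next) - boltzmann_value(d, s)
        P[d][s, a] += alpha * target

# Usage: sample one joint action factor by factor, then update each factor.
rng = np.random.default_rng(0)
s = 0
actions = [rng.choice(K, p=pi) for pi in factorial_policy(s)]
dpp_update(s, actions, reward=1.0, s_next=1)
```

Because each factor is sampled and updated independently, the policy scales as D*K rather than K**D, which is the motivation for the factorial decomposition when a plant has many actuators.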