Resilience-based explainable reinforcement learning in chemical process safety
Kinga Szatmári, Gergely Horváth, Sándor Németh, Wenshuai Bai, Alex Kummer
Computers & Chemical Engineering, Volume 191, Article 108849 (published 2024-08-24)
DOI: 10.1016/j.compchemeng.2024.108849
Citation count: 0
Abstract
For future applications of artificial intelligence, specifically reinforcement learning (RL), we develop a resilience-based explainable RL agent to make decisions about the activation of mitigation systems. The applied reinforcement learning algorithm is Deep Q-learning, and resilience serves as the reward function. We investigate two explainable reinforcement learning methods: the decision tree as a policy-explaining method and the Shapley value as a state-explaining method.
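The sketch below illustrates, in broad strokes, the kind of setup this describes: a Deep Q-network over the process state with two actions (continue operation or activate mitigation) and a placeholder for the resilience-based reward. It is not the authors' code; the network size, the state variables, and the `resilience_reward` placeholder are assumptions, and the concrete resilience formulation comes from the paper's process model.

```python
# Minimal structural sketch (assumptions noted above), not the authors' implementation.
import random

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps the process state (e.g. temperatures, concentrations) to Q-values for
    the two actions: continue normal operation or activate the mitigation system."""

    def __init__(self, n_state_vars: int, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state_vars, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def resilience_reward(state, action) -> float:
    """Placeholder: the reward is the resilience of the process as defined in the
    paper; its computation depends on the specific process and mitigation model."""
    raise NotImplementedError


def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Standard epsilon-greedy exploration used in Deep Q-learning."""
    if random.random() < epsilon:
        return random.randrange(q_net.net[-1].out_features)
    with torch.no_grad():
        return int(q_net(state).argmax().item())
```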
For better understanding, the policy can be visualized in the agent's state space using a decision tree. We compare the agent's decision boundary with the runaway boundaries defined by runaway criteria, namely the divergence criterion and the modified dynamic condition. The Shapley value explains the contribution of the state variables to the agent's behavior over time. The results show that the decisions of the artificial agent in a resilience-based mitigation system can be explained and presented in a transparent way.
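As an illustration of how these two explanation methods are commonly applied to a trained agent, the sketch below fits a shallow decision tree to the agent's greedy actions (policy explanation) and computes Shapley values of the state variables for one action's Q-value (state explanation). It uses scikit-learn and the `shap` package; the stand-in Q-network, the state variable names, and the trajectory data are hypothetical placeholders, not the paper's case study.

```python
# Illustrative sketch with stand-in data; not the authors' code.
import numpy as np
import shap
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for the trained Q-network (two actions: keep running / activate mitigation).
q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

feature_names = ["T", "P", "c_A"]                       # hypothetical state variables
states = np.random.rand(500, 3).astype(np.float32)      # stand-in for visited process states
with torch.no_grad():
    greedy = q_net(torch.from_numpy(states)).argmax(dim=1).numpy()

# Policy explanation: a shallow tree approximating the agent's decision boundary
# in the state space, which can then be compared with runaway criteria boundaries.
tree = DecisionTreeClassifier(max_depth=3).fit(states, greedy)
print(export_text(tree, feature_names=feature_names))

# State explanation: Shapley values of each state variable for the Q-value of
# the "activate mitigation" action, evaluated along the trajectory.
def q_activate(x: np.ndarray) -> np.ndarray:
    with torch.no_grad():
        return q_net(torch.from_numpy(x.astype(np.float32)))[:, 1].numpy()

explainer = shap.KernelExplainer(q_activate, shap.sample(states, 50))
shap_values = explainer.shap_values(states[:100])       # one contribution per variable per time step
```

Plotting `shap_values` column by column against time would show how each state variable drives the agent toward or away from activating the mitigation system, which is the kind of temporal contribution analysis the abstract describes.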
Journal description:
Computers & Chemical Engineering is primarily a journal of record for new developments in the application of computing and systems technology to chemical engineering problems.