Hierarchical framework for interpretable and specialized deep reinforcement learning-based predictive maintenance

Ammar N. Abbas, Georgios C. Chasparis, John D. Kelleher

Data & Knowledge Engineering, Volume 149, Article 102240 (published 2023-10-31). DOI: 10.1016/j.datak.2023.102240. Available at: https://www.sciencedirect.com/science/article/pii/S0169023X23001003

Abstract
Deep reinforcement learning holds significant potential for application in industrial decision-making, offering a promising alternative to traditional physical models. However, its black-box learning approach presents challenges for real-world and safety-critical systems, as it lacks interpretability and explanations for the derived actions. Moreover, a key research question in deep reinforcement learning is how to focus policy learning on critical decisions within sparse domains. This paper introduces a novel approach that combines probabilistic modeling and reinforcement learning, providing interpretability and addressing these challenges in the context of safety-critical predictive maintenance. The methodology is activated in specific situations identified through the input–output hidden Markov model, such as critical conditions or near-failure scenarios. To mitigate the challenges associated with deep reinforcement learning in safety-critical predictive maintenance, the approach is initialized with a baseline policy using behavioral cloning, requiring minimal interactions with the environment. The effectiveness of this framework is demonstrated through a case study on predictive maintenance for turbofan engines, outperforming previous approaches and baselines, while also providing the added benefit of interpretability. Importantly, while the framework is applied to a specific use case, this paper aims to present a general methodology that can be applied to diverse predictive maintenance applications.
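To make the pipeline concrete, below is a minimal, purely illustrative Python sketch of the gating idea: a detector stands in for the input–output hidden Markov model and flags near-failure conditions, behavioral cloning warm-starts a simple linear policy from demonstrations, and that specialized policy is consulted only when a critical state is detected. Every name, model, and parameter here (CriticalStateDetector, the logistic risk score, behavioral_cloning_init) is a hypothetical simplification of the components described in the abstract, not the authors' implementation.

```python
import numpy as np

class CriticalStateDetector:
    """Toy stand-in for the paper's input-output hidden Markov model
    (IOHMM): maps sensor readings to a failure-risk score and flags the
    near-failure situations that should activate the specialized policy."""

    def __init__(self, weights, threshold=0.8):
        self.weights = weights      # hypothetical learned parameters
        self.threshold = threshold  # risk level that triggers the agent

    def failure_risk(self, sensors):
        # Placeholder inference: a logistic score instead of a true IOHMM
        # posterior over hidden degradation states.
        return 1.0 / (1.0 + np.exp(-self.weights @ sensors))

    def is_critical(self, sensors):
        return self.failure_risk(sensors) > self.threshold


def behavioral_cloning_init(n_features, demos, epochs=20, lr=0.05):
    """Warm-start a linear policy from (state, action) demonstrations,
    mirroring the baseline-policy initialization via behavioral cloning
    described in the abstract (here, regression onto demonstrated actions)."""
    policy = np.zeros(n_features)
    for _ in range(epochs):
        for state, action in demos:
            error = policy @ state - action
            policy -= lr * error * state  # gradient step on squared error
    return policy


# --- illustrative usage on synthetic data -------------------------------
rng = np.random.default_rng(0)
demos = [(rng.standard_normal(4), float(rng.standard_normal()))
         for _ in range(64)]
policy = behavioral_cloning_init(n_features=4, demos=demos)
detector = CriticalStateDetector(weights=rng.standard_normal(4))

sensors = rng.standard_normal(4)
if detector.is_critical(sensors):
    action = policy @ sensors  # specialized policy handles critical state
else:
    action = 0.0               # nominal controller; no maintenance action
```

The transferable structure is the gate itself: an inexpensive probabilistic model decides when the specialized deep reinforcement learning policy is invoked, which is how the framework focuses policy learning on sparse, critical decisions.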
Journal introduction:
Data & Knowledge Engineering (DKE) stimulates the exchange of ideas and interaction between the two related fields of data engineering and knowledge engineering. DKE reaches a worldwide audience of researchers, designers, managers, and users. The major aim of the journal is to identify, investigate, and analyze the underlying principles in the design and effective use of data and knowledge systems.