Dynamic Inspection and Maintenance Scheduling for Multi-State Systems Under Time-Varying Demand: Proximal Policy Optimization

Authors: Yiming Chen, Yu Liu, Tangfan Xiahou
Journal: IISE Transactions (Journal Article)
DOI: 10.1080/24725854.2023.2259949
Published: 2023-09-15
JCR: Q3, Engineering, Industrial; Impact Factor: 2.0

Abstract

Inspection and maintenance activities are effective ways to reveal and to restore, respectively, the health conditions of many industrial systems. Most extant works on inspection and maintenance optimization assume that systems operate under a time-invariant demand. This simplifying assumption is often violated by changing market environments, seasonal factors, and even unexpected emergencies. In this article, with the aim of minimizing the expected total cost associated with inspections, maintenance, and unsupplied demand, a dynamic inspection and maintenance scheduling model is put forth for multi-state systems (MSSs) under a time-varying demand. Non-periodic inspections are performed on the components of an MSS, and imperfect maintenance actions are dynamically scheduled based on the inspection results. By introducing the concept of decision epochs, the resulting inspection and maintenance scheduling problem is formulated as a Markov decision process (MDP). A deep reinforcement learning (DRL) method with a proximal policy optimization (PPO) algorithm is customized to cope with the “curse of dimensionality” of the resulting sequential decision problem. As an extra input feature for the agent, the category of decision epochs is formulated to improve the effectiveness of the customized DRL method. A six-component MSS, along with a multi-state coal transportation system, is given to demonstrate the effectiveness of the proposed method.

Keywords: multi-state system; deep reinforcement learning; dynamic inspection and maintenance scheduling; proximal policy optimization; time-varying demand

Disclaimer: This version is an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken before final publication of the Version of Record (VoR). Errors affecting the content may be discovered during production and pre-press, and all legal disclaimers that apply to the journal also apply to these versions.

Notes on contributors

Yiming Chen received the Ph.D. degree in mechanical engineering from the University of Electronic Science and Technology of China in 2022. He is currently a Lecturer with the College of Marine Equipment and Mechanical Engineering, Jimei University. His research interests include maintenance decisions, stochastic dynamic programming, and deep reinforcement learning.

Yu Liu is a professor of industrial engineering with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, where he received his Ph.D. in 2010. He was a Visiting Predoctoral Fellow with the Department of Mechanical Engineering, Northwestern University, USA, from 2008 to 2010, and a Postdoctoral Research Fellow with the Department of Mechanical Engineering, University of Alberta, Canada, from 2012 to 2013. He has authored or coauthored more than 90 peer-reviewed papers in international journals. His research interests include system reliability modeling and analysis, maintenance decisions, prognostics and health management, and design under uncertainty. He is an editorial board member of several international journals, such as Reliability Engineering & System Safety and Quality and Reliability Engineering International, and an Associate Editor of IISE Transactions and IEEE Transactions on Reliability.

Tangfan Xiahou received the M.Sc. and Ph.D. degrees in mechanical engineering from the University of Electronic Science and Technology of China in 2018 and 2022, respectively. He is currently a Lecturer with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China. His research interests include reliability modeling under uncertainty, Dempster–Shafer evidence theory, and prognostics and health management.
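The two PPO-specific ideas named in the abstract — feeding the agent an extra input feature encoding the category of the current decision epoch, and optimizing the policy with PPO's clipped surrogate objective — can be illustrated with a minimal sketch. The function names, the flat state encoding, and the number of epoch categories below are our own assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def build_observation(component_states, demand, epoch_category, n_categories=3):
    """Assemble a hypothetical observation vector for the agent:
    component health states, the current demand level, and a one-hot
    encoding of the decision-epoch category (the extra input feature
    described in the abstract). The layout is illustrative only."""
    one_hot = np.zeros(n_categories)
    one_hot[epoch_category] = 1.0
    return np.concatenate([np.asarray(component_states, dtype=float),
                           [float(demand)],
                           one_hot])

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate objective:
    mean over samples of min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the probability ratio between the new and old policies
    and A is the advantage estimate."""
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return float(np.mean(np.minimum(unclipped, clipped)))

# Example: six component states, demand level 2, epoch category 1.
obs = build_observation([3, 2, 4, 1, 0, 3], demand=2, epoch_category=1)
```

The clipping pessimistically caps how much a single update can exploit a large probability ratio, which is what makes PPO stable enough for high-dimensional scheduling problems like the one formulated here; the actual paper customizes this further in ways the sketch does not reproduce.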
Citations: 0
IISE Transactions (Engineering: Industrial and Manufacturing Engineering)
CiteScore: 5.70
Self-citation rate: 7.70%
Articles published: 93
Journal overview:
IISE Transactions is currently abstracted/indexed in the following services: CSA/ASCE Civil Engineering Abstracts; CSA-Computer & Information Systems Abstracts; CSA-Corrosion Abstracts; CSA-Electronics & Communications Abstracts; CSA-Engineered Materials Abstracts; CSA-Materials Research Database with METADEX; CSA-Mechanical & Transportation Engineering Abstracts; CSA-Solid State & Superconductivity Abstracts; INSPEC Information Services and Science Citation Index.