A Reinforcement Learning Algorithm for Optimal Dynamic Policies of Joint Condition-based Maintenance and Condition-based Production

H. Rasay, Fariba Azizi, Mehrnaz Salmani, F. Naderkhani
{"title":"基于状态维修和基于状态生产的联合动态最优策略的强化学习算法","authors":"H. Rasay, Fariba Azizi, Mehrnaz Salmani, F. Naderkhani","doi":"10.1109/ICPHM57936.2023.10194057","DOIUrl":null,"url":null,"abstract":"This paper focuses on development of joint optimal maintenance and production policy for a specific type of production system that allows for adjustable production rates. The rate of deterioration of the system is directly related to the production rate, with higher production rates resulting in greater expected deterioration. The system's deterioration can be controlled through two main actions: (1) scheduling and conducting maintenance actions referred to as maintenance policy; and (2) adjusting the production rate referred to as production policy. To determine the optimal actions given the system's state, a Markov decision process (MDP) is developed and a reinforcement learning algorithm, specifically a Q-learning algorithm, is utilized. The algorithm's hyper parameters are tuned using a value-iteration algorithm of dynamic programming. The goal is to minimize expected costs for the system over a finite planning horizon.","PeriodicalId":169274,"journal":{"name":"2023 IEEE International Conference on Prognostics and Health Management (ICPHM)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Reinforcement Learning Algorithm for Optimal Dynamic Policies of Joint Condition-based Maintenance and Condition-based Production\",\"authors\":\"H. Rasay, Fariba Azizi, Mehrnaz Salmani, F. Naderkhani\",\"doi\":\"10.1109/ICPHM57936.2023.10194057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper focuses on development of joint optimal maintenance and production policy for a specific type of production system that allows for adjustable production rates. The rate of deterioration of the system is directly related to the production rate, with higher production rates resulting in greater expected deterioration. The system's deterioration can be controlled through two main actions: (1) scheduling and conducting maintenance actions referred to as maintenance policy; and (2) adjusting the production rate referred to as production policy. To determine the optimal actions given the system's state, a Markov decision process (MDP) is developed and a reinforcement learning algorithm, specifically a Q-learning algorithm, is utilized. The algorithm's hyper parameters are tuned using a value-iteration algorithm of dynamic programming. 
The goal is to minimize expected costs for the system over a finite planning horizon.\",\"PeriodicalId\":169274,\"journal\":{\"name\":\"2023 IEEE International Conference on Prognostics and Health Management (ICPHM)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Prognostics and Health Management (ICPHM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPHM57936.2023.10194057\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Prognostics and Health Management (ICPHM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPHM57936.2023.10194057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper focuses on the development of a joint optimal maintenance and production policy for a specific type of production system that allows for adjustable production rates. The system's rate of deterioration is directly related to the production rate, with higher production rates resulting in greater expected deterioration. Deterioration can be controlled through two main actions: (1) scheduling and conducting maintenance actions, referred to as the maintenance policy; and (2) adjusting the production rate, referred to as the production policy. To determine the optimal actions given the system's state, a Markov decision process (MDP) is developed and a reinforcement learning algorithm, specifically a Q-learning algorithm, is utilized. The algorithm's hyperparameters are tuned using a value-iteration algorithm from dynamic programming. The goal is to minimize the expected cost of the system over a finite planning horizon.
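
The abstract outlines the approach but not its implementation details. Below is a minimal, self-contained sketch of a tabular Q-learning loop for a joint maintenance/production MDP of this kind, under purely illustrative assumptions: the deterioration states, the maintenance actions and production rates, the transition probabilities, the costs, and the hyperparameters (alpha, gamma, eps, horizon) are all hypothetical and not taken from the paper. For simplicity, the sketch also approximates the finite planning horizon with a stationary discounted Q-table, whereas a strictly finite-horizon formulation would index the value function by time as well.

```python
import numpy as np

# Hypothetical deterioration states: 0 (as good as new) .. N_STATES-1 (failed).
N_STATES = 5
# Hypothetical joint actions: (maintenance decision, production rate).
MAINT_ACTIONS = ["none", "preventive", "corrective"]
PROD_RATES = [0.5, 1.0]  # low / high production rate
ACTIONS = [(m, r) for m in MAINT_ACTIONS for r in PROD_RATES]

rng = np.random.default_rng(0)

def step(state, action):
    """Illustrative transition/cost model (assumed, not from the paper):
    higher production rates speed up expected deterioration, and
    maintenance restores the system at a cost."""
    maint, rate = action
    if maint == "corrective" or state == N_STATES - 1:
        return 0, 50.0                              # full restoration, high cost
    if maint == "preventive":
        return max(0, state - 2), 20.0              # partial restoration
    # No maintenance: deterioration probability grows with the production rate.
    p_degrade = 0.2 + 0.4 * rate
    next_state = min(N_STATES - 1, state + int(rng.random() < p_degrade))
    return next_state, -10.0 * rate                 # negative cost = production revenue

Q = np.zeros((N_STATES, len(ACTIONS)))              # expected cost-to-go estimates
alpha, gamma, eps, horizon = 0.1, 0.95, 0.1, 20     # illustrative hyperparameters

for episode in range(5000):
    s = 0
    for t in range(horizon):
        # Epsilon-greedy over costs: explore randomly, otherwise pick the min-cost action.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmin(Q[s]))
        s_next, cost = step(s, ACTIONS[a])
        # Q-learning update for cost minimization (min over next actions in the target).
        Q[s, a] += alpha * (cost + gamma * Q[s_next].min() - Q[s, a])
        s = s_next

print("Greedy (min-cost) action per deterioration state:")
print([ACTIONS[int(np.argmin(Q[s]))] for s in range(N_STATES)])
```

Because the objective is cost minimization rather than reward maximization, the greedy policy selects the action with the smallest Q-value in each state, and the update target uses a minimum over next-state actions.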