{"title":"Design patterns of deep reinforcement learning models for job shop scheduling problems","authors":"Shiyong Wang, Jiaxian Li, Qingsong Jiao, Fang Ma","doi":"10.1007/s10845-024-02454-8","DOIUrl":null,"url":null,"abstract":"<p>Production scheduling has a significant role when optimizing production objectives such as production efficiency, resource utilization, cost control, energy-saving, and emission reduction. Currently, deep reinforcement learning-based production scheduling methods achieve roughly equivalent precision as the widely used meta-heuristic algorithms while exhibiting higher efficiency, along with powerful generalization abilities. Therefore, this new paradigm has drawn much attention and plenty of research results have been reported. By reviewing available deep reinforcement learning models for the job shop scheduling problems, the typical design patterns and pattern combinations of the common components, i.e., agent, environment, state, action, and reward, were identified. Around this essential contribution, the architecture and procedure of training deep reinforcement learning scheduling models and applying resultant scheduling solvers were generalized. Furthermore, the key evaluation indicators were summarized and the promising research areas were outlined. This work surveys several deep reinforcement learning models for a range of production scheduling problems.</p>","PeriodicalId":16193,"journal":{"name":"Journal of Intelligent Manufacturing","volume":"18 1","pages":""},"PeriodicalIF":5.9000,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent Manufacturing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s10845-024-02454-8","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Production scheduling plays a significant role in optimizing production objectives such as production efficiency, resource utilization, cost control, energy saving, and emission reduction. Deep reinforcement learning-based production scheduling methods currently achieve precision roughly equivalent to that of the widely used meta-heuristic algorithms while offering higher efficiency and powerful generalization ability. This new paradigm has therefore drawn much attention, and numerous research results have been reported. By reviewing the available deep reinforcement learning models for job shop scheduling problems, the typical design patterns and pattern combinations of the common components, i.e., agent, environment, state, action, and reward, were identified. Building on this core contribution, the architecture and procedure for training deep reinforcement learning scheduling models and applying the resulting scheduling solvers were generalized. Furthermore, the key evaluation indicators were summarized and promising research areas were outlined. This work thus surveys deep reinforcement learning models for a range of production scheduling problems.
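To make the five common components concrete, the following is a minimal, illustrative sketch (not taken from any of the reviewed models) of a Gym-like job shop scheduling environment: the state encodes job progress and ready times, the action selects which eligible job to dispatch next, and the reward is the negative increment in makespan, so maximizing return minimizes the schedule length. The job data format, state features, and reward shaping are assumptions made for illustration.

```python
# Minimal sketch of the five components the survey identifies -- agent,
# environment, state, action, reward -- cast as a Gym-like job shop
# scheduling environment. Job encoding, state design, and reward shaping
# are illustrative assumptions, not the design of any specific reviewed model.
import numpy as np

class JobShopEnv:
    """Toy job shop environment: each job is a list of (machine, duration) operations."""

    def __init__(self, jobs):
        self.jobs = jobs                      # e.g. [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]
        self.num_jobs = len(jobs)
        self.reset()

    def reset(self):
        self.next_op = [0] * self.num_jobs        # index of the next operation per job
        self.job_ready = [0.0] * self.num_jobs    # time each job becomes available
        self.machine_ready = {}                   # time each machine becomes available
        self.makespan = 0.0
        return self._state()

    def _state(self):
        # State: per-job completion progress plus job ready times -- one of the
        # simpler state designs discussed in the literature.
        progress = [self.next_op[j] / len(self.jobs[j]) for j in range(self.num_jobs)]
        return np.array(progress + self.job_ready, dtype=np.float32)

    def step(self, action):
        # Action: choose which job to dispatch next.
        j = action
        if self.next_op[j] >= len(self.jobs[j]):
            return self._state(), -1.0, self._done()   # penalize an invalid choice
        machine, duration = self.jobs[j][self.next_op[j]]
        start = max(self.job_ready[j], self.machine_ready.get(machine, 0.0))
        finish = start + duration
        self.job_ready[j] = finish
        self.machine_ready[machine] = finish
        self.next_op[j] += 1
        # Reward: negative increase in makespan, so maximizing cumulative
        # reward minimizes the final makespan.
        reward = -(max(finish, self.makespan) - self.makespan)
        self.makespan = max(finish, self.makespan)
        return self._state(), reward, self._done()

    def _done(self):
        return all(self.next_op[j] >= len(self.jobs[j]) for j in range(self.num_jobs))
```

A deep reinforcement learning agent (for example, a DQN- or PPO-style policy) would interact with such an environment during training, observing states and selecting dispatch actions; the trained policy then acts as the scheduling solver described in the abstract.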