{"title":"基于离散事件仿真的动态作业车间调度问题深度q -网络模型","authors":"Y. Turgut, C. Bozdag","doi":"10.1109/WSC48552.2020.9383986","DOIUrl":null,"url":null,"abstract":"In the last few decades, dynamic job scheduling problems (DJSPs) has received more attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been exploited adequately for solving DJSPs. In this work deep Q-network (DQN) model is applied to train an agent to learn how to schedule the jobs dynamically by minimizing the delay time of jobs. The DQN model is trained based on a discrete event simulation experiment. The model is tested by comparing the trained DQN model against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model has a better performance than these dispatching rules.","PeriodicalId":6692,"journal":{"name":"2020 Winter Simulation Conference (WSC)","volume":"1 1","pages":"1551-1559"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Deep Q-Network Model for Dynamic Job Shop Scheduling Pproblem Based on Discrete Event Simulation\",\"authors\":\"Y. Turgut, C. Bozdag\",\"doi\":\"10.1109/WSC48552.2020.9383986\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the last few decades, dynamic job scheduling problems (DJSPs) has received more attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been exploited adequately for solving DJSPs. In this work deep Q-network (DQN) model is applied to train an agent to learn how to schedule the jobs dynamically by minimizing the delay time of jobs. The DQN model is trained based on a discrete event simulation experiment. The model is tested by comparing the trained DQN model against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model has a better performance than these dispatching rules.\",\"PeriodicalId\":6692,\"journal\":{\"name\":\"2020 Winter Simulation Conference (WSC)\",\"volume\":\"1 1\",\"pages\":\"1551-1559\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 Winter Simulation Conference (WSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WSC48552.2020.9383986\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Winter Simulation Conference (WSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WSC48552.2020.9383986","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Q-Network Model for Dynamic Job Shop Scheduling Pproblem Based on Discrete Event Simulation
In the last few decades, dynamic job shop scheduling problems (DJSPs) have received increasing attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been adequately exploited for solving DJSPs. In this work, a deep Q-network (DQN) model is applied to train an agent that learns how to schedule jobs dynamically so as to minimize the delay time of jobs. The DQN model is trained based on a discrete event simulation experiment. The model is tested by comparing the trained DQN model against two popular dispatching rules, shortest processing time (SPT) and earliest due date (EDD). The obtained results indicate that the DQN model performs better than these dispatching rules.
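The abstract does not describe the simulation setup, but the two baseline dispatching rules it names are standard priority rules. The sketch below is a minimal illustration, not the authors' experiment: it sequences randomly generated jobs on a single machine under SPT and EDD and reports total tardiness, the kind of delay measure the DQN agent is trained to minimize. All job parameters and distributions are assumptions made for illustration.

```python
# Illustrative sketch only: the paper's simulation details are not given in
# the abstract. This compares the two baseline dispatching rules it mentions,
# shortest processing time (SPT) and earliest due date (EDD), on a single
# machine, using total tardiness (delay) as the performance metric.
# Job counts and distributions below are assumptions, not from the paper.
import random
from dataclasses import dataclass

@dataclass
class Job:
    proc_time: float  # processing time on the machine
    due_date: float   # time by which the job should finish

def total_tardiness(jobs, key):
    """Sequence jobs by the given priority key and sum their tardiness."""
    t = 0.0
    tardiness = 0.0
    for job in sorted(jobs, key=key):
        t += job.proc_time                      # job completes at time t
        tardiness += max(0.0, t - job.due_date) # lateness beyond due date
    return tardiness

random.seed(42)
jobs = [
    Job(proc_time=random.uniform(1, 10),
        due_date=random.uniform(5, 60))
    for _ in range(20)
]

spt = total_tardiness(jobs, key=lambda j: j.proc_time)  # SPT rule
edd = total_tardiness(jobs, key=lambda j: j.due_date)   # EDD rule
print(f"SPT total tardiness: {spt:.1f}")
print(f"EDD total tardiness: {edd:.1f}")
```

In the paper's approach, a trained DQN policy replaces such fixed priority rules when making dispatching decisions during the simulation, and it is reported to outperform both baselines.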