{"title":"基于强化学习的生产调度阿尔法列表迭代贪婪法","authors":"Kuo-Ching Ying , Pourya Pourhejazy , Shih-Han Cheng","doi":"10.1016/j.iswa.2024.200451","DOIUrl":null,"url":null,"abstract":"<div><div>Metaheuristics can benefit from analyzing patterns and regularities in data to perform more effective searches in the solution space. In line with the emerging trend in the optimization literature, this study introduces the Reinforcement-learning-based Alpha-List Iterated Greedy (RAIG) algorithm to contribute to the advances in machine learning-based optimization, notably for solving combinatorial problems. RAIG uses an <em>N</em>-List mechanism for solution initialization and its solution improvement procedure is enhanced by Reinforcement Learning and an Alpha-List mechanism for more effective searches. A classic engineering optimization problem, the Permutation Flowshop Scheduling Problem (PFSP), is considered for numerical experiments to evaluate RAIG's performance. Highly competitive solutions to the classic scheduling problem are identified, with up to 9% improvement compared to the baseline, when solving large-size instances. Experimental results also show that the RAIG algorithm performs more robustly than the baseline algorithm. Statistical tests confirm that RAIG is superior and hence can be introduced as a strong benchmark for future studies.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"24 ","pages":"Article 200451"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement learning-based alpha-list iterated greedy for production scheduling\",\"authors\":\"Kuo-Ching Ying , Pourya Pourhejazy , Shih-Han Cheng\",\"doi\":\"10.1016/j.iswa.2024.200451\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Metaheuristics can benefit from analyzing patterns and regularities in data to perform more effective searches in the solution space. In line with the emerging trend in the optimization literature, this study introduces the Reinforcement-learning-based Alpha-List Iterated Greedy (RAIG) algorithm to contribute to the advances in machine learning-based optimization, notably for solving combinatorial problems. RAIG uses an <em>N</em>-List mechanism for solution initialization and its solution improvement procedure is enhanced by Reinforcement Learning and an Alpha-List mechanism for more effective searches. A classic engineering optimization problem, the Permutation Flowshop Scheduling Problem (PFSP), is considered for numerical experiments to evaluate RAIG's performance. Highly competitive solutions to the classic scheduling problem are identified, with up to 9% improvement compared to the baseline, when solving large-size instances. Experimental results also show that the RAIG algorithm performs more robustly than the baseline algorithm. 
Statistical tests confirm that RAIG is superior and hence can be introduced as a strong benchmark for future studies.</div></div>\",\"PeriodicalId\":100684,\"journal\":{\"name\":\"Intelligent Systems with Applications\",\"volume\":\"24 \",\"pages\":\"Article 200451\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Systems with Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S266730532400125X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems with Applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266730532400125X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Metaheuristics can benefit from analyzing patterns and regularities in data to perform more effective searches in the solution space. In line with the emerging trend in the optimization literature, this study introduces the Reinforcement-learning-based Alpha-List Iterated Greedy (RAIG) algorithm to contribute to the advances in machine-learning-based optimization, notably for solving combinatorial problems. RAIG uses an N-List mechanism for solution initialization, and its solution-improvement procedure is enhanced by Reinforcement Learning and an Alpha-List mechanism for more effective searches. A classic engineering optimization problem, the Permutation Flowshop Scheduling Problem (PFSP), is considered in numerical experiments to evaluate RAIG's performance. Highly competitive solutions to the classic scheduling problem are identified, with up to a 9% improvement over the baseline when solving large-size instances. Experimental results also show that the RAIG algorithm performs more robustly than the baseline algorithm. Statistical tests confirm that RAIG is superior and can hence be introduced as a strong benchmark for future studies.
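The abstract positions RAIG as an Iterated Greedy variant for the PFSP enhanced with N-List initialization, an Alpha-List mechanism, and Reinforcement Learning, but it does not detail the implementation. For context only, the sketch below shows a minimal, generic Iterated Greedy loop for the PFSP under the makespan objective; the function names, parameters (e.g., destruct_construct, d, max_iters), and the tiny example instance are illustrative assumptions, not the authors' RAIG design, and the RL and list mechanisms are intentionally omitted.

```python
import random

def makespan(sequence, proc_times):
    """Makespan of a permutation flowshop schedule.
    proc_times[j][m] = processing time of job j on machine m."""
    num_machines = len(proc_times[0])
    completion = [0] * num_machines  # completion time on each machine so far
    for job in sequence:
        completion[0] += proc_times[job][0]
        for m in range(1, num_machines):
            completion[m] = max(completion[m], completion[m - 1]) + proc_times[job][m]
    return completion[-1]

def destruct_construct(sequence, proc_times, d):
    """Destruction: remove d random jobs. Construction: greedily reinsert each
    removed job at the position that minimizes the resulting makespan."""
    partial = list(sequence)
    removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
    for job in removed:
        best_pos, best_val = 0, float("inf")
        for pos in range(len(partial) + 1):
            candidate = partial[:pos] + [job] + partial[pos:]
            val = makespan(candidate, proc_times)
            if val < best_val:
                best_pos, best_val = pos, val
        partial.insert(best_pos, job)
    return partial

def iterated_greedy(proc_times, d=4, max_iters=500):
    """Plain Iterated Greedy loop with improve-only acceptance."""
    current = list(range(len(proc_times)))  # naive initial sequence: jobs 0..n-1
    best, best_val = current, makespan(current, proc_times)
    for _ in range(max_iters):
        candidate = destruct_construct(best, proc_times, d)
        val = makespan(candidate, proc_times)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val

if __name__ == "__main__":
    # Hypothetical 4-job, 3-machine instance for illustration only.
    proc = [[3, 2, 4], [5, 1, 3], [2, 6, 2], [4, 3, 5]]
    seq, cmax = iterated_greedy(proc, d=2, max_iters=200)
    print("best sequence:", seq, "makespan:", cmax)
```

In a full Iterated Greedy implementation, the reinsertion step is where mechanisms such as an Alpha-List or a learned policy could bias candidate positions; the version above uses plain best-insertion purely for clarity.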