Optimizing Job Shop Scheduling in the Furniture Industry: A Reinforcement Learning Approach Considering Machine Setup, Batch Variability, and Intralogistics

Malte Schneevogt, Karsten Binninger, Noah Klarmann
DOI: arxiv-2409.11820
Journal: arXiv - EE - Systems and Control
Published: 2024-09-18
Citations: 0

Abstract

This paper explores the potential application of Deep Reinforcement Learning in the furniture industry. To offer a broad product portfolio, most furniture manufacturers are organized as a job shop, which ultimately results in the Job Shop Scheduling Problem (JSSP). The JSSP is addressed with a focus on extending traditional models to better represent the complexities of real-world production environments. Existing approaches frequently fail to consider critical factors such as machine setup times or varying batch sizes. A concept for a model is proposed that provides a higher level of information detail to enhance scheduling accuracy and efficiency. The concept introduces the integration of DRL for production planning, particularly suited to batch production industries such as the furniture industry. The model extends traditional approaches to JSSPs by including job volumes, buffer management, transportation times, and machine setup times. This enables more precise forecasting and analysis of production flows and processes, accommodating the variability and complexity inherent in real-world manufacturing processes. The RL agent learns to optimize scheduling decisions. It operates within a discrete action space, making decisions based on detailed observations. A reward function guides the agent's decision-making process, thereby promoting efficient scheduling and meeting production deadlines. Two integration strategies for implementing the RL agent are discussed: episodic planning, which is suitable for low-automation environments, and continuous planning, which is ideal for highly automated plants. While episodic planning can be employed as a standalone solution, the continuous planning approach necessitates the integration of the agent with ERP and Manufacturing Execution Systems. This integration enables real-time adjustments to production schedules based on dynamic changes.
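To make the described setup concrete, the following is a minimal sketch of a JSSP environment with a discrete action space (choose which job to dispatch next), sequence-dependent machine setup times, and a reward that penalizes makespan growth. The class name, route encoding, flat setup time, and reward shaping are illustrative assumptions for this sketch, not the authors' actual model, which additionally covers job volumes, buffers, and transport times.

```python
import numpy as np

class MiniJobShopEnv:
    """Toy job-shop environment in the spirit of the paper's concept.

    Each job is a route: a list of (machine, processing_time) operations.
    An action is the index of the job whose next operation is dispatched.
    A flat setup time is charged whenever a machine switches jobs
    (illustrative assumption; real setups are sequence-dependent).
    """

    def __init__(self, jobs, setup_time=1):
        self.jobs = jobs
        self.setup_time = setup_time
        self.n_machines = 1 + max(m for route in jobs for m, _ in route)
        self.reset()

    def reset(self):
        self.next_op = [0] * len(self.jobs)           # next operation index per job
        self.job_ready = [0] * len(self.jobs)         # earliest start time per job
        self.machine_free = [0] * self.n_machines     # earliest free time per machine
        self.last_job = [None] * self.n_machines      # last job seen, for setup logic
        return self._obs()

    def _obs(self):
        # Detailed observation: job progress plus machine availability.
        return np.array(self.next_op + self.machine_free, dtype=float)

    def step(self, action):
        if self.next_op[action] >= len(self.jobs[action]):
            # Job already finished: penalize the invalid dispatch.
            return self._obs(), -10.0, self.done(), {}
        machine, proc = self.jobs[action][self.next_op[action]]
        setup = self.setup_time if self.last_job[machine] not in (None, action) else 0
        start = max(self.job_ready[action], self.machine_free[machine])
        finish = start + setup + proc
        old_makespan = max(self.machine_free)
        self.machine_free[machine] = finish
        self.job_ready[action] = finish
        self.last_job[machine] = action
        self.next_op[action] += 1
        # Reward: negative increase of the makespan caused by this dispatch,
        # so the cumulative reward over an episode equals minus the makespan.
        reward = -(max(self.machine_free) - old_makespan)
        return self._obs(), reward, self.done(), {}

    def done(self):
        return all(self.next_op[j] >= len(r) for j, r in enumerate(self.jobs))
```

In episodic planning, an agent would roll out full schedules in this environment offline; in continuous planning, `machine_free` and `job_ready` would instead be fed live from MES/ERP data so the agent can re-dispatch on dynamic changes.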