{"title":"优化家具行业的作业车间调度:考虑机器设置、批次变异性和内部物流的强化学习方法","authors":"Malte Schneevogt, Karsten Binninger, Noah Klarmann","doi":"arxiv-2409.11820","DOIUrl":null,"url":null,"abstract":"This paper explores the potential application of Deep Reinforcement Learning\nin the furniture industry. To offer a broad product portfolio, most furniture\nmanufacturers are organized as a job shop, which ultimately results in the Job\nShop Scheduling Problem (JSSP). The JSSP is addressed with a focus on extending\ntraditional models to better represent the complexities of real-world\nproduction environments. Existing approaches frequently fail to consider\ncritical factors such as machine setup times or varying batch sizes. A concept\nfor a model is proposed that provides a higher level of information detail to\nenhance scheduling accuracy and efficiency. The concept introduces the\nintegration of DRL for production planning, particularly suited to batch\nproduction industries such as the furniture industry. The model extends\ntraditional approaches to JSSPs by including job volumes, buffer management,\ntransportation times, and machine setup times. This enables more precise\nforecasting and analysis of production flows and processes, accommodating the\nvariability and complexity inherent in real-world manufacturing processes. The\nRL agent learns to optimize scheduling decisions. It operates within a discrete\naction space, making decisions based on detailed observations. A reward\nfunction guides the agent's decision-making process, thereby promoting\nefficient scheduling and meeting production deadlines. Two integration\nstrategies for implementing the RL agent are discussed: episodic planning,\nwhich is suitable for low-automation environments, and continuous planning,\nwhich is ideal for highly automated plants. While episodic planning can be\nemployed as a standalone solution, the continuous planning approach\nnecessitates the integration of the agent with ERP and Manufacturing Execution\nSystems. This integration enables real-time adjustments to production schedules\nbased on dynamic changes.","PeriodicalId":501175,"journal":{"name":"arXiv - EE - Systems and Control","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimizing Job Shop Scheduling in the Furniture Industry: A Reinforcement Learning Approach Considering Machine Setup, Batch Variability, and Intralogistics\",\"authors\":\"Malte Schneevogt, Karsten Binninger, Noah Klarmann\",\"doi\":\"arxiv-2409.11820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper explores the potential application of Deep Reinforcement Learning\\nin the furniture industry. To offer a broad product portfolio, most furniture\\nmanufacturers are organized as a job shop, which ultimately results in the Job\\nShop Scheduling Problem (JSSP). The JSSP is addressed with a focus on extending\\ntraditional models to better represent the complexities of real-world\\nproduction environments. Existing approaches frequently fail to consider\\ncritical factors such as machine setup times or varying batch sizes. A concept\\nfor a model is proposed that provides a higher level of information detail to\\nenhance scheduling accuracy and efficiency. The concept introduces the\\nintegration of DRL for production planning, particularly suited to batch\\nproduction industries such as the furniture industry. 
The model extends\\ntraditional approaches to JSSPs by including job volumes, buffer management,\\ntransportation times, and machine setup times. This enables more precise\\nforecasting and analysis of production flows and processes, accommodating the\\nvariability and complexity inherent in real-world manufacturing processes. The\\nRL agent learns to optimize scheduling decisions. It operates within a discrete\\naction space, making decisions based on detailed observations. A reward\\nfunction guides the agent's decision-making process, thereby promoting\\nefficient scheduling and meeting production deadlines. Two integration\\nstrategies for implementing the RL agent are discussed: episodic planning,\\nwhich is suitable for low-automation environments, and continuous planning,\\nwhich is ideal for highly automated plants. While episodic planning can be\\nemployed as a standalone solution, the continuous planning approach\\nnecessitates the integration of the agent with ERP and Manufacturing Execution\\nSystems. This integration enables real-time adjustments to production schedules\\nbased on dynamic changes.\",\"PeriodicalId\":501175,\"journal\":{\"name\":\"arXiv - EE - Systems and Control\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Systems and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11820\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimizing Job Shop Scheduling in the Furniture Industry: A Reinforcement Learning Approach Considering Machine Setup, Batch Variability, and Intralogistics
This paper explores the potential application of Deep Reinforcement Learning (DRL) in the furniture industry. To offer a broad product portfolio, most furniture manufacturers are organized as job shops, which ultimately results in the Job Shop Scheduling Problem (JSSP). The JSSP is addressed with a focus on extending traditional models to better represent the complexities of real-world production environments. Existing approaches frequently fail to consider critical factors such as machine setup times or varying batch sizes. A model concept is proposed that provides a higher level of information detail to enhance scheduling accuracy and efficiency. The concept introduces the integration of DRL into production planning and is particularly suited to batch-production industries such as the furniture industry.
The model extends traditional approaches to JSSPs by including job volumes, buffer management, transportation times, and machine setup times. This enables more precise forecasting and analysis of production flows and processes, accommodating the variability and complexity inherent in real-world manufacturing processes.
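As a purely illustrative sketch, the extended job information could be captured in simple Python records such as the following; the abstract names the required attributes (volumes, buffers, transport and setup times) but does not prescribe a concrete schema, so all field names here are assumptions:

    # Hypothetical data records for the extended JSSP model (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Operation:
        job_id: int
        machine_id: int
        processing_time_per_unit: float  # minutes per piece
        setup_time: float                # machine changeover before this operation
        transport_time: float            # intralogistics time to the next machine

    @dataclass
    class Job:
        job_id: int
        volume: int                      # batch size in pieces
        due_date: float                  # deadline within the scheduling horizon
        operations: list[Operation]      # ordered routing through the job shop

    @dataclass
    class Buffer:
        machine_id: int
        capacity: int                    # maximum number of waiting batches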
The RL agent learns to optimize scheduling decisions. It operates within a discrete action space, making decisions based on detailed observations. A reward function guides the agent's decision-making process, thereby promoting efficient scheduling and adherence to production deadlines.
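A deliberately simplified, Gymnasium-style sketch of such an agent-environment interface is shown below; the single-machine simplification, observation layout, and reward weights are assumptions for illustration and not the authors' implementation (their model additionally covers buffers, transport times, and job volumes):

    # Minimal scheduling environment sketch (assumes the gymnasium package).
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class MiniSchedulingEnv(gym.Env):
        def __init__(self, proc_times, due_dates, setup_time=2.0):
            super().__init__()
            self.proc_times = np.asarray(proc_times, dtype=float)
            self.due_dates = np.asarray(due_dates, dtype=float)
            self.setup_time = setup_time
            n = len(proc_times)
            self.action_space = spaces.Discrete(n)   # pick which job to dispatch next
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2 * n,))

        def _obs(self):
            slack = self.due_dates - self.clock       # time left until each due date
            return np.concatenate([self.remaining, slack]).astype(np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.remaining = self.proc_times.copy()
            self.clock = 0.0
            self.last_job = None
            return self._obs(), {}

        def step(self, action):
            reward = 0.0
            if self.remaining[action] > 0:
                if self.last_job is not None and self.last_job != action:
                    self.clock += self.setup_time     # machine changeover
                    reward -= 0.1 * self.setup_time   # penalize avoidable setups
                self.clock += self.remaining[action]  # process the whole batch
                self.remaining[action] = 0.0
                tardiness = max(0.0, self.clock - self.due_dates[action])
                reward -= tardiness                   # penalize missed deadlines
                self.last_job = action
            else:
                reward -= 1.0                         # invalid choice: job already done
            terminated = bool(np.all(self.remaining == 0))
            return self._obs(), reward, terminated, False, {}

Against such an interface, any standard DRL algorithm could be trained; the reward terms above simply mirror the stated goals of efficient scheduling and meeting production deadlines.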
Two integration strategies for implementing the RL agent are discussed: episodic planning, which is suitable for low-automation environments, and continuous planning, which is ideal for highly automated plants. While episodic planning can be employed as a standalone solution, the continuous planning approach necessitates integrating the agent with ERP and Manufacturing Execution Systems (MES). This integration enables real-time adjustment of production schedules in response to dynamic changes.
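Purely to illustrate the contrast between the two strategies, the following sketch uses stubbed agent and MES interfaces; every class and method name here is hypothetical, and a real deployment would bind these calls to the plant's actual ERP/MES APIs:

    # Hypothetical contrast between episodic and continuous planning.
    import time

    class StubAgent:
        """Placeholder for the trained RL agent; returns jobs in the given order."""
        def schedule(self, jobs, events=()):
            return list(jobs)

    class StubMes:
        """Placeholder for a Manufacturing Execution System interface."""
        def poll_events(self):
            return []                           # e.g. breakdowns, rush orders, delays
        def push_schedule(self, schedule):
            print("schedule updated:", schedule)

    def plan_episode(agent, jobs):
        """Episodic planning: one standalone planning run, e.g. per shift."""
        return agent.schedule(jobs)

    def plan_continuously(agent, jobs, mes, cycles=3, cycle_seconds=1):
        """Continuous planning: re-plan whenever the MES reports a disturbance."""
        mes.push_schedule(agent.schedule(jobs))
        for _ in range(cycles):                 # bounded loop for the sketch
            events = mes.poll_events()
            if events:                          # dynamic change on the shop floor
                mes.push_schedule(agent.schedule(jobs, events=events))
            time.sleep(cycle_seconds)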