Chenhao Zhou, Aloisius Stephen, Kok Choon Tan, Ek Peng Chew, Loo Hay Lee
{"title":"集装箱码头电动自动导引车充电调度的多代理 Q 学习方法","authors":"Chenhao Zhou, Aloisius Stephen, Kok Choon Tan, Ek Peng Chew, Loo Hay Lee","doi":"10.1287/trsc.2022.0113","DOIUrl":null,"url":null,"abstract":"In recent years, advancements in battery technology have led to increased adoption of electric automated guided vehicles in container terminals. Given how critical these vehicles are to terminal operations, this trend requires efficient recharging scheduling for automated guided vehicles, and the main challenges arise from limited charging station capacity and tight vehicle schedules. Motivated by the dynamic nature of the problem, the recharging scheduling problem for an entire vehicle fleet given capacitated stations is formulated as a Markov decision process model. Then, it is solved using a multiagent Q-learning (MAQL) approach to produce a recharging schedule that minimizes the delay of jobs. Numerical experiments show that under a stochastic environment in terms of vehicle travel time, MAQL enables the exploration of better scheduling by coordinating across the entire vehicle fleet and charging facilities and outperforms various benchmark approaches, with an additional improvement of 18.8% on average over the best rule-based heuristic and 5.4% over the predetermined approach.Funding: This work was supported by the National Natural Science Foundation of China [Grant 72101203], the Shaanxi Provincial Key R&D Program, China [Grant 2022KW-02], and the Singapore Maritime Institute [Grant SMI-2017-SP-002].Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2022.0113 .","PeriodicalId":51202,"journal":{"name":"Transportation Science","volume":"46 1","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiagent Q-Learning Approach for the Recharging Scheduling of Electric Automated Guided 
Vehicles in Container Terminals\",\"authors\":\"Chenhao Zhou, Aloisius Stephen, Kok Choon Tan, Ek Peng Chew, Loo Hay Lee\",\"doi\":\"10.1287/trsc.2022.0113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, advancements in battery technology have led to increased adoption of electric automated guided vehicles in container terminals. Given how critical these vehicles are to terminal operations, this trend requires efficient recharging scheduling for automated guided vehicles, and the main challenges arise from limited charging station capacity and tight vehicle schedules. Motivated by the dynamic nature of the problem, the recharging scheduling problem for an entire vehicle fleet given capacitated stations is formulated as a Markov decision process model. Then, it is solved using a multiagent Q-learning (MAQL) approach to produce a recharging schedule that minimizes the delay of jobs. Numerical experiments show that under a stochastic environment in terms of vehicle travel time, MAQL enables the exploration of better scheduling by coordinating across the entire vehicle fleet and charging facilities and outperforms various benchmark approaches, with an additional improvement of 18.8% on average over the best rule-based heuristic and 5.4% over the predetermined approach.Funding: This work was supported by the National Natural Science Foundation of China [Grant 72101203], the Shaanxi Provincial Key R&D Program, China [Grant 2022KW-02], and the Singapore Maritime Institute [Grant SMI-2017-SP-002].Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2022.0113 .\",\"PeriodicalId\":51202,\"journal\":{\"name\":\"Transportation Science\",\"volume\":\"46 1\",\"pages\":\"\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-04-09\",\"publicationTypes\":\"Journal 
Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transportation Science\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1287/trsc.2022.0113\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPERATIONS RESEARCH & MANAGEMENT SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Science","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1287/trsc.2022.0113","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Multiagent Q-Learning Approach for the Recharging Scheduling of Electric Automated Guided Vehicles in Container Terminals
In recent years, advancements in battery technology have led to increased adoption of electric automated guided vehicles in container terminals. Given how critical these vehicles are to terminal operations, this trend requires efficient recharging scheduling for automated guided vehicles, and the main challenges arise from limited charging station capacity and tight vehicle schedules. Motivated by the dynamic nature of the problem, the recharging scheduling problem for an entire vehicle fleet given capacitated stations is formulated as a Markov decision process model. Then, it is solved using a multiagent Q-learning (MAQL) approach to produce a recharging schedule that minimizes the delay of jobs. Numerical experiments show that under a stochastic environment in terms of vehicle travel time, MAQL enables the exploration of better scheduling by coordinating across the entire vehicle fleet and charging facilities and outperforms various benchmark approaches, with an additional improvement of 18.8% on average over the best rule-based heuristic and 5.4% over the predetermined approach.

Funding: This work was supported by the National Natural Science Foundation of China [Grant 72101203], the Shaanxi Provincial Key R&D Program, China [Grant 2022KW-02], and the Singapore Maritime Institute [Grant SMI-2017-SP-002].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2022.0113.
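The paper's actual MDP formulation and MAQL design are given in the article itself; as a rough illustration of the tabular Q-learning update that such an approach builds on, the sketch below trains a single toy AGV agent to choose between charging and working. All state, transition, and reward definitions here are simplified assumptions for illustration, not the authors' formulation.

```python
import random
from collections import defaultdict

# Toy Q-learning sketch for AGV recharging decisions (illustrative only;
# the state, action, and reward definitions are simplified assumptions,
# not the MDP formulated in the paper).
random.seed(0)

ACTIONS = ["charge", "work"]           # per-vehicle decision at each epoch
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2     # learning rate, discount, exploration

def step(battery, action, station_free):
    """Simplified transition: returns (next_battery_level, reward)."""
    if action == "charge" and station_free:
        return min(battery + 2, 10), -1    # charging costs one time step
    if battery == 0:
        return 0, -10                      # stranded vehicle: large job delay
    return battery - 1, (1 if action == "work" else 0)

Q = defaultdict(float)                     # Q[(battery_level, action)]

for episode in range(2000):
    battery = 10
    for t in range(50):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(battery, x)])
        nb, r = step(battery, a, station_free=True)
        # standard Q-learning update toward the one-step bootstrapped target
        best_next = max(Q[(nb, x)] for x in ACTIONS)
        Q[(battery, a)] += ALPHA * (r + GAMMA * best_next - Q[(battery, a)])
        battery = nb

# After training, the greedy policy at an empty battery is expected to charge.
policy_at_empty = max(ACTIONS, key=lambda x: Q[(0, x)])
print(policy_at_empty)
```

The multiagent version in the paper additionally coordinates many vehicles over capacitated stations (so `station_free` becomes part of a shared state rather than a constant), which is where the reported gains over rule-based heuristics come from.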
About the journal:
Transportation Science, published quarterly by INFORMS, is the flagship journal of the Transportation Science and Logistics Society of INFORMS. As the foremost scientific journal in the cross-disciplinary operational research field of transportation analysis, Transportation Science publishes high-quality original contributions and surveys on phenomena associated with all modes of transportation, present and prospective, spanning planning, design, economic, operational, and social aspects. The journal focuses primarily on fundamental theories, coupled with observational and experimental studies of transportation and logistics phenomena and processes, mathematical models, advanced methodologies, and novel applications in transportation and logistics systems analysis, planning, and design. It covers a broad range of topics, including vehicular and human traffic flow theories, models, and their application to traffic operations and management; strategic, tactical, and operational planning of transportation and logistics systems; performance analysis methods and system design and optimization; theories and analysis methods for network and spatial activity interaction, equilibrium, and dynamics; economics of transportation system supply and evaluation; and methodologies for analysis of transportation user behavior and the demand for transportation and logistics services.
Transportation Science is international in scope, with editors from nations around the globe. The editorial board reflects the diverse interdisciplinary interests of the transportation science and logistics community, with members who hold primary affiliations in engineering (civil, industrial, and aeronautical), physics, economics, applied mathematics, and business.